Graphics Programming with DirectX 9, Module I


TeamLRN

Graphics Programming with DirectX, Part I (12 Week Lesson Plan)

Lesson 1: 3D Graphics Fundamentals

Textbook: Chapter One (pgs – 32)

Goals: We begin the course by introducing the student to the fundamental mathematics necessary when developing 3D games. Essentially, we will be talking about how 3D objects in games are represented as polygonal geometric models and how those models are ultimately drawn. It is especially important that students are familiar with the mathematics of the transformation pipeline, since it plays an important role in getting this 3D geometry into a displayable 2D format. In that regard we will look at the entire geometry transformation pipeline, from model space all the way through to screen space, and discuss the various operations that are necessary to make this happen. This will include discussion of transformations such as scaling, rotation, and translation, as well as the conceptual idea of moving from one coordinate space to another and remapping clip space coordinates to final screen space pixel positions.

Key Topics:
• Geometric Modeling
  o 2D/3D Coordinate Systems
  o Meshes
    - Vertices
    - Winding Order
• The Transformation Pipeline
  o Translation
  o Rotation
  o Viewing Transformations
  o Perspective Projection
  o Screen Space Mapping

Projects: NONE
Exams/Quizzes: NONE
Recommended Study Time (hours): -

Lesson 2: 3D Graphics Fundamentals II

Textbook: Chapter One (pgs 32 – 92)

Goals: Picking up where the last lesson left off, we will now look at the specific mathematical operations and data types that we will use throughout the course to achieve the goals discussed previously regarding the transformation pipeline. We will examine three fundamental mathematical entities: vectors, planes and matrices, and look at the role of each in the transformation pipeline, as well as discussing other common uses. Core operations such as the dot and cross product, normalization, and matrix and vector multiplication will also be discussed in detail. We then look at the D3DX equivalent data types and functions that we can use to carry out the operations discussed. Finally, we will conclude with a detailed analysis of the perspective projection operation and see how the matrix is constructed and how arbitrary fields of view can be created to model different camera settings.

Key Topics:
• 3D Mathematics Primer
  o Vectors
    - Magnitude
    - Addition/Subtraction
    - Scalar Multiplication
    - Normalization
    - Cross Product
    - Dot Product
  o Planes
  o Matrices
    - Matrix/Matrix Multiplication
    - Vector/Matrix Multiplication
    - 3D Rotation Matrices
    - Identity Matrices
    - Scaling and Shearing
    - Concatenation
    - Homogeneous Coordinates
• D3DX Math
  o Data Types
    - D3DXMATRIX
    - D3DXVECTOR
    - D3DXPLANE
  o Matrix and Transformation Functions
    - D3DXMatrixMultiply
    - D3DXMatrixRotation{XYZ}
    - D3DXMatrixTranslation
    - D3DXMatrixRotationYawPitchRoll
    - D3DXVecTransform{…}
  o Vector Functions
    - Cross Product
    - Dot Product
    - Magnitude
    - Normalization
• The Transformation Pipeline II
  o The World Matrix
  o The View Matrix
  o The Perspective Projection Matrix
    - Field of View
    - Aspect Ratio

Projects: Lab Project 1.1: Wireframe Renderer
Exams/Quizzes: NONE
Recommended Study Time (hours): - 10

Lesson 3: DirectX Graphics Fundamentals I

Textbook: Chapter Two (pgs 94 – 132)

Goals: In this lesson our goal will be to get an overview of the DirectX Graphics pipeline and see how the different pieces relate to what we have already learned. A brief introduction to the COM programming model opens the lesson as a means for understanding the low-level processes involved when working with the DirectX API. Our ultimate goal is then to be able to properly initialize the DirectX environment and create a rendering device for output. We will do this during this lesson and the next. This will require an understanding of the different resources that are associated with device management, including window settings, front and back buffers, depth buffering,
and swap chains.

Key Topics:
• The Component Object Model (COM)
  o Interfaces/IUnknown
  o GUIDs
  o COM and DirectX Graphics
• Initializing DirectX Graphics
• The Direct3D Device
  o Pipeline Overview
  o Device Memory
    - The Front/Back Buffer(s)
    - Swap Chains
  o Window Settings
    - Fullscreen/Windowed Mode
  o Depth Buffers
    - The Z-Buffer / W-Buffer

Projects: Lab Project 2.1: DirectX Graphics Initialization
Exams/Quizzes: NONE
Recommended Study Time (hours): - 10

Lesson 4: DirectX Graphics Fundamentals II

Textbook: Chapter Two (pgs 132 – 155)

Goals: Continuing our environment setup discussion, our goal in this lesson will be to create a rendering device for graphics output. Before we explore setting up the device, we will look at the various surface formats that we must understand for management of depth and color buffers. We will conclude the lesson with a look at configuring presentation parameters for device setup and then talk about how to write code to handle lost devices.

Key Topics:
• Surface Formats
  o Adapter Formats
  o Frame Buffer Formats
• Device Creation
  o Presentation Parameters
  o Lost Devices

Projects: Lab Project 2.2: Device Enumeration
Exams/Quizzes: NONE
Recommended Study Time (hours): - 10

Lesson 5: Primitive Rendering I

Textbook: Chapter Two (pgs 156 – 191)

Goals: Now that we have a rendering device properly configured, we are ready to begin drawing 3D objects using DirectX Graphics. In this lesson we will examine some of the important device settings (states) that will be necessary to make this happen. We will see how to render 3D objects as wireframe or solid objects, and also talk about how to achieve various forms of shading. Our discussion will also include flexible vertex formats, triangle data, and the DrawPrimitive function call. Once these preliminary topics are out of the way, we will look at the core device render states that are used when drawing: depth buffering, lighting and shading, back face culling, etc. We will also talk about transformation states and how to pass the matrices we learned about in prior lessons up to the device for use in the transformation pipeline. We will conclude the lesson with discussion of scene rendering and presentation (clearing the buffers, beginning and ending the scene, and presenting the results to the viewer).

Key Topics:
• Primitive Rendering
  o Fill Modes
  o Shading Modes
  o Vertex Data and the FVF
  o DrawPrimitiveUP
• Device States
  o Render States
    - Z-Buffering
    - Lighting/Shading/Dithering
    - Backface Culling
  o Transformation States
    - World/View/Projection Matrices
• Scene Rendering
  o Frame/Depth Buffer Clearing
  o Begin/End Scene
  o Presenting the Frame

Projects:
Exams/Quizzes: NONE
Recommended Study Time (hours): -

Lesson 6: Primitive Rendering II

Textbook: Chapter Three (pgs 194 – 235)

Goals: In this lesson we will begin to examine more optimal rendering strategies in DirectX. The primary goal is to get the student comfortable with creating, filling, and drawing with both vertex and index buffers. This means that we will look at both indexed and non-indexed mesh rendering, for both static geometry and dynamic (animated) geometry. To that end, it will be important to understand the various device memory pools that are available for our use and see which ones are appropriate for a given job. We will conclude the lesson with a discussion of indexed triangle strip generation and see how degenerate triangles play a role in that process.

Key Topics:
• Device Memory Pools and Resources
  o Video/AGP/System Memory
• Vertex Buffers
  o Creating Vertex Buffers
  o Vertex Buffer Memory Pools
  o Vertex Buffer Performance
  o Filling Vertex Buffers
  o Vertex Stream Sources
  o DrawPrimitive
• Index Buffers
  o Creating Index Buffers
  o DrawIndexedPrimitive/DrawIndexedPrimitiveUP
  o Indexed Triangle Strips/Degenerate Triangles

Projects:
Lab Project 3.1: Static Vertex Buffers
Lab Project 3.2: Simple Terrain Renderer
Lab Project 3.3: Dynamic Vertex Buffers

Exams/Quizzes: NONE
Recommended
Study Time (hours): - 10

Mid-Term Examination

The midterm examination in this course will consist of 40 multiple-choice and true/false questions pulled from the first three textbook chapters. Students are encouraged to use the lecture presentation slides as a means for reviewing the key material prior to the examination. The exam should take no more than 1.5 hours to complete. It is worth 35% of the student's final grade.

    // stage 0 coloring : get color from texture0 * diffuse
    m_pD3DDevice->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_MODULATE );
    m_pD3DDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    m_pD3DDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );

We will not need to sample alpha from texture stage 0. However, we must be careful not to disable the alpha operations for a stage, since this will cut off the alpha operations in higher texture stages. So we will set the alpha argument for stage 0 to D3DTA_CURRENT (the equivalent of D3DTA_DIFFUSE in the first stage) and the interpolated alpha value of the vertex will be used. As our vertices have fully opaque diffuse colors, this will equate to an alpha value of 255 being passed to the second stage. This value is ignored, and we will sample the alpha from the texture stored there instead.

    // stage 0 alpha : nada
    m_pD3DDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1 );
    m_pD3DDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_CURRENT );

In the second stage we simply select the color from the first stage and output it unaltered.

    // stage 1 coloring : nada
    m_pD3DDevice->SetTextureStageState( 1, D3DTSS_COLOROP,   D3DTOP_SELECTARG1 );
    m_pD3DDevice->SetTextureStageState( 1, D3DTSS_COLORARG1, D3DTA_CURRENT );

The alpha operations in texture stage 1 sample the alpha value from the texture assigned to that stage. This of course is our blend texture.

    // stage 1 alpha : get alpha from texture1
    m_pD3DDevice->SetTextureStageState( 1, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1 );
    m_pD3DDevice->SetTextureStageState( 1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE );

Our next task is to set up texture stage 0 (recall that it holds the base texture) to handle texture transformations. Each layer has a texture matrix that will be used to transform the first set of UV coordinates to control base texture tiling. Therefore, we set the D3DTSS_TEXTURETRANSFORMFLAGS texture stage state to D3DTTFF_COUNT2 to inform the pipeline that we require our first set of texture coordinates to be multiplied by the texture matrix for stage 0, and that we are using 2D coordinates. We do not enable texture transforms for stage 1, because the alpha blend texture in that stage is mapped to the four corners of each terrain block. This must not be changed.

    // Enable Stage 0 Texture Transforms
    m_pD3DDevice->SetTextureStageState( 0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2 );

Next we inform the pipeline of the vertex type we will be using so that it knows which components each vertex of our terrain will contain. In our case this will be a position, a diffuse color, and two sets of 2D texture coordinates.

    // Setup our terrain vertex FVF code
    m_pD3DDevice->SetFVF( VERTEX_FVF );

Finally, we loop through each terrain block in our terrain block array, assign its vertex buffer to stream 0, and then traverse each layer. If the current terrain block uses the current layer, we set the layer's base texture to texture stage 0, assign the texture matrix, and call CTerrainBlock::Render to draw the block. We pass the index of the layer we are currently rendering because CTerrainBlock::Render renders an individual splat level.

    // Loop through blocks and signal a render
    for ( j = 0; j < 1 /* m_nBlockCount */; j++ )
    {
        m_pD3DDevice->SetStreamSource( 0, m_pBlock[j]->m_pVertexBuffer, 0, sizeof(CVertex) );

        // Loop through all active layers
        for ( i = 0; i < m_nLayerCount; i++ )
        {
            // Skip if this layer is disabled
            if ( GetGameApp()->GetRenderLayer( i ) == false ) continue;

            CTerrainLayer * pLayer = m_pLayer[i];
            if (
!m_pBlock[j]->m_pLayerUsage[ i ] ) continue;

            // Set our texturing information
            m_pD3DDevice->SetTexture( 0, m_pTexture[pLayer->m_nTextureIndex] );
            m_pD3DDevice->SetTransform( D3DTS_TEXTURE0, &pLayer->m_mtxTexture );
            m_pBlock[j]->Render( m_pD3DDevice, i );

        } // Next Layer

    } // Next Block

CTerrainBlock::Render

This function is called for each layer in each terrain block to render the indicated splat. The function sets the splat index buffer as the current device index buffer and assigns the splat blend texture to texture stage 1. It concludes with a call to DrawIndexedPrimitive to render the quads.

    void CTerrainBlock::Render( LPDIRECT3DDEVICE9 pD3DDevice, USHORT LayerIndex )
    {
        // Bail if this layer is not in use
        if ( !m_pSplatLevel[LayerIndex] ) return;

        // Set up vertex streams & textures
        pD3DDevice->SetIndices( m_pSplatLevel[LayerIndex]->m_pIndexBuffer );
        pD3DDevice->SetTexture( 1, m_pSplatLevel[LayerIndex]->m_pBlendTexture );

        // Render the vertex buffer
        if ( m_pSplatLevel[LayerIndex]->m_nPrimitiveCount == 0 ) return;
        pD3DDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0,
                                          (m_nBlockWidth * m_nBlockHeight), 0,
                                          m_pSplatLevel[LayerIndex]->m_nPrimitiveCount );
    }

Questions and Exercises

1. Can we use vertex and texture alpha simultaneously when performing alpha blending?
2. What does it mean if a texture format is said to have an alpha channel?
3. If a texture uses the format X8R8G8B8, does it contain per-pixel alpha information?
4. List four locations/sources where alpha values can be stored and retrieved by the texture blending cascade.
5. If we use alpha values stored in materials, is this alpha information described as per-vertex alpha, per-pixel or per-triangle/face?
6. How does the D3DRS_TEXTUREFACTOR render state allow us to make a constant alpha value available to all texture stages? How does a texture stage select this alpha value as an alpha argument?
7. What does the D3DTSS_CONSTANT texture stage state allow us to do?
8. Describe how texture stage 0 would retrieve its color and alpha information using the following texture stage states.

    pDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pDevice->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1 );
    pDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_DIFFUSE );
    pDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1 );

9. Describe the color and alpha values output from the texture cascade using the following render states for stage 0 and stage 1.

    pDevice->SetRenderState( D3DRS_TEXTUREFACTOR, 0x400000FF );

    pDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
    pDevice->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_MODULATE );
    pDevice->SetTextureStageState( 1, D3DTSS_COLORARG1, D3DTA_CURRENT );
    pDevice->SetTextureStageState( 1, D3DTSS_COLORARG2, D3DTA_TFACTOR );
    pDevice->SetTextureStageState( 1, D3DTSS_COLOROP,   D3DTOP_ADD );

    pDevice->SetTextureStageState( 0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE );
    pDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1 );
    pDevice->SetTextureStageState( 1, D3DTSS_ALPHAARG1, D3DTA_CURRENT );
    pDevice->SetTextureStageState( 1, D3DTSS_ALPHAARG2, D3DTA_TFACTOR );
    pDevice->SetTextureStageState( 1, D3DTSS_ALPHAOP,   D3DTOP_MODULATE );

10. Why should the following equation be familiar to us and considered significant?

    SourceColor * SrcBlendMode + DestColor * DestBlendMode

11. What is alpha testing and when can it be useful?
12. When polygons are partially transparent, why do we need to render the alpha polygons in a second pass?
13. Why would we ever need to sort alpha polygons, even when rendering them in a second pass?
14. Which is the better sorting algorithm to use when many alpha polygons need to be sorted prior to rendering: a bubble sort or a quick sort?
15. What is a hash table and how does it enable us to quickly sort polygons prior to rendering?
16. Do we need to sort polygons if we are performing additive color blending?
17. What is a pure alpha surface?
18. DirectX Graphics provides two fog modes; what are they?
19. Excluding the lack of any fog as a fog model, how many fog models are available for each fog mode?
20. What is the fog factor?
21. If you were not using the transformation pipeline but still wanted vertex fog, you could enable fog and calculate your own vertex fog factors. Where would you store these per-vertex fog factors in order for them to be accessed and used for fogging by the pipeline?
22. Why is pixel fog often referred to as table fog?
23. Do we need to set a different fog color for vertex fog mode and table fog mode, or do they share the same fog color render state?
24. What are the differences between setting up the linear fog model and setting up either of the exponential fog models for a given fog mode?
25. Do you need to set a fog density value when using linear fog?
26. When using vertex fog mode and the linear fog model, we specify the fog start and fog end distances as device coordinates in the 0.0 – 1.0 range. True or False?
27. What is a W-friendly projection matrix?
28. When using vertex fog, what causes rotation artifacts and how can we potentially avoid them?
29. Regardless of whether we are using vertex fog mode or pixel fog mode, we set up all fog parameters by setting render states. True or False?
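To make the texture cascade and frame buffer blending behavior from the material above concrete, the following standalone C++ sketch simulates it with plain arithmetic. This is not DirectX code; the struct and function names are hypothetical, and colors are modeled as floats in [0, 1] rather than bytes. It mirrors the splatting configuration: stage 0 modulates the base texel with the diffuse color, stage 1 passes that color through while taking its alpha from the blend texture, and the frame buffer then computes SourceColor * SRCALPHA + DestColor * INVSRCALPHA.

```cpp
#include <cassert>
#include <cmath>

// Minimal RGBA color, float components in [0, 1]. Hypothetical helper type.
struct Color { float r, g, b, a; };

// Stage 0: D3DTOP_MODULATE( D3DTA_TEXTURE, D3DTA_DIFFUSE ) for color;
// alpha simply selects D3DTA_CURRENT (the interpolated diffuse alpha).
Color Stage0(const Color& baseTexel, const Color& diffuse) {
    return { baseTexel.r * diffuse.r,
             baseTexel.g * diffuse.g,
             baseTexel.b * diffuse.b,
             diffuse.a };
}

// Stage 1: color passes through unaltered (SELECTARG1 on D3DTA_CURRENT);
// alpha is sampled from the blend texture (SELECTARG1 on D3DTA_TEXTURE).
Color Stage1(const Color& current, const Color& blendTexel) {
    return { current.r, current.g, current.b, blendTexel.a };
}

// Frame buffer blend with D3DBLEND_SRCALPHA / D3DBLEND_INVSRCALPHA:
// final = src * srcAlpha + dest * (1 - srcAlpha), per color channel.
Color FrameBufferBlend(const Color& src, const Color& dest) {
    float a = src.a;
    return { src.r * a + dest.r * (1.0f - a),
             src.g * a + dest.g * (1.0f - a),
             src.b * a + dest.b * (1.0f - a),
             dest.a };
}
```

With an opaque white diffuse color, the cascade outputs the base texel unchanged, weighted into the frame buffer by the splat's blend-texture alpha, which is exactly why the blend texture controls how much of each layer shows through.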
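Several of the fog questions above ask about the fog factor and the three fog models. As a reference, here is a small standalone C++ sketch of the standard DirectX fog factor formulas (the function names are our own; f = 1 means no fog, f = 0 means fully fogged, and d is the distance or depth appropriate to the chosen fog mode).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Linear fog model: f = (end - d) / (end - start), clamped to [0, 1].
float LinearFogFactor(float d, float start, float end) {
    float f = (end - d) / (end - start);
    return std::max(0.0f, std::min(1.0f, f));
}

// Exponential fog model: f = 1 / e^(d * density).
float ExpFogFactor(float d, float density) {
    return 1.0f / std::exp(d * density);
}

// Squared exponential fog model: f = 1 / e^((d * density)^2).
float Exp2FogFactor(float d, float density) {
    float x = d * density;
    return 1.0f / std::exp(x * x);
}

// The pipeline then blends, per channel:
// final = f * pixelColor + (1 - f) * fogColor.
float ApplyFog(float pixelChannel, float fogChannel, float f) {
    return f * pixelChannel + (1.0f - f) * fogChannel;
}
```

Note how the density parameter only matters for the two exponential models, which is the answer to question 25: linear fog uses only the start and end distances.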
Appendix A: Texture Stage States, Render States and Sampler States

Below is a list of texture stage states, render states and sampler states introduced in this chapter.

Render States Table

D3DRS_ALPHABLENDENABLE
Parameters: TRUE or FALSE
Enables alpha blending in the pipeline. When enabled, the color and alpha values output from the texture stage cascade are used in a blending operation with the frame buffer to generate the pixel color. When disabled, the alpha output from the texture stage cascade is discarded and the color output from the cascade is used as the new frame buffer pixel color.

D3DRS_SRCBLEND
Parameters: a member of the D3DBLEND enumerated type
When alpha blending is enabled, this state sets how the source color that is about to be rendered is blended with the frame buffer. It allows us to specify an input that is used to multiply the source color and control its weight in the final color calculated.

D3DRS_DESTBLEND
Parameters: a member of the D3DBLEND enumerated type
When alpha blending is enabled, this state sets how the current frame buffer (destination) color is blended with the source color. It allows us to specify an input that is used to multiply the current frame buffer color and control its weight in the final color calculated.

D3DRS_TEXTUREFACTOR
Parameters: a D3DCOLOR value in the form 0xAARRGGBB. The default state is opaque white (0xFFFFFFFF).
This state can be used to set a constant color or alpha that can be accessed by the texture stage states during color and alpha blending in a texture stage. If a texture stage input argument is set to D3DTA_TFACTOR, this color will be used. If the stage is blending two colors using the D3DTOP_BLENDFACTORALPHA color operation, the alpha component of this color is used to blend the two input colors.

D3DRS_ALPHATESTENABLE
Parameters: TRUE or FALSE
If set to TRUE, before a pixel is rendered its alpha value is tested against a reference value (set by the D3DRS_ALPHAREF render state) using a comparison function selected by the D3DRS_ALPHAFUNC render state. If the alpha value for a pixel fails the comparison test, it is rejected and will not be rendered.

D3DRS_ALPHAFUNC
Parameters: a member of the D3DCMPFUNC enumerated type. This can be one of the following: D3DCMP_NEVER, D3DCMP_LESS, D3DCMP_EQUAL, D3DCMP_LESSEQUAL, D3DCMP_GREATER, D3DCMP_NOTEQUAL, D3DCMP_GREATEREQUAL, D3DCMP_ALWAYS. The default is D3DCMP_ALWAYS, in which a pixel is never rejected based on its alpha value because it always passes the comparison test.
When alpha testing is enabled, this render state allows us to choose the comparison performed against the alpha reference value. For example, if we set the comparison function D3DCMP_LESS with alpha testing enabled, the pixel will only pass the test and not be rejected if its alpha value is less than the reference value set by the D3DRS_ALPHAREF render state. This is useful for rejecting completely transparent pixels so that they do not have their depth values written to the depth buffer.

D3DRS_ALPHAREF
Parameters: DWORD
The reference value used by the alpha test. Values can range from 0x00000000 through 0x000000FF (0 to 255).

New Render States Table

D3DRS_FOGENABLE
Parameters: TRUE or FALSE
Enables fog color blending. This needs to be enabled even if you are not using the transformation pipeline and are calculating the per-vertex fog factors yourself, as the fog factors will still need to be interpolated and the color blending performed using these fog factors.

D3DRS_FOGCOLOR
Parameters: DWORD
Enables us to set the color of the fog as an ARGB DWORD. The A component of this color is not used and can be ignored. Therefore, to set a red fog color for example, we could set the fog color to 0xFF0000.

D3DRS_FOGVERTEXMODE
Parameters: D3DFOG_NONE, D3DFOG_LINEAR, D3DFOG_EXP, D3DFOG_EXP2
Sets the fog model used for vertex fog mode, or disables vertex fog if set to D3DFOG_NONE.

D3DRS_FOGTABLEMODE
Parameters: D3DFOG_NONE, D3DFOG_LINEAR, D3DFOG_EXP, D3DFOG_EXP2
Sets the fog model used for pixel fog mode, or disables pixel fog if set to D3DFOG_NONE.

D3DRS_FOGSTART
Parameters: float (must be passed as a DWORD)
The distance or depth at which fog color will start to be blended with our pixel or vertex when using the linear fog model. If using vertex fog mode, or pixel fog mode where ‘W’-based fog is being used, this should be a view space distance. If using pixel fog where ‘W’-based fog is NOT being used, this should be a device depth distance in the range of 0.0 to 1.0.

D3DRS_FOGEND
Parameters: float (must be passed as a DWORD)
The distance or depth at which fog color will be blended with our pixel or vertex at full intensity when using the linear fog model. The same coordinate conventions apply as for D3DRS_FOGSTART.

D3DRS_FOGDENSITY
Parameters: float (must be passed as a DWORD)
A floating point value between 0.0 and 1.0 that is used to set the fog density for the exponential and squared exponential fog models. Not used by the linear fog model.

D3DRS_FOGRANGEENABLE
Parameters: TRUE or FALSE
Available only for vertex fog mode, and then only if the hardware supports range-based vertex fog. When enabled, the true distance from the vertex to the camera is used in the fog factor calculations, eliminating rotational artifacts. If set to FALSE, which is the default state (or if range-based fog is not supported), the view space Z component of the vertex will be used instead. Range-based vertex fog is more computationally expensive.

Appendix B: A Quick Guide to Creating Alpha Channels in Paint Shop Pro™

The following is a quick guide which demonstrates creating an image which contains an alpha channel in Paint Shop Pro and above (including the evaluation version). Before we begin, we need to pick an image that we wish to generate an alpha channel for. In this example we have chosen a window pane with separate segments (shown to the left). We will create individual areas of translucency relatively easily by masking
off the separated areas and filling them in as needed.

After starting Paint Shop Pro you can either load the image in using traditional means (via the File / Open menu), or simply drag and drop the image onto the main application work area. Once the image has loaded, we can start working on it. In this example we will not be making any adjustments to the image itself; instead we will be working on the alpha information only. Once the image has been loaded (assuming we simply loaded up a single-layer file such as a bitmap, etc.) you can pop open the Layer Palette window. At this point, you should notice that we have a single Background layer, as shown below.

Paint Shop Pro Masks

Paint Shop Pro™ does not use alpha channels in the traditional sense. Instead, it adopts the concept of masks. These masks can be applied to each layer individually, and can be saved out as an alpha channel in the resulting image. The first thing we need to do in order to apply alpha information to our image is to create a mask. To create a mask, we need to select the ‘From Image’ item from within the ‘Masks / New’ menu. After selecting this item, we are presented with an options dialog as shown below. This dialog allows us to specify how we would like our alpha mask to be set up initially. For now, we just want a completely opaque mask, so we can choose to create the mask from our source image’s current opacity levels, making sure that the ‘Invert Mask Data’ check box is unchecked. For your reference, you can use settings similar to the following to create an opaque mask from your image.

Once you have decided how your mask will be defaulted, select OK, and the mask for the currently selected layer will be generated. In this case, we had the single layer named ‘Background’ selected. These background layers are special types of raster layers which cannot be made translucent and are, as their name suggests, used as a background which will show through any translucent areas of any layers above it. Because background layers cannot contain alpha information, this layer is automatically “promoted” to become a standard raster layer. It should look something like the following in the layer palette.

We can see that the icon for the layer changes to demonstrate that it is no longer a background layer. In addition, its name is changed to, for example, ‘Layer 1’. You can rename this layer at this point to give it a more meaningful label, but this is merely for your own benefit and plays no part in the actual process. One other important point to notice is that an additional icon has been added to the right of the layer name, which looks somewhat like a small mask. This is provided to inform you that this layer now contains a mask which can be modified, which is exactly what we will be doing next.

Editing the Layer Mask

Now that we have created an empty mask, we can edit it to provide the alpha information required for our individual window panes to show through any image data rendered underneath. To do this we need to put the editor into ‘Mask Edit Mode’. First of all, make sure that the layer containing the mask you want to edit is your current selection within the layer palette. Then, from the ‘Masks’ menu, select the ‘Edit’ menu item. Once you have done this, you will notice that the title bars of both the layer palette and the image itself are appended with the text ‘*MASK*’, and that the application’s color palette changes to a simple grayscale palette as shown below.

The palette shown to the left (the full color palette) is the traditional layer editing palette. The one on the right is used for editing the layer mask, and depicts the 256 levels of translucency as black (0 = fully translucent) through white (255 = fully opaque). It is worth noting, however, that it is often a little tricky to select the exact alpha level you want from this small quick palette. For this reason you may want to select the value from the main color
palette, available by clicking in the middle of one of the ‘Style’ color blocks found directly underneath this quick entry palette on the ‘Color’ control bar.

Now we are ready to edit the mask. First we must select our alpha value, but before we do that we must make sure we are in solid color mode. To do this you can click on the black arrow contained within the ‘Styles’ color block underneath the palette. A small selection box will pop up allowing you to choose between ‘Solid’, ‘Gradient’, ‘Pattern’ or ‘Null’ modes. For now we want to select the solid mode as shown in the inset image. Once you are sure you are in solid mode, you can select your color from the small palette above it, or by clicking in the centre of that same foreground style box to select the color from the main palette. We will choose a mid-range color for now, in our example palette index ‘151’ (which has the color RGB(151, 151, 151)). By choosing an alpha value which is not totally transparent, we will be able to retain some of the original detail in each window pane segment when it is rendered.

As we know, we want to leave the horizontal / vertical bars of our window pane totally opaque. This may be a little difficult, or at least a little laborious, to achieve if we were to avoid / adjust these areas by hand. We can solve this problem by using the selection tool.

As you can see, we can use the selection tool to mask out areas of the image. As in many other applications, you can multi-select by holding the shift key, and deselect individual areas using the ctrl key whilst in selection mode. These modes are depicted by a little + or – sign displayed next to the tool’s cursor so that you can easily see which mode you are currently in. Now that we have masked off the nine individual areas of the image, when we make any modifications the changes will only occur inside those parts contained within the selected region(s), leaving our window pane separators completely intact. Of course, we are not currently editing the image itself, but the same applies when in mask edit mode, leaving the areas between the glass panes totally opaque.

We are now ready to modify our mask. When in mask edit mode we can treat it just as if it were a simple palletized image, and can perform many of the same color-based operations with it (e.g. brightness, gamma, noise, etc.) in exactly the same way. With our alpha level “color” chosen, we can now pick an editing tool. For this job, we are going to pick the airbrush with the options shown in the next image. Which tool you use for editing the alpha mask is image specific, but the airbrush suits us well for the current task.

We have chosen the airbrush, rather than simply adjusting the ‘Brightness’ of the alpha mask values, because this allows us to be a little inaccurate, and to go a little wild when spraying on our alpha values. This lets us leave behind dirty smudges or shaded areas at will. Feel free to experiment with the airbrush because, remember, only the areas inside your selection will be modified. Once you are completely satisfied with the results of your haphazard airbrushing, you should end up with something a little like the image to the right. Make sure you leave your current selected areas intact. You should notice that, to help you visualize the translucent areas, Paint Shop Pro has rendered a checkered pattern behind the image, which shows through the now translucent glass.

Tip: This pattern can be altered on the ‘Transparency’ tab found via the ‘File / Preferences / General Program Preferences’ menu item, to allow for easier viewing in certain circumstances.

Now that we have our general alpha values set up, with our selection still intact (and still in mask editing mode) we can touch up this image a little bit. We could for instance use the ‘Noise’ effect to add a little uniform noise (say 15%), which gives us a little variation in the alpha values. This can help improve the look of compressed alpha textures, or you could add
texture effects to add cracks, or to allow for the distortion of rain drops.

Saving the Alpha Information

As mentioned, Paint Shop Pro™ stores its alpha information a little differently than a file would store an alpha channel, primarily because it requires per-layer alpha information. So, what we now need to do is save our mask into the image’s alpha channel. To do this we simply need to select ‘Save to Alpha Channel’ from the ‘Masks’ menu. After selecting this item, we are presented with the following dialog.

This dialog displays a list of all the alpha channels currently stored within the image (in memory). Most file formats support only a single alpha channel, but for the moment you will not have any listed. In this case, simply select ‘New Channel’ from the list and press ‘OK’. You will then be prompted to provide a name for this alpha channel; this is merely a description. After entering the name, the alpha mask from the selected image layer will be saved to a new alpha channel (a preview of which is displayed on the right hand side of that same dialog).

Note: Alpha channels are stored separately from the masks themselves. Therefore, if at any point after saving the mask to an alpha channel you modify that mask, you will need to save it once again to the alpha channel in the same way, overwriting the original which will be displayed in this dialog.

Before we can save our final image, it is best to save out the image as a standard PSP file (Paint Shop Pro’s™ own internal file format) so that we have a workable copy of the original image, and then to delete the alpha mask we created earlier using the ‘Masks / Delete’ menu item, choosing not to merge the mask with the selected layer. This important step needs to be performed first because otherwise, when we save to our final image format, the mask will be merged with the color layer, altering the actual color data itself. This means that if we were to render the texture using our alpha channel, we would actually be
alpha blending using the altered color data. Do not forget to remove the mask before saving as anything other than PSP.

We are now free to save the image out to disk using the standard ‘File / Save As’ method, but it is important that you select a file format which is capable of storing the alpha information. Your best option here, if you are planning to load the texture and its alpha information back into Direct3D, is either TGA or PNG.

Loading an Image with an Existing Alpha Channel

If you want to load an image with an existing alpha channel into Paint Shop Pro, you should first load the image in the usual way. You should notice, however, that the alpha information is not applied to the loaded image. Remember that alpha channel information and alpha masks are separate entities. To recreate the layer mask from your alpha channel information, simply select the ‘Masks / Load From Alpha Channel’ menu item. You will be presented with the following dialog:

Under normal circumstances you will see only one alpha channel listed. Selecting this channel and pressing ‘OK’ will result in the re-creation of the mask, and the ability to edit it once again. Remember, though, that once you have edited the mask, you must save the alpha channel back out (overwriting the one in the above list) using exactly the same methods outlined in the previous section. This includes removing the mask again before you save the resulting image file.

After following these steps, you should now have a texture available for loading into Direct3D (or any other API for that matter). Take a look at the image below, which demonstrates the result of all our hard work: Stonehenge through a dirty window ☺
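The reason the merged mask causes a problem follows from the standard alpha blending equation, result = source × alpha + destination × (1 − alpha). The sketch below uses hypothetical channel values (the names are chosen for illustration only) to show that if the mask has already darkened the color layer, the alpha is effectively applied twice at render time:

```python
def alpha_blend(src, dst, alpha):
    """Standard 'over' blend of one color channel: alpha is in [0, 1]."""
    return round(src * alpha + dst * (1.0 - alpha))

alpha = 0.5          # the glass pane is half transparent
glass = 200          # one color channel of the glass texture
scene = 40           # the same channel of the scene behind it

# Correct: mask deleted before saving, so the color data is untouched
correct = alpha_blend(glass, scene, alpha)

# Wrong: the mask was merged into the color layer, pre-darkening it...
merged = round(glass * alpha)
# ...and then the saved alpha channel is applied again at render time
double_applied = alpha_blend(merged, scene, alpha)

print(correct, double_applied)   # the merged version comes out too dark
```

With these numbers the untouched texture blends to 120, while the pre-darkened one blends to only 70, which is why the rendered glass would look much dirtier and darker than intended.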
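For reference, the uncompressed 32-bit TGA layout mentioned above is simple enough to write by hand. The following is a minimal sketch under the published TGA specification (`write_tga_rgba` is a name invented here, not a library API), storing each pixel as BGRA with an 8-bit alpha channel:

```python
import struct

def write_tga_rgba(path, width, height, pixels):
    """Write an uncompressed 32-bit true-color TGA with an alpha channel.
    pixels is a list of (r, g, b, a) tuples, one per pixel, top-left first."""
    header = struct.pack(
        "<BBBHHBHHHHBB",
        0,              # no image ID field
        0,              # no color map
        2,              # image type 2: uncompressed true-color
        0, 0, 0,        # color map specification (unused)
        0, 0,           # x / y origin
        width, height,
        32,             # 32 bits per pixel (BGRA)
        0x28,           # descriptor: 8 alpha bits, top-left origin
    )
    with open(path, "wb") as f:
        f.write(header)
        for r, g, b, a in pixels:
            f.write(struct.pack("<BBBB", b, g, r, a))  # TGA stores BGRA
```

A loader that honors TGA alpha, such as the D3DX texture-loading functions, will then expose that fourth channel to the device for blending.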