Lecture Contents (Ninth) — Today's Lecture (November 12) — Preparation: Reading Assignment — Homework — Buffers and Mapping


Lecture Contents (Ninth) • Today's Lecture (November 12)

– Buffers and mapping
– Texture mapping
– Environment maps

• Preparation : Reading Assignment
– Bump maps
– Writes into buffers
– Bit and pixel operations in OpenGL

• Homework
– Report #2 (see the course homepage)
– Report #3 planned

Discrete Techniques

• Until recently, application programmers had no functions in the API that allow them to read/write individual pixels.

• Methods that act directly on the frame buffer or other discrete buffers:
– Texture mapping
– Compositing
– Alpha blending
– Antialiasing

9.1 Buffers and Mapping

Figure 10.1 A buffer and its bitplanes

Buffers • Buffer : inherently discrete.

- The frame buffer and the depth buffer have already been introduced.

- A block of memory with a spatial resolution of

n × m, k-bit elements

- Frame buffer

n, m match the resolution of the screen

k is determined by how many colors the system can display

- Depth buffer

k is determined by the depth resolution

• Bitplane

- Any of the k, n × m planes in a buffer

• Pixel

- All k of the elements at a particular spatial location

- A pixel can be a byte, an integer, or even a floating-point number
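As a rough illustration only (a hypothetical C sketch, not part of the lecture), such a buffer can be laid out and accessed like this, here assuming k = 32 bits per pixel (packed 8-bit RGBA):

#include <stdint.h>

/* Hypothetical sketch: an n x m buffer with k = 32 bits per pixel.
   Each of the k bitplanes is one bit position taken across all n x m pixels. */
typedef struct {
    int n, m;          /* spatial resolution (width, height) */
    uint32_t *pixels;  /* n * m elements, k = 32 bits each (e.g. packed RGBA) */
} Buffer;

static uint32_t read_pixel(const Buffer *b, int x, int y) {
    return b->pixels[y * b->n + x];    /* row-major: row y, column x */
}

static void write_pixel(Buffer *b, int x, int y, uint32_t value) {
    b->pixels[y * b->n + x] = value;
}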

Terminology

• Surface Rendering : The process of modeling an object by a set of geometric primitives and then rendering these primitives has its limitations.

• The rendering process itself has limitations.

Limitations of Rendering

(1) Start with a sphere (sphere.exe). We can build an approximate sphere out of triangles, and can render these triangles with material properties that match those of a real orange. (Chapter 6)

(2) Try to model the orange with some sort of parametric surface and render the surface by subdivision. Although it might have the correct overall properties, such as shape and color, it would lack the fine surface detail of a real orange. (Chapter 10)

(3) Use the procedural methods of Chapter 11 (modeling), at the cost of generating a model with more polygons than we would like to render.

The Virtual Orange Example

• Alternatives

- Build a simple model and add detail as part of the rendering process.

- As the implementation renders a surface, be it a polygon or a curved surface, it breaks the surface up into small pieces called fragments.

- Each fragment, when projected, is at most the size of one pixel.

• Mapping Algorithm

- As part of the rendering process, we must assign a shade or color to each fragment.

- Either modify the shading algorithm based on the map (a two-dimensional array)

- Or modify the shading by using the map to alter surface parameters, such as material properties and normals.

• Mapping : 3 major approaches

(1) Texture mapping

(2) Bump mapping

(3) Environment mapping

Mappings

• Texture Mapping

- Use a pattern (or texture) to determine the color of a fragment.

- How to get the pattern

(1) A fixed pattern (often used to fill polygons)

(2) By a procedural texture-generation method

(3) Through a digitized image

- The image produced by a mapping of a texture to the surface as part of the rendering of the surface; see Figure 10.2.

Texture Mapping

Figure 10.2 Texture Mapping

Texture Mapping

(Figures: a texture image applied by flat mapping and by cylindrical/spherical mapping)

• 2D texture image => 3D object : the mapping method

• Bump mapping

- Distorts the apparent shape of the surface to create variations such as the bumps on a real orange

Bump and Environment Mapping

(Figures: a bump map; bump mapping and chrome/reflection mapping examples)

• Reflection or Environment maps

- Allow us to create images with the appearance of ray-traced images without having to trace reflected rays.

- An image of the environment is painted onto the surface as the surface is being rendered.

Bump and Environment Mapping

(Figures: reflection map; refraction mapping and environment mapping; a viewing ray from the eye intersecting a mipmapped cube environment map, with the intersection point marked and the six cube faces numbered 1-6. From Advanced Animation and Rendering Techniques, Alan Watt and Mark Watt.)

• 3 methods in common

- All three alter the shading of individual pixels and are implemented as part of the shading process

- All rely on the map being stored as a one-, two-, or three-dimensional digital image

- All are also subject to aliasing errors

Common Features

9.2 Texture Mapping

• Textures are patterns : from regular patterns, such as stripes and checkerboards, to the complex patterns that characterize natural materials.

• We can distinguish among objects of similar size and shape by their textures. Thus, we can extend our present capabilities by placing or mapping a texture to the objects.

• We shall consider only two-dimensional textures.

• Examples
– Color Plate 6, Color Plate 23 : the surface of the table, etc.

• With hardware texture mapping, detail can be added without degrading the rendering time.

9.2.1 Two-Dimensional Texture mapping

Figure 10.3 Texture maps for a parametric surface

• A sequence of steps involved in texture mapping

(1) Associate a point of T with each point on a geometric object.

(2) Each point on a geometric object is mapped to screen coordinates for display.

• Texture Pattern

- Two dimensional texture pattern T(s,t),

- s and t are texture coordinates, 0 ≤ s,t ≤ 1

- It will be stored in texture memory as an n × m array of texture elements, texels.

- Scale our texture coordinates to vary over the interval (0,1).

- One mapping function maps from texture coordinates (s,t) to geometric coordinates (x,y,z,w).

- Another maps from geometric coordinates (x,y,z,w) to screen coordinates (x_s, y_s).

- If we define the geometric object using parametric (u,v) surfaces, there is an additional mapping function that maps from parametric coordinates (u,v) to geometric coordinates (x,y,z,w). (See Figure 10.3.)

Texture map

• Texture mapping process is simple.

- A small area of the texture pattern maps to the area of the geometric surface, corresponding to a pixel in the final image.

- If we assume that the values of T are RGB color values, we use these values to modify the color of the surface. ( part of the shading calculation)

Texture Mapping Process

(1) The texture is a two-dimensional rectangle. Mapping this rectangle into three-dimensional space introduces distortion (of shape and of distances), for example when mapping onto a sphere.

(2) Since the rendering process works pixel by pixel, when deciding the color of a pixel we must know which point of the texture image will be used. Therefore, an inverse map from screen coordinates to texture coordinates is often needed.

(3) Because the mapping is between areas rather than between points, aliasing problems naturally arise. (See Figure 9.4.)

Difficulties

Figure 9.4 Preimages of a pixel

Suppose that we are computing a color for a square pixel centered at screen coordinates (x_s, y_s). The center (x_s, y_s) corresponds to a point (x,y,z) in object space, but, if the object is curved, the projection of the corners of the pixel backward into object space yields a curved preimage.

Figure 9.4 Preimages of a pixel

In terms of the texture image T(s,t), projecting the pixel back yields a preimage in texture space: the area of the texture that should contribute to the shading of the pixel.

Figure 9.5 Aliasing in texture generation

• Serious aliasing problem
- In the case of Figure 9.5, instead of the stripes, only a uniformly bright area appears.

• One simple method is to use the point that we get by back-projecting the pixel center to find the texture value.

(1) A simple method

• A better strategy is to assign a texture value based on averaging of the texture map over the preimage.

• This method is imperfect.

-For the example in Figure 9.5, we would assign an average shade, but we would still not get the striped pattern of the texture.

- We still have aliasing defects due to the limited resolution of both the frame buffer and the texture map.

(2) A better, but still not good enough, method
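A rough sketch of this averaging idea (hypothetical C code; backproject() and tex_lookup() are assumed helpers, not functions from the lecture): the pixel footprint is supersampled and each sample is back-projected into texture space before the texture values are averaged.

typedef struct { float r, g, b; } Color;

/* Assumed helpers: back-project a screen point to texture coordinates,
   and look up the texture there. */
extern void  backproject(float xs, float ys, float *s, float *t);
extern Color tex_lookup(float s, float t);

/* Approximate the texture average over a pixel's preimage by supersampling. */
Color average_preimage(float xs, float ys)
{
    const int N = 4;                          /* N x N samples per pixel */
    Color sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            float px = xs + (i + 0.5f) / N - 0.5f;   /* sample inside the pixel */
            float py = ys + (j + 0.5f) / N - 0.5f;
            float s, t;
            backproject(px, py, &s, &t);             /* screen -> texture space */
            Color c = tex_lookup(s, t);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
    }
    sum.r /= N * N; sum.g /= N * N; sum.b /= N * N;
    return sum;
}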

u = u_min + (s - s_min)(u_max - u_min) / (s_max - s_min)
v = v_min + (t - t_min)(v_max - v_min) / (t_max - t_min)

• Easy to implement, but it cannot take the curvature of the object into account.

(1) Linear texture mapping
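Written as code, the linear mapping above is just two scaled offsets (a hypothetical helper; the parameter names follow the equations, not any API):

void linear_texture_map(float s, float t,
                        float s_min, float s_max, float t_min, float t_max,
                        float u_min, float u_max, float v_min, float v_max,
                        float *u, float *v)
{
    /* scale (s,t) in [s_min,s_max] x [t_min,t_max] into [u_min,u_max] x [v_min,v_max] */
    *u = u_min + (s - s_min) * (u_max - u_min) / (s_max - s_min);
    *v = v_min + (t - t_min) * (v_max - v_min) / (t_max - t_min);
}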

Another approach using a two-part mapping

1st step : Maps the texture to a simple three-dimensional intermediate surface such as a sphere, cylinder, or cube.

2nd step : The intermediate surface containing the mapped texture is then mapped to the surface being rendered.

This method can be applied to surfaces defined in either geometric or parametric coordinates.

(2) The two-step approach

(2.1) The 1st Step

A cylinder of height h and radius r, with texture coordinates 0 ≤ s, t ≤ 1.

Figure 9.7 Texture mapping with a cylinder

Points on the cylinder are given by the parametric equations, as u and v vary over (0,1):

x = r cos(2πu), y = r sin(2πu), z = v/h

We can use the mapping

s = u, t = v
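A small sketch of this first step in C (hypothetical helper, not OpenGL): given texture coordinates (s, t), it returns the corresponding point on the intermediate cylinder using the parametric equations above.

#include <math.h>

/* Hypothetical sketch: map (s,t) onto a cylinder of radius r and height h
   using s = u, t = v and the parametric equations above. */
void cylinder_point(float s, float t, float r, float h,
                    float *x, float *y, float *z)
{
    const float TWO_PI = 6.28318531f;
    float u = s, v = t;              /* s = u, t = v */
    *x = r * cosf(TWO_PI * u);
    *y = r * sinf(TWO_PI * u);
    *z = v / h;
}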

• If we use a sphere of radius r as the intermediate surface, a possible mapping is

x = r cos(2πu)
y = r sin(2πu) cos(2πv)
z = r sin(2πu) sin(2πv)

Texture mapping with sphere

• Excluding the bottom and top of the cylinder, the mapping introduces no shape distortion; mapping onto a closed surface, however, inevitably introduces distortion. Example: the world map (Mercator projection).

Figure 9.8 Texture mapping with a box

We map the texture to a box that can be unraveled, like a cardboard packing box.

(2.2) The 2nd Step

• Map the texture values on the intermediate object to the desired surface (object).

• Figure 9.9 shows three possible mapping strategies.

(a) Using the normal from the intermediate surface

(b) Using the normal from the object surface

(c) Using the center of the object.

Figure 9.9 Second-step mapping onto the object
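As one illustration, a hedged sketch of strategy (c) (hypothetical code, simple Vec3 struct assumed): a surface point p is pushed out from the object's center onto an intermediate sphere of radius R, where the stored texture value would then be looked up.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Hypothetical sketch of strategy (c): project p from the object center c
   onto the intermediate sphere (center c, radius R). */
Vec3 to_intermediate_sphere(Vec3 p, Vec3 c, float R)
{
    Vec3 d = { p.x - c.x, p.y - c.y, p.z - c.z };
    float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
    Vec3 q = { c.x + R * d.x / len,
               c.y + R * d.y / len,
               c.z + R * d.z / len };
    return q;   /* point on the intermediate sphere whose texture value is used for p */
}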

9.2.2 Texture Mapping in OpenGL

• One- and two-dimensional textures can be mapped to graphical objects of one, two, or three dimensions.
– The recent OpenGL 1.5 also supports three-dimensional textures.
– This discussion is limited to applying two-dimensional textures to surfaces.

• There is a pixel pipeline that runs in parallel with the geometric pipeline. (Figure 9.10)
– Texture mapping is performed when the primitives are rasterized. Texture mapping can therefore be thought of as part of the shading process.

9.2.2 Texture Mapping in OpenGL

Figure 9.10 The pixel pipeline and the geometric pipeline

Usage Example

• Declare the texel array:
GLubyte my_texels[512][512];

• Declare that the two-dimensional array is to be used as a texture:
glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);

• Enable texture mapping:
glEnable(GL_TEXTURE_2D);

Texture Mapping in OpenGL

glTexImage2D(GL_TEXTURE_2D, level, components, width, height, border, format, type, image);

– The texture pattern is stored in the width × height array image.
– The value components is the number (1 to 4) of color components (RGBA).
– The parameters level and border give us fine control over how the texture is handled; multiple texture maps (multiple resolutions).
– glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);
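A minimal consolidated setup sketch (legacy fixed-function OpenGL, assuming a current GL context and that the texel array holds RGB data, i.e. three bytes per texel rather than the one-byte declaration above); it simply combines the calls from this section with the wrap and filter parameters discussed below:

#include <GL/gl.h>

GLubyte my_texels[512][512][3];   /* assumed RGB texel array, filled elsewhere */

void init_texture(void)
{
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, my_texels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glEnable(GL_TEXTURE_2D);
}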


How is the texture mapped to geometric objects?

• Assign texture coordinates to each vertex, just as we assign colors and normal vectors.
– Texture coordinates are then mapped to fragments by interpolation.
– The function glTexCoord2f(s, t); is used.

Figure 9.11 Mapping to texture coordinates

(0.0, 0.0) corresponds to my_texels[0][0], and (1.0, 1.0) corresponds to my_texels[511][511].

glBegin(GL_QUADS);
  glTexCoord2f(0.0, 0.0); glVertex3f(x1, y1, z1);
  glTexCoord2f(1.0, 0.0); glVertex3f(x2, y2, z2);
  glTexCoord2f(1.0, 1.0); glVertex3f(x3, y3, z3);
  glTexCoord2f(0.0, 1.0); glVertex3f(x4, y4, z4);
glEnd();

If we want to apply the texture to a quadrilateral, code like the above is used.

Figure 9.12 Mapping a checkerboard texture to a quadrilateral

(a) The entire texel array is used and mapped to the quadrilateral

(b) Only part of the texel array is used: only part of the s, t range, such as (0.0, 0.5), is used

Figure 9.13 Mapping a texture to polygons: (a), (b) mapping to a triangle, (c) mapping to a trapezoid

• In the quadrilateral example, there is an obvious mapping from texture coordinates to the vertices.

• For a general polygon, however, the result depends on how the application programmer assigns the texture coordinates.

Texture Mapping
• Texture mapping is very simple:

(1) specify an array for the texture values, and
(2) then assign texture coordinates.

Texture

Somewhat Tedious Details

(1) There are many formats for color images.
There are many ways to store the bit pattern for each color; choosing among them is a matter of balancing image quality against efficiency (Section 9.5).

(2) What should be done when an s or t value outside the range (0.0, 1.0) is specified?
- GL_REPEAT for a repeating texture
- GL_CLAMP for clamping (cutting off)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
The same applies to GL_TEXTURE_WRAP_T.

GL_CLAMP & GL_REPEAT

Somewhat Tedious Details

(3) Aliasing
• The first case is when a texel is larger than one pixel (texture magnification):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

• The second case is when a texel is smaller than one pixel (texture minification):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

• Types of filters
– GL_NEAREST : the texture color is taken from the point in texture space corresponding to the pixel center.
– GL_LINEAR : the four texel centers surrounding the point corresponding to the pixel center are found, and a weighted average is computed by linear interpolation.

Somewhat Tedious Details

Mipmapping : OpenGL handles the minification problem in yet another way.
1) Using a GLU function:
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, 64, 64, GL_RGB, GL_UNSIGNED_BYTE, my_texels);
2) Setting it through a GL function:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);

Figure 9.14 Mapping texels to pixels: (a) magnification, (b) minification

9.2.3 Texture Generation

• Obtaining digital images by scanning photographs

-Provide detail without generating numerous geometric objects

-If we want to simulate grass in a scene, we can texture map an image of grass (obtained by scanning a photograph) faster than we can generate two- or three-dimensional objects that look like grass.

-Rather than generating realistic surface detail for terrain, we can digitize a real map, and can paint it on a three-dimensional surface model by texture mapping.

9.2.3 Texture Generation

• Procedural methods for determining texture patterns

-Such as the texture of sand, grass, or minerals

-These textures show both structure (regular pattern) and randomness.

-An ideal random number generator produces a sequence of statistically uncorrelated values (white noise).

- A filter correlates successive noise values, and by controlling the filter we can simulate various patterns.
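A minimal sketch of this idea (hypothetical C code, one-dimensional for brevity): rand() supplies uncorrelated white noise, and a small box filter correlates neighbouring values to give a smoother pattern.

#include <stdlib.h>

#define N 256
float noise_tex[N], smooth_tex[N];

void make_filtered_noise(void)
{
    for (int i = 0; i < N; i++)
        noise_tex[i] = (float)rand() / (float)RAND_MAX;    /* white noise */

    for (int i = 0; i < N; i++) {                          /* 3-tap box filter */
        int l = (i + N - 1) % N, r = (i + 1) % N;
        smooth_tex[i] = (noise_tex[l] + noise_tex[i] + noise_tex[r]) / 3.0f;
    }
}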

3D Texture
• Three-dimensional texture
• By associating each (s,t,r) value directly with an (x,y,z) point, we can avoid the mapping problem entirely.

• Conceptually, this process is similar to sculpting the three-dimensional object from a solid block whose volume is colored by the specified texture.

9.3 Environmental Maps

• Consider a shiny metal ball in the middle of a room. We can see the contents of the room, in a distorted form, on the surface of the ball.

- Ray tracing : too time-consuming to be practical.

- Environmental or reflection mapping : we can extend our mapping techniques to obtain an image that approximates the desired reflection, by extending the texture map to an environmental map.

9.3 Environmental Maps

• Two-step procedure for mapping environments to the surface of the objects:

(1) Obtain an image of the environment on an intermediate projection surface. The center of projection is located at the center of the reflective object, but the projections are computed with the object removed from the scene. Refer to Figure 10.11.

(2) Treat the environmental map as a texture map. Place the object back in the scene, and transfer the texture map to its surface. Figure 10.12

Figure 10.11 Mapping of the environment

9.3 Environmental Maps

Figure 10.12 Mapping from the intermediate surface
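As one concrete illustration of step (2) (a hedged sketch, not from the lecture): for a sphere map, the classic lookup below turns a unit reflection vector R at a fragment into (s, t) coordinates into the precomputed environment image. A cube map would use a different lookup; this is only meant to show how the stored image is indexed during rendering.

#include <math.h>

/* Hypothetical sketch: classic sphere-map lookup from a unit reflection
   vector (rx, ry, rz) to environment-map texture coordinates (s, t). */
void sphere_map_coords(float rx, float ry, float rz, float *s, float *t)
{
    float m = 2.0f * sqrtf(rx * rx + ry * ry + (rz + 1.0f) * (rz + 1.0f));
    *s = rx / m + 0.5f;
    *t = ry / m + 0.5f;
}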

Ray Tracing (1/4)

– Suggested by Appel in 1968 : this process is called ray casting.
• A method that traces the rays entering the viewer's eye and displays each ray's color on the screen.
• Ray direction : from the eye toward the light source.
• Tracing and simulating each ray consumes a great deal of time.
• Emphasizes specular reflection; limited in representing diffuse reflection.

(Figure: eye, ray, screen, reflection, refraction)

Ray Tracing(2/4)• Reflection/Refraction

Ray Tracing(3/4)• Backward Ray Tracing

Ray Tracing (4/4)
• DOF, Motion Blur (Distributed Ray Tracing)
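As a rough illustration of backward (eye) ray tracing (a hypothetical sketch only; intersect, shade_local, reflect_ray, refract_ray, and add are assumed helpers, not the lecture's code): a ray from the eye is followed into the scene, and reflected/refracted rays are traced recursively up to a fixed depth.

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 origin, dir; } Ray;
typedef struct { int hit; int is_reflective, is_transparent; Vec3 point, normal; } Hit;

/* assumed helpers, declared but not defined here */
Hit  intersect(Ray r);
Vec3 shade_local(Hit h);
Ray  reflect_ray(Ray r, Hit h);
Ray  refract_ray(Ray r, Hit h);
Vec3 add(Vec3 a, Vec3 b);

Vec3 trace(Ray r, int depth)
{
    Vec3 color = {0.0f, 0.0f, 0.0f};
    Hit h = intersect(r);
    if (!h.hit || depth == 0)
        return color;                         /* background / recursion limit */
    color = shade_local(h);                   /* direct (local) illumination */
    if (h.is_reflective)
        color = add(color, trace(reflect_ray(r, h), depth - 1));  /* reflection */
    if (h.is_transparent)
        color = add(color, trace(refract_ray(r, h), depth - 1));  /* refraction */
    return color;
}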

9.4 Bump Maps

• With an orange obtained by texture mapping, if we move the light source or move the object, we will immediately notice that it is not a real orange.

• The technique of bump mapping varies the apparent shape of the surface by perturbing the normal vectors as the surface is rendered; the colors that are generated by shading then show a variation in the surface properties.

9.4 Bump Maps

• Let p(u,v) be a point on a parametric surface. The unit normal is:

n = (p_u × p_v) / |p_u × p_v|

where

p_u = [∂x/∂u, ∂y/∂u, ∂z/∂u]^T,  p_v = [∂x/∂v, ∂y/∂v, ∂z/∂v]^T

• Suppose that we displace the surface in the normal direction by a function d(u,v), called the bump function, which is assumed known and small:

p'(u,v) = p(u,v) + d(u,v) n

• We alter the normal n, instead of p. The normal at the perturbed point p' is given by the cross product:

n' = p'_u × p'_v

• We can compute the two partial derivatives by differentiating the equation for p', obtaining

p'_u = p_u + (∂d/∂u) n + d(u,v) n_u
p'_v = p_v + (∂d/∂v) n + d(u,v) n_v

• If d is small, we can neglect the rightmost terms of these equations to obtain the perturbed normal:

n' ≈ n + (∂d/∂u) n × p_v + (∂d/∂v) n × p_u

• To apply the bump map, we need two arrays that contain the values ∂d/∂u and ∂d/∂v. These arrays can be precomputed.

• The normal can then be perturbed during the shading process.
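A minimal sketch of the precomputation (hypothetical C code): the bump function d(u,v) is assumed to be sampled on a W × H grid, and forward differences stand in for ∂d/∂u and ∂d/∂v in the perturbation formula above.

#define W 256
#define H 256
float d_map[H][W];                 /* sampled bump function d(u,v) */
float d_u[H][W], d_v[H][W];        /* precomputed partial derivatives */

void precompute_bump_derivatives(void)
{
    for (int j = 0; j < H; j++) {
        for (int i = 0; i < W; i++) {
            int i1 = (i + 1) % W, j1 = (j + 1) % H;
            d_u[j][i] = d_map[j][i1] - d_map[j][i];   /* forward difference in u */
            d_v[j][i] = d_map[j1][i] - d_map[j][i];   /* forward difference in v */
        }
    }
    /* During shading, the normal would be perturbed as
       n' ≈ n + d_u * (n x p_v) + d_v * (n x p_u). */
}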
