Guide to Porting an OpenGL* ES 2.0 Application from iOS* to Windows* 8
June 19
2013
iOS* continues to be a popular platform for application developers. Many iOS applications utilize OpenGL* ES. With the wide availability of Windows* 8, what does it take to move an iOS application utilizing OpenGL ES 2.0 to a Windows 8 platform? This document walks through a simple OpenGL ES 2.0 application and discusses the ins and outs of moving from one operating system to another.
Documentation for developers moving OpenGL* ES applications from iOS* to Windows* 8
The Demonstration Application
Demonstrated OpenGL ES concepts
Development Environments
OpenGL ES
Initializing OpenGL window and Context
Viewport handling
Working with the programmable pipeline
Geometry definition, vertex specification, and textures
The draw call
Touch Input
Vertex, Fragment Shaders and Scene Lighting
Closing
iOS continues to be a popular platform for application developers, and many iOS applications utilize OpenGL ES to handle their 3-D graphics chores. OpenGL ES, or OpenGL for Embedded Systems, is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones. OpenGL is also available on Windows 8. But just how easy is it to move an OpenGL ES application, written in Objective-C* for the iOS platform, to the ever-popular Windows 8, where the dominant language for native applications is C#?
This document walks through a simple OpenGL ES 2.0 application and discusses the ins and outs of porting an app running on iOS to the Windows 8 desktop.
The Demonstration Application
To show the basic structure and components a typical OpenGL ES application utilizes, we provide a simple application that uses OpenGL ES 2.0 to draw a three-dimensional cube with a texture image mapped onto each surface. The demo app functionality also incorporates a simple single source lighting model for the cube.
Figure 1: iOS* version of simple OpenGL* ES application
Users can manipulate the cube using common gestures:
- Pinch to zoom in on the cube
- Stretch to zoom out of the cube
- Use a single finger (or mouse) to manipulate the cube using a virtual track ball.1
We will use this application to highlight the differences between working with OpenGL ES 2.0 on iOS vs. Windows 8.
Demonstrated OpenGL ES concepts
The application demonstrates the following OpenGL features:
- WPF and OpenGL interoperability (Windows 8 version)
  o Creating an OpenGL context inside a WPF application
  o Rendering to the WPF-provided window surface
  o How to manage the OpenGL viewport and projection matrix inside WPF
- Geometry definition and vertex specification
  o How to prepare vertex data for rendering, including vertices, surface normals, and texture coordinates
  o Setting up vertex parameters for rendering
- Working with the programmable pipeline
  o Compiling and linking shader source into a shader program
  o Setting up shader input attributes and uniforms for rendering
  o Working with textures
- Basic ambient and diffuse components from the ADS light model
- Supporting touch manipulation
  o Pinch object scaling
  o Manipulating the object’s rotation using an arc ball implementation
Development Environments
The iOS application described in this document was developed using the standard iOS development environment from Apple, Xcode*. The application was entirely written in Objective-C and uses the native iOS OpenGL implementation and supporting frameworks included with the iOS SDK.
Our Windows 8 development was done using Visual Studio* Express 2012 for desktop apps. The application was written in C# using Windows Presentation Foundation and the OpenTK library (http://www.opentk.com/). The OpenTK toolkit wraps the OpenGL, OpenCL*2, and OpenAL APIs for the C# language, thus providing a convenient way to use them from .NET applications, including those written with WPF.
1 A virtual hemisphere, centered in the middle of the screen. See http://www.opengl.org/wiki/Trackball
2 OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
Our OpenGL ES application consists of the following basic steps:
1. Context and window initialization
2. Viewport setup
3. Setting up vertex and fragment shaders
4. Creating geometry buffers
5. Draw call
Initializing OpenGL window and Context
iOS
The portion responsible for window handling in iOS is provided by the GLKit framework. The view class responsible for window presentation is GLKView, and its backend operations are implemented by extending GLKViewController.
Following a typical iOS Model-View-Controller pattern, an instance of GLKView is defined on a single window inside a storyboard and supported by the ViewController class, which extends GLKViewController. Our controller, therefore, must implement the following:
After the runtime finishes loading the view, the viewDidLoad function is called, which we re-implemented as part of our ViewController. Here the OpenGL context is initialized by instantiating an EAGLContext object and then selecting the desired OpenGL API, in this case kEAGLRenderingAPIOpenGLES2 for OpenGL ES 2.0. We also created an additional convenience function, setupGL, which is not part of the original GLKViewController API and contains the scene initialization code.
Finally, in iOS it is important to explicitly set the desired frame rate for the application. By default the window redraw is triggered by an event, but setting this property will force a redraw with a constant frame rate.
Windows 8
To properly initialize an OpenGL context on Windows 8, which we can then use for rendering inside a WPF window control, we need to register a Window_Loaded event handler and use it to get the window’s handle.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    //Wrap the WPF window's Win32 handle for OpenTK (handler signature and field names assumed)
    IntPtr handle = new System.Windows.Interop.WindowInteropHelper(this).Handle;
    m_windowInfo = OpenTK.Platform.Utilities.CreateWindowsWindowInfo(handle);
    m_context = new OpenTK.Graphics.GraphicsContext(OpenTK.Graphics.GraphicsMode.Default, m_windowInfo);
    m_context.MakeCurrent(m_windowInfo);
    //Load all OpenGL entry points
    (m_context as OpenTK.Graphics.IGraphicsContextInternal).LoadAll();
    ...
}
Since we don’t want to use DirectX* for WPF hardware-accelerated rendering, we need to turn it off; after all, we want to use OpenGL to render the application’s contents.
Next, we create a Window Information object and use that to create a GraphicsContext instance. The last thing we need to do is make sure to load all OpenGL entry points; otherwise, we will not be able to call any of the newer OpenGL functions, for example, to create shader programs.
Application porting guidelines
While the initialization portions of the iOS and WPF applications use platform-dependent functionality, the steps themselves are very similar. Both Window_Loaded and viewDidLoad are event handlers called when the respective window system finishes loading the window we will use for our OpenGL scene. Within the event handlers, an OpenGL context is created and stored for later use. Note that under Windows 8 the OpenTK library provides convenience functions comparable to those that are native to iOS.
Viewport handling
iOS
Our iOS example application operates in full-screen mode; therefore, the viewport is only resized by a change in device orientation. This is handled nicely by the iOS framework—GLKView, derived from UIView, sets the new OpenGL viewport automatically. However, developers can still implement functions such as didRotateFromInterfaceOrientation if it is necessary to adjust any internal application states, such as calculations related to perspective.
Windows 8
Unlike on a device running iOS, it is possible for the user to change the window’s size, so we need to handle this accordingly; otherwise, our scene will just get clipped.
After calling the base implementation we first have our arcball helper-class adjust its size. We then check to see if we have a valid OpenGL context and use the window’s new size to produce a bounding rectangle.
When looking at the call to set the new viewport, notice that, instead of having the familiar glFunctionName format, all OpenGL calls made through OpenTK take the form GL.FunctionName. Finally, we update our scene’s projection with the help of an OpenTK convenience function, provided by OpenTK’s 4×4 matrix implementation, that calculates a view frustum:
//A helper used to update the scene's projection matrix
private void UpdateProjection()
{
    //Create a new projection matrix using the window's size (field names assumed)
    float aspect = (float)m_window.Width / (float)m_window.Height;
    m_projection = OpenTK.Matrix4.CreatePerspectiveFieldOfView(OpenTK.MathHelper.PiOver4, aspect, 0.1f, 100.0f);
}
Application porting guidelines
Because it is possible to resize windows on a Windows-based platform, you must account for changes in the viewport and projection.
Working with the programmable pipeline
To simplify things a bit, all shader handling code was moved into a helper class called ShaderProgram. This means setting up a shader program is as simple as calling the class’s constructor, which takes care of abstracting all the OpenGL calls necessary to compile and link a shader program.
iOS
The iOS version of the code responsible for preparing the shader program is embedded inside the init method, which is the overloaded constructor for the ShaderProgram class.
In this implementation the object is bound to a specific shader source and exposes the input parameters used in the program. The vertex and fragment shader sources are compiled from the Shader.vsh and Shader.fsh files attached to the project. Before the program is linked, we assign predefined attribute indexes for convenience. The GLKit framework provides the handy predefined constants GLKVertexAttribPosition, GLKVertexAttribNormal, and GLKVertexAttribTexCoord0. Once the program is successfully linked, we extract all uniform locations and expose them via two structures: ShaderMaterialParam and ShaderSpaceTransformationsParam.
Windows 8
Setting up shaders is done by calling the class constructor, passing the shader’s source code.
public ShaderProgram(string vertexSource, string fragmentSource)
{
//A variable used to store shader program state check results
The first order of business is to create our two shader stages by calling GL.CreateShader with ShaderType.VertexShader and ShaderType.FragmentShader, respectively. With the two empty shader objects created, it is now time to upload their source code using a GL.ShaderSource call and compile them using GL.CompileShader. We also make sure to retrieve the shaders’ compilation state for debugging purposes.
Once the shader stages are compiled and in place, we can create a shader program object using a GL.CreateProgram call, attach the shader stages using GL.AttachShader, and call GL.LinkProgram to link the program. If there are no shader compile or link errors, we now have ourselves a shiny new shader program instance that we can use to render our scene.
Application porting guidelines
The procedure is generally the same for both versions. The differences lie in resource loading and in the fact that OpenTK wraps the OpenGL C API in C# calls.
Geometry definition, vertex specification, and textures
iOS
The geometry in the example is a cube represented by the Cube object. Internally, the data is stored in the vertex buffer object as a one-dimensional array in the following format:
To speed up the rendering, we store the whole vertex definition state in a Vertex Array Object (VAO). This greatly improves rendering performance by eliminating the need for multiple binding calls for every element switch. It also reduces the vertex setup during the render call to just selecting the proper VAO, which translates to a single function call on the CPU side.
1. We create a vertex array object and bind it; this causes all of the following calls to be saved as a state in the VAO.
2. We create a vertex buffer object and bind it to the context as active; all subsequent buffer-related operations are then performed on the currently bound buffer.
3. The vertex data is transferred from main memory into graphics RAM using the glBufferData function.
4. We enable and set the vertex parameters, which are “unique per single shader invocation” chunks of data. For example, we enable GLKVertexAttribPosition by calling glEnableVertexAttribArray and point it at the data using glVertexAttribPointer, whose parameters are:
- How many values of the given type are passed in this single chunk
- The type of the value
- Whether fixed-point data values should be normalized
- Stride: the spacing between consecutive vertices in the data array. In our case the data is tightly packed, meaning each vertex position is followed by its normal and texture coordinates, which must be skipped over. By specifying the gap between consecutive vertex parameters, the API can take this into account when parsing the data.
- The last parameter is the offset from the beginning of the buffer to the location of the first parameter element. In the case of vertex data, the offset is 0 because the position is the first parameter. If we were passing in the normals, we would need to jump (3 * sizeof(GLfloat)) bytes from the beginning, skipping the vertex data.
5. After the vertex data is properly set up, we unbind the VAO. From now on it can be reused when needed to restore the vertex state.
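For the interleaved layout described above (three position floats, then three normal floats, then two texture-coordinate floats per vertex), the stride and offsets can be derived mechanically from a struct. A small C sketch, assuming that layout (names our own):

```c
#include <stddef.h>

/* One interleaved vertex: position, normal, texture coordinate,
   tightly packed as described in the text. */
typedef struct {
    float position[3];
    float normal[3];
    float texCoord[2];
} Vertex;

/* Values to hand to glVertexAttribPointer's stride/offset parameters. */
enum {
    kStride       = sizeof(Vertex),            /* bytes between consecutive vertices */
    kNormalOffset = offsetof(Vertex, normal),  /* skip the 3 position floats */
    kTexOffset    = offsetof(Vertex, texCoord) /* skip position + normal */
};
```

Deriving the numbers this way keeps the attribute setup in sync if the vertex layout ever changes.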
The textures themselves are one of the biggest and most complex topics in OpenGL, but iOS provides a set of classes that abstract most of the required steps, such as loading the image to main memory or uploading it to the graphics memory. In our implementation we load the texture from a .png file and get the instance of the GLKTextureInfo class. This reduces the task to a single line of code.
NSLog(@"Problem with loading texture: %@", [err localizedDescription]);
return nil;
}
return self;
}
The texture is now ready to bind to a texture unit during rendering. We will cover that topic in the rendering call section.
Windows 8
Geometry management is just plain old OpenGL code. To keep things simple, we work with regular vertex, normal, texture coordinate, and index buffers.
First, we create new buffer objects using the GL.GenBuffers call and go through all the buffers, binding them and setting their data using GL.BindBuffer and GL.BufferData. Finally, we simply clear any buffer bindings.
We also use a vertex array object to make the draw simpler by reducing repetitive GL function calls. Here is the code we use to generate and set up a vertex array object.
//The object's vao creation method
private void makeVao()
{
if (m_vao == 0)
{
GL.GenVertexArrays(1, out m_vao);
if (m_vao != 0)
{
GL.BindVertexArray(m_vao);
//Get the vbo attribute locations
int normalAttrLocation = m_program.GetAttributeLocation(m_program.NormalAttrName);
int vertexAttrLocation = m_program.GetAttributeLocation(m_program.VertexAttrName);
int textureCoordAttrLocation = m_program.GetAttributeLocation(m_program.TexCoordsAttrName);
We first create a new vertex array object by calling GL.GenVertexArrays and then bind the new vertex array object using the GL.BindVertexArray function, proceeding to set up the rendering pipeline as we would without a vertex array object.
We then retrieve the shader program’s attribute locations using the wrapper’s GetAttributeLocation method. Once known, we can bind the respective buffers, enabling their attribute locations using a GL.EnableVertexAttribArray function call and setting that attribute’s data using a GL.VertexAttribPointer function call. When complete, we have a vertex array object we can use to restore our object’s rendering state with just a single OpenGL function call.
The material definition, including textures, for the Windows 8 version is encapsulated in the material class. Unlike the iOS version, the texture is loaded using the wrappers of the native OpenGL function calls, except for the Bitmap and BitmapData classes, which abstract loading the image from the hard drive into main memory.
private static int LoadTexture(string filename)
{
//A variable used to store the texture's id
int id = -1;
//Check if we have a file name
if (filename != "")
{
//Generate a new texture
id = GL.GenTexture();
//Check if we got a valid texture id
if (id >= 0)
{
//Bind the new texture
GL.BindTexture(TextureTarget.Texture2D, id);
//Open the image file using the Bitmap class
Bitmap bmp = new Bitmap(filename);
//Get the bitmaps data
BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly,
The draw call
iOS
The drawing procedure in iOS is divided into two main parts that originate from the GLKViewController:
- The update function, which is called before the draw call and is used to update any scene data and for non-visual tasks.
- The glkView:(GLKView*) drawInRect:(CGRect) function, which handles the drawing.
In the iOS implementation the update function gathers the cube position parameters generated by user interaction events and produces a world-transformation matrix. This matrix is passed to our shader program as a uniform during the draw call.
Once the update function has completed, the rendering function runs. In this implementation, we render a single object in a scene with a single source of light. The object itself contains the functionality to render itself using information passed to it, in particular, four transformation matrices (model, world, view, and perspective), plus the light position and color vectors.
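Combining those four matrices is ordinary column-major matrix multiplication, applied right to left. A minimal C sketch (our own helpers, not from either codebase) of composing the chain:

```c
#include <string.h>

/* Multiply two 4x4 column-major matrices: out = a * b. */
static void mat4_mul(float out[16], const float a[16], const float b[16])
{
    float r[16];  /* temporary, so out may alias a or b */
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + row] * b[col * 4 + k];
            r[col * 4 + row] = s;
        }
    memcpy(out, r, sizeof r);
}

/* Compose perspective * view * world * model into the single matrix
   the vertex shader applies to each position. */
static void make_mvp(float mvp[16], const float perspective_m[16],
                     const float view[16], const float world[16],
                     const float model[16])
{
    mat4_mul(mvp, world, model);
    mat4_mul(mvp, view, mvp);
    mat4_mul(mvp, perspective_m, mvp);
}
```
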
Thanks to the VAO, the draw function is quite simple. It boils down to:
1. Bind the VAO, which has been set up with saved vertex state
2. Bind texture to the current shader program
3. Upload the uniform’s data
4. Call the draw function. In this case, the primitives drawn are triangles (GL_TRIANGLES). To construct those triangles, the buffer object supplies a position, normal, and texture coordinate for each of the 36 vertices.
Windows 8
To render our scene, we simply call the render method on every object present, providing it with the current projection matrix, light location, and light intensity vectors. The cube object’s rendering method looks like this.
//The object's render method
public override void Render(OpenTK.Matrix4 projection, OpenTK.Vector3 lightPosition, OpenTK.Vector3 lightIntensity)
System.Diagnostics.Debug.WriteLine("Cube not initialized");
}
}
We start by checking if the object was initialized (are its vertex buffers set up, etc.). If so, we simply use the shader program wrapper object to bind the object’s shader program instance. We make sure the object’s vertex array object is in place by calling the makeVao helper method, verifying the vertex array object’s instance and binding.
Next we set the shader program’s uniforms using the SetUniformType methods and set up the material instance. The material object is responsible for setting up the object’s texture and any other uniform values that might be needed to define a specific material.
The last thing we need to do is make the OpenGL draw call to have the geometry rendered; we do this with a simple call to GL.DrawElements. Finally, we need to make sure to unbind the vertex array object and release the shader program.
Touch Input
iOS
In the iOS version we have two possible ways for the user to interact with the application: pinch to adjust the zoom of the cube and drag to rotate it.
The pinch is implemented using the pinch gesture recognizer. The gesture recognizer is placed on top of the view in the xib file, and the callback function is implemented in the ViewController that is bound to its selector action.
The touchesBegan function is always called first and is used to initialize all touch-related data. The second function, touchesMoved, is called on every single finger movement, and we use it to perform the rotation calculations. It sets a quaternion that describes the current object rotation; the quaternion is later used in the update function to construct the rotation matrix.
Windows 8
For touch input we simply use the built-in WPF touch events: TouchDown, TouchMove, and TouchUp.
Vertex, Fragment Shaders and Scene Lighting
The actual shader code is written entirely in GLSL, which is fully portable between platforms running OpenGL. For the interested reader and for completeness, we will briefly describe our pipeline implementation. For detailed information, refer to reference volumes such as the “OpenGL Programming Guide, 4th edition” (aka the Red Book).
The OpenGL ES 2.0 programmable pipeline contains two types of shader programs: vertex and fragment shaders.
- Vertex shaders operate on a single vertex and handle tasks like transforming the vertex position in space
- Fragment shaders are invoked right after the rasterizer stage, which follows the vertex shader stage. Fragment shaders produce the output color of a single pixel, usually accounting for texturing, lighting, and so forth.
The vertex shader takes as its input the vertex position, normal vector, and texture coordinates. The shader calculates the normal vector under the current projection and stores the value as output to the fragment shader. Texture coordinates are simply handed through. Finally, world and view matrices are combined with the current projection matrix, and the transformed vertex position is calculated.
The fragment shader fetches the pixel color from the texture and calculates ambient and diffuse light. It uses parameters passed from the vertex shader, which are now interpolated over all three vertices composing the triangle.
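The ambient and diffuse terms reduce to a clamped dot product per color channel. A C sketch of the arithmetic the fragment shader performs (our own code, one channel, assuming a normalized surface normal and light direction):

```c
#include <math.h>

static float dot3(const float a[3], const float b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/* Ambient + diffuse portion of the ADS model for one color channel:
   ambient is constant; diffuse follows Lambert's cosine law. */
static float ambient_diffuse(const float normal[3], const float toLight[3],
                             float ambient, float lightIntensity)
{
    float nDotL = dot3(normal, toLight);
    if (nDotL < 0.0f)
        nDotL = 0.0f;   /* surfaces facing away receive no diffuse light */
    return ambient + lightIntensity * nDotL;
}
```

In the shader, the interpolated normal stands in for `normal`, which is why the lighting varies smoothly across each triangle.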
Closing
OpenGL ES 2.0 is a powerful tool for rendering 2- and 3-dimensional objects on a variety of computing platforms, and it is widely used on the iOS platform. Fortunately, this common graphics platform forms a bridge for developers looking to move OpenGL-based applications from iOS to Windows 8.
While the application used in this white paper is fairly simple, it illustrates the key concepts in working with OpenGL ES 2.0 and how those concepts are implemented on iOS and Windows 8. As demonstrated, the concepts are consistent between iOS and Windows 8, and moving from one to the other can be done in a straightforward manner.