OpenGL Insights

I Discovering

In this section, we discover many facets of OpenGL: teaching modern OpenGL in academia; using OpenGL on the web with WebGL; tessellation shaders in OpenGL 4.0; procedural textures; the safety critical variant, OpenGL SC; and multi-GPU OpenGL and CUDA interop.

OpenGL enjoys widespread use in computer graphics courses around the world. Now-deprecated OpenGL features such as fixed-function lighting, immediate mode, and built-in transforms made the barrier to entry low. However, modern OpenGL has removed many of these features, resulting in a lean API that exposes the functionality of the underlying hardware. Academia has taken these changes in stride, updating their graphics courses to modern OpenGL. In Chapter 1, "Teaching Computer Graphics Starting With Shader-Based OpenGL," Edward Angel discusses how an introductory computer graphics course can be taught using modern OpenGL. In Chapter 2, "Transitioning Students to Post-Deprecation OpenGL," Mike Bailey presents C++ abstractions and GLSL naming conventions to bridge the gap between deprecated and modern OpenGL for use in course assignments.

When we announced our call for authors for OpenGL Insights in May 2011, we included WebGL as a desired topic. Since then, WebGL has gained such traction that an entire book could easily be justified. In Chapter 3, "WebGL for OpenGL Developers," Patrick Cozzi and Scott Hunter present WebGL for those who already know OpenGL. In the following chapter, "Porting Mobile Apps to WebGL," Ashraf Samy Hegab shows the benefits, differences, and trade-offs of using WebGL for mobile applications. Several chapters in later sections continue our WebGL exploration.

Christophe Riccio takes a rigorous look at communication between the OpenGL API and GLSL and different shader stages in Chapter 5, "The GLSL Shader Interfaces." He carefully examines using varying blocks; attribute, varying, and fragment output variable locations; linked and separated programs; using semantics in our designs; and more.

Today, one of the differences between movie-quality rendering and real-time rendering is geometric complexity; movies generally have much higher geometric detail. To improve geometric detail in real-time rendering, tessellation can be done in hardware. Although this has been available on ATI cards since the ATI Radeon 8500 in 2001, tessellation shaders were recently standardized and made part of OpenGL 4.0. In Chapter 6, "An Introduction to Tessellation Shaders," Philip Rideout and Dirk Van Gelder introduce the new fixed and programmable tessellation stages.

As the gap between compute power and memory bandwidth continues to widen, procedural techniques become increasingly important. Small size is speed. Procedural textures not only have trivial memory requirements, but can also have excellent visual quality, allowing for analytic derivatives and anisotropic antialiasing. Stefan Gustavson introduces procedural textures, including antialiasing and using Perlin and Worley noise in Chapter 7, "Procedural Textures in GLSL." Best of all, he provides GLSL noise functions for OpenGL, OpenGL ES, and WebGL.

OpenGL SC, for safety critical, may be one of the lesser-known OpenGL variants. In Chapter 8, "OpenGL SC Emulation Based on OpenGL and OpenGL ES," Hwanyong Lee and Nakhoon Baek explain the motivation for OpenGL SC and describe the benefits of implementing it based on other OpenGL variants, instead of creating custom drivers or a software implementation.

In the past 15 years, consumer GPUs have transformed from dedicated fixed-function graphics processors to general-purpose massively-parallel processors. Technologies like CUDA and OpenCL have emerged for developing general data-parallel algorithms on the GPU. There is, of course, a need for these general algorithms, like particle systems and physical simulation, to interop efficiently with OpenGL for rendering. In the final chapter of this section, "Mixing Graphics and Compute with Multiple GPUs," Alina Alt reviews interoperability between CUDA and OpenGL and presents interoperability between multiple GPUs where one GPU is used for CUDA and another for OpenGL.

1. Teaching Computer Graphics Starting with Shader-Based OpenGL
Edward Angel

For at least ten years, OpenGL has been used in the first computer graphics course taught to students in computer science and engineering, other branches of engineering, mathematics, and the sciences. Whether the course stresses basic graphics principles or takes a programming approach, OpenGL provides students with an API to support their learning. One of the many features of the OpenGL API that makes it popular for teaching is its stability and backward compatibility. Hence, instructors needed to make only minor changes in their courses as OpenGL evolved. At least that used to be true: over the last few years, OpenGL has changed rapidly and dramatically.

Starting with version 3.1, the fixed-function pipeline was eliminated, an action that deprecated immediate mode and many of the familiar OpenGL functions and state variables. Every application must provide at least a vertex shader and a fragment shader. For those of us who use OpenGL to teach our graphics courses, these changes and the introduction of three additional shader stages in subsequent releases of OpenGL have led to a reexamination of how we can best teach computer graphics. As the authors of a popular textbook [Angel 09] used for the first course, we realized that this reexamination was both urgent and deep, requiring input from instructors at a variety of institutions. In the end, we wrote a new edition [Angel and Shreiner 12] that was entirely shader-based. Some of the key issues were addressed briefly in [Angel and Shreiner 11], but this chapter will not only discuss the reasons for the change but will also include practical observations and issues based on the actual teaching of a fully shader-based course.

I start with a historical overview, stressing how the software used in the first computer graphics course has changed over the years while the concepts we teach have remained largely unchanged. I review the key elements of a first course in computer graphics. Then I present a typical first Hello World program using the fixed-function pipeline. Next, the reader will see how we have to change that first program when moving to a shader-based course. Finally, I examine how each of the major topics in our standard course is affected by use of a shader-based OpenGL.


2. Transitioning Students to Post-Deprecation OpenGL
Mike Bailey

From an educator's perspective, teaching OpenGL in the past has been a snap. The separation of geometry from topology in the glBegin-glEnd paradigm, the simplicity of glVertex3f, and the classic organization of the postmultiplied transformation matrices made the API fast and easy to explain. It also considerably excited the students, because going from zero knowledge to "cool 3D program you can smugly show your friends" was the task of a single lesson. This made motivation easy.

The Great OpenGL Deprecation has changed that. Creating and using vertex buffer objects is a lot more time consuming to explain than glBegin-glEnd [Angel 11]. It's also much more error-prone. Creating and maintaining matrices and matrix stacks now requires deft handling of matrix components and multiplication order [GLM 11]. In short, while post-deprecation OpenGL might be more streamlined and efficient, it has wreaked havoc on those who need to teach it and even more on those who need to learn it.
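The matrix bookkeeping described above can be followed more concretely in code. The sketch below is a minimal, hypothetical stand-in (not the chapter's actual classes, and deliberately simpler than a full library such as GLM) for the removed glPushMatrix/glPopMatrix/glTranslatef behavior: matrices are stored column-major as OpenGL convention dictates, and new transforms post-multiply the current top of the stack, exactly as the fixed-function pipeline did.

```cpp
#include <array>
#include <vector>

// 4x4 column-major matrix, as OpenGL conventionally stores it.
using Mat4 = std::array<float, 16>;

inline Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

// C = A * B in column-major storage: element (row, col) lives at [col*4+row].
inline Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return c;
}

// Minimal matrix stack mimicking deprecated OpenGL semantics.
class MatrixStack {
public:
    MatrixStack() : stack_{identity()} {}
    const Mat4& top() const { return stack_.back(); }
    void push() { stack_.push_back(stack_.back()); }   // like glPushMatrix
    void pop()  { if (stack_.size() > 1) stack_.pop_back(); } // glPopMatrix
    void translate(float x, float y, float z) {        // like glTranslatef
        Mat4 t = identity();
        t[12] = x; t[13] = y; t[14] = z;               // last column
        stack_.back() = multiply(stack_.back(), t);    // post-multiply
    }
private:
    std::vector<Mat4> stack_;
};
```

Getting the post-multiplication order and the column-major element layout right is precisely the "deft handling" the chapter refers to; swapping the operands of `multiply` silently reverses the order in which transforms compose.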

So the "old way" is not current, but the "new way" takes a long time to learn before one can see a single pixel. How can we keep students enthusiastic and motivated while still moving them along the road to learning things the new way? This chapter discusses intermediate solutions to this problem by presenting C++ classes that ease the transition to post-deprecation OpenGL. These C++ classes

  1. create vertex buffers with methods that look suspiciously like glBegin-glEnd;
  2. load, compile, link, and use shaders.

This chapter also suggests a naming convention that can be instrumental in keeping shader variables untangled from each other.
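To make the idea concrete, here is a small hypothetical sketch of such a class (an illustration of the approach, not the chapter's actual code): calls look like immediate mode, but the data lands in an interleaved CPU array that a single glBufferData call could then upload into a vertex buffer object.

```cpp
#include <cstddef>
#include <vector>

// Immediate-mode-flavored wrapper: begin()/color3f()/vertex3f() mimic
// glBegin/glColor3f/glVertex3f, but build an interleaved array
// (x y z r g b per vertex) suitable for one glBufferData upload.
class VertexCollector {
public:
    void begin() { data_.clear(); count_ = 0; }
    void color3f(float r, float g, float b) { r_ = r; g_ = g; b_ = b; }
    void vertex3f(float x, float y, float z) {
        // Each vertex captures the current color, as immediate mode did.
        float v[6] = {x, y, z, r_, g_, b_};
        data_.insert(data_.end(), v, v + 6);
        ++count_;
    }
    void end() {}  // in a real class: create the VBO and upload here
    const float* data() const { return data_.data(); }
    std::size_t bytes() const { return data_.size() * sizeof(float); }
    std::size_t vertexCount() const { return count_; }
private:
    std::vector<float> data_;
    std::size_t count_ = 0;
    float r_ = 1.0f, g_ = 1.0f, b_ = 1.0f;
};
```

After `end()`, `data()` and `bytes()` are exactly what a call like `glBufferData(GL_ARRAY_BUFFER, vc.bytes(), vc.data(), GL_STATIC_DRAW)` would consume, so students keep the familiar call pattern while the class quietly does the modern bookkeeping.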


3. WebGL for OpenGL Developers
Patrick Cozzi and Scott Hunter

Don't get us wrong — we are C++ developers at heart. We've battled triple-pointers, partial template specialization, and vtable layouts under multiple inheritance. Yet, through a strange series of events, we are now full-time JavaScript developers. This is our story.

At the SIGGRAPH 2009 OpenGL BOF, we first heard about WebGL, an upcoming web standard for a graphics API based on OpenGL ES 2.0 available to JavaScript through the HTML5 canvas element—basically OpenGL for JavaScript. We had mixed feelings. On the one hand, WebGL brought the promise of developing zero-footprint, cross-platform, cross-device, hardware-accelerated 3D applications. On the other, it required us to develop in JavaScript. Could we do large-scale software development in JavaScript? Could we write high-performance graphics code in JavaScript?

After nearly a year of development resulting in over 50,000 lines of JavaScript and WebGL code, we have answered our own questions: properly written JavaScript scales well, and WebGL is a very capable API with tremendous momentum. This chapter shares our experience moving from developing with C++ and OpenGL for the desktop to developing with JavaScript and WebGL for the web. We focus on the unique aspects of moving OpenGL to the web, not on porting OpenGL code to OpenGL ES.


4. Porting Mobile Apps to WebGL
Ashraf Samy Hegab

WebGL provides direct graphics hardware acceleration hooks into web browsers, allowing for a richer application experience. This experience is now becoming comparable with native applications. However, the development environment for creating these new types of rich web apps using WebGL is different.

This chapter walks us through the aspects of porting a typical OpenGL mobile app from Android and iOS to the web, covering steps from setting up your GL context, to drawing a textured button, to handling the camera and controls, to finally debugging and maintaining your application.

This chapter includes accompanying source code that demonstrates the concepts introduced in iOS, Android, Qt, and WebGL to help developers get up to speed on web development using WebGL.


5. The GLSL Shader Interfaces
Christophe Riccio

The shader system is a central module of a graphics engine, providing flexibility, performance, and reliability to an application. In this chapter we explore various aspects of the GLSL shader interfaces to improve their quality.

These interfaces are the elements of the language that expose buffers and textures within a shader stage. They allow communication between shader stages and between the application and the shader stages. This includes input interfaces, output interfaces, interface blocks, atomic counters, samplers, and image units [Kessenich 12].

On the OpenGL Insights website, code samples are provided to illustrate each section. A direct output from this chapter is a series of functions that can be used directly in any OpenGL program for detecting silent errors: errors that OpenGL doesn't catch by design but that eventually result in unexpected rendering.

I target three main goals:

  • Performance. Description of some effects of the shader interface on memory consumption, bandwidth, and reduction of the CPU overhead.
  • Flexibility. Exploration of cases to ensure the reuse of a maximum number of objects.
  • Reliability. Options in debug mode for detecting silent errors.


6. An Introduction to Tessellation Shaders
Philip Rideout and Dirk Van Gelder

Tessellation shaders open new doors for real-time graphics programming. GPU-based tessellation was possible in the past only through trickery, relying on multiple passes and misappropriation of existing shader units.

OpenGL 4.0 finally provides first-class support for GPU tessellation, but the new shading stages can seem nonintuitive at first. This chapter explains the distinct roles of those stages in the new pipeline and gives an overview of some common rendering techniques that leverage them.

GPUs tend to be better at "streamable" amplification; rather than storing an entire post-subdivided mesh in memory, tessellation shaders allow vertex data to be amplified on the fly, discarding the data when they reach the rasterizer. The system never bothers to store a highly-refined vertex buffer, which would have an impractical memory footprint for a GPU.

Pretessellation graphics hardware was already quite good at rendering huge meshes, and CPU-side refinement was often perfectly acceptable for static meshes. So why move tessellation to the GPU?

The gains are obvious for animation. On a per-frame basis, only the control points get sent to the GPU, greatly alleviating bandwidth requirements for high-density surfaces.

Animation isn't the only killer application of subdivision surfaces. Displacement mapping allows for staggering geometric level-of-detail. Previous GPU techniques required multiple passes over the geometry shader, proving awkward and slow. Tessellation shaders allow displacement mapping to occur in a single pass [Castaño 08].

Tessellation shaders can also compute geometric level-of-detail on the fly, which we'll explore later in the chapter. Previous techniques required the CPU to resubmit new vertex buffers when changing the level-of-detail.
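One common heuristic for choosing level-of-detail on the fly is to project a patch edge's endpoints to screen space, measure the edge's length in pixels, and request roughly one subdivision per fixed number of pixels. In OpenGL this computation lives in the tessellation control shader, writing `gl_TessLevelOuter[]`; the sketch below (a simplified illustration, not code from the chapter) expresses the same arithmetic in plain C++ so it can be followed in isolation.

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Screen-space edge-length heuristic for an outer tessellation level:
// one segment per `pixelsPerSegment` pixels, clamped between 1 and
// the maximum level the hardware supports (at least 64 in OpenGL 4.0).
inline float edgeTessLevel(Vec2 p0, Vec2 p1,
                           float pixelsPerSegment,
                           float maxLevel = 64.0f) {
    float dx = p1.x - p0.x, dy = p1.y - p0.y;
    float pixels = std::sqrt(dx * dx + dy * dy);
    return std::min(maxLevel,
                    std::max(1.0f, pixels / pixelsPerSegment));
}
```

An edge 100 pixels long with a budget of 10 pixels per segment tessellates into 10 segments; edges that shrink as the camera recedes automatically fall back toward a single segment, with no CPU resubmission of vertex buffers.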



7. Procedural Textures in GLSL
Stefan Gustavson

Procedural textures are textures that are computed on the fly during rendering as opposed to precomputed image-based textures. At first glance, computing a texture from scratch for each frame may seem like a stupid idea, but procedural textures have been a staple of software rendering for decades, for good reason. With the ever increasing levels of performance for programmable shading in GPU architectures, hardware-accelerated procedural texturing in GLSL is now becoming quite useful and deserves more consideration. An example of what can be done is shown in Figure 7.1.

Figure 7.1

Writing a good procedural shader is more complicated than using image editing software to paint a texture or edit a photographic image to suit our needs, but with procedural shaders, the pattern and the colors can be varied with a simple change of parameters. This allows extensive reuse of data for many different purposes, as well as fine-tuning or even complete overhauls of surface appearance very late in a production process. A procedural pattern allows for analytic derivatives, which makes it less complicated to generate the corresponding surface normals, as compared to traditional bump mapping or normal mapping, and enables analytic anisotropic antialiasing. Procedural patterns require very little storage, and they can be rendered at an arbitrary resolution without jagged edges or blurring, which is particularly useful when rendering close-up details in real-time applications where the viewpoint is often unrestricted. A procedural texture can be designed to avoid problems with seams and periodic artifacts when applied to a large area, and random-looking detail patterns can be generated automatically instead of having artists paint them. Procedural shading also removes the memory restrictions for 3D textures and animated patterns. 3D procedural textures, solid textures, can be applied to objects of any shape without requiring 2D texture coordinates.
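The analytic antialiasing mentioned above boils down to replacing a pattern's hard edges with a smoothstep ramp one filter-width wide. In GLSL the filter width would come from `fwidth()` on the texture coordinate; the CPU sketch below (an illustration of the idea, with the footprint `w` passed in explicitly) shows the core of the technique.

```cpp
#include <algorithm>

// Hermite smoothstep, as defined by GLSL's smoothstep().
inline float smoothstepf(float e0, float e1, float x) {
    float t = std::clamp((x - e0) / (e1 - e0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Antialiased step: a hard threshold becomes a ramp of width 2*w.
// In a fragment shader, w would typically be fwidth(x) * 0.5 or similar,
// so the ramp always spans about one pixel regardless of viewing distance.
inline float aastep(float threshold, float x, float w) {
    return smoothstepf(threshold - w, threshold + w, x);
}
```

Applied to, say, `fract(coord)` with a 0.5 threshold, `aastep` produces stripes whose edges stay crisp up close yet never alias at a distance, because the ramp width tracks the screen-space derivative rather than any fixed texel size.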

While all these advantages have made procedural shading popular for offline rendering, real-time applications have been slow to adopt the practice. One obvious reason is that the GPU is a limited resource, and quality often has to be sacrificed for performance. However, recent developments have given us lots of computing power even on typical consumer-level GPUs, and given their massively parallel architectures, memory access is becoming a major bottleneck. A modern GPU has an abundance of texture units and uses caching strategies to reduce the number of accesses to global memory, but many real-time applications now have an imbalance between texture bandwidth and processing bandwidth. ALU instructions can essentially be "free" and cause no slowdown at all when executed in parallel to memory reads, and image-based textures can be augmented with procedural elements. Somewhat surprisingly, procedural texturing is also useful at the opposite end of the performance scale. GPU hardware for mobile devices can incur a considerable penalty for texture download and texture access, and this can sometimes be alleviated by procedural texturing. A procedural shader does not necessarily have to be complex, as demonstrated by some of the examples in this chapter.

Procedural methods are not limited to fragment shading. With the ever-increasing complexity of real-time geometry and the recent introduction of GPU-hosted tessellation as discussed in Chapter 6, tasks like surface displacements and secondary animations are best performed on the GPU. The tight interaction between procedural displacement shaders and procedural surface shaders has proven very fruitful for creating complex and impressive visuals in offline shading environments, and there is no reason to assume that real-time shading would be fundamentally different in that respect.

This chapter is meant as an introduction to procedural shader programming in GLSL. First, I present some fundamentals of procedural patterns, including antialiasing. A significant portion of the chapter presents recently developed, efficient methods for generating Perlin noise and other noise-like patterns entirely on the GPU, along with some benchmarks to demonstrate their performance. The code repository on the OpenGL Insights website contains a cross-platform demo program and a library of useful GLSL functions for procedural texturing.
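The texture-free noise the chapter describes rests on one recurring shape: hash the integer lattice points around a sample, then blend the hashed values with a smooth (Hermite) interpolant. The C++ sketch below shows that shape for simple 2D value noise; it is a minimal stand-in for illustration only, not the chapter's GLSL library, whose gradient and simplex noise functions are considerably more refined.

```cpp
#include <cmath>
#include <cstdint>

// Integer hash of a lattice point, returning a repeatable value in [0,1].
// The multipliers are arbitrary odd constants; any decent mixer works.
static float hash2(int x, int y) {
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= h >> 16;
    return (h & 0xffffffu) / static_cast<float>(0xffffff);
}

// Hermite fade, same curve as GLSL smoothstep(0,1,t).
static float fade(float t) { return t * t * (3.0f - 2.0f * t); }

// 2D value noise: hash the four surrounding lattice corners and
// bilinearly blend them with faded weights. Result stays in [0,1].
float valueNoise2D(float x, float y) {
    int xi = static_cast<int>(std::floor(x));
    int yi = static_cast<int>(std::floor(y));
    float xf = x - xi, yf = y - yi;
    float u = fade(xf), v = fade(yf);
    float a = hash2(xi,     yi);
    float b = hash2(xi + 1, yi);
    float c = hash2(xi,     yi + 1);
    float d = hash2(xi + 1, yi + 1);
    float ab = a + u * (b - a);
    float cd = c + u * (d - c);
    return ab + v * (cd - ab);
}
```

Because everything derives from the coordinates alone, the same pattern is reproduced on every call with no texture fetches and no stored data, which is exactly the property that makes such functions attractive on bandwidth-limited and mobile GPUs.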



8. OpenGL SC Emulation Based on OpenGL and OpenGL ES
Hwanyong Lee and Nakhoon Baek


9. Mixing Graphics and Compute with Multiple GPUs
Alina Alt



