Recently a reader wrote in about my earlier Quadrilateralized Spherical Cubes post, and I realized I had neglected to update the original with some corrections. I initially wrote about this topic during my first dip into 3D visualization, and unfortunately at the time I conflated QuadSpheres and Quadrilateralized Spherical Cubes into a single concept. Both are popular topics in visualization, game development and scientific research, so I'm going to expand a bit further on the differences between the two and hopefully make things a little clearer.
First, let me preface this by saying that there's more than one way to generate a mesh for a sphere. Since my intention was to reduce visual distortion on the earth texture I was using, I chose to map my textures to a QuadSphere rather than using the common ECP tessellation, which causes pinching at the north and south poles of the spheroid. A QuadSphere is composed of 6 faces that correspond to the 6 sides of a cube. These faces are called quads because they are essentially squares before any transformations are applied. Each of the 6 faces can be subdivided into smaller quads to increase detail when the camera is closer to the mesh, and each resulting quad is tessellated into tris and rendered. In a modern graphics engine this takes place in the vertex shader stage of the rendering pipeline, so you just feed in heightmaps and shaders to shape your terrain.
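The core transform behind all of this is tiny: a point on the cube's surface is pushed out to the sphere by normalizing its position vector. Here's a minimal sketch of that projection (the function name is mine, not from any particular engine):

```python
import math

def cube_to_sphere(x, y, z):
    """Project a point on the cube's surface onto the unit sphere
    by normalizing its position vector (radial projection)."""
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# A corner of the unit cube lands on the sphere at equal distance
# along all three axes: each component becomes 1/sqrt(3).
p = cube_to_sphere(1.0, 1.0, 1.0)
print(p)
```

Apply this to every vertex of the 6 subdivided faces and the cube "inflates" into a sphere while each face keeps its quad topology.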
For an older OpenGL 2.0-style pipeline, you can create a QuadSphere by projecting equidistant rays from the origin of the sphere out to its surface and pushing the resulting vertices into a buffer. This can start out as something like a 32 x 32 curved grid per side, 6 grids in total forming the spherical shape. You can further subdivide these grids for more detail when the camera is closer. That's a QuadSphere in a nutshell.
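To make that concrete, here's a rough sketch of filling a vertex buffer for one face. It builds the +Z face of the cube as a 32 x 32 grid of quads (so 33 x 33 vertices) and normalizes each vertex onto the unit sphere; the other 5 faces work the same way with the axes swapped. The function name and layout are illustrative, not any specific API:

```python
import math

def face_vertices(n=32):
    """Build an (n+1) x (n+1) grid of unit-sphere vertices for the
    +Z cube face: each grid point is pushed out along its ray from
    the origin (i.e. normalized)."""
    verts = []
    for j in range(n + 1):
        for i in range(n + 1):
            # Map grid indices to [-1, 1] across the cube face
            x = 2.0 * i / n - 1.0
            y = 2.0 * j / n - 1.0
            z = 1.0
            inv = 1.0 / math.sqrt(x * x + y * y + z * z)
            verts.append((x * inv, y * inv, z * inv))
    return verts

verts = face_vertices(32)
print(len(verts))  # 33 * 33 = 1089 vertices for one face
```

In a real renderer you'd also emit an index buffer splitting each grid cell into two tris, but the vertex generation above is the part that makes it a QuadSphere.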
My work never used the QLSC algorithm. A QLSC ensures that every cell on your map covers an equal area of the sphere, decreasing the amount of distortion. Not necessarily texture distortion, but the deviation between your virtual model and the real-life thing. Distortion occurs naturally when projecting flat data onto a spheroid, so Kenneth Chan worked extensively on the QLSC algorithm to make it extremely accurate at mapping points and data. This accuracy is great for scientific research, which is why it was used in NASA's Cosmic Background Explorer (COBE) project.
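The QLSC itself uses a carefully fitted curvilinear adjustment, which is beyond the scope of this post, but it's easy to show the problem it solves. Under the naive radial projection used by a plain QuadSphere, equal-sized cells on the cube face do not cover equal areas on the sphere: the solid angle per unit of face area falls off toward the corners. This little sketch (my own illustration, not part of QLSC) computes that density for the +Z face:

```python
def solid_angle_density(x, y):
    """Solid angle per unit of cube-face area for the radial (gnomonic)
    projection from the +Z face at z = 1: dOmega/dA = (1 + x^2 + y^2)^(-3/2)."""
    return (1.0 + x * x + y * y) ** -1.5

center = solid_angle_density(0.0, 0.0)  # 1.0 at the face center
corner = solid_angle_density(1.0, 1.0)  # 3^(-3/2), about 0.19, at a corner
print(center / corner)  # about 5.2
```

So a cell at the center of a face covers roughly five times the sphere area of an equal-sized cell at a corner. QLSC's whole purpose is to warp the face coordinates so that ratio becomes 1, which is why it's the right tool for binning sky data and the plain QuadSphere is not.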
There are a lot of little caveats to this, but the big takeaway is there’s not a huge need for QLSC in real-time visualization and games. You’ll likely need it for data processing, geomapping spatial data, or other areas of scientific interest.
One last term to take note of is cube mapping. Cube mapping describes how a texture is laid out across a shape consisting of quads, but not necessarily how that texture should be projected. For instance, skyboxes are meant to be projected onto a cube, not a QuadSphere. Although both are cube mapped, you will need textures projected to suit your target shape in order to minimize texture distortion.
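At sampling time, both cases boil down to the same lookup: take a direction vector, pick the face with the largest axis magnitude, and divide the other two components through to get texture coordinates. Here's a sketch following the OpenGL cube map face conventions (the function name is mine):

```python
def cubemap_lookup(x, y, z):
    """Map a direction vector to a cube map face and (u, v) in [0, 1],
    following the OpenGL face-selection convention: the face is the
    axis with the largest absolute component."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+X' if x > 0 else '-X'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:
        face = '+Y' if y > 0 else '-Y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:
        face = '+Z' if z > 0 else '-Z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    # Remap from [-1, 1] to [0, 1] texture space
    return face, (u + 1.0) / 2.0, (v + 1.0) / 2.0

print(cubemap_lookup(0.0, 0.0, 1.0))  # ('+Z', 0.5, 0.5)
```

Whether the direction vector comes from a skybox cube or a QuadSphere vertex, this lookup is identical; what differs is how the six texture images themselves were projected, which is why a skybox texture looks wrong wrapped onto a sphere.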
Have any questions or corrections? Please leave them in the comments below and I'll be glad to elaborate.