HCCMeshes: Hierarchical-Culling oriented Compact Meshes

Tae-Joon Kim¹, Youngyong Byun¹, Yongjin Kim², Bochang Moon¹, Seungyong Lee², Sung-Eui Yoon¹


Eurographics 2010

Applications: These figures show images of applications using our HCCMesh representations. From left: Whitted-style ray tracing of the St. Matthew model, photon mapping on a transparent David model in the Sponza scene, a line-art style rendering of the Lucy model reflected on a sphere, and collision detection between the Lucy and a CAD turbine model.


Hierarchical culling is a key acceleration technique used to efficiently handle massive models for ray tracing, collision detection, etc. To support such hierarchical culling, bounding volume hierarchies (BVHs) combined with meshes are widely used. However, BVHs may require a very large amount of memory, which can negate the benefits of using them. To address this problem, we present a novel hierarchical-culling oriented compact mesh representation, HCCMesh, which tightly integrates a mesh and a BVH. As an in-core representation of the HCCMesh, we propose the i-HCCMesh, which supports efficient random hierarchical traversal and high culling efficiency with a small runtime decompression overhead. To further reduce the storage requirement, the in-core representation is compressed into our out-of-core representation, the o-HCCMesh, by using a simple dictionary-based compression method. At runtime, o-HCCMeshes are fetched from an external drive and decompressed into i-HCCMeshes stored in main memory. The i-HCCMesh and o-HCCMesh achieve 3.6:1 and 10.4:1 compression ratios on average, compared to a naively compressed (e.g., quantized) mesh and BVH representation. We test the HCCMesh representations with ray tracing, collision detection, photon mapping, and non-photorealistic rendering. Because of reduced data access time, a smaller working set size, and a low runtime decompression overhead, we can handle models ten times larger on commodity hardware without expensive disk I/O thrashing. When our representation avoids such thrashing, runtime performance improves by up to two orders of magnitude over a naively compressed representation.
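To illustrate the out-of-core compression stage, the following is a minimal sketch of a dictionary-based scheme in the same spirit: frequent fixed-size chunks of (already quantized) node data are replaced by small dictionary indices, with an escape for chunks not in the dictionary. All names and parameters here are illustrative assumptions, not the paper's actual o-HCCMesh encoding.

```python
# Hedged sketch of dictionary-based compression (illustrative only;
# the actual o-HCCMesh encoding differs in its data layout and details).
from collections import Counter

def build_dictionary(chunks, size):
    # Keep the `size` most frequent chunks as dictionary entries.
    return [c for c, _ in Counter(chunks).most_common(size)]

def compress(chunks, dictionary):
    index = {c: i for i, c in enumerate(dictionary)}
    out = []
    for c in chunks:
        if c in index:
            out.append(('D', index[c]))   # dictionary hit: store a small index
        else:
            out.append(('R', c))          # miss: store the raw chunk (escape)
    return out

def decompress(codes, dictionary):
    # Reverse mapping: indices back to chunks, raw chunks passed through.
    return [dictionary[v] if tag == 'D' else v for tag, v in codes]

# Toy example: quantized node data modeled as small integer tuples.
chunks = [(3, 7), (3, 7), (1, 2), (3, 7), (9, 9), (1, 2)]
dic = build_dictionary(chunks, size=2)
codes = compress(chunks, dic)
assert decompress(codes, dic) == chunks  # lossless round trip
```

A real implementation would pack the dictionary indices into a few bits each, so the compression ratio depends on how often the frequent patterns recur in the BVH node stream.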

Video (available to download in the last section)


Related Links