3D Gaussian Splatting has revolutionized neural rendering with real-time performance. However, scaling this approach to large scenes with Level-of-Detail (LoD) methods faces two critical challenges: inefficient serial tree traversal, which consumes over 60% of rendering time, and redundant Gaussian-tile pairs, which incur unnecessary processing overhead.
To address these limitations, we propose FilterGS, featuring a parallel filtering mechanism with two complementary filters that enable efficient selection without tree traversal, coupled with a scene-adaptive Gaussian shrinkage strategy that minimizes redundancy through opacity-based scaling. Extensive experiments demonstrate that FilterGS achieves state-of-the-art rendering speeds while maintaining competitive visual quality across multiple large-scale datasets.
The framework of FilterGS. (a) Given a set of images and SfM points for a large-scale scene, we first train a LoD tree model. (b) To quantify the redundancy of Gaussian-tile pairs across the scene, we perform a pre-rendering pass over all training views. While following the standard 3DGS pipeline, this stage computes the GTC metric G¯ and derives a scaling factor τ = f(G¯). (c) In the final rendering stage, the pre-computed scaling factor τ and the LoD tree model are processed by two specialized filters that efficiently select Gaussians for rendering. These Gaussians are adaptively scaled by τ during AABB formation, significantly reducing redundant key-value pairs before final sorting and α-blending.
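The two-stage flow in (b) and (c) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the caption does not define the GTC metric, the mapping f, or the filter criteria, so `mean_gtc`, `scaling_factor`, the reference count `g_ref`, and the opacity-threshold filter are all hypothetical stand-ins, not the paper's actual formulas.

```python
import numpy as np

def mean_gtc(tile_counts):
    # Hypothetical stand-in for the GTC metric G-bar: mean number of
    # tiles touched per Gaussian, accumulated over all training views
    # during the pre-rendering pass.
    return float(np.mean(tile_counts))

def scaling_factor(g_bar, g_ref=4.0):
    # Hypothetical f(G-bar): shrink Gaussians more aggressively when
    # they touch many tiles on average; never enlarge (tau <= 1).
    return min(1.0, g_ref / g_bar)

def select_and_scale(centers, radii, opacities, tau, opacity_min=0.05):
    # Illustrative filter: drop near-transparent Gaussians before
    # rasterization (the paper's two filters are not specified here).
    keep = opacities >= opacity_min
    # Scene-adaptive shrinkage during AABB formation: scale the
    # screen-space radii by tau before building bounding boxes, so
    # fewer redundant Gaussian-tile key-value pairs reach the sort.
    scaled = radii[keep] * tau
    aabbs = np.stack([centers[keep] - scaled[:, None],
                      centers[keep] + scaled[:, None]], axis=1)
    return keep, aabbs
```

In a real pipeline these steps run inside the CUDA rasterizer; the sketch only shows how a pre-computed τ can both gate which Gaussians survive filtering and tighten their tile-aligned bounds before sorting and α-blending.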