Polygons are generally used to compose 3D images in computer graphics, and polygon count refers to the number of polygons rendered per frame. Google Seurat reduces the overall polygon count displayed at any given time, which lowers the processing power and resources required. It takes advantage of the limited viewing region available in mobile VR to optimize the geometry and textures in a scene.
In practice, Seurat considers all of the viewpoints a VR user can reach and removes the parts of the 3D environment they would never be able to see. By turning this limited range of movement to its advantage, Seurat takes object permanence out of the equation: if users can't see something in virtual reality, chances are it doesn't actually exist.
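To make the idea concrete, here is a minimal, hypothetical sketch in Python (not Seurat's actual code or API) of the underlying principle: sample candidate eye positions inside the user's limited "headbox" and keep only geometry that is visible from at least one of them. The function names and the toy visibility test are assumptions for illustration only.

```python
import numpy as np

# Hypothetical illustration (not Seurat's actual API): drop triangles that are
# never visible from any viewpoint inside the user's limited "headbox".

def sample_headbox_viewpoints(center, half_extent, n_per_axis=3):
    """Sample candidate eye positions on a small grid inside the headbox."""
    axis = np.linspace(-half_extent, half_extent, n_per_axis)
    offsets = np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)
    return center + offsets

def is_visible(point, viewpoint, max_depth):
    """Toy visibility test: treat a point as visible if it lies within a fixed
    viewing distance. A real system would ray-cast against the whole scene."""
    return np.linalg.norm(point - viewpoint) < max_depth

def cull_hidden_geometry(triangle_centers, viewpoints, max_depth=10.0):
    """Keep only triangles visible from at least one sampled viewpoint."""
    kept = [tri for tri in triangle_centers
            if any(is_visible(tri, vp, max_depth) for vp in viewpoints)]
    return np.array(kept)

if __name__ == "__main__":
    viewpoints = sample_headbox_viewpoints(center=np.zeros(3), half_extent=0.3)
    triangles = np.random.uniform(-20.0, 20.0, size=(1000, 3))
    visible = cull_hidden_geometry(triangles, viewpoints)
    print(f"Kept {len(visible)} of {len(triangles)} triangle centers")
```

Everything the headset can never show is simply left out of the rendered scene, which is where the polygon savings come from.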
On a more technical level, Google Seurat takes RGBD images (color and depth) as input and generates a textured mesh that simplifies the scene. It targets a configurable number of triangles, texture size, and fill rate, making it possible to deliver immersive VR experiences on standalone headsets. This scene simplification technology was used to bring a scene from 'Rogue One: A Star Wars Story' to a standalone VR experience.
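The actual pipeline is a command-line tool documented in the repository; the sketch below is only a conceptual Python illustration of the inputs and budgets involved: RGBD captures with camera poses go in, and the output is held to a configurable triangle count and texture size. The class names, fields, and crude decimation step are assumptions, not Seurat's interface.

```python
from dataclasses import dataclass
import numpy as np

# Conceptual sketch only (class names, fields, and logic are assumptions, not
# Seurat's interface): RGBD captures plus camera poses go in, and the output
# is constrained by configurable triangle and texture budgets.

@dataclass
class SimplificationTargets:
    max_triangles: int = 72_000   # triangle budget for the output mesh
    texture_size: int = 4096      # side length of the output texture atlas

@dataclass
class RgbdCapture:
    color: np.ndarray   # H x W x 3 RGB image
    depth: np.ndarray   # H x W metric depth map
    pose: np.ndarray    # 4 x 4 camera-to-world transform

def unproject(capture: RgbdCapture, fx: float = 500.0, fy: float = 500.0):
    """Back-project every pixel to a 3D world-space point (pinhole model)."""
    h, w = capture.depth.shape
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    z = capture.depth
    pts = np.stack([(xs - cx) * z / fx, (ys - cy) * z / fy, z], axis=-1)
    pts_h = np.concatenate([pts, np.ones((h, w, 1))], axis=-1)
    return (pts_h @ capture.pose.T)[..., :3]

def simplify(captures, targets: SimplificationTargets):
    """Merge points from all captures and crudely decimate to the budget.
    The real pipeline builds and textures a mesh; random subsampling here
    only illustrates trading fidelity for a fixed triangle count."""
    cloud = np.concatenate([unproject(c).reshape(-1, 3) for c in captures])
    budget = min(len(cloud), targets.max_triangles * 3)  # ~3 vertices/triangle
    idx = np.random.choice(len(cloud), size=budget, replace=False)
    return cloud[idx]
```

The key design point is that quality is traded against explicit budgets, so developers can tune the output for the rendering headroom of a specific standalone headset.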
Developers can start working with Seurat right away via its GitHub page, which contains the documentation and source code needed to integrate it into their projects.
Alongside Seurat, the Lenovo Mirage Solo also launched as the first standalone headset on Google's Daydream VR platform.