Hi all! My name is Eugene, I’m a backend developer at Bimeister. Today I would like to continue the story about our Spatium 3D engine. The article will focus on one more of the optimization algorithms – the search and removal of redundant vertices from 3D models.
The material may be of interest to engineers involved in 3D design and development.
In computer graphics, geometry can be specified in various ways: by a list of faces, a table of angles, a vertex or "winged-edge" representation, and so on. Therefore, to work with different model formats in a single space, we first need to bring geometries of different representations to some unified description. In general form, such a description is a set of vertex positions (points in space with three coordinates x, y, z) and vertex normals (vectors with three components x, y, z) associated with those positions. If you connect three vertices with segments, you get a triangle. The graphics processor can either fill this triangle with some color or stretch a texture over it. The normals are needed to correctly compute reflections and shadows under lighting (without light we would not see anything). A combination of such triangles forms the so-called "mesh": a figure that defines the shape of a polyhedral object in space.
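The unified description above can be sketched in a few lines. This is an illustrative Python sketch (the engine itself is not Python, and the `Geometry` name is my own): two parallel arrays of positions and normals, where every three consecutive vertices form one triangle of the raw, unindexed mesh.

```python
# Minimal sketch of the unified geometry description assumed in the
# article: a flat list of vertex positions and matching vertex normals.
# Every three consecutive vertices form one triangle ("raw", unindexed).
from dataclasses import dataclass

@dataclass
class Geometry:
    positions: list  # [(x, y, z), ...]
    normals: list    # [(nx, ny, nz), ...], same length as positions

    def triangle_count(self):
        return len(self.positions) // 3

# A unit square in the XY plane as two triangles, all normals along +Z:
square = Geometry(
    positions=[(0, 0, 0), (1, 0, 0), (1, 1, 0),
               (0, 0, 0), (1, 1, 0), (0, 1, 0)],
    normals=[(0, 0, 1)] * 6,
)
print(square.triangle_count())  # 2
```

Note that the vertex (0, 0, 0) appears twice: in raw form, shared corners are simply repeated, which is exactly the redundancy the rest of the article attacks.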
The more complex and detailed the shape of a figure, the more vertices and triangles it takes to describe it. The number of triangles can grow further because of tessellation or export to a format external to the modeling environment. In all these cases, the geometry may end up containing more triangles than the minimum necessary and sufficient for the graphics card to render it correctly. Memory consumption grows, but the extra information has no practical use: the surplus triangles formed by redundant vertices do not affect the shape of the geometry. So we would like to remove these redundant vertices to reduce the memory spent on storing, transferring and processing the geometry. For brevity, we will call this optimization "geometry filtering". Let us immediately point out the fundamental difference from another optimization that also removes vertices from the geometry: simplification. Simplification changes the shape of the geometry, while filtering does not.
So by redundant we will mean those vertices whose removal from the geometry does not change its shape.
In addition, there are several restrictions:
1. Geometry for us is two arrays: an array of vertex coordinates (positions) and an array of vertex normals. So we work not just with coordinates, but with coordinate-normal pairs. If several vertices share the same position but their normals differ, for us these are different vertices.
2. Vertices have a transformation matrix. We will be interested in the scale component of this matrix.
3. Polygons can have holes.
4. The optimization must preserve the shape of the original geometry.
Our optimization will consist of several stages. Next, we will consider each of them in more detail.
Stage 1 – Finding and removing degenerate triangles
By degenerate triangles we mean triangles in which two or more vertices coincide, or all three vertices lie on one straight line, i.e. triangles of zero area.
Why look for them? Degenerate triangles do not define any shape, but use memory. Therefore, we will start clearing the geometry with them.
Let's keep restrictions 1 and 2 in mind. We decide what positioning accuracy is acceptable for us: for this, we determine the order of the characteristic distances (mm, cm, m, km). If your subject area is factories hundreds of meters in size, then drawing triangles with micron precision is clearly redundant (a difference of 7-9 orders of magnitude).
Therefore, we round the coordinates to the established precision, taking the scale into account, check for signs of degeneracy, and delete the degenerate triangles.
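Stage 1 can be sketched as follows. This is an illustrative Python sketch under my own assumptions (function and parameter names are hypothetical; `decimals` encodes the chosen positioning accuracy): round each vertex, then drop triangles whose rounded vertices coincide or are collinear.

```python
# Hedged sketch of Stage 1: round coordinates to the chosen precision,
# then drop triangles whose rounded vertices coincide or are collinear
# (zero-area triangles). Assumes raw geometry: 3 vertices per triangle.

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def remove_degenerate(positions, normals, decimals=3):
    out_pos, out_nrm = [], []
    for i in range(0, len(positions), 3):
        a, b, c = (tuple(round(x, decimals) for x in p)
                   for p in positions[i:i+3])
        area2 = _cross(_sub(b, a), _sub(c, a))  # zero iff collinear
        if a == b or b == c or a == c or area2 == (0.0, 0.0, 0.0):
            continue  # degenerate: coincident or collinear vertices
        out_pos.extend(positions[i:i+3])
        out_nrm.extend(normals[i:i+3])
    return out_pos, out_nrm
```

For example, with `decimals=3` (millimeter precision if units are meters), a sliver triangle whose apex is only a micron off the base edge rounds to a collinear triple and is removed.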
Stage 2 – Getting the topology of the geometry
The topology of the geometry will be needed to find various areas, such as polygon boundaries, groups of adjacent triangles, etc.
We build triangles from position-normal pairs. Then for each triangle:
We compute the face normal according to the right-hand rule and orient it to match the normals of the triangle's vertices.
We build the triangle's edges and remember them; what matters to us is the hash of each edge's midpoint.
After all the triangles are built, we fill in the dictionaries "vertex – triangles containing this vertex" and "triangle – triangles adjacent to it". These will come in handy for separating candidate polygons into contiguity areas and determining their boundaries (restriction 3).
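The two dictionaries can be built in one pass over the triangles. This is an illustrative Python sketch under my own naming (`build_topology` and the rounding parameter are hypothetical); edges are keyed by their rounded midpoint, as the text describes, and two triangles are adjacent when they share an edge key.

```python
# Hedged sketch of Stage 2: build "vertex -> triangles" and
# "triangle -> adjacent triangles" maps. Edges are identified by a
# (rounded) midpoint key; note that in rare cases two distinct edges
# can share a midpoint, so a production version would need extra checks.
from collections import defaultdict

def build_topology(positions, decimals=6):
    def midpoint_key(p, q):
        return tuple(round((p[k] + q[k]) / 2.0, decimals) for k in range(3))

    vertex_to_tris = defaultdict(set)
    edge_to_tris = defaultdict(set)
    for t in range(len(positions) // 3):
        a, b, c = positions[3*t:3*t+3]
        for v in (a, b, c):
            vertex_to_tris[v].add(t)
        for p, q in ((a, b), (b, c), (c, a)):
            edge_to_tris[midpoint_key(p, q)].add(t)

    tri_to_neighbors = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            tri_to_neighbors[t] |= tris - {t}
    return vertex_to_tris, tri_to_neighbors
```

For the two-triangle square from earlier, the shared diagonal makes triangles 0 and 1 neighbors, and the corner (0, 0, 0) maps to both triangles.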
Stage 3 – Search for polygons with potentially redundant vertices
Since the shape of the geometry cannot be changed (restriction 4), we will look for redundant vertices within a polygon. A polygon here is a set of triangles lying in the same plane.
But first we need to find a suitable polygon. How? Remember that every vertex has a normal, and for every point of a plane the normal is the same. So we need to group the vertices by their normals and then check several conditions:
The number of points in the group must be more than three (more than one triangle in the plane).
The number of points in a group must be a multiple of three (the geometry is in a “raw” form, there is no vertex indexing).
All points in a group must not belong to the same line.
All points in the group must belong to the same plane – vertices can have the same normals, but lie in several parallel planes with the same orientation in space.
In addition, even when they lie in the same plane, vertices with the same normals can belong to different polygons. Therefore, they need to be separated into "connectivity areas": groups of triangles with shared edges.
You can first group the triangles themselves according to the normals, and group the vertices according to the normals already inside the group of triangles.
Those groups of vertices that satisfy all the above conditions become candidate polygons for filtering. Let’s go further.
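The grouping by normals and the coplanarity check from the conditions above can be sketched as follows. This is an illustrative Python sketch under my own assumptions (names are hypothetical; degenerate triangles are assumed already removed in Stage 1, so the face normal never has zero length): triangles are grouped by their quantized unit normal together with the plane offset d = n·p, which separates parallel planes with the same orientation.

```python
# Hedged sketch of Stage 3: group triangles by (quantized) face normal
# and plane offset. Equal normals alone are not enough, because the
# vertices may lie in several parallel planes; the offset d = n . p
# distinguishes them. Connectivity-area splitting (Stage 2 maps) would
# follow as a separate step.
from collections import defaultdict

def group_coplanar(positions, decimals=6):
    def face_normal(a, b, c):  # right-hand rule, then normalize
        u = tuple(b[k] - a[k] for k in range(3))
        v = tuple(c[k] - a[k] for k in range(3))
        n = (u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0])
        ln = sum(x * x for x in n) ** 0.5  # nonzero: no degenerate tris
        return tuple(round(x / ln, decimals) for x in n)

    groups = defaultdict(list)  # (normal, offset) -> triangle indices
    for t in range(len(positions) // 3):
        a, b, c = positions[3*t:3*t+3]
        n = face_normal(a, b, c)
        d = round(sum(n[k] * a[k] for k in range(3)), decimals)
        groups[(n, d)].append(t)
    # Only groups with more than one triangle can contain redundancy.
    return {k: v for k, v in groups.items() if len(v) > 1}
```

Two coplanar triangles in the z = 0 plane end up in one candidate group, while a triangle in the parallel z = 1 plane, despite the identical normal, falls into a different (singleton) group and is discarded.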
Stage 4 – Finding and Removing Redundant Vertices
For each candidate polygon, you need to find the vertices that form the boundaries of the polygon: external (shell) and internal (holes).
The vertices that are not included in the list of boundary vertices are considered redundant – they are located in the “body” of the polygon and do not carry useful information about its shape.
For the boundary vertices, we build segments and check whether consecutive vertices belong to the same straight line. If more than two consecutive boundary positions are collinear, the positions "inside" the segment are redundant: they also say nothing about the shape of the polygon.
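The first part of Stage 4, separating boundary from interior vertices, can be sketched like this. This is an illustrative Python sketch under my own naming: within one candidate polygon, an edge used by exactly one triangle is a boundary edge (of the shell or of a hole), and any vertex that never appears on a boundary edge sits in the body of the polygon and is redundant. The collinearity check on the boundary would be a further step, not shown here.

```python
# Hedged sketch of Stage 4 (interior vertices only): boundary edges of
# a coplanar triangle group are those used by exactly one triangle;
# vertices that touch no boundary edge lie in the polygon's "body"
# and carry no information about its shape.
from collections import Counter

def interior_vertices(triangles):
    # triangles: list of 3-tuples of vertex positions, all coplanar
    edge_use = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edge_use[frozenset(e)] += 1  # undirected edge
    boundary_edges = [e for e, n in edge_use.items() if n == 1]
    boundary_verts = set().union(*boundary_edges) if boundary_edges else set()
    all_verts = {v for tri in triangles for v in tri}
    return all_verts - boundary_verts  # redundant interior vertices
```

For a square fan of four triangles around a center point, every perimeter edge is used once and every spoke edge twice, so only the center is reported as redundant.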
Stage 5 – Retriangulation of cleared polygons
Since the video card works with triangles, after deleting the vertices the polygon must be turned back into a set of triangles, i.e. triangulated.
Since we are dealing with polygons that can have holes, we must remember to take this into account. A suitable algorithm for us is constrained Delaunay triangulation: the boundary edges are taken into account and not lost.
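Constrained Delaunay itself is too long to sketch here, so as a simplified stand-in the snippet below shows classic ear clipping for a hole-free 2D contour. To be clear about the substitution: this is not Delaunay, does not handle holes, and is only meant to illustrate turning a cleaned boundary loop back into triangles; names and the CCW-input assumption are mine.

```python
# Illustrative ear clipping for a simple (hole-free) CCW 2D polygon.
# NOT the constrained Delaunay triangulation the article uses; it is a
# minimal example of re-triangulating a cleaned boundary loop.

def ear_clip(poly):
    def cross(o, a, b):  # z of (a-o) x (b-o); >0 means left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def inside(p, a, b, c):  # strictly inside CCW triangle abc
        return (cross(a, b, p) > 0 and cross(b, c, p) > 0
                and cross(c, a, p) > 0)

    verts = list(poly)
    tris = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i-1], verts[i], verts[(i+1) % n]
            if cross(a, b, c) <= 0:
                continue  # reflex corner cannot be an ear
            if any(inside(p, a, b, c) for p in verts if p not in (a, b, c)):
                continue  # another vertex blocks this ear
            tris.append((a, b, c))
            del verts[i]  # clip the ear
            break
        else:
            break  # degenerate input; bail out of the sketch
    tris.append(tuple(verts))
    return tris
```

A convex quad comes out as two triangles; a real implementation would instead feed the shell and hole loops to a constrained Delaunay routine so that all boundary edges survive.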
Hooray! The polygon is cleaned, re-triangulated and ready to go to the graphics card. It only remains to remove all occurrences of the found redundant vertices from the original geometry and then add the new data produced by the triangulation. Done: we can save the mesh and use it instead of the original one.
Shape-preserving redundant vertex removal is an algorithm that can be useful if your system is dealing with a lot of unoptimized geometries that cannot be simplified.
We studied its effectiveness on models from our subject area, designed in CAD. The results show that for such models this optimization can be quite expensive in compute and time (especially for large geometries with several million vertices), while the final effect is rather small: less than 0.5% of all vertices turn out to be redundant. As a rule, these are circle centers (Figure 5, top).
It is possible that for models obtained by other methods (scanning, artistic modeling, multi-stage conversion, etc.), the effect will be more pronounced, but we did not conduct such studies.
Therefore, the decision whether to use this algorithm should be made individually for each project, based on a preliminary estimate of the amount of "empty" data in the models of a particular subject area.