What is a texel?

Hi all! My name is Grigory Dyadichenko and I am a technical producer. Today I want to talk about textures: what UV mapping, mip maps and other basic concepts of computer graphics are, and what a texel actually is. If you are interested in this topic, welcome under the cut!

Many people underestimate the basic concepts, and many disputes arise precisely because of a different understanding of some nuance or term. Take a wonderful, simple and basic concept: the texel. Yet on Wikipedia it is described, in my opinion, ambiguously, and for some reason it is tied to a three-dimensional object. The concepts are related, but not in that way and not for that reason. Plus, for some reason, there are extra words like “the minimum unit of the texture of a three-dimensional object”. What does that mean? Completely unclear.

Texel

So what is a texel? First, do not confuse it with Texel, which is a breed of sheep. A texel is a pixel of a texture. Nothing more. To understand it better, it is worth asking why the term appeared at all. After all, there is already the word pixel, so why call a pixel in a picture something else?

In rendering, otherwise everything would sound like “a pixel walks across a pixel and sees pixels made of pixels”. The point is that besides the pixel in the texture there is also the pixel on the monitor screen, and the texel appeared precisely so that the two are not confused. The term becomes convenient when we discuss sampling, texture filtering and mip mapping. All of these processes relate to how bitmap graphics rendering works and what happens when a texture is rendered onto a screen of a larger (or smaller) resolution. After all, if you have a 1920×1080 screen and you display a 2048×1024 image across the entire screen, you need to determine somehow which pixels of the image fall into which screen pixels. Monitors do not have fractional diodes: the number of diodes is an integer, and each outputs a single color. Therefore we need mechanisms for displaying our picture on the screen.

And in 3D it is a little more complicated still, since the picture is “stretched” over an arbitrary shape, and its on-screen resolution changes depending on how close the model is to the camera, and so on. You run into such problems in VR, AR and games. So, we have figured out the texel. Let’s deal with the rest.

To get from a 2D picture to displaying it on a monitor and in 3D, we need one more concept. It is called UV coordinates.

UV coordinates

The simplest analogy for understanding what an unwrap is, is origami. Imagine that you decorate an origami sheet in advance according to its folding scheme, and then fold something out of it. The sheet you originally painted is your texture, and the assembled little animal is the model. In origami it is not customary to cut anything, but in computer graphics it is both possible and necessary, which is why the unwrap is not a single connected piece.

Now an example from computer graphics. It looks something like this.

Texture (whose pixels are our texels):

Mesh of the 3D model:

The model with the texture and mesh displayed:

UV coordinates (or texture coordinates) are coordinates written into the vertices of a model that determine how the texture will be applied to it. They give us a correspondence between coordinates in space and the way the texture should be displayed there. They usually take values from (0; 0) to (1; 1). Getting a color from a texture using texture coordinates is called sampling.
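As a minimal illustration (a sketch in plain Python; the layout is my own, not any engine’s vertex format), here is a quad whose vertices carry both a 3D position and a texture coordinate:

```python
# A quad where each vertex stores a position in space and a UV in [0, 1].
# This layout is illustrative only, not a specific engine's vertex format.
quad = [
    # (x, y, z)          (u, v)
    ((-1.0, -1.0, 0.0), (0.0, 0.0)),  # bottom-left  of quad -> bottom-left of texture
    (( 1.0, -1.0, 0.0), (1.0, 0.0)),  # bottom-right
    (( 1.0,  1.0, 0.0), (1.0, 1.0)),  # top-right
    ((-1.0,  1.0, 0.0), (0.0, 1.0)),  # top-left
]

# During rendering, the rasterizer interpolates these UVs across the surface,
# and each interpolated UV is used to sample the texture.
```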

And here we have a problem. We have a texture that is displayed on the model. Our texture mapping logic is built on coordinates from 0 to 1, which means they can be fractional. But the texture consists of texels (that is, pixels), which are not fractional: if you have a 128×128 texture, that is exactly how many texels it contains. A tiny sketch below makes the mismatch concrete. So what do we do next?
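For instance (plain Python; the function name is my own):

```python
# Map a UV coordinate in [0, 1] to continuous texel-space coordinates.
def uv_to_texel_space(u, v, width, height):
    return u * width, v * height

x, y = uv_to_texel_space(0.37, 0.81, 128, 128)
print(x, y)  # 47.36 103.68 -- fractional, but there is no texel number 47.36
```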

Texture Filtering

We have three players in this task: screen pixels, UV coordinates and texels. Basically, three cases arise.

  1. A texel covers exactly one screen pixel after the conversion from UV and back (this almost never happens in graphics)

  2. A texel covers many pixels (this is called magnification)

  3. A pixel covers many texels (this is called minification)

Magnification

Magnification, as we said above, is when one texel covers many pixels. Filtering is used to decide what color a pixel will show. The first of the filters is nearest neighbor, also called a box filter; in Unity it is called Point. It is easier to explain by showing how it looks in pictures, so let’s take a low-resolution picture of a cat and enable Point filtering.

This kind of filtering is great for pixel art games, because otherwise the pixel art simply does not work: it behaves strangely and gets smeared. Its logic is simple: for a given UV coordinate we choose the color of the texel closest to it.
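A minimal sketch of that logic (Python with numpy; the function and variable names are my own, not any engine’s API):

```python
import numpy as np

# Point (nearest-neighbor) sampling, assuming a texture stored as a numpy
# array of shape (height, width, 3) and UV values in [0, 1].
def sample_point(texture, u, v):
    h, w = texture.shape[:2]
    # Flooring u * w picks the texel whose center is nearest to the UV point.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

texture = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(sample_point(texture, 0.37, 0.81))  # color of texel (47, 103)
```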

The second type of filtering is linear filtering, which you may also know as bilinear filtering. It looks like this.

In this case, the colors are mixed. The 4 neighboring texels closest to our UV coordinate are taken, and their colors are blended depending on the distance from each texel’s center to the UV coordinate: the closer the texel, the greater the influence of its color.
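Here is a sketch of that blend, under the usual assumptions (texel centers at half-integer offsets, clamp-to-edge addressing); the names are my own, not a specific graphics API:

```python
import numpy as np

# Bilinear filtering over a texture stored as a numpy array (height, width, 3).
def sample_bilinear(texture, u, v):
    h, w = texture.shape[:2]
    # Shift by 0.5 so that integer coordinates land on texel centers.
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0  # fractional distances = blend weights

    def texel(ix, iy):
        # Clamp to the texture edge instead of wrapping.
        return texture[min(max(iy, 0), h - 1), min(max(ix, 0), w - 1)].astype(float)

    top    = (1 - fx) * texel(x0, y0)     + fx * texel(x0 + 1, y0)
    bottom = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1)
    return (1 - fy) * top + fy * bottom

texture = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(sample_bilinear(texture, 0.37, 0.81))
```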

The third type of filtering is cubic (bicubic). It has almost no hardware support, so you will not find it in Unity either. Trilinear filtering in Unity is a modification of bilinear filtering that uses information from the mip maps (we will look at what those are a little later).

Depending on the goals, the appropriate filtering is selected.

Minification

The pictures aren’t the prettiest, but I hope they are clear. Minification, as we said above, is when many texels fall into one pixel. To get the resulting color “fairly”, you would need to compute the influence of every covered texel on the pixel’s color. That is expensive for the GPU and impossible to do efficiently in real time. Because of these hardware limitations, the same filtering methods are used as in magnification, but all of them one way or another lead to so-called aliasing (the “staircase” effect you have probably seen many times) or temporal aliasing. It looks something like this.
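To see why the “fair” approach is expensive, here is a brute-force sketch (Python with numpy; the uniform box footprint and all names are my simplification, real filters weight texels non-uniformly):

```python
import numpy as np

# Average every texel that falls into one pixel's UV footprint.
def average_footprint(texture, u0, v0, u1, v1):
    h, w = texture.shape[:2]
    x0, x1 = int(u0 * w), max(int(u1 * w), int(u0 * w) + 1)
    y0, y1 = int(v0 * h), max(int(v1 * h), int(v0 * h) + 1)
    region = texture[y0:y1, x0:x1].astype(float)
    return region.mean(axis=(0, 1))

texture = np.random.randint(0, 256, (2048, 2048, 3), dtype=np.uint8)
# One screen pixel covering 1% of the texture per axis already averages
# roughly 20 x 20 = 400 texels -- and that is for a single pixel.
print(average_footprint(texture, 0.40, 0.40, 0.41, 0.41))
```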

One could dig for a long time into what aliasing is and what kinds of it exist, so we will leave that aside. But, as is clear from the name, MSAA and every other anti-aliasing algorithm corrects exactly this problem, and now you know where it comes from.

But anti-aliasing algorithms are not the only way to deal with this effect. And here we smoothly arrive at the next concept: mip maps.

Mip texturing

Image taken from https://learnopengl.com/Getting-started/Textures

If there are more texels than pixels, we cannot compute the pixel we need in real time, but we can precompute it by making a series of textures, each smaller than the previous one. These textures are called mip maps. For objects on the screen that produce a minification case, we can then choose a texture with fewer texels and thus partially overcome the aliasing problem. It also makes sampling the texture cheaper in terms of memory access.
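A sketch of the idea (Python with numpy; a simple 2×2 box average over a square power-of-two texture, with names of my own, not any engine’s API):

```python
import numpy as np

# Precompute a mip chain: each level halves the previous one in each axis.
def build_mips(texture):
    mips = [texture.astype(float)]
    while mips[-1].shape[0] > 1:
        prev = mips[-1]
        # Each texel of the next level is the mean of a 2x2 block of this one.
        mips.append((prev[0::2, 0::2] + prev[1::2, 0::2] +
                     prev[0::2, 1::2] + prev[1::2, 1::2]) / 4.0)
    return mips

texture = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
mips = build_mips(texture)
print([m.shape[0] for m in mips])  # [256, 128, 64, 32, 16, 8, 4, 2, 1]

# Level selection: if one screen pixel covers about 8 x 8 texels of level 0,
# log2(8) = 3 suggests sampling mip level 3 (the 32 x 32 version) instead.
print(int(np.log2(8)))  # 3
```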

In conclusion

I tried to explain in simple language the basic concepts of the texture pipeline in rendering and the meaning of the term texel. If you liked it and found it interesting, upvote the article: that way I will know this format is worth continuing, and I can break down other rendering concepts, such as the notorious aliasing.

Subscribe to my Telegram blog if you are interested in Unity development.

Sources:

Learn OpenGL. Textures: https://learnopengl.com/Getting-started/Textures
Real-Time Rendering, 4th Edition (2018), Tomas Akenine-Möller et al.
