
Render Loop is spinning – frames are muddied

Good day, dear readers. Here I begin my series of articles on working with graphics in iOS.

My plan is to understand how the basic rendering mechanics work and then delve into things such as AVFoundation and Metal.

But for now, I want to understand how the rendering of our favorite buttons, the ones we never tire of styling, works out of the box, and how to achieve 60 frames per second: the magic number that makes anyone desire our interface.

What is FPS?

As the wiki says, frame rate is “the number of frames changed per unit of time in cinematography, television, and computer graphics.”

To begin with, remember that iPhone and iPad displays refresh at 60 Hz, the latest iPad Pro displays at 120 Hz, and Apple TV can match the refresh rate of the TV or of the movie being played.

A display with a 60 Hz refresh rate updates 60 times per second; this number is constant.

The application must be able to produce frames at the display’s rate: 60 Hz means 60 frames per second, or ~16.67 ms to render each frame.

Note that this does not necessarily mean the application renders 60 times per second; when the content does not change, there is no need to redraw it.
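If you want to check what frame rate the application actually achieves, one option is to count CADisplayLink callbacks, since the link fires once per vsync tick that the app handles. Below is a minimal sketch; the FPSCounter class name is my own, not a system API.

```swift
import UIKit

/// A minimal FPS meter: counts how many CADisplayLink callbacks
/// (vsync ticks our app actually handled) arrive per second.
final class FPSCounter {
    private var displayLink: CADisplayLink?
    private var frameCount = 0
    private var windowStart: CFTimeInterval = 0

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        if windowStart == 0 { windowStart = link.timestamp }
        frameCount += 1
        let elapsed = link.timestamp - windowStart
        guard elapsed >= 1 else { return }
        print(String(format: "%.1f FPS", Double(frameCount) / elapsed))
        frameCount = 0
        windowStart = link.timestamp
    }
}
```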

Render Loop – where does everything go?

The render loop is the cycle that iOS goes through to get each frame of our interface onto the screen.

Its life cycle is as follows:

  • Getting an event

  • Building the render tree

  • Submitting it to the Render Server

  • Changing the frame

General cycle

Event

First, some event occurs (a touch, a callback from the network, a keyboard action, a timer firing).

Events can be triggered from anywhere in the hierarchy.

Let’s say we want to change the bounds of our view; setNeedsLayout is then called, and the system understands that the view’s layout needs to be recalculated on the next update pass.
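To make the ordering concrete, here is a small sketch (the view named box is just an example): changing geometry or calling setNeedsLayout() only marks the view as dirty, and the layout pass itself runs later in the same run-loop turn unless we force it with layoutIfNeeded().

```swift
import UIKit

// The view name `box` is hypothetical; any UIView behaves the same way.
let box = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

box.bounds.size = CGSize(width: 200, height: 120) // schedules a layout update
box.setNeedsLayout()                              // explicit way to mark the view dirty

// Nothing has been recalculated yet. If the new layout is needed right now
// (for example, to read a subview's frame), force the pass:
box.layoutIfNeeded()
```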

Commit Transaction

Preparation and the commit itself

At the heart of all layer and animation updates is Core Animation. This framework works not only within the application itself, but also across applications, for example when we switch from one application to another.

The animation itself takes place at another stage, outside of our application. This stage is called the render server.

At the commit transaction stage, the layer tree is prepared and committed as part of an implicit transaction.

Transactions are the mechanism that Core Animation uses to batch property updates. Layer properties do not change on screen instantly; instead, the changes are staged in a transaction and wait for its commit.
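Here is a quick sketch of working with transactions explicitly; the layer stands for any CALayer you own. Properties set inside the transaction are batched and only sent onward when the transaction is committed.

```swift
import UIKit

let layer = CALayer()

CATransaction.begin()
CATransaction.setAnimationDuration(0.5)       // applies to implicit animations in this transaction
CATransaction.setCompletionBlock {
    print("Changes committed and animated")
}

layer.opacity = 0.5                           // staged, not yet on screen
layer.backgroundColor = UIColor.systemBlue.cgColor

CATransaction.commit()                        // encode and send to the render server

// To change properties without any implicit animation:
CATransaction.begin()
CATransaction.setDisableActions(true)
layer.cornerRadius = 8
CATransaction.commit()
```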

When we want to perform an animation, we first go through 4 steps:

  • layout – At this stage, we prepare the views and their properties (frame, background color, border, and others).
    Once the layout has been calculated, the system calls the setNeedsDisplay method.

  • display – updates the CGContext. This drawing may involve calling the drawRect: or drawLayer: methods of each of the subviews.

  • prepare – At this point, Core Animation prepares to send the animation data to the render server. This is also where image preparation and decoding take place.

  • commit – The final step, where Core Animation packs up the layers and animation properties and sends them via interprocess communication (IPC) to the render server for rendering.

Core Animation merges the changes into a transaction, encodes them, and commits them to the render server.
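For reference, this is roughly where the layout and display phases surface in our own code; the view class below is purely illustrative.

```swift
import UIKit

final class BadgeView: UIView {

    // Layout phase: called after setNeedsLayout() or a geometry change,
    // once per run-loop turn, right before the commit.
    override func layoutSubviews() {
        super.layoutSubviews()
        // position subviews, update frames, etc.
    }

    // Display phase: called after setNeedsDisplay(); the view receives a
    // CGContext (a backing store) to draw into.
    override func draw(_ rect: CGRect) {
        UIColor.systemRed.setFill()
        UIBezierPath(ovalIn: rect).fill()
    }
}
```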

Okay, we’ve prepared our render tree and passed it to the next step.


Render Server

Now we are in the render server – a separate process that issues rendering commands to the GPU via OpenGL or Metal. It is responsible for rendering our layers into an image.

Render Prepare

At this stage, the layer tree is traversed and compiled into a pipeline of drawing commands for the GPU, walking recursively from the parent layer down to its children.

Render Execute

The layer pipeline is then handed off to rendering, where each layer is composited into the final texture. Up to this point, all calculations took place on the CPU; from here the work passes into the hands of the GPU.

Some layers take longer to render than ordinary layers, and this is most often the bottleneck worth optimizing.

Once the GPU has rendered the image, it is ready to be displayed at the next VSYNC.

VSYNC is the deadline for each phase of our render loop; at each VSYNC the next frame is swapped onto the screen.

To make better use of the hardware, frames are processed in parallel: while the CPU is preparing frame N, the GPU is rendering the previous frame N-1.

We have defined what the render loop is. Now let’s look at what causes lag and dropped frames.


Performance Issues

If we overload our application or don’t manage resources well, we can run into problems like these:

  • Dropped frames

  • Fast battery drain

  • Slow responsiveness

Therefore, it is worth going through the tips below, which help avoid these problems.

As we remember, for the render loop we have operations on the CPU and on the GPU.

We already know that the work is arranged so that the CPU and GPU run in parallel with each other: while the CPU is preparing frame N, the GPU is rendering the previous frame N-1, and so on.

Let’s move on to the main problems and bottlenecks that can affect performance.

The main thread executes the code responsible for touch events and for working with the UI, and it also drives screen rendering. Most modern smartphones render at 60 frames per second, which means each frame’s work should complete within 16.67 milliseconds (1000 milliseconds / 60 frames). Therefore, keeping the main thread fast is important.

If any operation takes more than 16.67 milliseconds, frames will be dropped, and app users will notice this when animations play. On some devices the budget is even tighter: for example, on the 2017 iPad Pro the screen refresh rate is 120 Hz, so there are only about 8.3 milliseconds to complete the work for each frame.
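The practical consequence is that anything slow should leave the main thread. Below is a hedged sketch, assuming an image loaded from a URL, where only the final UI update takes main-thread time.

```swift
import UIKit

func reloadAvatar(into imageView: UIImageView, from url: URL) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Slow part: I/O and JPEG decoding stay off the main thread.
        guard let data = try? Data(contentsOf: url),
              let image = UIImage(data: data) else { return }

        DispatchQueue.main.async {
            // Fast part: only the UI update touches the main thread.
            imageView.image = image
        }
    }
}
```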

Offscreen Rendering

What is offscreen rendering? In essence, it is drawing that has to be performed into a separate, off-screen buffer before the result can be composited on screen.

Under the hood, it looks like this: during the drawing of a layer that needs off-screen calculations, the GPU stops rendering and transfers control to the CPU. In turn, the CPU performs all the necessary operations (for example, creates a shadow) and returns control to the GPU with the layer already drawn. The GPU renders it and the drawing process continues.

In addition, offscreen rendering requires allocating extra memory for so-called backing storage, which is not needed for layers drawn with hardware acceleration.

Our GPU will need extra render passes if we use the properties below:

Shadows

Everything is simple here: the renderer does not have enough information to draw the shadow together with the layer itself, so the shadow is calculated in a separate pass.
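One common mitigation is to give the layer an explicit shadowPath, so the renderer does not have to derive the shadow’s shape from the layer’s contents. A small sketch:

```swift
import UIKit

let card = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 120))
card.backgroundColor = .white

card.layer.shadowColor = UIColor.black.cgColor
card.layer.shadowOpacity = 0.3
card.layer.shadowOffset = CGSize(width: 0, height: 2)
card.layer.shadowRadius = 6

// The key line: describe the shadow's shape up front, so no extra
// offscreen pass is needed to figure it out.
card.layer.shadowPath = UIBezierPath(
    roundedRect: card.bounds,
    cornerRadius: 12
).cgPath
```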

Masks for CALayer

The renderer has to display the layer subtree under the mask, while also avoiding writing any pixels outside the mask.

Therefore, all of the image’s pixel data has to be kept until the pixels under the mask are calculated and placed into the final texture.

These off-screen calculations can store many pixels that the user will never see.

Corner radius

This case is related to masking: rounding a layer’s corners can also be calculated off screen.

If the renderer does not have enough information, it may draw the view completely off screen and then copy only the pixels inside the rounded shape into the final texture.

Some people advise against using the cornerRadius property at all, since applying view.layer.cornerRadius (together with clipping) can result in offscreen rendering. Instead, you can draw the rounded shape yourself with UIBezierPath, in a way similar to working with a CGBitmap context for JPEG decoding; a UIGraphics (Core Graphics) context is used for this.
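Here is one shape such an approach could take: pre-render the rounded image once with UIBezierPath and UIGraphicsImageRenderer instead of masking the layer on every frame. The rounded(cornerRadius:) extension below is my own sketch, not a system API.

```swift
import UIKit

extension UIImage {
    func rounded(cornerRadius: CGFloat) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            let rect = CGRect(origin: .zero, size: size)
            UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius).addClip()
            draw(in: rect)
        }
    }
}

// Usage: the image view then shows already-rounded bitmap content,
// so no mask has to be applied during compositing.
// avatarView.image = avatarImage.rounded(cornerRadius: 12)
```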

Visual effects

This work is associated with two effects: blur and vibrancy (UIBlurEffect and UIVibrancyEffect).

To apply the effect, the renderer must copy the content under the visual effect into another texture stored in an off-screen buffer, apply the effect to it, and then copy the result back into the render buffer.

These four types of off-screen calculations greatly slow down rendering.

Too-large images

If you are trying to draw an image that is larger than the maximum texture size supported by the GPU (usually 2048×2048 or 4096×4096, depending on the device), the CPU has to pre-process the image, and this can hurt performance every time you draw many such images.
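One way to avoid handing the GPU an oversized texture is to downsample with ImageIO at decode time, so the bitmap is never larger than what will actually be displayed. A hedged sketch:

```swift
import ImageIO
import UIKit

func downsampledImage(at url: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
    // Don't decode the full-size image just to read its metadata.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }

    let maxDimension = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,        // decode now, on this thread
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimension
    ] as CFDictionary

    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```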

Color mixing

Blending is a frame rendering operation that determines the final color of a pixel. Each UIView (or, to be precise, CALayer) that overlaps that pixel contributes to its final color through the combination of properties such as alpha, backgroundColor and opaque across all the overlying views.

Let’s start with the most used UIView properties like UIView.alpha, UIView.opaque and UIView.backgroundColor.

Opacity vs Transparency

UIView.opaque is a hint for the renderer that lets it treat the view as a completely opaque surface, thereby improving rendering performance. Opacity means: “draw nothing under this surface.” UIView.opaque allows the renderer to skip drawing the layers underneath, so no color blending occurs and the view’s topmost color is used as is.

Alpha

If alpha is less than 1, then opaque will be ignored even if it is equal to YES.

Even though opaque defaults to YES, the result is color blending, because we made the view transparent by setting its alpha to less than 1.

Make layers opaque whenever possible – especially where layers of the same color overlap each other. The layer’s opacity property should be set to one, and always make sure the background color is set and is itself opaque.
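A small sketch of an opaque configuration (a label here, but the same applies to any view):

```swift
import UIKit

let titleLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 44))
titleLabel.text = "Render Loop"
titleLabel.textColor = .black

titleLabel.isOpaque = true              // hint: nothing below needs to show through
titleLabel.alpha = 1.0                  // alpha < 1 would force blending anyway
titleLabel.backgroundColor = .white     // an opaque background color, not .clear
titleLabel.layer.opacity = 1.0
```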

Overriding the drawRect Method

If we override the drawRect method, we introduce a significant cost even before we draw anything inside it.

To support arbitrary drawing into the layer’s contents, Core Animation must create an in-memory backing image equal in size to the view.

Then, once the drawing is complete, it must pass this data via IPC to the render server. On top of this overhead, Core Graphics rendering is very slow anyway, and it’s definitely not something you want to do in a performance-critical situation.
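To make the trade-off concrete, here is a sketch contrasting a draw(_:) override with the layer-backed equivalent. Both show a solid background, but only the first allocates a view-sized backing store and runs Core Graphics drawing.

```swift
import UIKit

// Expensive variant: custom drawing forces an in-memory backing image
// the size of the view, filled via Core Graphics.
final class DrawnBackgroundView: UIView {
    override func draw(_ rect: CGRect) {
        UIColor.systemBlue.setFill()
        UIRectFill(rect)
    }
}

// Cheap variant: no draw(_:) override; the layer's backgroundColor is
// composited directly by the render server.
let plainView = UIView()
plainView.backgroundColor = .systemBlue
```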

Image decoding and image downsampling

In general, decoding JPEG images should be done in the background. Most third-party libraries (AsyncDisplayKit, SDWebImage, etc.) do this by default. If you don’t want to use a framework, you can do the decoding manually: write an extension on UIImage in which you create a graphics context and draw the image into it, as sketched below.
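One possible shape of that extension (the forceDecoded() name is mine): drawing the image into a bitmap context forces the JPEG to be decoded right there, so the work can happen on a background queue instead of lazily on the main thread.

```swift
import UIKit

extension UIImage {
    func forceDecoded() -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// Usage: decode off the main thread, then hand the result to the UI.
// DispatchQueue.global(qos: .userInitiated).async {
//     let decoded = image.forceDecoded()
//     DispatchQueue.main.async { imageView.image = decoded }
// }
```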

That covers the main points of the render loop and its optimization. In the future, I plan articles on the frameworks that work with graphics, video and audio.

Subscribe to my telegram channel: https://t.me/iosmakemecry

