Computer system simulators: how close are they to reality?

A simple, accessible introduction to the basic terminology of simulators, as well as the types and levels of detail of models. A quick primer for getting acquainted with this field.


If you had asked me about simulation a while ago, the first thing that would have come to mind is my son, who complains about a stomachache on the eve of a test at school. However, for the past ten years I have been working with simulators of various computer systems, from phones to servers built on microprocessors, SoCs (System-on-Chip), and chipsets of one of the largest manufacturers (whose name, unfortunately, is under NDA), and my idea of simulation has changed. But first things first.

I am sure that many of you have come across simulators, often called virtual machines or hypervisors. Some install Parallels Desktop on a Mac to run Windows from macOS; others use VMware Workstation to run another operating system (OS) inside the one already installed. Those familiar with Linux tend to prefer KVM and QEMU. VirtualBox is also popular. People professionally developing FPGA-based hardware (Field-Programmable Gate Arrays) know VCS from Synopsys and Questa from Mentor Graphics. And yet this is only a small part of what can be called a simulator.

What is a simulator?

A simulator is a model, usually implemented in software, of a real device. Accordingly, simulation is the process of running such a model, reproducing the operation of the device.

In principle, you can model any device, but the most common are simulators of microprocessor-based devices, that is, devices whose central component is a microprocessor, with the rest of the logic built around it. One of the main uses of a simulator is to run programs written for that microprocessor when using the real device is difficult for one reason or another: for example, it may simply not exist yet, as is the case when modeling a future generation of microprocessors.

Airbnb in simulation – guest and host

The code that runs inside the simulator is called the “guest code”; it can be a single “guest program” or a whole “guest operating system”. The simulated system itself is simply called the “guest”. In turn, the computer on which the simulator runs is called the “host”, and the operating system running on the host, inside which the simulator runs, is called the “host OS”.


Thus, we can say that a simulator that implements a certain set of guest system instructions simulates them using the available host system tools.

Simulation and emulation – which name is correct?

A model can reproduce a device with varying degrees of accuracy and detail. Often it is a simulation of only the external behavior of the system as seen by program code. The code does not “care” how exactly a given processor instruction is implemented internally; the main thing is that it works. This kind of simulation is common, straightforward to develop, and quite fast: it runs without noticeable slowdown even on ordinary user computers.

However, this is not enough if we want to know, for example, how long a program will take to run on real hardware. That requires modeling not just the external behavior but also reproducing the internal structure and logic of operation, which can likewise be done with varying degrees of detail and accuracy. It is more correct to call such models emulators: they truly emulate the device rather than merely “simulate” its results.

Creating emulators is much more complicated because of the far greater amount of functionality that must be implemented in the model, and they run much more slowly than simulators of a device's external behavior. With full emulators, booting Windows is out of the question: it could take years. Nobody builds a software emulator of an entire platform; it would take too long and cost too much. Instead, individual components of the system are emulated, such as the central processor, and only part of the workload is run on them. Various hybrid schemes are possible, where one part of the simulator is a high-level model, another is a low-level model, another runs in an FPGA, and yet another is actual hardware.


4 levels of simulation detail

As I wrote above, the most common option is simulation at the level of processor instructions, the so-called ISA (Instruction Set Architecture), or, more precisely, of the results of their execution, i.e., without emulating the internal logic of how this happens in a real processor and without accounting for the execution time of individual instructions. Such simulators are also called functional. This is how VirtualBox, VMware Workstation, Wind River Simics, KVM, and QEMU work. It lets you run programs written for the simulated device conveniently, without extra steps: neither recompilation nor any other manipulation of the programs is required. In such cases, we say that unmodified binary code can be run.
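The idea of a functional simulator can be illustrated in a few lines of code. The sketch below is a toy instruction-level interpreter for a hypothetical three-instruction ISA (the opcodes, register count, and encoding are all made up for illustration): each guest instruction is modeled only by its visible effect on architectural state, with no pipelines or timing.

```python
# Minimal sketch of a functional (instruction-level) simulator for a
# hypothetical toy guest ISA: 4 registers, three instructions. Each
# instruction is modeled only by its architecturally visible result.

def simulate(program):
    regs = [0, 0, 0, 0]           # guest architectural registers
    pc = 0                        # guest program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "li":            # li rd, imm  — load immediate
            rd, imm = args
            regs[rd] = imm
        elif op == "add":         # add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "jnz":         # jnz rs, target — jump if non-zero
            rs, target = args
            if regs[rs] != 0:
                pc = target
                continue
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1
    return regs

# Guest program: compute 2 + 3 into r2.
program = [("li", 0, 2), ("li", 1, 3), ("add", 2, 0, 1)]
print(simulate(program))  # [2, 3, 5, 0]
```

Real functional simulators such as QEMU add binary translation for speed, but the architectural contract is the same: only the result of each instruction matters, not how it is produced.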

At a higher level of abstraction sits the implementation of a particular ABI (Application Binary Interface). In a nutshell, an ABI describes the binary interface between two interacting programs, usually a user program and a library or the OS. An ABI covers calling conventions (how parameters and return values are passed), the sizes of data types, and how system calls are made. How does it work? For example, if a program written for Linux needs to create an additional thread of execution, it calls the pthread_create() function. But what if you provide a library with such a function on Windows and implement the necessary mechanisms for linking the application with the library (dynamic linking)? In that case, you can run Linux applications from Windows: Windows will “simulate” Linux. This is exactly what was done in Windows Subsystem for Linux on Windows 10, which allows unmodified Linux binaries to run on Windows.
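The ABI-level idea can be sketched as a dispatch table that intercepts “guest” system calls and services them with host facilities. The syscall numbers and handlers below are illustrative stand-ins, not the real Linux syscall table and not the actual WSL mechanism:

```python
# Sketch of ABI-level simulation: guest system calls are dispatched to
# host implementations. Numbers and handlers are hypothetical.

HOST_OUTPUT = []                   # stand-in for the host's output channel

def host_write(text):
    HOST_OUTPUT.append(text)       # the "host OS" services the request
    return len(text)               # guest sees a Linux-like return value

def host_exit(code):
    return ("exited", code)

GUEST_SYSCALLS = {
    1: lambda args: host_write(args[0]),   # guest "write" (illustrative)
    60: lambda args: host_exit(args[0]),   # guest "exit" (illustrative)
}

def guest_syscall(number, *args):
    """Dispatch a guest system call to its host implementation."""
    handler = GUEST_SYSCALLS.get(number)
    if handler is None:
        raise OSError(f"unimplemented guest syscall {number}")
    return handler(args)

print(guest_syscall(1, "hello from the guest\n"))  # 21
print(guest_syscall(60, 0))                        # ('exited', 0)
```

The guest binary never knows its “OS” is a translation layer; as long as every call it makes returns what the ABI promises, the illusion holds.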

Now let's look at the lower, more detailed levels of simulation. This is the microarchitecture level, where the real internal algorithms and blocks of the processor are simulated: the instruction decoder, queues, the out-of-order engine, the branch predictor, caches, the scheduler, and the execution units themselves. Such modeling makes it possible to analyze the real execution speed of programs and, for example, to optimize them for existing architectures. And in the case of simulating prototypes of future microprocessors, it allows predicting and evaluating the performance of those devices.
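To make the microarchitecture level concrete, here is a sketch of one such block: a 2-bit saturating-counter branch predictor, a classic textbook scheme (not any specific processor's design). A timing simulator would consult a model like this to decide how many cycles each branch costs; here we merely measure prediction accuracy on a loop-like branch pattern.

```python
# Sketch of a microarchitectural component: a 2-bit saturating-counter
# branch predictor. Counter values 0..3; 2 and above mean "predict taken".

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2              # start in weakly-taken state

    def predict(self):
        return self.counter >= 2      # True = predict taken

    def update(self, taken):          # train on the actual outcome
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
# Branch pattern of a loop run twice: taken 8 times, exit once, taken 8 more.
history = [True] * 8 + [False] + [True] * 8
hits = sum(1 for taken in history
           if (p.predict() == taken, p.update(taken))[0])
print(hits, len(history))  # 16 17
```

The two-bit counter mispredicts only once per loop exit instead of twice, which is exactly the kind of effect a microarchitectural model must capture to estimate real execution time.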

Below microarchitectural simulation lies the level of emulating the logic elements that modern chips are built from. Such emulators can be software-based or hardware-based, using FPGAs. FPGA logic is described at RTL (Register Transfer Level) in languages such as Verilog and VHDL. After compilation, an image (bitstream) is produced, which is then loaded into the FPGA. No soldering iron or electrical-engineering background is required for this: the board is connected to a computer, for example via a USB or JTAG interface, and dedicated software from the FPGA board's manufacturer performs the programming. The cost of such boards ranges from about ten dollars for the simplest options to millions of dollars for cabinet-sized FPGA systems used by large chip-making companies. In such companies, FPGA simulation is the final stage before the RTL goes into production.
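Gate-level emulation can also be sketched in software. The fragment below models a full adder built solely from NAND gates, roughly the form a design takes after synthesis maps RTL onto a chip's logic elements (the netlist here is a standard textbook construction, not taken from any real design):

```python
# Sketch of software gate-level emulation: a full adder built only from
# NAND gates, evaluated one gate at a time.

def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):                     # XOR from four NAND gates
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, cin):
    s1 = xor(a, b)
    total = xor(s1, cin)           # sum bit = a XOR b XOR cin
    carry = nand(nand(a, b), nand(s1, cin))   # = (a AND b) OR (s1 AND cin)
    return total, carry

# Verify against arithmetic over the whole truth table.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert s + 2 * c == a + b + cin
print("full adder verified")
```

An FPGA performs the same gate-by-gate evaluation in parallel at hardware speed, which is why hardware emulation is orders of magnitude faster than such software models.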

If we are talking about simple devices, then with an FPGA image in hand you can approach specialized companies that will manufacture a real (non-FPGA) chip with the programmed logic.

The figure below shows the described simulation levels.

image

In addition to these levels of simulation, I have also dealt with hybrid simulators. These are, in effect, simulators connected to one another, modeling different parts of the system at different levels. Suppose, for example, you need to analyze the throughput of a new network card working together with a driver being developed for a particular OS. Such a network device, along with a number of related devices, can first be implemented at the microarchitectural level for preliminary analysis, and then in an FPGA, at the level of logic elements, for final checks. Meanwhile, the rest of the system, which participates only partially, is implemented at the instruction level. You cannot do without it, since it is needed, for example, to boot the OS, and there is no point in implementing it at a lower, more complex level.

So what about comparing simulators and reality?

As should now be clear, the goal is not to make a given simulator as similar to reality as possible. There is a task set by the business, and the simulation is done with the degree of “similarity” to reality that is minimally sufficient to solve that task without wasting extra money and time. In one case a simple library implementing the required binary interface (ABI) is enough; in another, nothing short of a detailed microarchitectural simulator will do.

This covers the basics of what simulators are and what kinds exist. In the next article, I will describe the implementation details of full-platform simulators, cycle-accurate models, and working with traces.
