A 3D renderer with a map editor in the console

Map Editor Demo

Hello!

Today I will describe in detail how I wrote a full-fledged renderer with a map editor for the command line, without using any graphics libraries. In this article I will explain the ideas and mathematical models I used when creating these programs, with minimal code snippets.

Basics

Let's start with what the renderer itself is based on. For rendering, I used raycasting, which consists of casting rays from the camera and recording where those rays hit objects (as a result, we get a point in space that needs to be displayed on the screen).
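To make the idea concrete, here is a tiny self-contained sketch (not the engine's actual code): for every console cell we cast a ray and print a symbol if the ray hits the ground plane z = 0 within the render distance. The camera position, field of view and shading characters are arbitrary assumptions of mine.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

int main()
{
    const int W = 80, H = 24;                 // size of the console "screen"
    const Vec3 cam{ 0.0, 0.0, 2.0 };          // camera two units above the ground
    const double fovH = 90.0, fovV = 60.0;    // assumed field of view, in degrees
    const double PI = 3.14159265358979323846;

    for (int py = 0; py < H; ++py)
    {
        for (int px = 0; px < W; ++px)
        {
            // angles of this cell relative to the view direction (looking along +x)
            double yaw   = (px - W / 2) * (fovH / W) * PI / 180.0;
            double pitch = (py - H / 2) * (fovV / H) * PI / 180.0;
            Vec3 dir{ std::cos(pitch) * std::cos(yaw),
                      std::cos(pitch) * std::sin(yaw),
                     -std::sin(pitch) };

            // the ray hits the plane z = 0 at t = cam.z / -dir.z (only if it points down)
            char c = ' ';
            if (dir.z < 0.0)
            {
                double t = cam.z / -dir.z;
                if (t < 20.0) c = (t < 8.0) ? '#' : '.';   // closer hits are drawn denser
            }
            std::putchar(c);
        }
        std::putchar('\n');
    }
    return 0;
}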

To be able to give an RGB color to a symbol in the console, I used the following library.

Position of the point on the screen

In this article, I treat a pixel and a console symbol as equivalent concepts.

Let S be a point in space. The position of point S on the screen is determined depending on the horizontal and vertical angles (1.1*) between the camera direction vector and the vector emitted from the camera to point S. The point here is that we determine how much we need to rotate the camera direction vector to get point S. Accordingly, the horizontal angle will show how much we need to step back from the center of the screen horizontally to get the X coordinate of our point on the screen, and the vertical angle will show how much we need to step back from the center of the screen vertically to get the Y coordinate of our point on the screen.

It is also worth noting that 1 degree is equal to one pixel on the screen. That is, if, for example, the horizontal angle is 5, and the vertical is 10 degrees, then you will need to step back from the center of the screen horizontally by 5 pixels, and vertically by 10 pixels.

After determining the horizontal and vertical angles, we have another problem: which way from the center to retreat. I propose the following solution:

1) Horizontal offset: Let V1 be the camera direction vector, and V2, V3, V4 be the remaining coordinate axes of the camera's horizontal plane. Let the angle between vectors V1 and V2 be 90°, and the angle between vectors V1 and V4 also be 90°. Vector V2 is to the left of vector V4 (relative to vector V1). We project our point onto the camera's horizontal plane and determine in which of the quadrants of this plane's coordinate system it lies (we only need two quadrants, since the player's viewing angle will not exceed 180°). Accordingly, if the point is in the left quadrant (relative to the camera direction vector), then we add the horizontal angle to the X coordinate of the screen center. Otherwise (right quadrant), we subtract the horizontal angle from the X coordinate of the screen center.

2) Vertical offset:

The vertical offset is determined in the same way as the horizontal one, except that the camera's vertical plane is used instead of the horizontal one.

Now we have the position of the point on the screen!

(1.1*) The horizontal angle here is the angle between the camera direction vector and the vector emitted from the camera to the projection of point S onto the camera's horizontal plane (1.2*). The vertical angle is determined in a similar way, only using the projection of point S onto the camera's vertical plane.

(1.2*) To get the camera's horizontal plane, you need to rotate the camera direction vector by an arbitrary number of degrees using the usual formula for rotation around the Z axis, but at the same time divide the z coordinate of the resulting vector by cos(a) (since at a = 0° cos(a) = 1, and at a = 180° cos(a) = −1). Thus, the camera's horizontal plane is spanned by the camera direction vector and the vector obtained after this rotation. The vertical plane is even easier to obtain using the cross product of the horizontal plane's vectors: the camera's vertical plane is spanned by the camera direction vector and the vector obtained from that cross product.
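Here is a sketch of the rule above, with assumed helper names and an assumed orthonormal camera basis forward/right/up (a simplification of the plane construction described in (1.2*)):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b)      { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double len(Vec3 a)              { return std::sqrt(dot(a, a)); }
static double angleDeg(Vec3 a, Vec3 b) { return std::acos(dot(a, b) / (len(a) * len(b))) * 180.0 / 3.14159265358979323846; }

// forward is the camera direction; right and up are the other two (unit) camera axes,
// so {forward, right} span the horizontal plane and {forward, up} the vertical one.
void pointToScreen(Vec3 cam, Vec3 forward, Vec3 right, Vec3 up,
                   Vec3 s, int screenW, int screenH, int& outX, int& outY)
{
    Vec3 toPoint = sub(s, cam);

    // project the point onto the horizontal plane (drop the "up" component)
    double du = dot(toPoint, up);
    Vec3 horiz = sub(toPoint, Vec3{ up.x * du, up.y * du, up.z * du });
    double hAngle = angleDeg(forward, horiz);

    // project the point onto the vertical plane (drop the "right" component)
    double dr = dot(toPoint, right);
    Vec3 vert = sub(toPoint, Vec3{ right.x * dr, right.y * dr, right.z * dr });
    double vAngle = angleDeg(forward, vert);

    // left quadrant adds to the X coordinate, right quadrant subtracts (as described above)
    int xSign = dr > 0.0 ? -1 : +1;
    // points above the view direction move up the screen (console rows grow downward)
    int ySign = du > 0.0 ? -1 : +1;

    outX = screenW / 2 + xSign * static_cast<int>(hAngle);   // 1 degree = 1 pixel
    outY = screenH / 2 + ySign * static_cast<int>(vAngle);
}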

The intersection point of a ray and a parallelepiped

Let sc be the end point of the cast ray (it depends on the rendering distance); cam be the camera coordinates in space; a be the corner of the parallelepiped with the minimum coordinates (x1, y1, z1); and b be the corner of the parallelepiped with the maximum coordinates (x2, y2, z2).

We will determine the intersection point of the ray and the parallelepiped by solving the following system of inequalities:

a\le \frac{\overrightarrow{sc-cam}}{\left| \overrightarrow{sc-cam} \right|}*t+cam\le b

The point of this system is that we take a unit vector N in the same direction as the vector (sc − cam) and look for the values of t for which the point cam + N·t lies inside the desired parallelepiped.

The result of solving this system will be the numbers t = r1 and t = r2, where the minimum of these numbers, when substituted into the system, will be the closest point of intersection of the ray and the parallelepiped, and the maximum will be the farthest (relative to the camera).
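The system above is the classic per-axis "slab" test. Here is a minimal self-contained version of it, my own sketch rather than the engine's code; it returns the parameters r1 ≤ r2 at which the ray enters and leaves the box:

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

bool rayBox(Vec3 cam, Vec3 dir /* unit vector towards sc */, Vec3 a, Vec3 b,
            double& r1, double& r2)
{
    double tMin = -1e30, tMax = 1e30;
    const double o[3]  = { cam.x, cam.y, cam.z };
    const double d[3]  = { dir.x, dir.y, dir.z };
    const double lo[3] = { a.x, a.y, a.z };
    const double hi[3] = { b.x, b.y, b.z };

    for (int i = 0; i < 3; ++i)
    {
        if (std::fabs(d[i]) < 1e-12)
        {
            // ray parallel to this pair of planes: it must already lie inside the slab
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
            continue;
        }
        double t1 = (lo[i] - o[i]) / d[i];   // parameter at the "a" plane
        double t2 = (hi[i] - o[i]) / d[i];   // parameter at the "b" plane
        if (t1 > t2) std::swap(t1, t2);
        tMin = std::max(tMin, t1);
        tMax = std::min(tMax, t2);
        if (tMin > tMax) return false;       // the slabs do not overlap: miss
    }
    r1 = tMin;   // nearest intersection (a negative r1 means the camera is inside the box)
    r2 = tMax;   // farthest intersection
    return tMax >= 0.0;
}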

Rotation of a parallelepiped

We will consider the rotation of the parallelepiped only around the Z axis, since the rotation around other axes will be similar.

At the base of the parallelepiped, we rotate the vertices of the lowest plane (relative to the z coordinate). The idea is to change the range of the parallelepiped when rotating these points, and then move the point obtained when the ray intersects with this parallelepiped back or even delete it, so that the illusion of parallelepiped rotation is created:

First, we determine in which of the four polygons the point is located (S1, S2, S3, S4). If the ray from the camera intersects the “real” (rotated) side of the parallelepiped outside the new range, then this intersection point of the ray and the parallelepiped can be ignored. Otherwise, we move this point in the direction of our ray until it ends up in the parallelepiped.

The intersection point of a ray and a triangle in space

I won't go into detail about this; I'll just say that I found the intersection point of the ray and the triangle in space using the Möller-Trumbore algorithm, since it turned out to be the cheapest.
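For reference, here is the textbook form of the Möller-Trumbore test as a self-contained sketch (not the engine's code):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3   cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, Vec3& hit)
{
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;        // ray is parallel to the triangle

    double inv = 1.0 / det;
    Vec3 t = sub(orig, v0);
    double u = dot(t, p) * inv;
    if (u < 0.0 || u > 1.0) return false;          // outside the first barycentric bound

    Vec3 q = cross(t, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;      // outside the triangle

    double dist = dot(e2, q) * inv;
    if (dist < eps) return false;                  // the triangle is behind the ray

    hit = { orig.x + dir.x * dist, orig.y + dir.y * dist, orig.z + dir.z * dist };
    return true;
}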

Cubic interpolation

Cubic interpolation is a method of obtaining a cubic curve passing through two given points (in our case, through some point on the line x = 0 and a point on the line x = 1).

The point of cubic interpolation is to smoothly transfer a point from state y1 to state y2, with a gradual decrease in the speed of this point starting from the middle.

Cubic interpolation will be used in the future.
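One common cubic that matches this description (my assumption of the intended formula) passes through y1 at t = 0 and y2 at t = 1, and its speed drops to zero at both ends, so the motion eases in and out:

double cubicInterpolate(double y1, double y2, double t)
{
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    double s = t * t * (3.0 - 2.0 * t);   // 3t^2 - 2t^3, the "smoothstep" cubic
    return y1 + (y2 - y1) * s;
}

For example, an object can be moved smoothly from height y1 to height y2 by passing the elapsed fraction of the animation time as t.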

Applying texture to a parallelepiped

First, we need to determine on which side of the parallelepiped our point lies (this can be done through the equation of the plane: simply substituting the coordinates of our point into the equations of the planes of the sides of the parallelepiped, and finding the value closest to zero). Then we need to select a “reference point” from the vertices on this side, and translate our point into the 2D coordinate system of this plane:

sc is a point lying on some side of the parallelepiped in space; center is the reference point of that side; newSC is the point obtained after translating sc into the 2D coordinate system of this plane. In total:

newSC.x = \cos a \cdot \left| \overrightarrow{sc - center} \right|

newSC.y = \sin a \cdot \left| \overrightarrow{sc - center} \right|

where a is the angle between the vector \overrightarrow{sc - center} and the horizontal edge of the side.

Next, we have to multiply the X and Y coordinates of the point newSC by the following ratios: Wk / Ws and Hk / Hs, where Wk, Hk are the width and height of the texture image (in pixels), and Ws, Hs are the width and height of the parallelepiped side.

We multiply newSC by Wk / Ws and Hk / Hs so that the outermost edge of our side maps to the outermost edge of the picture.

Accordingly, by multiplying the coordinates of the points on the side of the parallelepiped by these values, we will obtain a more compressed or more enlarged image.

To increase the number of images on the horizontal side by K times, you need to multiply the final point (on the texture) by the X coordinate by K: newSC.x * (Wk / Ws) * K. And then take the remainder from dividing by Wk: (newSC.x * (Wk / Ws) * K) % Wk. (Vertically, everything happens in a similar way)

To sum up this point, our main task is to transform the side of the parallelepiped (rectangle) into a texture rectangle, and see where after this transformation the points we need will be located, lying on this side of the parallelepiped.
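As a sketch of this mapping (assumed names; Wk, Hk and Ws, Hs are as defined above, and K is the number of horizontal texture repeats):

#include <cmath>

// sideX, sideY: the 2D coordinates of the point on the side (newSC above),
// measured from the side's reference corner.
void sideToTexel(double sideX, double sideY,
                 double Ws, double Hs,      // width/height of the box side
                 int Wk, int Hk,            // width/height of the texture in pixels
                 int K,                     // horizontal repeat count
                 int& texX, int& texY)
{
    double u = sideX * (Wk / Ws) * K;       // scale so the side edge maps to the texture edge, then repeat
    double v = sideY * (Hk / Hs);
    texX = static_cast<int>(std::fmod(u, static_cast<double>(Wk)));   // wrap back inside the image
    texY = static_cast<int>(std::fmod(v, static_cast<double>(Hk)));
}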

Applying texture to a circle

Applying a texture to a circle is similar to applying a texture to a parallelepiped, except that the “reference point” is selected according to the following principle:

Let the vectors UP, RIGHT and N be the axes of the local coordinate system of the circle, where the center of this system is the coordinate of the center of the circle, and N is the normal to the plane of this circle. Let also center be the reference point.

center = \overrightarrow{RIGHT} * rad + \overrightarrow{UP} * rad

Where rad is the radius of the circle.
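In code, the reference point from this formula looks like this (a sketch with an assumed Vec3 type; rad and the RIGHT/UP axes are as described above):

struct Vec3 { double x, y, z; };

Vec3 circleReferencePoint(Vec3 right, Vec3 up, double rad)
{
    // center = RIGHT * rad + UP * rad, in the circle's local coordinate system
    return { right.x * rad + up.x * rad,
             right.y * rad + up.y * rad,
             right.z * rad + up.z * rad };
}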

The remaining points are handled similarly.

Applying texture to a model

To solve the problem of applying a texture to a model, we need to solve the problem of applying a texture to a triangle.

Here it is worth saying right away that our main task is to transform our triangle into a triangle on the texture, and to see where the points of our triangle will be after this transformation.

We have our triangle and a triangle on the texture (most likely they have the same shape). First, as with the parallelepiped, we need to translate our triangle into the texture coordinate system. We translate it in the same way as with the parallelepiped, but we take one of the triangle's vertices as the center of the texture coordinate system (if we import a model from Blender in obj format, we always take the first vertex of the triangle as the center, since in obj, when a texture is applied to a triangle, the vertices of the model triangle and the texture triangle correspond to each other in order). So, we have converted the coordinates of our triangle into texture coordinates (the model triangle is marked in green, and the texture triangle in red):

Next, we move the texture triangle to the origin (so that one of its vertices has coordinates {0;0}). And then we must align our triangle with the texture one:

First, we rotate it so that the second side of our triangle exactly matches the second side of the texture triangle:

(The transformations that we perform with the model triangle will also need to be performed with the points on this triangle).

Rotation algorithm: we transfer the second side of the model triangle to the second side of the texture triangle. Then we rotate this side by the same angle that was between the side of our triangle before the rotation, and scale the resulting line by the length of the third side of the model triangle.

Let S1, S2, S3 be the vertices of the model triangle, and SO1, SO2, SO3 the corresponding vertices of the texture triangle. Next, we need to transform the vertices of our triangle so that max(SO.y) − min(SO.y) = max(S.y) − min(S.y). That is, we need to multiply the sides of our triangle by some factor K:

max(S.y) \cdot K = max(SO.y) \longrightarrow K = \frac{max(SO.y)}{max(S.y)}

After these manipulations, the model triangle occupies the same extent as the texture triangle along the Y axis.

Sometimes there are also cases when our triangle after scaling needs to be moved along the Y axis so that min(Sy) = min(SO.y) and max(Sy) = max(SO.y).

That is, now we know where the points of the model triangle will be located vertically on the texture. Now we need to find out where they will be located horizontally:

Let Q be some point of the model triangle obtained after all the above transformations. Let us draw a line through this point, parallel to the line y = 0 and find the points of intersection of this line with the sides of the triangles. Let P1 be the smallest point of intersection (in X) of our line with the texture triangle, and P2 be the smallest point of intersection (in X) of our line with the model triangle. Then Wk is the length of the line QP1, and Ws is the length of the line QP2.

So, the final point will be written as follows:

Where p is a unit vector emitted from point P1; a is the length of the line QP2.

Next, we simply add the original coordinates of point SO1 (which we initially subtracted to move the texture triangle to the center of the coordinate system) to the resulting point I.

As a result, we stretched all the points of the model triangle vertically by the same factor, and stretched them horizontally individually.

This means that in order to apply a texture to a model, we need to perform all these transformations with all the polygons of our model.
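This is not the step-by-step construction described above, but the same correspondence can also be written compactly: the map from the model triangle to the texture triangle is affine, so a point can be carried over with barycentric coordinates. A self-contained sketch with assumed 2D/3D types:

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// p lies on the model triangle (s1, s2, s3); so1..so3 are the matching texture vertices.
Vec2 modelPointToTexture(Vec3 p, Vec3 s1, Vec3 s2, Vec3 s3,
                         Vec2 so1, Vec2 so2, Vec2 so3)
{
    // barycentric coordinates of p inside (s1, s2, s3)
    Vec3 e1 = sub(s2, s1), e2 = sub(s3, s1), ep = sub(p, s1);
    double d11 = dot(e1, e1), d12 = dot(e1, e2), d22 = dot(e2, e2);
    double dp1 = dot(ep, e1), dp2 = dot(ep, e2);
    double denom = d11 * d22 - d12 * d12;
    double v = (d22 * dp1 - d12 * dp2) / denom;
    double w = (d11 * dp2 - d12 * dp1) / denom;
    double u = 1.0 - v - w;

    // the same weights applied to the texture triangle give the texel position
    return { u * so1.x + v * so2.x + w * so3.x,
             u * so1.y + v * so2.y + w * so3.y };
}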

Demonstration of the resulting texture overlay

Enlarge/reduce model

To enlarge the model 2 times (for example), you need to move all the vertices of its polygons twice as far away from the center of the model.

Firstly, in this case the size of the sides of the triangles will increase by 2 times (you can check it yourself: in this case a pyramid is formed, and this can be easily deduced from the similarity of the triangles). Plus, the distance from the center of the model to the triangle will also increase by 2 times.

Secondly, if you apply this transformation to all of the triangles, the shape of the model itself will not change; only its size will.
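A minimal sketch of this operation (assumed structures, not the engine's):

#include <vector>

struct Vec3 { double x, y, z; };

struct Triangle { Vec3 v[3]; };

void scaleModel(std::vector<Triangle>& polygons, Vec3 centre, double factor)
{
    // every vertex of every polygon is pushed away from the model centre by the same factor,
    // which scales both the triangles themselves and their distance to the centre
    for (Triangle& tri : polygons)
        for (Vec3& v : tri.v)
        {
            v.x = centre.x + (v.x - centre.x) * factor;
            v.y = centre.y + (v.y - centre.y) * factor;
            v.z = centre.z + (v.z - centre.z) * factor;
        }
}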

Demonstration of model enlargement/reduction

Map Editor Items

In this part of the article I would like to consider the principle of operation of the main items of the map editor, which can be directly created on the map.

Area portals

An area portal is essentially a regular parallelepiped, but it performs some manipulations with the rays emitted from the camera that hit it. The essence of area portals is to reduce the length of a ray when it hits them. It is useful to place them inside various walls, since thanks to this the renderer does not waste resources checking whether the ray hits objects behind the wall.
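A sketch of the idea (assumed names; rayBox is the slab test shown earlier): if the ray crosses an area portal, its maximum length is clamped to the portal hit distance, so geometry behind the wall is never even tested.

#include <vector>

struct Vec3 { double x, y, z; };

struct AreaPortal { Vec3 a, b; };   // the portal itself is just an axis-aligned box

// the slab test shown earlier
bool rayBox(Vec3 cam, Vec3 dir, Vec3 a, Vec3 b, double& r1, double& r2);

double clampRayByPortals(Vec3 cam, Vec3 dir, double maxDist,
                         const std::vector<AreaPortal>& portals)
{
    for (const AreaPortal& p : portals)
    {
        double r1, r2;
        if (rayBox(cam, dir, p.a, p.b, r1, r2) && r1 > 0.0 && r1 < maxDist)
            maxDist = r1;           // shorten the ray at the portal
    }
    return maxDist;
}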

Demonstration of the work of area portals

Implementing texture transparency

When I get a point on the screen, I go through all the transparent objects (these objects are ignored during the main ray cast), check them for collision with my ray (it is important that the point obtained from the collision with these objects is not farther than the main (real) point on the screen), and add the RGB color of each transparent object's texture to the main color of the point on the screen.

Some parts of the texture may not be transparent (like the Area Portal inscription on the area portal texture). Then our point on the screen will be colored in the color of this opaque part of the transparent texture.
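A sketch of this pass with assumed types (the engine's real structures differ):

#include <algorithm>
#include <vector>

struct RGB { int r, g, b; };

struct TransparentHit { double dist; RGB colour; bool opaqueTexel; };

// hits are assumed to be sorted from nearest to farthest along the ray
RGB shadeWithTransparency(RGB solidColour, double solidDist,
                          const std::vector<TransparentHit>& hits)
{
    RGB out = solidColour;
    for (const TransparentHit& h : hits)
    {
        if (h.dist >= solidDist) continue;          // behind the real surface: ignore
        if (h.opaqueTexel) return h.colour;         // e.g. the "Area Portal" lettering
        out.r = std::min(255, out.r + h.colour.r);  // add the transparent texel's colour
        out.g = std::min(255, out.g + h.colour.g);
        out.b = std::min(255, out.b + h.colour.b);
    }
    return out;
}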

Demonstration of texture transparency

Delegates

Delegates are essentially an implementation of the Observer pattern. The delegate itself is a store of pointers to objects, their methods and the arguments of those methods, and it can call all of these functions at once when needed.

Delegates are mostly used in conjunction with triggers.

Implementation of the Delegate class (AActor is the engine's base actor class, declared elsewhere):

#include <tuple>
#include <vector>

#define DECLARE_DELEGATE(DelegateName, ...)\
    using DelegateName = Delegate<__VA_ARGS__>

template <typename... Args>
class Delegate
{
private:
    std::vector<AActor*> Observers;
    std::vector<void(AActor::*)(Args...)> ObserversFunctions;
    std::vector<std::tuple<Args...>> ObserversFunctionsArgs;

    // calls one stored member function on its observer with the unpacked arguments
    static void RunObsFuncWithArgs(AActor* Observer, void(AActor::* Func)(Args...), Args... args);

public:
    // calls every stored observer function with its stored arguments
    void Broadcast();

    // registers an observer, its member function and the arguments to call it with
    template <typename Type>
    void AddUObject(AActor* Observer, void(Type::* Func)(Args...), Args... args);
};

template <typename... Args>
void Delegate<Args...>::RunObsFuncWithArgs(AActor* Observer, void(AActor::* Func)(Args...), Args... args)
{
    (Observer->*Func)(args...);
}

template <typename... Args>
void Delegate<Args...>::Broadcast()
{
    for (size_t i = 0; i < Observers.size(); ++i)
    {
        if constexpr (sizeof...(Args) == 0)
            (Observers[i]->*ObserversFunctions[i])();
        else
        {
            std::tuple<AActor*, void(AActor::*)(Args...)> ObsAndObsFuncs(Observers[i], ObserversFunctions[i]);

            // glue (observer, method) together with the stored arguments and invoke
            std::apply(Delegate::RunObsFuncWithArgs, std::tuple_cat(ObsAndObsFuncs, ObserversFunctionsArgs[i]));
        }
    }
}

template <typename... Args>
template <typename Type>
void Delegate<Args...>::AddUObject(AActor* Observer, void(Type::* Func)(Args...), Args... args)
{
    Observers.push_back(Observer);
    ObserversFunctions.push_back(reinterpret_cast<void(AActor::*)(Args...)>(Func));

    if constexpr (sizeof...(Args) != 0)
        ObserversFunctionsArgs.push_back(std::tuple<Args...>(args...));
}
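A small usage sketch, assuming the Delegate code above and the engine's AActor class (ADoor, its Open method and the delegate name are hypothetical):

DECLARE_DELEGATE(FOnTriggerEntered, int);

class ADoor : public AActor
{
public:
    void Open(int speed) { (void)speed; /* start the door animation at the given speed */ }
};

void Example()
{
    FOnTriggerEntered OnTriggerEntered;
    ADoor Door;

    // the delegate remembers which method to call and with what argument...
    OnTriggerEntered.AddUObject(&Door, &ADoor::Open, 2);

    // ...and fires every stored call at once, e.g. when the player enters a trigger
    OnTriggerEntered.Broadcast();
}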

Implementation of triggers

When creating a trigger in the editor, I need to select a connection type (what will be done) and a connection object (which object will do it). Each object that supports binding to a trigger has an array of trigger and connection-type pairs. After that, we first write all the triggers into the map file, and then the objects themselves (together with information about the triggers they are bound to).

In the final game, we create an object for each trigger, in which a delegate is created for each type of connection. When loading the map, while reading the objects associated with triggers, information about their functions is written into these delegates (depending on the object currently being read). The trigger object itself then checks whether the player (or some other object, depending on who the trigger is watching) has entered it, and when that happens, it calls the Broadcast methods of all its delegates.
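A sketch of the trigger's per-frame check (assumed names; the point is simply "if the watched object is inside the trigger volume, fire the delegates"):

struct Vec3 { double x, y, z; };

struct Trigger
{
    Vec3 boxMin, boxMax;                 // the trigger volume on the map
    bool alreadyFired = false;

    bool contains(const Vec3& p) const
    {
        return p.x >= boxMin.x && p.x <= boxMax.x &&
               p.y >= boxMin.y && p.y <= boxMax.y &&
               p.z >= boxMin.z && p.z <= boxMax.z;
    }

    template <typename DelegateType>
    void update(const Vec3& playerPos, DelegateType& onEnter)
    {
        if (!alreadyFired && contains(playerPos))
        {
            onEnter.Broadcast();         // run every bound connection at once
            alreadyFired = true;
        }
    }
};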

Demonstration of triggers in action

Implementation of env_shake (earthquake)

The env_shake object only makes sense to use in conjunction with triggers.

Implementation: Draw a circle in a 2D plane of arbitrary radius (the magnitude of this radius determines the strength of camera shaking) with the center at the point { 0; 0 }. Then select a random point on this circle. Having received the coordinates of this point in the 2D system, we must translate these coordinates into the 3D system.

The final point in the 3D system is calculated as follows:

Let S be a random 2D point on the circle; UP be the camera direction vector N rotated upward by 90°; and LEFT be the vector obtained from the cross product of UP and N. Then,

I = \overrightarrow{LEFT} * Sx + \overrightarrow{UP} * Sy + cam

Where cam is the camera coordinates; I is the final point.
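As code, one shake step might look like this (assumed helpers; LEFT, UP and cam are as defined above):

#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

Vec3 shakeOffset(Vec3 cam, Vec3 up, Vec3 left, double radius)
{
    // random point on a circle of the given radius in the 2D plane
    double angle = (std::rand() / (double)RAND_MAX) * 2.0 * 3.14159265358979323846;
    double sx = std::cos(angle) * radius;
    double sy = std::sin(angle) * radius;

    // I = LEFT * Sx + UP * Sy + cam
    return { left.x * sx + up.x * sy + cam.x,
             left.y * sx + up.y * sy + cam.y,
             left.z * sx + up.z * sy + cam.z };
}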

Two different points (cameras) are used for collision and rendering.

Implementation of 3D skybox

3D skybox is a small section of the map that has its own camera (sky camera). Thanks to this section, the illusion of a large map is created.

Implementation: we cast a ray from our camera, and if this ray hits the skybox texture, we cast the same ray from the other camera (the sky camera), so that the pixel whose ray hit the skybox texture is painted in the corresponding color seen by the sky camera.

Because the sky camera is close to a certain plane, the effect of a large map is created.
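A sketch of this redirect (assumed names and types, not the engine's API):

#include <optional>

struct Vec3 { double x, y, z; };
struct RGB  { int r, g, b; };

struct Hit { RGB colour; bool isSkyboxTexture; };

// the primary ray cast against the map (assumed signature)
std::optional<Hit> castRay(Vec3 origin, Vec3 dir, double maxDist);

RGB shadePixel(Vec3 camera, Vec3 skyCamera, Vec3 dir, double maxDist)
{
    auto hit = castRay(camera, dir, maxDist);
    if (!hit) return RGB{ 0, 0, 0 };                     // nothing hit: background

    if (hit->isSkyboxTexture)
    {
        // the same ray, but fired from the small 3D-skybox section of the map
        if (auto sky = castRay(skyCamera, dir, maxDist))
            return sky->colour;
    }
    return hit->colour;
}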

The distance from a point to the camera does not depend on its position on the screen (if we consider points lying on the same ray). This happens because when a point of a ray emitted from the camera is projected onto the horizontal and vertical planes of the camera, the ray itself is projected onto these planes, and with it the entire set of points of this ray.

Implementation of player collision

Note: in my engine, only two objects support collision: the parallelepiped and the pyramid.

First, we check whether the camera is inside an object that supports collision (for example, a parallelepiped). Then we determine which side of the parallelepiped the camera collided with: we extend the camera's direction vector, go through all sides of the parallelepiped and split each of them into two triangles. If the extended direction vector intersects one of the triangles of a certain side, I add the intersection point of this vector with that triangle to an array, and then the point closest to the camera is selected from these points. In this way we have determined the side of the parallelepiped that the camera collides with.

Then we have to push our camera back outside this parallelepiped. Let S be the coordinates of the camera at the current moment (inside the parallelepiped), and K be the projection of point S onto the side of the parallelepiped that the camera collided with. Then the new coordinates of the camera will be:

I = S + \frac{\overrightarrow{KS}}{\left| \overrightarrow{KS} \right|} * (\left| \overrightarrow{KS} \right| + 0.01)

Collision for falling is defined similarly, except that the camera's direction vector is always { 0; 0; -0.1 }.

Conclusion

Thank you for reading this far, because I put my whole soul into this project. I don't plan to work on this project in the near future, but I may do small bug fixes.
