3D Rendering with Map Editor in Console (Part 2)

Greetings!

Today I'm going to continue the story about my 3D render in the Windows command line and discuss the topics that I didn't touch on in Part 1.

This time the article will contain more code and less math (and lots of screenshots).

Writing and reading a map from a file

Writing:

The map is written to a file when it is saved in the map editor. When saving a map, we go through the array of all game objects on the map and write them to a file. Each object on the map belongs to one of the game types(1*), and for each such type, writing to the file is done differently.

For example, for an object of type ENV_FIRE (an animated fire sprite), writing to a file occurs as follows:

void WriteFire(size_t index, std::ofstream& out)
{
	out << typesOfActors[index - 1] << ":";
	
	const COORDS& meshCentre = actors[index]->GetStaticMesh()->GetCentreCoord();
	out << "{" << meshCentre.x << ";" << meshCentre.y << ";" << meshCentre.z << "};";

	COORDS cubemapCentreCoords = (actors[index]->isActorHasCubemap() ? actors[index]->GetCubemap()->GetCentreCoord() : COORDS{ 0,0,0 });
	out << '{' << cubemapCentreCoords.x << ';' << cubemapCentreCoords.y << ';' << cubemapCentreCoords.z << "}" << '\n';
}

First, the object's type is written to the file, then its coordinates, and after that the data on this object's cubemap (we will see why cubemaps are needed later). Altogether, we have written an ENV_FIRE object to the file:

Result of writing ENV_FIRE to the file
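Judging by WriteFire above, each record has the layout "type:{x;y;z};{cx;cy;cz}". A minimal stand-alone sketch of the same layout (SerializeFire and its types are my illustration, not the project's code):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical stand-in for WriteFire: serialize a type id and two
// coordinate triples in the "type:{x;y;z};{cx;cy;cz}" layout shown above.
struct Coords { float x, y, z; };

std::string SerializeFire(int typeId, Coords mesh, Coords cubemap)
{
    std::ostringstream out;
    out << typeId << ":";
    out << "{" << mesh.x << ";" << mesh.y << ";" << mesh.z << "};";
    out << "{" << cubemap.x << ";" << cubemap.y << ";" << cubemap.z << "}";
    return out.str();
}
```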

(1*) All game types are described via enum objectType:

enum class objectType : char
{
    PARALLELEPIPED, PYRAMID, LIGHT, PLAYER, TRIANGLE, MODEL, SKYBOX, ENV_FIRE,
    CIRCLE, ENV_PARTICLES, MOVEMENT_PART, ENV_CUBEMAP, CLIP_WALL, TRIGGER,
    AREA_PORTAL, ENV_SHAKE, SKY_CAMERA, VOLUME_SKYBOX, ENV_FADE
};

Reading:

Loading a map in the map editor

Like writing, reading from the file differs for each object. File reading is handled by the Stack class(2*), whose Step method(3*) recursively walks the file line by line and reads the information about the objects, so that the same objects can later be recreated from this data. Let's look at how an ENV_FIRE object is read from the file:

DEFINE_FUNCTION(ExFire)
{
    stack.codePtr += 2;
    COORDS centreCoords;

    centreCoords.x = atof(stack.codePtr); while (*stack.codePtr++ != ';') {}
    centreCoords.y = atof(stack.codePtr); while (*stack.codePtr++ != ';') {}
    centreCoords.z = atof(stack.codePtr); while (*stack.codePtr++ != '}') {}

    stack.codePtr += 2;
    COORDS cubemapCentreCoords;
    cubemapCentreCoords.x = atof(stack.codePtr); while (*stack.codePtr++ != ';') {}
    cubemapCentreCoords.y = atof(stack.codePtr); while (*stack.codePtr++ != ';') {}
    cubemapCentreCoords.z = atof(stack.codePtr); while (*stack.codePtr++ != '}') {}

    Circle* newObj = new Circle(centreCoords, { 1,0,0 }, "Textures/env_fire.bmp", 3, 5);

    AddActorToStorage<ABaseActor>(actors, newObj);
    typesOfActors.push_back(static_cast<int>(objectType::ENV_FIRE));

    if (!(cubemapCentreCoords.x == 0 && cubemapCentreCoords.y == 0 && cubemapCentreCoords.z == 0))
    {
        AddActorToStorage<ACubemapActor>(actors, new Cubemap(cubemapCentreCoords, { 1,0,0 }, "Textures/env_cubemap.bmp", 1, 5), actors.back());
        typesOfActors.push_back(static_cast<int>(objectType::ENV_CUBEMAP));
    }

    stack.Step();
}

As you can see, while reading we advance the codePtr pointer and parse the data with the atoi/atof functions. After reading, the object itself is created from the parsed data (AddActorToStorage<ABaseActor>(actors, newObj)). At the end of reading ENV_FIRE, the Step method of the Stack class is called to move on to the next object.
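The atof-plus-skip pattern that ExFire repeats inline can be isolated into a small helper (an illustrative sketch, not the project's code):

```cpp
#include <cassert>
#include <cstdlib>

// Read a float at the cursor, then advance the cursor just past the given
// delimiter -- the same pattern ExFire uses for each coordinate.
float ReadFloatUntil(const char*& p, char delim)
{
    float value = static_cast<float>(std::atof(p));
    while (*p++ != delim) {}
    return value;
}
```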

An example of a map written to a file:

(2*) The full code of the Stack class:

class Stack
{
    static inline bool isFuncTableEnable = false;

private:
    std::string code;

public:
    char* codePtr;

public:
    Stack(const std::string& mapName) : codePtr(nullptr)
    {
        if (!isFuncTableEnable)
        {
            isFuncTableEnable = true;
            INCLUDE_FUNCTION(ExPar);
            INCLUDE_FUNCTION(ExPyramid);
            INCLUDE_FUNCTION(ExLight);
            INCLUDE_FUNCTION(ExPlayer);
            INCLUDE_FUNCTION(ExTriangle);
            INCLUDE_FUNCTION(ExModel);
            INCLUDE_FUNCTION(ExSkybox);
            INCLUDE_FUNCTION(ExFire);
            INCLUDE_FUNCTION(ExCircle);
            INCLUDE_FUNCTION(ExSmoke);
            INCLUDE_FUNCTION(ExMovementPart);
            INCLUDE_FUNCTION(ExCubemap);
            INCLUDE_FUNCTION(ExClipWall);
            INCLUDE_FUNCTION(ExTrigger);
            INCLUDE_FUNCTION(ExAreaPortal);
            INCLUDE_FUNCTION(ExEnvShake);
            INCLUDE_FUNCTION(ExSkyCamera);
            INCLUDE_FUNCTION(ExVolumeSkybox);
            INCLUDE_FUNCTION(ExEnvFade);
        }

        std::string line;
        std::ifstream in(mapName);
        if (in.is_open())
        {
            while (std::getline(in, line))
            {
                code.append(line);
            }

            codePtr = code.data(); // non-const std::string::data() (C++17) avoids the const_cast
        }
    }

    std::string GetCode()
    {
        return code;
    }

    char* GetCodePtr()
    {
        return codePtr;
    }

    void Step()
    {
        if (codePtr == nullptr || *codePtr == '|')
        {
            return;
        }
        
        int index = atoi(codePtr); while (*codePtr++ != ':') {}
        codePtr--;
        funcTable[index](*this);
    }
};

(3*) As you can see, the Step method uses a variable called funcTable. funcTable is a regular vector that stores pointers to functions taking a Stack&:

std::vector<void (*)(class Stack&)> funcTable;

The functions stored in funcTable tell the Stack class how to correctly read a particular game object from the file. That is, for each game object from enum objectType there must be a corresponding function whose pointer is stored in funcTable.
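The DEFINE_FUNCTION and INCLUDE_FUNCTION macros themselves are not shown in the article; a plausible sketch of how they could be defined (an assumption, not the project's actual code) is:

```cpp
#include <cassert>
#include <vector>

class Stack;  // forward declaration, as in the article

std::vector<void (*)(class Stack&)> funcTable;

// Assumed definitions: DEFINE_FUNCTION expands to a function header, and
// INCLUDE_FUNCTION appends a pointer to that function to funcTable.
#define DEFINE_FUNCTION(name) void name(Stack& stack)
#define INCLUDE_FUNCTION(name) funcTable.push_back(&name)

DEFINE_FUNCTION(ExDemo) { (void)stack; /* read one object kind here */ }
```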

Implementation of point light sources

After receiving the intersection points of rays from the camera with game objects (i.e. after receiving all the points that will be visible to the player on the screen), the renderer goes through all the point light sources on the map and sends rays from them to these points. If this ray collides with any other object on the map (see how the collision of rays and a parallelepiped was determined in Part 1), then the light from this source does not reach this point. Otherwise, the brightness for this point is calculated using the following formula:

I / R²

Where I is the power of the point light source under consideration, and R is the distance from this light source to the point under consideration.

If none of the rays of all the light sources on the map reach the point in question, then the light will not act on this point (a shadow will form). To create lighting, I used the fact that fewer pixels are used to draw some console symbols than to draw others. Thus, one symbol will be brighter than another or vice versa. That is, for example, the symbol '.' has the lowest brightness, and the symbol '@' has the highest brightness.
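Both ideas together, the I / R² falloff and the glyph ramp, can be sketched as a small helper (my own illustration; the actual ramp characters in the project may differ):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Compute the I / R^2 brightness for a point light and map it onto a ramp
// of console glyphs, where '.' lights the fewest pixels and '@' the most.
char BrightnessToGlyph(float intensity, float distance)
{
    static const std::string ramp = ".:-=+*#%@";       // dark -> bright
    float brightness = intensity / (distance * distance);
    if (brightness > 1.0f) brightness = 1.0f;          // clamp into the ramp
    return ramp[static_cast<std::size_t>(brightness * (ramp.size() - 1))];
}
```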

env_cubemap:

While creating maps, I ran into a pressing need for an object that determines which objects are affected by lighting and which are not (for example, when creating a lantern, I needed the lantern model itself not to be affected by lighting). As a result, I created an object of type ENV_CUBEMAP that determines which objects the lighting will affect.

That is, when creating an object with the ENV_CUBEMAP type, the created cubemap searches for the closest object to itself and attaches to it (if it can), and information is written to the object class that some cubemap is attached to it. If an object with the ENV_CUBEMAP type is attached to some object, then this object will be affected by lighting. Otherwise, it will not.
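The attachment step ("searches for the closest object to itself") boils down to a nearest-centre search, which can be sketched like this (types and names here are my assumptions, not the project's classes):

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// The cubemap picks the object whose centre is nearest to its own centre.
std::size_t FindClosestObject(const Vec3& cubemapCentre, const std::vector<Vec3>& centres)
{
    std::size_t best = 0;
    float bestDistSq = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < centres.size(); ++i)
    {
        const float dx = centres[i].x - cubemapCentre.x;
        const float dy = centres[i].y - cubemapCentre.y;
        const float dz = centres[i].z - cubemapCentre.z;
        const float distSq = dx * dx + dy * dy + dz * dz;  // no sqrt needed for argmin
        if (distSq < bestDistSq) { bestDistSq = distSq; best = i; }
    }
    return best;
}
```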

Demonstration of the use of env_cubemap

Implementation of env_fade (Dim)

The idea is to bring the pixel's RGB to {0;0;0} in the same number of steps for every channel. First we split the pixel's R channel into a number of steps (xCount), then derive the per-step values for the G and B channels from that count:

float xCount = imageColors[i][j].x / startRgbVecLen;
float yLen = imageColors[i][j].y / xCount;
float zLen = imageColors[i][j].z / xCount;

We perform these calculations on every tick of the game (since the player can rotate the camera and the color of the pixel in question can change), and for each pixel separately.

Then I subtract the resulting steps from the pixel's RGB and get a new RGB for the pixel:

imageColors[i][j] = { imageColors[i][j].x * rgbVecLenRatio[i][j].x - startRgbVecLen, imageColors[i][j].y * rgbVecLenRatio[i][j].y - yLen
    , imageColors[i][j].z * rgbVecLenRatio[i][j].z - zLen };

The rgbVecLenRatio array remembers what fraction of the full length of the R, G and B channels still remains on the way to {0;0;0}, so that when the pixel's color changes (when the player rotates the camera), the new color keeps the same degree of darkening as the previous color for which the calculations were made. It is calculated as:

rgbVecLenRatio[i][j] = imageColors[i][j] / oldImageColor;

Where oldImageColor is the pixel's RGB without any darkening at all.
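A single fade step for one pixel can be sketched like this (a minimal illustration with assumed scalar types; it omits the rgbVecLenRatio correction and the project's per-pixel 2D arrays):

```cpp
#include <cassert>

struct Rgb { float r, g, b; };

// Each channel loses 1/xCount of its value per step, so all three channels
// reach {0;0;0} after the same number of steps.
Rgb FadeStep(Rgb c, float startRgbVecLen)
{
    float xCount = c.r / startRgbVecLen;   // number of steps for the R channel
    float yLen = c.g / xCount;             // per-step decrement for G
    float zLen = c.b / xCount;             // per-step decrement for B
    return { c.r - startRgbVecLen, c.g - yLen, c.b - zLen };
}
```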

env_fade demo

Implementation of env_particles (Smoke)

Spherical coordinate system:

A spherical coordinate system is a coordinate system with coordinates ρ, Φ, Θ, where ρ is the distance from the origin (the radius of the sphere), Φ is the azimuthal angle, and Θ is the elevation angle.

The transition from the Cartesian coordinate system to the spherical one is given by:

x = ρ·cosΦ·cosΘ
y = ρ·sinΦ·cosΘ
z = ρ·sinΘ

The reverse transition is given by:

ρ = √(x² + y² + z²)
Φ = arcsin(y / √(x² + y²))
Θ = arcsin(z / √(x² + y² + z²))

That is, by converting a point from the Cartesian coordinate system to the Spherical one, we can find out on what sphere radius the point lies, and where exactly on this sphere it lies (it is worth considering that the center of the sphere is at the point with coordinates {0;0;0}). Consequently, by changing the coordinates Φ and Θ of some point, and then converting it back to the Cartesian coordinate system, we will get the displacement of this point along a sphere of radius ρ with the center at the point with coordinates {0;0;0}.
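The two conversions, written out as code that follows the formulas above (a sketch; COORDS is assumed to be a plain xyz struct, with x, y, z carrying ρ, Φ, Θ in the spherical variant):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 ToSphereFromCartesian(Vec3 c)
{
    float rho = std::sqrt(c.x * c.x + c.y * c.y + c.z * c.z);
    float phi = std::asin(c.y / std::sqrt(c.x * c.x + c.y * c.y));
    float theta = std::asin(c.z / rho);
    return { rho, phi, theta };
}

Vec3 ToCartesianFromSphere(Vec3 s)   // s = { rho, phi, theta }
{
    return { s.x * std::cos(s.y) * std::cos(s.z),
             s.x * std::sin(s.y) * std::cos(s.z),
             s.x * std::sin(s.z) };
}
```

Note that the arcsin-based Φ only distinguishes points with x > 0; for a full sphere one would use atan2(y, x) instead.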

Implementation of env_particles in the map editor:

When creating an object with the ENV_PARTICLES type, two objects are created that are linked together, namely: the starting point (where the particles will appear) and the ending plane (where the particles will go) (in this case, the ending plane is a circle whose normal has coordinates {0;0;1}).

Creating env_particles in the map editor

Implementation of env_particles in the main game: First, when loading a map that contains an object of the ENV_PARTICLES type, PARTICLES_COUNT particles (PARTICLES_COUNT = 100) are created for this object(4*), and all of them are added to the particles vector. Then, for each created particle, we pick a random point on the final plane (the circle), i.e. we pick the end point of the particle's movement. This completes the creation of particles. The full code:

for (size_t i = 0; i < PARTICLES_COUNT; ++i)
{
    AddActorToStorage<ASmokeActor::ASmokeParticleActor>(actors, new Circle(particlesSpawnDot, { 1,0,0 }, "Textures/SmokeStackFallback" + std::format("{}", currentColorIndex) + "/SmokeStackFallback" + std::format("{}", currentColorIndex), 0.5f, 5, false, 0.1f));
    particles.push_back(actors.back());
    particlesStartDelayTime.push_back(static_cast<float>(rand()) / (static_cast<float>(RAND_MAX / 5)));
    particleStartDelayTimeCounters.push_back(0.0f);
    motionCubicRates.push_back({0,0,0});
    
    COORDS endDot;
    endDot.x = -rad + static_cast <float> (rand()) / (static_cast <float> (RAND_MAX / (rad + rad)));
    endDot.y = -sqrt(pow(rad, 2) - pow(endDot.x, 2)) + static_cast <float> (rand()) / (static_cast <float> (RAND_MAX / (2 * sqrt(pow(rad, 2) - pow(endDot.x, 2)))));
    endDot.z = 0;
    particlesEndDot.push_back(endDot + endCircleLocalCentreCoord + particlesSpawnDot);
}


Then, during the game, we move each particle from its start point to its end point as follows: we convert the coordinates of the particle's centre and of its end point into the spherical coordinate system, and change the particle centre's ρ, Φ and Θ via cubic interpolation (cubic interpolation was explained in Part 1). After that we convert the result back into Cartesian coordinates and save the change. Thanks to these transformations the particles move not in a straight line but along a spherical curve. The full code for particle motion:

COORDS startSphereCoord = ToSphereFromCartesianCoords(particles[i]->GetStaticMesh()->GetCentreCoord());
COORDS endSphereCoord = ToSphereFromCartesianCoords(particlesEndDot[i]);

MotionCubic(endSphereCoord.x, tick / (float)2, &startSphereCoord.x, &motionCubicRates[i].x);
MotionCubic(endSphereCoord.y, tick / (float)2, &startSphereCoord.y, &motionCubicRates[i].y);
MotionCubic(endSphereCoord.z, tick / (float)2, &startSphereCoord.z, &motionCubicRates[i].z);

particles[i]->GetStaticMesh()->SetCentreCoord() = ToCartesianFromSphereCoords(startSphereCoord);


If the particle in question has approached the end point, then we transfer the coordinates of the center of this particle to the starting point, and begin the process of movement again.

Env_particles Demo

(4*) The particle is a regular sprite (an object of type ENV_SPRITE).

Conclusion

That's it. I've covered those important topics that I wanted to discuss in Part 1, but for some reason couldn't. I hope no one has any questions left, and if they do, feel free to ask them in the comments!

Criticism and corrections are always welcome.

github.com
