3D heart on Three.js

Today we will tell you how to define, modify and display a 3D model in the browser. Let’s dive into the technical details and see how to render a scene, build and render a custom model, and control the camera to admire the animated model in all its glory.

Why Three.js

three.js is a JavaScript 3D library built on top of WebGL, the browser API for rendering 2D and 3D graphics. Because three.js hands rendering off to the GPU, it lets you manage 3D models efficiently on a regular HTML canvas.

It’s entirely possible to render the heart in WebGL alone, but the rich three.js API makes it a lot easier.

three.js provides Canvas 2D, SVG and CSS3D renderers, but we’ll stick with WebGL.

You can install three.js in several ways; we will load it from a CDN. Let’s write a classic index.html and load the library in a script tag, then do all our work in the file heart.js. The canvas element will be added to the page automatically.

<html>
  <head>
  <meta charset="utf-8">
  <title>in my heart</title>
  </head>
  <body>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r124/three.min.js"></script>
    <script src="./heart.js"></script>
  </body>
</html>

Scene

Putting your artwork in a suitable setting is the least you can do, and luckily three.js gives you the tools for it.

Let’s start with the function:

function createScene () {
    const scene = new THREE.Scene()
    const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 1, 100)
    camera.position.z = 30

    const renderer = new THREE.WebGLRenderer({ antialias: true })
    renderer.setSize(window.innerWidth, window.innerHeight)
    document.body.appendChild(renderer.domElement)

    const color = 0xFFFFFF
    const intensity = 0.75
    const light = new THREE.PointLight(color, intensity)
    light.position.set(-15, -10, 30)

    scene.add(light)

    return {
        scene,
        camera,
        renderer
    }
}

First we define a scene object, which will hold almost everything that gets rendered. A lot of responsibility for one line of code. Then we create a camera, the point from which the scene is viewed. three.js provides several camera types, and the PerspectiveCamera suits our situation well:

“This projection mode is designed to mimic how the human eye sees. This is the most common projection mode for rendering a 3D scene.”

You can tune the camera configuration following the documentation’s recommendations. Note that the second parameter is the camera’s aspect ratio, so here it is calculated from the window size.

camera.position.z = 30

The camera is moved back along the z axis so the whole figure fits in view.
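Since the aspect ratio is computed from the window size at creation time, the camera can look stretched after a resize. A minimal sketch of a resize helper (the helper name is mine, not from the article; the new width and height are passed explicitly so the logic stays testable outside the browser):

```javascript
// Hypothetical helper (not from the article): keep the camera and renderer
// in sync with a new viewport size.
function resizeToWindow (camera, renderer, width, height) {
  camera.aspect = width / height   // same ratio the constructor received
  camera.updateProjectionMatrix()  // rebuild the projection with the new aspect
  renderer.setSize(width, height)
}
```

In init you could wire it up with `window.addEventListener('resize', () => resizeToWindow(camera, renderer, window.innerWidth, window.innerHeight))`.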

const renderer = new THREE.WebGLRenderer({ antialias: true })

Now let’s instantiate the WebGLRenderer. The antialias option controls whether the model’s edges are smoothed; set it as you prefer.

renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

We set the renderer’s size to match the window and attach its canvas to the body.

const color = 0xFFFFFF
const intensity = 0.75
const light = new THREE.PointLight(color, intensity)
light.position.set(-15, -10, 30)
scene.add(light)

Let’s add a light source to the scene. I chose PointLight because it’s easy to reason about: it behaves like a bare light bulb, emitting light in every direction from a single point, without direction or anything else.

Inside the entry point of our program, the init function, let’s call createScene:

function init () {
  const { scene, camera, renderer } = createScene()
}
init()

Heart

Now you can start working on the star… the heart of the show! We are working with a WebGL renderer, so we can easily draw triangles from point (or vertex) coordinates. Let’s define these coordinates, and then see how to draw triangles between them.

Coordinates

We need to define the set of the model’s vertices, the points where the figure’s edges meet. In three.js each one is represented by a Vector3 object. The positions can be completely arbitrary. I worked mine out on paper, but here I’ll cheat and show the chosen coordinates on the finished model:


Coordinates of points on the anterior side of the heart

Each point has three coordinates (x, y, z), and all the points along the figure’s outline lie in the z = 0 plane.
Let me show you how it looks from a different angle:


Side view of the heart model

For the other side, we need another set of points, but they are symmetrical to the first side, so I’ll spare you a screenshot.

Now we render the surface between these points. You may have already noticed that each flat surface of the final model is a triangle. This shape is the simplest thing you can draw if you have three points in space, and it is widely used in 3D rendering as the main part of any model.

We’re aiming for a simple rendering of the heart, so we’re only using a few triangles, but if you want smoother outlines, draw more triangles.

Triangles

Okay, we have the coordinates, but we still need to tell three.js where the triangles we want to display are. We will store indices into these coordinates so that each group of three indices represents one triangle.

Now it’s time to take a look at the second function:

function useCoordinates () {
  const vertices = [
    new THREE.Vector3(0, 0, 0), // point C
    new THREE.Vector3(0, 5, -1.5),
    new THREE.Vector3(5, 5, 0), // point A
    new THREE.Vector3(9, 9, 0),
    new THREE.Vector3(5, 9, 2),
    new THREE.Vector3(7, 13, 0),
    new THREE.Vector3(3, 13, 0),
    new THREE.Vector3(0, 11, 0),
    new THREE.Vector3(5, 9, -2),
    new THREE.Vector3(0, 8, -3),
    new THREE.Vector3(0, 8, 3),
    new THREE.Vector3(0, 5, 1.5), // point B
    new THREE.Vector3(-9, 9, 0),
    new THREE.Vector3(-5, 5, 0),
    new THREE.Vector3(-5, 9, -2),
    new THREE.Vector3(-5, 9, 2),
    new THREE.Vector3(-7, 13, 0),
    new THREE.Vector3(-3, 13, 0),
  ];
  const trianglesIndexes = [
  // face 1
    2,11,0, // This represents the 3 points A,B,C which compose the first triangle
    2,3,4,
    5,4,3,
    4,5,6,
    4,6,7,
    4,7,10,
    4,10,11,
    4,11,2,
    0,11,13,
    12,13,15,
    12,15,16,
    16,15,17,
    17,15,7,
    7,15,10,
    11,10,15,
    13,11,15,
  // face 2
    0,1,2,
    1,9,2,
    9,8,2,
    5,3,8,
    8,3,2,
    6,5,8,
    7,6,8,
    9,7,8,
    14,17,7,
    14,7,9,
    14,9,1,
    9,1,13,
    1,0,13,
    14,1,13,
    16,14,12,
    16,17,14,
    12,14,13
  ]
  return {
    vertices,
    trianglesIndexes
  }
}

The first array (vertices) holds all the points that make up the model, indexed 0 to 17. The second (trianglesIndexes) describes all the triangles to draw over these points; it’s just an array of integers, the indices of vertices in the first array.

The point is that every group of three indices forms a triangle from the three corresponding points of the first array. The first triple in the listing corresponds to points A, B and C mentioned above.
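To make the mapping concrete, here is a tiny helper (mine, not from the article) that pulls the n-th triangle out of the two arrays:

```javascript
// Hypothetical helper: each group of three entries in trianglesIndexes
// indexes into vertices; return the three vertices of the n-th triangle.
function triangleAt (vertices, trianglesIndexes, n) {
  const i = n * 3
  return [
    vertices[trianglesIndexes[i]],
    vertices[trianglesIndexes[i + 1]],
    vertices[trianglesIndexes[i + 2]]
  ]
}
```

Called as `triangleAt(vertices, trianglesIndexes, 0)`, it returns the vertices at indices 2, 11 and 0, which are points A, B and C.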

Mesh

We could ask three.js to draw the triangles with no relation to each other, but it is much better to have a single object that represents the model. In three.js this object is called a mesh. To create a mesh you need two objects: a geometry and a material:

function createHeartMesh (coordinatesList, trianglesIndexes) {
    // THREE.Geometry was removed from the core in r125; this relies on the r124 build loaded earlier
    const geo = new THREE.Geometry()
    for (let i = 0; i < trianglesIndexes.length; i++) {
        if ((i + 1) % 3 === 0) {
            geo.vertices.push(
                coordinatesList[trianglesIndexes[i - 2]],
                coordinatesList[trianglesIndexes[i - 1]],
                coordinatesList[trianglesIndexes[i]]
            )
            geo.faces.push(new THREE.Face3(i - 2, i - 1, i))
        }
    }
    geo.computeVertexNormals()
    const material = new THREE.MeshPhongMaterial({ color: 0xad0c00 })
    const heartMesh = new THREE.Mesh(geo, material)
    return {
        geo,
        material,
        heartMesh
    }
}

Think of the geometry as the standard description of the shape, and the material as the texture or fabric of the figure.

We need one more function:

function addWireFrameToMesh (mesh, geometry) {
    const wireframe = new THREE.WireframeGeometry( geometry )
    // note: linewidth values above 1 are ignored by most WebGL implementations
    const lineMat = new THREE.LineBasicMaterial( { color: 0x000000, linewidth: 2 } )
    const line = new THREE.LineSegments( wireframe, lineMat )
    mesh.add(line)
}

First we create a default Geometry. It needs no further configuration, since we are going to describe all of its faces ourselves.

for (let i = 0; i < trianglesIndexes.length; i++) {
 if ((i + 1) % 3 === 0) {
  geo.vertices.push(coordinatesList[trianglesIndexes[i - 2]], coordinatesList[trianglesIndexes[i - 1]], coordinatesList[trianglesIndexes[i]])
  geo.faces.push(new THREE.Face3(i - 2, i - 1, i))
 }
}

This loop walks over trianglesIndexes: every time a group of three indices is complete, we push the vertex for the current index along with the vertices for the two preceding indices.

Now that the geometry has these three vertices, we can add a face. A Face3 object in three.js is defined by three vertex indices, the same indices we just used when pushing vertices into the geometry, so points and triangles (that is, faces) stay matched up.

By the way, by default three.js renders only one side of each face. Which side is determined by the winding order of the indices, so if a triangle doesn’t show up, you may need to reverse that order.

Alternatively, you can configure the material to render both sides of each face.
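A minimal sketch of such a material, assuming the global THREE loaded from the CDN as in index.html: the side option controls which face sides get drawn, and THREE.DoubleSide renders both.

```javascript
// Sketch: a variant of the heart material that draws front and back faces,
// so winding order can no longer hide a triangle.
// Assumes THREE is available globally, as set up in index.html.
function makeDoubleSidedMaterial () {
  return new THREE.MeshPhongMaterial({
    color: 0xad0c00,        // same dark red as the article's material
    side: THREE.DoubleSide  // render both sides of every face
  })
}
```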

Now, with the geometry settled, we can create the material:

geo.computeVertexNormals()
const material = new THREE.MeshPhongMaterial( { color: 0xad0c00 } )

We use a Phong material, which reflects light; for it to be lit correctly, computeVertexNormals must be called first. Let’s make the material dark red and wire everything into init:

function init () {
  const { scene, camera, renderer } = createScene()
  const { vertices, trianglesIndexes } = useCoordinates()
  const { geo, material, heartMesh } = createHeartMesh(vertices, trianglesIndexes)
}

Wireframe

Once we see the floating heart, we’ll want an effect that outlines its edges. Not only for the sake of style, but also to better understand how the triangles are laid out on the model.

The object that displays the edges of a mesh’s faces is called a wireframe, and since we know how to create a mesh, creating one is simple: a WireframeGeometry can be built automatically from the geometry we created earlier, and with a suitable line material and a LineSegments object we get the wireframe. Finally, we add it as a child of the heart mesh, which is exactly what addWireFrameToMesh above does.

Model in the scene

scene.add(heartMesh)

Adding a mesh to the scene really is that easy. Now let’s see how to display it and apply modifications to it.

Rendering

Rendering is quite simple: we already have a renderer instance, which is everything we need. But render has to be called regularly, especially for animation; the usual rate for smooth motion is sixty frames per second.

We could use setInterval set to a sixtieth of a second, but browsers provide a function built for exactly this purpose, requestAnimationFrame, and it doesn’t lean on the main thread as heavily.

const animate = function () {
 requestAnimationFrame( animate )
 renderer.render( scene, camera )
 heartMesh.rotation.y -= 0.005
}
animate()

requestAnimationFrame takes a callback as an argument and calls it on every available frame. You can read more about it on MDN.

heartMesh.rotation.y -= 0.005

The higher the increment, the faster the rotation.

Let’s try to call all this in a function init:

(function init () {
  const {scene, camera, renderer} = createScene()
  const { controls } = setControls(camera, renderer.domElement, window.location.hash.includes('deviceOrientation'))
  const { vertices, trianglesIndexes} = useCoordinates()
  const { geo, material, heartMesh } = createHeartMesh(vertices, trianglesIndexes)
  scene.add(heartMesh)
  addWireFrameToMesh(heartMesh, geo)
  const { onMouseIntersection } = handleMouseIntersection(camera, scene, heartMesh.uuid)

  window.addEventListener( 'click', onMouseIntersection, false )

  const animate = function () {
    requestAnimationFrame( animate )
    renderer.render( scene, camera )
    heartMesh.rotation.y -= 0.005
    startAnim && beatingAnimation(heartMesh)
    controls.update()
  }
  animate()
})()

Now, if you think you can handle something big, let’s add a more complex animation and see how to trigger it with user input.

Heartbeat

Let’s give the model a second animation, applying everything we learned from the rotation:

Here we need to first increase the scale of the mesh, and then shrink it back to the original. To keep the transformation uniform, add the same increment to all three axes of the mesh’s scale attribute.

You can set a maximum value and start decreasing once it’s reached. I took 1.4, but feel free to make it bigger or smaller to control how pronounced the beat is.

The algorithm is quite simple: add beatingIncrement to each scale property, and as soon as we exceed the maximum value, we switch the boolean flag and subtract the increment from the scale.

This function will be called multiple times by the function animate. Although it looks like code that could be written inside a while, the iterations are triggered by the recursive rendering of the scene.
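The article doesn’t show the full function, so here is one possible version following the description above. The names beatingIncrement, scaleThreshold and startAnim come from the article’s fragments; the exact increment value is my guess.

```javascript
// A sketch of the beating animation, not the article's exact original.
const maxScale = 1.4
const beatingIncrement = 0.008 // assumed step size
let scaleThreshold = false     // true while shrinking back down
let startAnim = false          // toggled by the click handler

function beatingAnimation (mesh) {
  if (mesh.scale.x < maxScale && !scaleThreshold) {
    // grow uniformly on all three axes
    mesh.scale.x += beatingIncrement
    mesh.scale.y += beatingIncrement
    mesh.scale.z += beatingIncrement
  } else {
    // maximum reached: shrink back to the original scale
    scaleThreshold = true
    mesh.scale.x -= beatingIncrement
    mesh.scale.y -= beatingIncrement
    mesh.scale.z -= beatingIncrement
    if (mesh.scale.x <= 1) {
      scaleThreshold = false
      startAnim = false // we must stop it right here or it will start over again
    }
  }
}
```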

Let’s add a function animate:

const animate = function () {
  requestAnimationFrame( animate )
  renderer.render( scene, camera )
  heartMesh.rotation.y -= 0.005
  beatingAnimation(heartMesh)
}

There is a pulse!

Let’s go ahead and let the user decide when the heart should beat.

Interactivity

Our model stands proudly in the scene, even moving by itself. Now let’s see how user interaction can be handled.

Let’s use ray casting to understand how to handle click events on the canvas. Then we’ll set up the camera controls.

Handling user input with ray casting

User interaction of this kind is handled by the Raycaster class, which computes intersections between a ray cast from the pointer and the models in the scene.

function handleMouseIntersection (camera, scene, meshUuid) {
  const raycaster = new THREE.Raycaster();
  const mouse = new THREE.Vector2();

  function onMouseIntersection( event ) {
      const coordinatesObject = event.changedTouches ? event.changedTouches[0] : event
      mouse.x = ( coordinatesObject.clientX / window.innerWidth ) * 2 - 1;
      mouse.y = - ( coordinatesObject.clientY / window.innerHeight ) * 2 + 1;

      raycaster.setFromCamera( mouse, camera );
      const intersects = raycaster.intersectObjects( scene.children );

      if (intersects.length && intersects[0].object.uuid === meshUuid) {
          startAnim = true
      }
  }

  mouse.x = 1
  mouse.y = 1

  return {
      onMouseIntersection
  }
}

This function needs a reference to the camera, scene, and a unique mesh ID to interact with the pointer. It returns a function that will be called on click or touch.

This handler sets the mouse object’s coordinates and asks the raycaster which meshes the pointer intersects.

if (intersects.length && intersects[0].object.uuid === meshUuid) {
    startAnim = true
}

After that, we only need to check whether the identifier of the first intersected object matches the one passed in as a parameter. It must be first, otherwise our mesh is hidden behind another one. If it matches, we flip the flag variable, which triggers the beat animation in the function animate.

const animate = function () {
 requestAnimationFrame( animate )
 renderer.render( scene, camera )
 heartMesh.rotation.y -= 0.005
 startAnim && beatingAnimation(heartMesh)
}

…and switch it back to false when the animation ends.

We need to change the function beatingAnimation. Here’s how to do it:

function beatingAnimation (mesh) {
  // [...]
   if (mesh.scale.x <= 1) {
    scaleThreshold = false
    startAnim = false // we must stop it right here or it will start over again
   }
 }
}

We can call handleMouseIntersection from the init function, and then attach the handler it returns to the window object’s click listener.

And now the heartbeat is controlled!

Controls

You can be proud of yourself: the model is in front of you and you control its animation. But how about admiring it from all angles? Let’s first add orbit controls and then, for mobile users, use the device orientation API.

Orbit controls

Orbit controls let you move the camera closer to or farther from its target and, while staying focused on that target, rotate the camera around it.

Orbit controls are not part of the three.js core API; they ship as a separate script. Add it to index.html with another script tag:

<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>

We now have access to its constructor through the THREE object. Let’s set it up inside a dedicated function:

function setControls (camera, domElement) {
  const controls = new THREE.OrbitControls( camera, domElement )
  controls.update()
}

We call this code in init — and that’s it! Now, by holding down the left mouse button, you can rotate around the heart, zoom in and out with the scroll wheel, and even move the camera with a right-click.

The range of camera movement can be limited; this might be helpful.

These actions are already more than enough to admire your creation, but you can also take into account the orientation of the mobile device, that is, turn physical movement into camera movement.

Device orientation controls

As with orbit controls, we can use a script outside the three.js core: DeviceOrientationControls.

Unlike the Orbit API, it can only change the orientation of the camera, so pretend your phone is a window into a parallel universe where your heart is beating.

The code for DeviceOrientationControls can be found in the three.js repository on GitHub, in the examples section, but it references a build folder that does not exist in our setup. You can create your own copy, or work with a standalone version of the file.

Let’s use this file alongside the previous function. It would be a shame to throw away the work we just did, so let’s make the controls initialization conditional.

This time we need to return the controls reference, because with DeviceOrientationControls you must call controls.update() inside the render loop:

const animate = function () {
  requestAnimationFrame( animate )
  renderer.render( scene, camera )
  heartMesh.rotation.y -= 0.005
  startAnim && beatingAnimation(heartMesh)
  controls.update() // this line is new
}
animate()

The control mode can then be switched via a setControls parameter, which is easy to tie to user input; the init above reads it from the URL hash.

const { controls } = setControls(camera,  renderer.domElement, true)
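The conditional version of setControls isn’t shown in the article, so here is a sketch of what it could look like. The branching logic is my assumption; both control scripts must already be loaded, and the function now returns the controls so the render loop can call controls.update():

```javascript
// Sketch: choose the control scheme from the third parameter.
// Assumes THREE, THREE.OrbitControls and THREE.DeviceOrientationControls
// are available globally, as set up earlier.
function setControls (camera, domElement, useDeviceOrientation) {
  const controls = useDeviceOrientation
    ? new THREE.DeviceOrientationControls(camera)
    : new THREE.OrbitControls(camera, domElement)
  controls.update()
  return { controls } // the render loop needs to keep calling controls.update()
}
```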

Perhaps there is a way to combine both controls into one and get full control of the camera, but I think that this can be confusing.
I hope you have learned something from my experience. Thanks for reading!
