Pauli matrices. Final

This is the final article on this topic. All the previous ones with this title were, in effect, training runs for this one, each with different results. Both you and I seem to be interested in the topic, but frankly, let's not dwell on that.

Spoilers for what awaits you in the finale:

  1. Visualization of the action of Pauli operators on vectors in dynamics.

  2. The concept of combining linear algebra and complex analysis (TFKP, the theory of functions of a complex variable).

  3. A simple definition of a geometric product.

  4. Interaction of covectors and vectors: gradient and Laplace operator.

  5. Generalization of de Moivre's formula to 2×2 matrices.

  6. A wealth of material on Clifford algebras and projective geometry in the links from my friend at the end of the article.

Let's go.

Visualization of the action of Pauli operators on an arbitrary vector and its lines.

These are the Pauli matrices: operators, elements of a Clifford algebra, and at the same time the unit vectors of a coordinate system. The same plane Cartesian vector is r = x*e1 + 0*e2 + z*e3.
This is how they act on a regular column vector, and this is how they correspond to complex numbers. (I marked the text with the colors of the equation and the vector in the picture, to make it easier to see what corresponds to what.) We will write, for example, "the (1,1) axis", meaning the axis specified by the vector with coordinates (1,1).

The identity transformation: σ0 corresponds to the real unit.

Rotation by pi/2 counterclockwise: σ13 corresponds to the imaginary unit.

Reflection about the (1,1) axis: formally*, σ1 corresponds to the fractional-linear transformation (z + 1i*x)/(x + 1i*z).

Reflection about the (1,0) axis: formally*, σ3 corresponds to the fractional-linear transformation (x - 1i*z)/(x + 1i*z).

*With the last two, things are not so simple. Because reflections do not commute while complex multiplication does, this approach has to take the point at infinity into account.
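
A minimal numpy sketch of the four actions above, under this article's conventions (horizontal axis σ1, vertical axis σ3, so a plane vector is the column (x, z)); the sample vector (2, 1) is my own choice:

```python
# Sketch: the four Pauli actions on a column vector (x, z).
import numpy as np

s0  = np.array([[1, 0], [0, 1]])   # sigma0, identity
s1  = np.array([[0, 1], [1, 0]])   # sigma1
s3  = np.array([[1, 0], [0, -1]])  # sigma3
s13 = s1 @ s3                      # sigma13 = [[0, -1], [1, 0]], plays the role of i

r = np.array([2, 1])               # an arbitrary vector (x, z)

print(s0  @ r)  # [ 2  1] -- identity transformation
print(s13 @ r)  # [-1  2] -- rotation by pi/2 counterclockwise
print(s1  @ r)  # [ 1  2] -- reflection about the (1,1) axis
print(s3  @ r)  # [ 2 -1] -- reflection about the (1,0) axis
```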

It works similarly when the vector itself is given in the form of Pauli matrices, only the multiplication rules that turn vectors into other vectors are trickier, so you need to get used to them before using this.

Read off the coordinates in the basis of the Pauli matrices like this. In our case y = 0.

The same vectors, written through the Pauli matrices.

Horizontal axis σ1, vertical axis σ3. Reflection about the (1,1) axis is multiplication on the left by 1*σ1 and on the right by 1*σ3. The second reflection is more complicated; I leave it outside the brackets, and the link at the end of the article covers it in detail.

Multiplication between these unit vectors (orts) is defined as follows.

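A short numpy check of this multiplication table, plus the (1,1)-axis reflection from above written as left and right multiplication; a sketch under the same conventions:

```python
# Sketch: multiplication table of the orts, and the sandwich reflection.
import numpy as np

s0  = np.eye(2)
s1  = np.array([[0., 1.], [1., 0.]])
s3  = np.array([[1., 0.], [0., -1.]])
s13 = s1 @ s3

assert np.allclose(s1 @ s1, s0)           # sigma1^2 = sigma0
assert np.allclose(s3 @ s3, s0)           # sigma3^2 = sigma0
assert np.allclose(s1 @ s3, -(s3 @ s1))   # sigma1 and sigma3 anticommute
assert np.allclose(s13 @ s13, -s0)        # sigma13^2 = -sigma0, like i^2 = -1

x, z = 2., 1.
r = x*s1 + z*s3                           # the vector written as a matrix
assert np.allclose(s1 @ r @ s3, z*s1 + x*s3)  # reflection about the (1,1) axis
```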

You can see that in this formulation it makes almost no difference what you work with: ordinary vectors, complex numbers, or vectors in matrix form.

How all the resulting vectors change as the original vector changes. Horizontal axis σ1, vertical axis σ3.

When multiplying by one of the operators, keep in mind a picture of how the vector changes under the multiplication. It is convenient to memorize this using a cheat sheet from ordinary linear algebra and by watching a video with the vectors and the lines orthogonal to them that these vectors form. Better still, multiply and draw for yourself.

Below the cut is a cheat sheet and screenshots of how this whole animation is set up.

Normalized vectors:
purple – σ0*rv/|rv|, orange – σ13*rv/|rv|, red – σ1*rv/|rv|, green – σ3*rv/|rv|. The lines correspond to their orthogonal vectors by color.

Additional vectors and lines were created to model the multiplication of two arbitrary vectors; α and β are arbitrary numbers here. I left them in, in case someone needs them.

On the principle of combining vector algebra and complex analysis (TFKP).

Formally, this gives the transformation of a vector into an oriented arc on a sphere with the radius of this vector.

What does the product of vectors a*b correspond to, given that one of them equals such a vector matrix?

For more details see Gordeev, in the folder on Yandex Disk at the link below. Personally, though, I find that everything is simpler and can be worked out on the plane. More on this assumption below.

The gradient (the differentiation operator) and a vector of real coordinates, on the plane, written in terms of Pauli matrices.

We will say that this corresponds to ordinary vectors. Or rather, to a covector and a vector.

When multiplying, the operator is placed first, and it is always transposed, even if this is not indicated. If only all textbooks wrote this in bold…
An example from ordinary linear algebra: a row (a transposed column) is multiplied by another column; the row is an operator acting on the column.
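
A tiny numpy illustration of this convention; the numbers are arbitrary:

```python
# Sketch: the row (covector, the operator) comes first and acts on the column.
import numpy as np

covector = np.array([[1, 2]])    # a row, i.e. a transposed column
vector   = np.array([[3], [4]])  # a column, the object being acted on

print(covector @ vector)         # [[11]] -- the scalar 1*3 + 2*4
```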

Let's rotate by pi/2 counterclockwise.

We will say that this corresponds to the normals to the covector and the vector.

Let's perform half of the reflection σ1*r*σ1 of the vectors above about the (1,1) axis; that is, let's compute (σ1*r) and (r*σ1). Let's call the result what it is: the form corresponding to complex numbers (hereinafter CN); in 3D the analogue is the quaternions. The CN and its conjugate:
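
A numpy sketch of these two "half reflections", assuming the same conventions as above: σ1*r lands on the CN form, r*σ1 on its conjugate.

```python
# Sketch: half a reflection gives the CN form and its conjugate.
import numpy as np

s0  = np.eye(2)
s1  = np.array([[0., 1.], [1., 0.]])
s3  = np.array([[1., 0.], [0., -1.]])
s13 = s1 @ s3

x, z = 2., 1.
r = x*s1 + z*s3

assert np.allclose(s1 @ r, x*s0 + z*s13)  # "x + i z" -- the CN
assert np.allclose(r @ s1, x*s0 - z*s13)  # "x - i z" -- its conjugate
```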

The coordinates of the CN are read off in the same way, but with additional capabilities. For example, these two gradients are orthogonal.

That is, the tool briefly described here lets you combine the capabilities of linear algebra and complex analysis. For example, in Clifford algebras there is division by a vector, you can multiply from either side, and all the information is kept in a single expression, in contrast to the same operations with column vectors. At the end of the article, and in the PPS, it gets even more interesting.
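
Division by a vector is easy to check in numpy; a sketch under the same conventions, using the fact that r*r equals the squared length times σ0:

```python
# Sketch: the inverse of a nonzero plane vector is r / |r|^2.
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

x, z = 2., 1.
r = x*s1 + z*s3

r_inv = r / (x*x + z*z)                      # since r @ r = (x^2 + z^2) * sigma0
assert np.allclose(r_inv, np.linalg.inv(r))  # agrees with the matrix inverse
```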

What is also interesting: ordinary vectors and vectors in CN form, if superimposed on the same plane, almost never coincide over the set of all vectors, which you can verify by trying to solve this equation in real numbers:

Or you can just watch the video above and play with the animation formulas. The previous article was about this, but most readers did not like it, for the reasons described at the end of this article.

In total: to convert a vector into CN form, multiply it on either side by the unit element σ1 or σ3 (an ort); to convert it back, multiply it on the same side by the same ort. It looks like this:

σ0 corresponds to the usual unit; σ13 corresponds to the imaginary unit.

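The round trip is one line in numpy; a sketch (σ1 is used here, σ3 works the same way):

```python
# Sketch: to CN form and back, multiplying on the same side by the same ort.
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

r  = 2.*s1 + 1.*s3
cn = s1 @ r                      # into CN form
assert np.allclose(s1 @ cn, r)   # back again, since s1 @ s1 = sigma0
```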

In CN form the concept of the conjugate appears. Ordinary vectors do not have one, since a transposed vector equals itself:

A simple definition for a geometric product.

A geometric product is a way to project a vector simultaneously onto another vector and onto the normal of that other vector, i.e. to expand it along the directions of that other vector without loss of information. Well, it is much shorter to write it this way.

The operator, that is, the vector onto which we project, is placed first.

You can see that the result of the product of an ordinary vector with a vector is a vector in CN form, and also that it is the sum of what is commonly called the scalar and the “vector” product.
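
A numpy sketch of this decomposition, same conventions; the σ0 part of the product is the scalar product and the σ13 part is the “vector” (outer) product:

```python
# Sketch: geometric product = scalar part + sigma13 (outer) part.
import numpy as np

s0  = np.eye(2)
s1  = np.array([[0., 1.], [1., 0.]])
s3  = np.array([[1., 0.], [0., -1.]])
s13 = s1 @ s3

x1, z1 = 3., 1.            # the operator vector r1
x,  z  = 2., 1.            # the object vector r
r1 = x1*s1 + z1*s3
r  = x*s1  + z*s3

dot   = x1*x + z1*z        # scalar product
outer = x1*z - z1*x        # oriented area, the 2D "cross" component
assert np.allclose(r1 @ r, dot*s0 + outer*s13)
```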

“*” An interesting observation: projecting (a vector in CN form) onto (an ordinary vector) gives (an ordinary vector). Admittedly, it is a peculiar construction; it consists of:

1. the scalar product, in the form of the vector r1.

2. the outer product, in the form of a normal to the vector r1.

That is, this operation automatically scales the operator vector and its normal along the directions of the object vector. It is convenient as an additional tool.

If it is true that both vectors and complex numbers are elements of the same plane, then, for example, the operation σ1*r*σ1 can be viewed as first a projection of σ1 onto the vector r, and then a projection of the resulting vector onto σ1.

When a vector is multiplied by itself, the geometric product gives the square of the length.

Squared length.

The scalar product is also called the inner product; it determines the projection of the vector r onto the direction of r1. It is the half-sum of the geometric products r1*r and r*r1.

The dot product. In more familiar notation it is also (r1, r).

The “vector” product is actually the outer product, with one amendment (many eminent authors have already cursed the inventor of the cross product; I will not repeat them). It determines the projection of r onto the normal to r1 formed by multiplication by σ13. It is the half-difference of the geometric products r1*r and r*r1.

When the vectors are co-directed, the outer product is zero and the vector r is scaled by the vector r1. When the vectors are orthogonal, the scalar product is zero and the vector is scaled by the vector rn1 (the normal to r1). Just like an ordinary number.
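
The half-sum and half-difference formulas above are also a two-line numpy check; a sketch with arbitrary sample vectors:

```python
# Sketch: inner part is the symmetric half, outer part the antisymmetric half.
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

r1 = 3.*s1 + 1.*s3
r  = 2.*s1 + 1.*s3

inner = (r1 @ r + r @ r1) / 2               # (3*2 + 1*1) * sigma0
outer = (r1 @ r - r @ r1) / 2               # (3*1 - 1*2) * sigma13
assert np.allclose(inner + outer, r1 @ r)   # together they rebuild the product

rr = r @ r                                  # a vector times itself:
assert np.allclose(rr, (2.*2. + 1.*1.) * np.eye(2))  # the squared length
```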

Relationship between vector and covector.

What about the relationship between the gradients and the coordinate vectors in the two forms? Here:

The result is a metric tensor of the Cartesian coordinate system multiplied by two.

That is, the usual gradient is co-directed with the usual coordinate vector, and the gradient in CN form is co-directed with the coordinate vector in CN form.
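
One way to reproduce the "twice the metric" statement in sympy; the exact formula in the original picture is my assumption here (the gradient ∇ = σ1*∂/∂x + σ3*∂/∂z applied to r = x*σ1 + z*σ3):

```python
# Sketch: nabla applied to the coordinate vector gives 2*sigma0,
# i.e. twice the (identity) metric tensor of Cartesian coordinates.
import sympy as sp

x, z = sp.symbols('x z')
s1 = sp.Matrix([[0, 1], [1, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

r = x*s1 + z*s3
grad_r = s1 * sp.diff(r, x) + s3 * sp.diff(r, z)
print(grad_r)   # Matrix([[2, 0], [0, 2]])
```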

What happens, then, when you multiply a coordinate vector by a vector similar to the gradient but without the derivatives in it? This is very closely related to quadratic forms and to the tensor product of vectors built on the uvwt decomposition (see under the cut below). I am not writing it out in detail, because that is enough letters for another article.

Here α is the angle of rotation from r to the position r′. You can use it to check whether you made a sign mistake.

Laplace operator in the basis of Pauli matrices.

The Laplace operator in the form of ordinary vectors and in CN form.

If the function is smooth, the mixed derivatives are equal, their difference vanishes, and we get the familiar Laplace operator. For details and various physics, see David Hestenes; he has been developing geometric algebra for many decades.
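
A sympy sketch of this computation; sympy treats mixed partials of a smooth symbolic function as equal, so the σ13 term cancels by itself and only the Laplacian survives:

```python
# Sketch: applying the Pauli-basis gradient twice yields the Laplacian.
import sympy as sp

x, z = sp.symbols('x z')
s1 = sp.Matrix([[0, 1], [1, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

f = sp.Function('f')(x, z)

def grad(m):
    # nabla = s1*d/dx + s3*d/dz, multiplied on the left
    return s1 * sp.diff(m, x) + s3 * sp.diff(m, z)

print(sp.simplify(grad(grad(f * sp.eye(2)))))
# Matrix([[f_xx + f_zz, 0], [0, f_xx + f_zz]]) -- the Laplace operator
```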

You can keep combining all of this, but I did not set myself the goal of describing everything here down to the smallest detail. Anyone who wants to can build everything they need from it. My goal was to understand this algebra and to share it with everyone who is interested, including new products like uvwt. I figured it out, and I am sharing it.

For more details, see the links below to a mountain of information.

Generalization of de Moivre's formula to 2×2 matrices.

As a bonus: a generalization of de Moivre's formula for powers of complex numbers to 2×2 matrices, plus the matrix exponential and its logarithm, the latter for an arbitrary 2×2 matrix, in terms of the uvwt decomposition. I recently derived it using symbolic calculations…

The form below is exactly as it was derived, without abbreviations. You can see that if we substitute v = w = 0 and take rq imaginary, then, taking into account the properties of sinh and cosh, we obtain exactly de Moivre's formula. This means we can now try to connect rotations in ordinary space with Lorentz transformations in Minkowski space.

More details under the cut, including a reminder of the uvwt decomposition.

If rq is real, we get a mixture of a number, a vector, and the vector part of a two-dimensional quaternion: four components in total.

If rq is imaginary, then, since σ13 corresponds to the imaginary unit, we get a mixture of a number and the vector part of a two-dimensional quaternion: three components in total.

Both, given what is stated in this article, are not difficult to interpret. But what do we do with it?

If rq = 0, then in the limit sinh(rq*q)/rq → q.

And this matrix representation turns out to be defined everywhere. Note also the connection with two important functions: the cardinal sine (sinc) and the sine integral.

The trick is that such a representation does not require branches of multivalued complex functions. It is also interesting that rq turns out to be an analogue of the argument of a complex number: it is given as the square root of the sum of the squares of the two reflection coefficients minus the square of the rotation coefficient. Recall that a rotation consists of two reflections.
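
Since the formula was derived by symbolic computation, it can be re-checked the same way. A sympy sketch for the matrix exponential, under my assumption about the uvwt layout (M = t*σ0 + u*σ1 + v*σ3 + w*σ13) and with sample coefficients chosen so that rq = 2:

```python
# Sketch: exp(M) = exp(t) * (cosh(rq)*sigma0 + sinh(rq)/rq * (u*s1 + v*s3 + w*s13)),
# with rq^2 = u^2 + v^2 - w^2 (reflections squared minus rotation squared).
import sympy as sp

s0  = sp.eye(2)
s1  = sp.Matrix([[0, 1], [1, 0]])
s3  = sp.Matrix([[1, 0], [0, -1]])
s13 = s1 * s3

t, u, v, w = sp.Rational(1, 2), 2, 1, 1
A = u*s1 + v*s3 + w*s13              # the non-scalar (uvw) part
M = t*s0 + A

rq = sp.sqrt(u**2 + v**2 - w**2)     # here rq = 2
assert sp.simplify(A*A - rq**2*s0) == sp.zeros(2, 2)   # A^2 = rq^2 * sigma0

closed = sp.exp(t) * (sp.cosh(rq)*s0 + sp.sinh(rq)/rq * A)
diff = (M.exp() - closed).rewrite(sp.exp)
assert sp.simplify(diff) == sp.zeros(2, 2)             # matches exp(M) exactly
```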

PS

Why present all this through Pauli matrices rather than through abstract definitions? For one thing, because I wrote the previous article to share a simple, freshly derived formula for decomposing any 2×2 matrix in the basis of the Pauli matrices (uvwt), and to test how material is received when it is given as dry formulas with clever words and little explanation. Overall, the result was negative.

Will abstract mathematical formalism help? From my point of view, it does harm unless you are writing for narrow specialists: 90% of people understand nothing from an abstract presentation, even when they want to. I think the result of the last article would have been the same even if I had doubled its volume, spelled out all the definitions completely, and brought everything up to textbook formalism. I am sure that if the goal is to transfer knowledge rather than to indulge in narcissism, one must explain on the plane and through these matrices, and only then move on to abstractions and higher dimensions. I know of no simpler way. I invite anyone willing to help rewrite this article to make it even simpler.

The first condition to be fulfilled in mathematics is to be precise; the second is to be clear and, as far as possible, simple. (L. Carnot)

And the last two are violated in almost all textbooks and articles.

My list of literature on this topic is in the first article. I especially recommend Kazanova: it is very simply written compared with scientific articles and other textbooks.

Visual pictures of constructing figures in this basis are in the second article.

P.P.S.

Here are links to a wealth of material on projective geometry and Clifford algebras.

At first I wanted to write here about specifying straight lines in this formalism, and I even wrote it up, but then I realized that my friend Igor writes about this more abstractly than I could, and much better. For those who really want to figure it out, there are pointers in his article. In general, I decided not to add to the volume of your reading.

Here is more about the convenience of such a coordinatization, through lines constructed through two points and points formed by the intersection of those lines. I recommend it; there is a whole series of articles on Clifford algebras and projective geometry: On the meaning of algebra and geometry.

He and Savateev (for those who know this last name) planned a video on this topic.

Igor also gave a link to his entire collection of books and links on Clifford algebras. At the root of the folder, at the link, is the original textbook by Clifford himself.

You can ask him questions on his site on computational mathematics.
