# Introduction to Spin 1/2 Using NumPy

Jay Foley, University of North Carolina Charlotte

## Objectives

- To introduce the formalism used to understand the quantum states of particles with spin 1/2

- To illustrate the use of NumPy for basic operations with vectors and matrices

## Learning Outcomes

By the end of this workbook, students should be able to:

- Identify the eigenstates of z-spin

- Use the eigenstates of z-spin as a basis for computing expectation values of x-spin and y-spin

- Explain the concept of matrix representations of operators

- Utilize NumPy to build matrix representations of operators

- Utilize NumPy to identify eigenstates of x-spin and y-spin

- Utilize NumPy to confirm the commutation relations for the matrix representations of operators

## Summary

We will use Python and NumPy to illustrate basic formalism of spin 1/2 in quantum mechanics. We assume familiarity with this formalism; for background on this topic, we recommend you read this chapter on Spin.

Spin angular momentum is an observable in quantum mechanics and has associated operators with eigenstates. Traditionally, the components of angular momentum are represented along the \(x\), \(y\), and \(z\) axes, and we have Hermitian operators associated with each component (\(\hat{S}_x, \hat{S}_y, \hat{S}_z\)), along with the square magnitude \(\hat{S}^2\). For particles like electrons, protons, and neutrons, these component operators all have exactly two eigenstates with eigenvalues \(\pm \frac{1}{2}\hbar\); hence we talk about the formalism of spin for these systems as the formalism of spin 1/2. In this workbook, we will introduce matrix representations of each of these component operators, and the eigenstates will then have vector representations. We will specifically introduce the eigenvectors of the matrix associated with \(\hat{S}_z\) as the basis vectors for any state of spin 1/2. We will then be able to write the matrices associated with \(\hat{S}_x\) and \(\hat{S}_y\) in this basis, and perform useful computations with them, including finding their eigenstates and verifying so-called commutation relations between these operators.

## Import statements

We will import the `numpy` library and its linear algebra module for working with the spin matrices and vectors.

```
import numpy as np
from numpy import linalg as la
```

## Numpy arrays: Vectors

Numpy arrays are special types of variables that can make use of the different mathematical operations in the numpy library. We will see that many linear algebra operations can be performed with numpy arrays using very simple syntax. Numpy arrays can have an arbitrary number of dimensions, but we will use 2-dimensional numpy arrays with a single column and multiple rows to denote a column vector. We can take the transpose of these numpy arrays to represent a row vector.
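As a minimal sketch of this idea (the variable names `col` and `row` are our own), note how transposing changes the shape from a column to a row:

```python
import numpy as np

# a 2x1 column vector (2 rows, 1 column)
col = np.array([[1], [0]])
# its transpose is a 1x2 row vector (1 row, 2 columns)
row = col.T

print(np.shape(col))  # (2, 1)
print(np.shape(row))  # (1, 2)
```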

Here we will introduce the vector representations of the special spin states that have a precise value of z-spin, that is, the eigenstates of the \(\hat{S}_z\) operator:

\[ |\chi_{\alpha}^{(z)}\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |\chi_{\beta}^{(z)}\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \]

We refer to these column vector representations of states as kets.

\(|\chi_{\alpha}^{(z)}\rangle\) can be formed using the following syntax:
`ket_alpha = np.array([[1],[0]])`

We can get the number of rows and number of columns (the shape) of this vector using `np.shape(ket_alpha)`.

```
# insert code to assign ket chi_alpha
ket_alpha = np.array([[1],[0]])
# insert code to assign ket chi_beta
ket_beta = np.array([[0], [1]])
# insert code to print both kets
print("|Chi_alpha>")
print(ket_alpha)
print("|Chi_beta>")
print(ket_beta)
# compute and print the shape of bra_alpha
print("Printing shape of |alpha>")
print( np.shape(ket_alpha) )
```

```
|Chi_alpha>
[[1]
[0]]
|Chi_beta>
[[0]
[1]]
Printing shape of |alpha>
(2, 1)
```

We can form the bras corresponding to these kets by taking the complex conjugate and transpose of the column vectors we have just formed. The result will be row vectors, in keeping with the "bra"-"ket" convention.

This operation can be computed using the following syntax:
`bra_alpha = ket_alpha.conj().T`

You can compute the shape of the bras in the same way as you used for the kets; take note of how the shape changes.

```
# insert code to assign bra chi_alpha as adjoint of ket chi_alpha
bra_alpha = ket_alpha.conj().T
# insert code to assign bra chi_beta as adjoint of ket chi_beta
bra_beta = ket_beta.conj().T
# insert code to print both bras
print("<Chi_alpha|")
print(bra_alpha)
print("<Chi_beta|")
print(bra_beta)
# compute and print the shape of bra_alpha
print("Printing shape of <alpha|")
print(np.shape(bra_alpha))
```

```
<Chi_alpha|
[[1 0]]
<Chi_beta|
[[0 1]]
Printing shape of <alpha|
(1, 2)
```

## Computing the bra-ket

We can view the bra-ket (also called the inner product between the bra and the ket) as a test of how much the state in the bra projects onto the state in the ket. The answer can be anywhere between 0 (the states do not project onto each other at all; they are orthogonal states with no overlap) and 1 (the states project perfectly onto one another; they have perfect overlap and are identical states). We know (or will soon learn) that the spin states are orthonormal: they have perfect overlap with themselves and zero overlap with each other. This is codified in the following mathematical statement:

\[ \langle \chi_n^{(z)} | \chi_m^{(z)} \rangle = \delta_{nm} \]

where we have used the Kronecker delta, \(\delta_{nm} = 0\) if \(n \neq m\) and \(\delta_{nm} = 1\) if \(n = m\).

With their vector representations, we can compute the bra-ket using the dot product as follows:
`bra_ket_aa = np.dot(bra_alpha, ket_alpha)`

```
# insert code to compute <alpha|alpha>
bra_ket_aa = np.dot(bra_alpha, ket_alpha)
# insert code to compute <alpha|beta>
bra_ket_ab = np.dot(bra_alpha, ket_beta)
# insert code to compute <beta|alpha>
bra_ket_ba = np.dot(bra_beta, ket_alpha)
# insert code to compute <beta|beta>
bra_ket_bb = np.dot(bra_beta, ket_beta)
# print all bra-kets to make sure they behave as expected
print("<alpha|alpha> = ", bra_ket_aa)
print("<alpha|beta> = ", bra_ket_ab)
print("<beta|alpha> = ", bra_ket_ba)
print("<beta|beta> = ", bra_ket_bb)
```

```
<alpha|alpha> = [[1]]
<alpha|beta> = [[0]]
<beta|alpha> = [[0]]
<beta|beta> = [[1]]
```
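The same machinery handles states that are superpositions of the basis kets. As a short sketch (the state and the name `ket_plus` are our own, not part of the exercises above): the normalized state \(\frac{1}{\sqrt{2}}\left(|\chi_{\alpha}^{(z)}\rangle + |\chi_{\beta}^{(z)}\rangle\right)\) has unit norm, and its squared overlap with \(|\chi_{\alpha}^{(z)}\rangle\) is 1/2:

```python
import numpy as np

ket_alpha = np.array([[1], [0]])
ket_beta = np.array([[0], [1]])

# a normalized superposition of the basis kets; the name ket_plus is our own
ket_plus = (ket_alpha + ket_beta) / np.sqrt(2)

# normalization: <plus|plus> should be 1
norm = np.dot(ket_plus.conj().T, ket_plus)
print(norm)  # [[1.]]

# overlap amplitude <alpha|plus>; its squared modulus is the probability 1/2
amp = np.dot(ket_alpha.conj().T, ket_plus)
print(np.abs(amp) ** 2)  # [[0.5]]
```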

## Numpy arrays: Matrices

We will use 2-dimensional numpy arrays with an equal number of rows and columns to denote square matrices.

Let's use as an example the matrix representation of the \(\hat{S}_z\) operator:

\[ \mathbb{S}_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]

\(\mathbb{S}_z\) can be formed using the following syntax:
`Sz = hbar / 2 * np.array([[1, 0],[0, -1]])`

You can take the shape of the Sz matrix as before; take note of how its shape compares to the shape of the bras and kets.

**Note** The value of \(\hbar\) in atomic units is 1.

```
# define hbar in atomic units
hbar = 1
# insert code to define the Sz matrix
Sz = hbar / 2 * np.array([[1, 0], [0, -1]])
# insert code to print the matrix
print("Printing matrix representation of the Sz operator")
print(Sz)
# print shape of Sz
print("Printing the shape of the Sz matrix")
print(np.shape(Sz))
```

```
Printing matrix representation of the Sz operator
[[ 0.5 0. ]
[ 0. -0.5]]
Printing the shape of the Sz matrix
(2, 2)
```

## Matrix-vector products

An important property of the basis kets \(|\chi_{\alpha}^{(z)} \rangle\) and \(|\chi_{\beta}^{(z)} \rangle\) is that they are eigenstates of the \(\hat{S}_z\) operator, satisfying

\[ \hat{S}_z |\chi_{\alpha}^{(z)}\rangle = \frac{\hbar}{2} |\chi_{\alpha}^{(z)}\rangle \]

\[ \hat{S}_z |\chi_{\beta}^{(z)}\rangle = -\frac{\hbar}{2} |\chi_{\beta}^{(z)}\rangle \]

This property should be preserved with the matrix and vector representations of these operators and states, respectively. We can confirm this by taking the matrix-vector product between \(\mathbb{S}_z\) and the vectors corresponding to these basis kets using the syntax

`Sz_ket_a = np.dot(Sz, ket_alpha)`

To see that this is the case, we will subtract the right hand side of each eigenvalue equation from the left hand side, which should result in zero vectors if the relations hold.

```
# compute product of Sz and ket_alpha
Sz_ket_a = np.dot(Sz, ket_alpha)
# compute product of Sz and ket_beta
Sz_ket_b = np.dot(Sz, ket_beta)
# print product of Sz and ket_alpha
print("Printing Sz|alpha> - 1/2|alpha>")
print(Sz_ket_a - hbar/2 * ket_alpha)
# print product of Sz and ket_beta
print("Printing Sz|beta> + 1/2|beta>")
print(Sz_ket_b + hbar/2 * ket_beta)
```

```
Printing Sz|alpha> - 1/2|alpha>
[[0.]
[0.]]
Printing Sz|beta> + 1/2|beta>
[[0.]
[0.]]
```

## Hermitian matrices

The matrix representations of operators in quantum mechanics are Hermitian matrices. Hermitian matrices have the special property that they are equal to their own adjoint (i.e., their complex conjugate transpose).

You can confirm that \(\mathbb{S}_z\) is Hermitian by the following syntax:

`Sz_adjoint = Sz.conj().T`

`print(np.allclose(Sz_adjoint, Sz))`

where the first line computes the adjoint of \(\mathbb{S}_z\) and stores it to a variable `Sz_adjoint`, and the second line prints the result of comparing all elements of `Sz_adjoint` to `Sz`. A return value of `True` indicates that `Sz_adjoint` is numerically equal to `Sz`.

```
Sz_adjoint = Sz.conj().T
# Confirm Sz is Hermitian here
print("Testing if Sz is close to its adjoint")
print(np.allclose(Sz_adjoint, Sz))
```

```
Testing if Sz is close to its adjoint
True
```

## Eigenvalues and eigenvectors

An important property of Hermitian matrices is that their eigenvalues are real numbers. In quantum mechanics, we associate the possible outcomes of measurements with the eigenvalues of Hermitian operators corresponding to the observable being measured. In this notebook, we have been talking about the observable of spin angular momentum, which is a vector quantity. We have been specifically looking at the operators and eigenstates related to the z-component of spin angular momentum, denoted \(S_z\). We have seen that this operator has two eigenstates, \(|\chi_{\alpha}^{(z)}\rangle\) and \(|\chi_{\beta}^{(z)}\rangle\), with associated eigenvalues \(\frac{\hbar}{2}\) and \(-\frac{\hbar}{2}\), which are both real numbers.

These relationships are preserved when we use the matrix-vector representation of operators and eigenstates. In general, an eigenvalue equation with matrices and vectors satisfies

\[ \mathbb{M} \mathbf{x} = \lambda \mathbf{x} \]

where \(\lambda\) is an eigenvalue (which is a number) and \(\mathbf{x}\) is an eigenvector. One way of interpreting such equations is to say that the action of a matrix on its eigenvectors is simply to scale the magnitude of the vector by a number (specifically, by its eigenvalue). This is a very special situation, because typically, when a vector is multiplied by a matrix, the result is a new vector that points along a new direction and has a different magnitude. For a lovely explanation with graphical illustrations, please consult this video. In fact, the entire 3b1b series on linear algebra is wonderful!
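To make this concrete with a small numerical sketch (the matrix `M` is our own example, not one of the spin matrices): multiplying a generic vector by `M` changes its direction, while multiplying one of `M`'s eigenvectors only rescales it by its eigenvalue.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([[1.0], [0.0]])               # a generic vector
x = np.array([[1.0], [1.0]]) / np.sqrt(2)  # an eigenvector of M with eigenvalue 3

print(M @ v)                      # [[2.], [1.]] -- direction changed
print(np.allclose(M @ x, 3 * x))  # True -- only scaled by the eigenvalue
```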

We have already seen that the vectors associated with the basis kets \(|\chi_{\alpha}^{(z)}\rangle\) and \(|\chi_{\beta}^{(z)}\rangle\) obey this relationship with \(\mathbb{S}_z\). What we will now do is consider the matrices associated with the spin angular momentum components along \(x\) and \(y\). We will first see that the basis kets \(|\chi_{\alpha}^{(z)}\rangle\) and \(|\chi_{\beta}^{(z)}\rangle\) are not eigenvectors of \(\mathbb{S}_x\) and \(\mathbb{S}_y\). We will then use numpy's linear algebra sub-library to compute the eigenvalues and eigenvectors of these matrices, which will turn out to be linear combinations of \(|\chi_{\alpha}^{(z)}\rangle\) and \(|\chi_{\beta}^{(z)}\rangle\).

### Build matrix form of \(\mathbb{S}_x\) and \(\mathbb{S}_y\)

The operator \(\hat{S}_x\) has the matrix form

\[ \mathbb{S}_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \]

and the operator \(\hat{S}_y\) has the matrix form

\[ \mathbb{S}_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \]

**Hint:** The imaginary unit \(i = \sqrt{-1}\) can be accessed as `1j` in Python.

```
# insert code to build Sx
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])
# insert code to build Sy
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
# print Sx
print("Printing the matrix representation of the Sx operator")
print(Sx)
# print Sy
print("Printing the matrix representation of the Sy operator")
print(Sy)
```

```
Printing the matrix representation of the Sx operator
[[0. 0.5]
[0.5 0. ]]
Printing the matrix representation of the Sy operator
[[0.+0.j 0.-0.5j]
[0.+0.5j 0.+0.j ]]
```

### Take matrix-vector product of \(\mathbb{S}_x\) and \(\mathbb{S}_y\) with the basis kets

Just as we did with \(\mathbb{S}_z\), take the following matrix-vector products:

\[ \mathbb{S}_x |\chi_{\alpha}^{(z)}\rangle, \qquad \mathbb{S}_x |\chi_{\beta}^{(z)}\rangle, \qquad \mathbb{S}_y |\chi_{\alpha}^{(z)}\rangle, \qquad \mathbb{S}_y |\chi_{\beta}^{(z)}\rangle \]

**Question 1:** After inspecting the results of each matrix-vector product, do you think the basis kets are eigenstates of
\(\mathbb{S}_x\) and \(\mathbb{S}_y\)? Explain your reasoning.

**Question 2:** What is the shape of the result of each matrix-vector product?

```
# compute product of Sx and ket_alpha and store to Sx_ket_a; print it
Sx_ket_a = np.dot(Sx, ket_alpha)
print("Printing the product Sx|alpha>")
print(Sx_ket_a)
# compute product of Sx and ket_beta and store to Sx_ket_b; print it
Sx_ket_b = np.dot(Sx, ket_beta)
print("Printing the product Sx|beta>")
print(Sx_ket_b)
# compute product of Sy and ket_beta and store to Sy_ket_b; print it
Sy_ket_b = np.dot(Sy, ket_beta)
print("Printing the product Sy|beta>")
print(Sy_ket_b)
# compute product of Sy and ket_alpha and store to Sy_ket_a; print it
Sy_ket_a = np.dot(Sy, ket_alpha)
print("Printing the product Sy|alpha>")
print(Sy_ket_a)
# print shape of one of the resulting vectors
print("Printing shape of the product Sx|alpha>")
print(np.shape(Sx_ket_a))
```

```
Printing the product Sx|alpha>
[[0. ]
[0.5]]
Printing the product Sx|beta>
[[0.5]
[0. ]]
Printing the product Sy|beta>
[[0.-0.5j]
[0.+0.j ]]
Printing the product Sy|alpha>
[[0.+0.j ]
[0.+0.5j]]
Printing shape of the product Sx|alpha>
(2, 1)
```

### Use `eigh()` to compute the eigenvectors and eigenvalues of \(\mathbb{S}_x\) and \(\mathbb{S}_y\)

Numpy has a linear algebra library that can compute eigenvalues and eigenvectors of Hermitian matrices, called using the syntax

`eigenvalues, eigenvectors = la.eigh(M)`

where `eigenvalues` will store all of the eigenvalues and `eigenvectors` will store all of the eigenvectors.

Use this method to compute the eigenvalues and eigenvectors of \(\mathbb{S}_x\) and \(\mathbb{S}_y\).

**Note**: `eigenvectors[:, i]` is the normalized eigenvector corresponding to the eigenvalue `eigenvalues[i]`.

**Question 3:** What is the shape of `vals_x`? What is the shape of `vecs_x`?

**Question 4:** Do these matrices have the same eigenvalues as \(\mathbb{S}_z\)? Do they have the same eigenvectors as \(\mathbb{S}_z\)?

```
# compute eigenvectors and eigenvalues of Sx, store them to vals_x, vecs_x
vals_x, vecs_x = la.eigh(Sx)
# compute eigenvectors and eigenvalues of Sy, store them to vals_y, vecs_y
vals_y, vecs_y = la.eigh(Sy)
# print shape of vals_x
print("this is the shape of vals_x")
print(np.shape(vals_x))
# print shape of vecs_x
print("this is the shape of vecs_x")
print(np.shape(vecs_x))
print("Eigenvalues of Sx")
print(vals_x)
print("Eigenvectors of Sx")
print(vecs_x)
print("Eigenvalues of Sy")
print(vals_y)
print("Eigenvectors of Sy")
print(vecs_y)
```

```
this is the shape of vals_x
(2,)
this is the shape of vecs_x
(2, 2)
Eigenvalues of Sx
[-0.5 0.5]
Eigenvectors of Sx
[[-0.70710678 0.70710678]
[ 0.70710678 0.70710678]]
Eigenvalues of Sy
[-0.5 0.5]
Eigenvectors of Sy
[[-0.70710678+0.j -0.70710678+0.j ]
[ 0. +0.70710678j 0. -0.70710678j]]
```
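As a quick consistency check (a sketch reusing the definitions above), each column of `vecs_x` should satisfy the eigenvalue equation for `Sx`:

```python
import numpy as np
from numpy import linalg as la

hbar = 1
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])
vals_x, vecs_x = la.eigh(Sx)

# column i of vecs_x should satisfy Sx x_i = lambda_i x_i
for i in range(2):
    print(np.allclose(Sx @ vecs_x[:, i], vals_x[i] * vecs_x[:, i]))  # True, True
```

Note that the columns of `vecs_x` are (up to sign) the combinations \(\frac{1}{\sqrt{2}}\left(|\chi_{\alpha}^{(z)}\rangle \pm |\chi_{\beta}^{(z)}\rangle\right)\), consistent with the printed output above.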

### Expectation values

Another important operation in quantum mechanics is the computation of an expectation value, which can be written as a bra-ket sandwiching an operator:

\[ \langle n | \hat{O} | m \rangle \]

The result will depend on what \(\hat{O}\) does to \(|m\rangle\), and how the resulting ket projects upon \(\langle n|\).

We can use the different eigenvectors from our last block as kets, and their adjoints as bras, along with the matrix form of the operators to compute these operations.

`ket_x_0 = vecs_x[:,0]`

`bra_x_0 = ket_x_0.conj().T`

`expectation_value = np.dot(bra_x_0, np.dot(Sx, ket_x_0))`

**Question 5:** If we associate \(|\chi_{\alpha}^{(x)}\rangle\) with `vecs_x[:,1]`, what is the expectation value corresponding to \(\langle \chi_{\alpha}^{(x)} | \hat{S}_x | \chi_{\alpha}^{(x)} \rangle \)?

**Question 6:** If we associate \(|\chi_{\alpha}^{(y)}\rangle\) with `vecs_y[:,1]`, what is the expectation value corresponding to \(\langle \chi_{\alpha}^{(y)} | \hat{S}_z | \chi_{\alpha}^{(y)} \rangle \)?

```
# Compute <alpha_x|Sx|alpha_x>; print the result
# store ket_alpha_x
ket_alpha_x = vecs_x[:,1]
bra_alpha_x = ket_alpha_x.conj().T
Sx_exp_alpha_x = np.dot(bra_alpha_x, np.dot(Sx, ket_alpha_x))
print("The expectation value of Sx using the state |alpha_x>")
print(Sx_exp_alpha_x)
# Compute <alpha_y|Sz|alpha_y>; print the result
ket_alpha_y = vecs_y[:,1]
bra_alpha_y = ket_alpha_y.conj().T
Sz_exp_alpha_y = np.dot(bra_alpha_y, np.dot(Sz, ket_alpha_y))
print("The expectation value of Sz using the state |alpha_y>")
print(Sz_exp_alpha_y)
```

```
The expectation value of Sx using the state |alpha_x>
0.4999999999999999
The expectation value of Sz using the state |alpha_y>
0j
```

### Commutators

We will learn later in 3141 about generalized uncertainty relations. An important mathematical operation in the formulation of uncertainty relations is the commutator, which can be taken between two operators or between two matrices representing operators. The commutator between operators \(\hat{A}\) and \(\hat{B}\) can be written as

\[ [\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} \]

The spin operators satisfy the commutation relations

\[ [\hat{S}_x, \hat{S}_y] = i\hbar \hat{S}_z, \qquad [\hat{S}_y, \hat{S}_z] = i\hbar \hat{S}_x, \qquad [\hat{S}_z, \hat{S}_x] = i\hbar \hat{S}_y \]

**Question 7:** Are the observables corresponding to \(\hat{S}_x\) compatible with the observables corresponding to \(\hat{S}_y\)? Explain your reasoning.

**Question 8:** Confirm that the matrices \(\mathbb{S}_x\), \(\mathbb{S}_y\), and \(\mathbb{S}_z\) obey the same commutation relations as shown above. The syntax for computing matrix products is either `np.dot(A, B)` or, equivalently, `A @ B`:

`SxSy = np.dot(Sx, Sy)`

is the same as

`SxSy = Sx @ Sy`

```
# compute commutator of Sx and Sy and compare to i*hbar*Sz
SxSy = Sx @ Sy
SySx = Sy @ Sx
Commutator_SxSy = SxSy - SySx
print("Printing difference between [Sx,Sy] and i\hbar Sz")
print(Commutator_SxSy - 1j * Sz)
# compute the commutator of Sy and Sz and compare to i*hbar*Sx
SySz = Sy @ Sz
SzSy = Sz @ Sy
Commutator_SySz = SySz - SzSy
print("Printing difference between [Sy,Sz] and i\hbar Sx")
print(Commutator_SySz - 1j * Sx)
# compute the commutator of Sz and Sx and compare to i*hbar*Sy
SzSx = Sz @ Sx
SxSz = Sx @ Sz
Commutator_SzSx = SzSx - SxSz
print("Printing difference between [Sz,Sx] and i\hbar Sy")
print(Commutator_SzSx - 1j * Sy)
```

```
Printing difference between [Sx,Sy] and i\hbar Sz
[[0.+0.j 0.+0.j]
[0.+0.j 0.+0.j]]
Printing difference between [Sy,Sz] and i\hbar Sx
[[0.+0.j 0.+0.j]
[0.+0.j 0.+0.j]]
Printing difference between [Sz,Sx] and i\hbar Sy
[[0.+0.j 0.+0.j]
[0.+0.j 0.+0.j]]
```
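As a final sketch tying back to the \(\hat{S}^2\) operator mentioned in the summary (this check is our own addition, not one of the exercises): for spin 1/2, \(\mathbb{S}^2 = \mathbb{S}_x^2 + \mathbb{S}_y^2 + \mathbb{S}_z^2 = \frac{3}{4}\hbar^2 \mathbb{I}\), and \(\mathbb{S}^2\) commutes with each component.

```python
import numpy as np

hbar = 1
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]])

# S^2 = Sx^2 + Sy^2 + Sz^2 should equal (3/4) hbar^2 times the 2x2 identity
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 3 / 4 * hbar**2 * np.eye(2)))  # True

# S^2 commutes with every component, e.g. [S^2, Sz] = 0
print(np.allclose(S2 @ Sz - Sz @ S2, np.zeros((2, 2))))  # True
```

Because \(\mathbb{S}^2\) is proportional to the identity here, every spin-1/2 state is an eigenstate of \(\hat{S}^2\) with eigenvalue \(\frac{3}{4}\hbar^2\), even though no state is a simultaneous eigenstate of all three components.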