Spin 1/2 with Design Recipe#
Jay Foley, University of North Carolina Charlotte
Overview and Motivation for Using the Design Recipe#
This notebook introduces a structured pedagogical approach to function design known as the Design Recipe. The Design Recipe offers a systematic, step-by-step framework to help students design, implement, and test functions in a repeatable and transparent way.
There are two primary motivations for incorporating this approach into instruction:
1. Research-Based Support for Teaching Programming Skills in Disciplinary Contexts
A recent study by Fuchs, McDonald, Gautam, and Kzerouni (2024) highlights the challenges students face when learning to program in domains like Physical Chemistry. The study identifies several persistent barriers:
Difficulty transferring programming skills to new contexts and representations
Absence of reliable, structured strategies for solving programming problems
To address these challenges, the authors recommend that instructors explicitly teach three core cognitive skills:
abstraction, decomposition, and metacognitive awareness.
The Design Recipe directly supports this recommendation. By requiring students to articulate function goals, design examples, and incrementally refine their code, it makes the process of problem-solving visible and teachable. Instructors can use it to scaffold not just coding, but computational thinking more broadly.
2. Enhancing Student Engagement When Using AI-Based Coding Tools
A second motivation is the hypothesis that structured approaches like the Design Recipe may improve how students interact with AI-based coding agents. With tools such as ChatGPT increasingly integrated into programming workflows, students often lack the skills to guide these tools effectively or interpret their outputs critically.
By giving students a clear framework for specifying and testing their intentions, the Design Recipe can promote:
Greater agency in directing code generation
Improved precision when communicating with AI tools
Better judgment when reviewing and debugging AI-generated code
This notebook explores how the Design Recipe can be used to support both disciplinary learning and productive human–AI collaboration in code development.
The Design Recipe: Step-by-Step#
Header
Define the function’s name, input parameters, and their data types. Also specify the data type of the return value.
Purpose
Provide a concise, one-sentence description of what the function is intended to do.
Examples
Supply one or more examples showing how the function should be called and what it should return. These serve both as documentation and as test cases.
Body
Write the function logic, using the header, purpose, and examples as your guide.
Test
Execute your example cases to verify correctness. Think critically about edge cases and incorporate additional tests as needed.
Debug/Iterate
If the function fails a test, revisit your logic and syntax. Add more targeted test cases to isolate and resolve errors. Iterate until the function performs as expected.
We will illustrate this process with a few simple examples first, and then utilize it throughout the notebook in the context of standard quantum computations on spin 1/2 systems.
Learning Outcomes#
By the end of this workbook, students should be able to
Identify the eigenstates of z-spin
Use the eigenstates of z-spin as a basis for computing expectation values of x-spin and y-spin
Explain the concept of matrix representations of operators
Utilize NumPy to build matrix representations of operators
Utilize NumPy to identify eigenstates of x-spin and y-spin
Utilize NumPy to confirm the commutation relations for the matrix representations of operators
Use the Design Recipe to compose functions
Summary#
We will use Python and NumPy to illustrate the basic formalism of spin 1/2 in quantum mechanics. We assume familiarity with this formalism; for background, we recommend reading more on spin.
Spin angular momentum is an observable in quantum mechanics and has associated operators with eigenstates. Traditionally, the components of angular momentum are represented along the \(x\), \(y\), and \(z\) axes, and we have Hermitian operators associated with each component (\(\hat{S}_x, \hat{S}_y, \hat{S}_z\)), along with the square magnitude \(\hat{S}^2\). For particles like electrons, protons, and neutrons, these component operators all have exactly two eigenstates with eigenvalues \(\pm \frac{1}{2}\hbar\); hence we talk about the formalism of spin for these systems as the formalism of spin 1/2. In this workbook, we will introduce matrix representations of each of these component operators, and the eigenstates will then have vector representations. We will specifically introduce the eigenvectors of the matrix associated with \(\hat{S}_z\) as the basis vectors for any state of spin 1/2. We will then be able to write the matrices associated with \(\hat{S}_x\) and \(\hat{S}_y\) in this basis, and perform useful computations with them, including finding their eigenstates and verifying so-called commutation relations between these operators.
Import statements#
Python has intrinsic functionality as a programming language, but there are also many special-purpose libraries that are helpful. Here we will use the library numpy for numerical computing.
import numpy as np
from numpy import linalg as la
Functions#
In Python, functions are reusable blocks of code designed to perform specific tasks. They help organize code, making it more modular and easier to maintain. A function is defined using the def keyword, followed by a name, parentheses (), and a colon :. Inside the function, you can include any number of statements that define what the function does. Functions can take inputs, called parameters, and can return a value after processing. By calling a function by its name, you can execute the code within it whenever needed, making your programs more efficient and easier to understand.
Here’s an example function that multiplies an input number by 3:
def multiply_by_three(x):
"""
Multiplies the input by three.
Arguments
---------
x: A number to be multiplied by three.
Returns
-------
result: The input multiplied by three.
Example:
multiply_by_three(5) == 15
"""
result = 3 * x
return result
Important The body of your function must be indented relative to the def statement.
Let’s go through this step-by-step:
Header Does the function name make sense? Are the input parameters adequate for what we will need to pass to the function? Is the return statement adequate for what we want the function to return?
Purpose Does the purpose string adequately capture the function’s behavior?
Examples One example is given; add a second example!
Body Read the body of the function and track what is happening in each line.
Test Test your code against your two examples!
Debug/Iterate Did your tests pass? If yes, great! If not, you know what to do!
def print_message_multiple_times(message, times):
"""
Prints a given string a specified number of times and returns the concatenated result.
Arguments
---------
message : the message you want to print
times : The number of times the message should be repeated.
Returns
-------
repeated_message : The concatenated message repeated 'times' times.
Example:
print_message_multiple_times("Hello", 3)
Output:
HelloHelloHello
"""
# Concatenate the message 'times' times
repeated_message = message * times
# print the repeated message
print(repeated_message)
# return it
return repeated_message
# test against first example
assert print_message_multiple_times("Hello", 3) == "HelloHelloHello"
assert print_message_multiple_times("bye", 2) == "byebye"
assert print_message_multiple_times("PChem", 1) == "PChem"
HelloHelloHello
byebye
PChem
We used an assert statement to test our function’s execution. Simply put, if an assert statement is followed by something that is True, nothing happens, but if it is followed by something that is False, it raises an error. Assert statements are used widely in testing, where you set up a test to give False if the test fails and True if it passes. This way, when the assert meets a passing test, the program moves on smoothly, and when it meets a failing test, it stops immediately.
What statement evaluated to True in our first test?
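To see this behavior in isolation, here is a minimal sketch (the numbers are just for illustration; the try/except is only there so the failing case can be printed instead of stopping the notebook):

```python
# A true condition: the assert passes silently and execution continues
assert 2 + 2 == 4

# A false condition raises an AssertionError; we catch it here to show the message
try:
    assert 2 + 2 == 5, "arithmetic test failed"
except AssertionError as e:
    print("AssertionError:", e)
```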
Some Questions. Think back to our function print_message_multiple_times(message, times).
What type of variable was message?
What type of variable was times?
What would happen if we only included the print(repeated_message) statement and not the return statement?
What would happen if we only included the return repeated_message statement and not the print statement?
Numpy matrices: Spinors and spin matrices#
NumPy arrays are special types of variables that can make use of the many mathematical operations in the numpy library. We will see that a lot of linear algebra operations can be performed with them using very simple syntax. Although NumPy arrays can have arbitrary dimension, here we will use 2-dimensional arrays with a single column and multiple rows to denote a column vector as a representation of a ket. We can take the Hermitian conjugate of our kets to get bras, which are represented as row vectors.
Here we will introduce the vector representation of special spin states (spinors) that have precise value of z-spin, that is, they are the eigenstates of the \(\hat{S}_z\) operator:
We refer to these column vector representations of states as kets.
\(|\alpha\rangle\) can be formed using the following syntax:
ket_alpha = np.array([[1], [0]])
We can get the number of rows and number of columns (the shape) of this vector using np.shape(ket_alpha) or the attribute ket_alpha.shape.
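As a quick illustration of shapes with a generic vector (deliberately not one of the spin states):

```python
import numpy as np

# A generic 3x1 column vector: three rows, one column
v = np.array([[1], [2], [3]])
print(v.shape)  # (3, 1)

# A 1x3 row vector for comparison: one row, three columns
w = np.array([[1, 2, 3]])
print(w.shape)  # (1, 3)
```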
# TODO: Assign ket_alpha as a column vector using np.array
# Example: np.array([[...], [...]])
ket_alpha = ...
# TODO: Assign ket_beta as a column vector using np.array
ket_beta = ...
# Print both kets (already provided for you)
print("|alpha>")
print(ket_alpha)
print(ket_alpha.shape)
print("|beta>")
print(ket_beta)
We can form the bras corresponding to these kets by taking the complex conjugate and transpose of the column vectors we have just formed. The result will be row vectors, keeping the correspondence to the “bra” - “ket” convention.
This operation can be computed using the following syntax:
bra_alpha = ket_alpha.conj().T
You can compute the shape of the bras in the same way as you used for the kets; take note of how the shape changes.
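The mechanics of .conj().T can be seen on a generic complex column vector (again, not one of the spin states): the entries are complex-conjugated and the shape flips from column to row.

```python
import numpy as np

# A generic complex column vector, just to show the mechanics
ket = np.array([[1 + 2j], [3 - 1j]])
bra = ket.conj().T  # complex conjugate, then transpose

print(ket.shape)  # (2, 1) -- column vector
print(bra.shape)  # (1, 2) -- row vector
print(bra)        # entries are conjugated: 1-2j and 3+1j
```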
# TODO: Assign bra_alpha as the Hermitian (conjugate transpose) of ket_alpha
bra_alpha = ...
# TODO: Assign bra_beta as the Hermitian (conjugate transpose) of ket_beta
bra_beta = ...
# Print both bras (already provided for you)
print("<alpha|")
print(bra_alpha)
print("<beta|")
print(bra_beta)
Computing the bra-ket#
We can view the bra-ket (also called the inner product between the bra and the ket) as a test of how much the state in the bra projects onto the state in the ket. The answer ranges from 0 (the states do not project onto each other at all; they are orthogonal, with no overlap) to 1 (the states project perfectly onto one another; they have perfect overlap and are identical). We know (or will soon learn) that the spin states are orthonormal: they have perfect overlap with themselves and zero overlap with each other. This is codified with the following mathematical statements
where we have used the Kronecker delta function \(\delta_{nm} = 0\) if \(n\neq m\) and \(\delta_{nm} = 1\) if \(n=m\) and we are using \(\chi_n\) and \(\chi_m\) to represent arbitrary spin states.
With their vector representations, we can compute the bra-ket using the dot product as follows:
bra_ket_aa = np.dot(bra_alpha , ket_alpha)
🚧 Your Task#
Complete the function compute_bra_ket
so that it returns the inner product of a given bra and ket.
Remember: the bra is a row vector and the ket is a column vector.
Use the @ operator to compute the product.
Add two more examples to the docstring to test your understanding!
Think through the steps of the Design Recipe again. Steps 1 and 2 are totally complete, but think through them anyway. Step 3 is partially complete, and you must complete Steps 4 and 5 on your own.
Header What should we name the function? What input parameters should the function accept? What should their data types be? What will the function return? What data type will it be?
Purpose What is a single sentence that describes the purpose of the function?
Examples Two examples are already given in the docstring; add 2 more examples of bra-ket pairs along with their expected values.
Body Now attempt to write the body of your function.
Test Test your code against your examples.
Debug/Iterate Did your tests pass? If yes, great! If not, you know what to do!
def compute_bra_ket(my_bra, my_ket):
"""
A function to compute the bra-ket ⟨bra|ket⟩ and return the value
Arguments
---------
my_bra : a row vector representing ⟨bra|
my_ket : a column vector representing |ket⟩
Returns
-------
bra_ket : a number representing the inner product ⟨bra|ket⟩
Examples
--------
compute_bra_ket(np.array([[1, 0]]), np.array([[1], [0]])) == 1
compute_bra_ket(np.array([[0, 1]]), np.array([[1], [0]])) == 0
# TODO: Add two additional examples of bra-kets with expected outcomes
"""
# TODO: Compute the inner product and store in bra_ket
bra_ket = ...
return bra_ket
🧪 Bra-Ket Inner Product Practice#
Now that you’ve defined compute_bra_ket, use it to evaluate the following quantum inner products:
⟨α | α⟩
⟨α | β⟩
⟨β | α⟩
⟨β | β⟩
Then write down what you expect each result to be (just numbers!), based on what you know about orthonormal quantum states. After that, run the tests to confirm you’re correct.
# 🚧 TODO: Use compute_bra_ket to evaluate inner products between bras and kets
# Example:
# bra_ket_aa = compute_bra_ket(bra_alpha, ket_alpha)
# TODO: Compute <alpha|alpha>
bra_ket_aa = ...
# TODO: Compute <alpha|beta>
bra_ket_ab = ...
# TODO: Compute <beta|alpha>
bra_ket_ba = ...
# TODO: Compute <beta|beta>
bra_ket_bb = ...
# 🚧 TODO: Write what you *expect* each inner product to be based on your understanding
# Use integers like 0 or 1 where appropriate.
_expected_bra_ket_aa = ...
_expected_bra_ket_ab = ...
_expected_bra_ket_ba = ...
_expected_bra_ket_bb = ...
# ✅ Tests — will only pass if your values and expectations are correct!
assert np.isclose(bra_ket_aa, _expected_bra_ket_aa)
assert np.isclose(bra_ket_ab, _expected_bra_ket_ab)
assert np.isclose(bra_ket_ba, _expected_bra_ket_ba)
assert np.isclose(bra_ket_bb, _expected_bra_ket_bb)
print("✅ All bra-ket tests passed!")
🧩 Define the Sz Operator#
The spin operator \(S_z\) for a spin-½ particle is defined as:
Using np.array, define this matrix in Python using hbar = 1 (already provided). Then run the cell to check the printed result and confirm the shape is (2, 2).
# define hbar in atomic units (already done for you)
hbar = 1
# TODO: Define the Sz matrix using np.array
# Hint: Use hbar / 2 * np.array([[...], [...]])
Sz = ...
# Print the matrix to check your result
print("Sz matrix:")
print(Sz)
# Print the shape to verify it is a 2x2 matrix
print("Shape of Sz:", Sz.shape)
🧮 Matrix-Vector Products: Operator Action on States#
An important property of the basis kets \(|\alpha \rangle\) and \(|\beta \rangle\) is that they are eigenstates of the \(\hat{S}_z\) operator:
In NumPy, we represent this using a matrix-vector product like this:
Sz_ket_alpha = Sz @ ket_alpha
🧪 Design Recipe: Compute Operator on State Let’s build a function to apply any matrix operator to a ket state.
Follow the steps below:
Header What should we name the function? What arguments should it take? What types?
Purpose One sentence: what does this function do?
Examples Add 2 examples of input/output to show the function in action.
Body Write the function code.
Test Try your examples!
Debug/Iterate If something doesn’t work, fix it and try again!
# TODO: Write a function to compute the action of an operator on a ket state
def compute_operator_on_state(...): # <-- fill in the header
"""
TODO: Write a docstring that includes:
- Purpose (what does this function do?)
- Parameters (what are the types and meanings?)
- Returns (what does it return?)
- 2 example calls with expected outputs
"""
# TODO: Compute the resulting ket after applying the operator
result_ket = ...
return result_ket
🧪 Test Your Operator Function#
Let’s test your compute_operator_on_state function by applying the spin operator \(S_z\) to each basis state:
Use your function to compute Sz @ ket_alpha and Sz @ ket_beta
Manually define what the expected results should be
Run the tests and confirm they pass!
# 🚧 TODO: Use your compute_operator_on_state function to apply Sz to the alpha and beta kets
# Compute the result of Sz acting on ket_alpha
Sz_ket_alpha = ...
# Compute the result of Sz acting on ket_beta
Sz_ket_beta = ...
# 🚧 TODO: Define what you EXPECT these results to be
# Remember: |alpha⟩ is an eigenvector of Sz with eigenvalue +½ hbar
# |beta⟩ is an eigenvector of Sz with eigenvalue -½ hbar
expected_Sz_ket_alpha = ...
expected_Sz_ket_beta = ...
# ✅ Tests — if your function and expected values are correct, these will pass!
assert np.allclose(Sz_ket_alpha, expected_Sz_ket_alpha), "Sz|alpha⟩ test failed"
assert np.allclose(Sz_ket_beta, expected_Sz_ket_beta), "Sz|beta⟩ test failed"
print("✅ Both operator-state tests passed!")
🔁 Hermitian Matrices#
In quantum mechanics, operators are represented by Hermitian matrices. These matrices have a special property:
That is, a Hermitian matrix is equal to its own adjoint (or Hermitian transpose), which is defined as the complex conjugate transpose.
For example, to verify that the spin operator \(S_z\) is Hermitian:
Compute the Hermitian adjoint of the matrix using .conj().T
Compare it to the original using np.allclose() (which checks equality with tolerance for rounding errors)
Sz_adjoint = Sz.conj().T
print(np.allclose(Sz_adjoint, Sz)) # Should return True
# TODO: Compute the Hermitian adjoint of Sz using .conj().T
Sz_adjoint = ...
# TODO: Check whether Sz is Hermitian by comparing to its adjoint
print(np.allclose(Sz_adjoint, Sz)) # Expect: True
🎯 Eigenvalues and Eigenvectors#
An important property of Hermitian matrices is that all their eigenvalues are real.
In quantum mechanics, the eigenvalues of Hermitian operators represent the possible outcomes of measurements. In this notebook, we’ve been exploring the spin angular momentum observable, especially its z-component represented by the operator \( \hat{S}_z \).
We’ve already seen that \(\hat{S}_z \) has two eigenstates:
\( |\alpha\rangle \) with eigenvalue \( +\frac{\hbar}{2} \)
\( |\beta\rangle \) with eigenvalue \( -\frac{\hbar}{2} \)
These relationships are preserved in our matrix-vector representation.
📐 What Are Eigenvectors?#
For a matrix \( \mathbb{M} \), a vector \( \mathbf{x}\) is an eigenvector if:
Where:
\( \lambda \) is the eigenvalue (a number),
\(\mathbf{x} \) is a non-zero vector that only gets scaled by the matrix—not rotated.
This is rare and special! Normally, matrices change both the magnitude and direction of vectors.
👉 For an intuitive and visual explanation, check out this wonderful 3Blue1Brown video on eigenvectors.
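The defining property above can be checked numerically. Here is a sketch with a simple, hypothetical 2×2 matrix, contrasting a vector that is only scaled with one whose direction changes:

```python
import numpy as np

M = np.array([[2.0, 0.0],
              [0.0, 3.0]])

x = np.array([[1.0], [0.0]])  # an eigenvector of M, with eigenvalue 2
y = np.array([[1.0], [1.0]])  # NOT an eigenvector of M

print(M @ x)                       # a scaled copy of x
print(np.allclose(M @ x, 2 * x))   # True

print(M @ y)                       # components scaled differently: direction changed
print(np.allclose(M @ y, 2 * y))   # False -- no single lambda works for y
```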
📌 Next Steps#
So far, we’ve confirmed that the basis kets \( |\alpha\rangle \), \( |\beta\rangle \) are eigenvectors of \(\mathbb{S}_z \).
We will now:
Define the matrix forms of \( \mathbb{S}_x \) and \( \mathbb{S}_y \),
Show that the z-basis kets are not eigenvectors of \( \mathbb{S}_x \) and \( \mathbb{S}_y \),
Use NumPy’s linear algebra tools to find true eigenvectors of \( \mathbb{S}_x \) and \( \mathbb{S}_y \),
Interpret those new eigenvectors as linear combinations of the z-basis kets.
🏗️ Define Matrices for \(\mathbb{S}_x \) and \( \mathbb{S}_y \)#
The spin operators are:
🧠 In Python, the imaginary unit \( i \) is written as 1j.
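A quick check of how complex numbers behave in Python before building the matrices:

```python
# The imaginary unit squares to -1
z = 1j
print(z * z)  # (-1+0j)

# Complex conjugation flips the sign of the imaginary part
print((2 + 3j).conjugate())  # (2-3j)
```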
# TODO: Define the Sx operator using np.array
# Hint: 2x2 matrix with real values
Sx = ...
# TODO: Define the Sy operator using np.array
# Hint: 2x2 matrix with imaginary numbers; use 1j for sqrt(-1)
Sy = ...
# Print both to verify
print("Sx matrix:")
print(Sx)
print("Sy matrix:")
print(Sy)
🎯 Matrix-Vector Products: Are \( |\alpha\rangle \) and \( |\beta\rangle \) Eigenvectors of \(\mathbb{S}_x \) and \( \mathbb{S}_y \)?#
Now that you’ve defined the matrix forms of \( \mathbb{S}_x \) and \( \mathbb{S}_y \), let’s apply these operators to the z-basis kets \( |\alpha\rangle \) and \( |\beta\rangle \).
Specifically, compute the following matrix-vector products:
Use Python syntax like:
Sx_ket_alpha = Sx @ ket_alpha
# TODO: Apply Sx and Sy to both basis states
Sx_ket_alpha = ...
Sx_ket_beta = ...
Sy_ket_alpha = ...
Sy_ket_beta = ...
# Print the results to inspect
print("Sx |alpha⟩ =")
print(Sx_ket_alpha)
print("Sx |beta⟩ =")
print(Sx_ket_beta)
print("Sy |alpha⟩ =")
print(Sy_ket_alpha)
print("Sy |beta⟩ =")
print(Sy_ket_beta)
# Optionally, print their shapes
print("Shape of result:", Sx_ket_alpha.shape)
Questions to consider#
After inspecting the output of each matrix-vector product, do you think \(|\alpha\rangle\) and \(|\beta\rangle\) are eigenvectors of \(\mathbb{S}_x\) and \(\mathbb{S}_y\)? Hint: What would the results look like if they were eigenvectors?
What is the shape of the result of each matrix-vector product? Does the dimensionality change? Why is that important?
🧮 Use eigh() to Compute Eigenvalues and Eigenvectors of \( \mathbb{S}_x \) and \( \mathbb{S}_y \)#
NumPy’s linear algebra library (numpy.linalg) provides a convenient function for computing eigenvalues and eigenvectors of Hermitian matrices: eigh().
Here’s the syntax:
eigenvalues, eigenvectors = la.eigh(M)
where eigenvalues will store all of the eigenvalues and eigenvectors will store all of the eigenvectors.
Use this method to compute the eigenvalues and eigenvectors of \(\mathbb{S}_x\) and \(\mathbb{S}_y\).
Note eigenvectors is a 2D array where each column eigenvectors[:, i] is the normalized eigenvector corresponding to eigenvalues[i].
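Before applying eigh() to the spin matrices, here is a sketch on a simple, hypothetical symmetric (hence Hermitian) matrix, verifying that each column of the returned array satisfies the eigenvalue equation:

```python
import numpy as np
from numpy import linalg as la

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # a simple symmetric matrix

vals, vecs = la.eigh(M)
print(vals)  # eigenvalues in ascending order: [-1.  1.]

# Each column vecs[:, i] satisfies M @ v = vals[i] * v
for i in range(len(vals)):
    print(np.allclose(M @ vecs[:, i], vals[i] * vecs[:, i]))  # True
```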
# TODO: Compute eigenvalues and eigenvectors of Sx
vals_x, vecs_x = ...
# TODO: Compute eigenvalues and eigenvectors of Sy
vals_y, vecs_y = ...
# TODO: Print the shapes of the result arrays
print("Shape of vals_x:", ...)
print("Shape of vecs_x:", ...)
# Print out the eigenvalues and eigenvectors
print("Eigenvalues of Sx:", vals_x)
print("Eigenvectors of Sx:\n", vecs_x)
print("Eigenvalues of Sy:", vals_y)
print("Eigenvectors of Sy:\n", vecs_y)
Question 3: What is the shape of vals_x? What is the shape of vecs_x?
Question 4: Do these matrices have the same eigenvalues as \(\mathbb{S}_z\)? Do they have the same eigenvectors as \(\mathbb{S}_z\)?
🔄 Expressing New Eigenvectors in Terms of \( |\alpha\rangle \) and \( |\beta\rangle \)#
The eigenvectors of \( \mathbb{S}_x \) and \( \mathbb{S}_y \) are **not** \( |\alpha\rangle \) and \( |\beta\rangle \).
But that doesn’t mean they’re unrelated!
In fact, the new eigenvectors are linear combinations (i.e., superpositions) of the basis states \( |\alpha\rangle \) and \( |\beta\rangle \).
This reflects the idea that the spin eigenstates along the x- and y-axes can be written in terms of the z-basis.
🧪 How Can We Test This?#
Each eigenvector from vecs_x or vecs_y should be expressible as:
This means we can project each eigenvector onto the z-basis to extract \( c_1 \) and \( c_2 \).
Mathematically, this looks like:
c1 = bra_alpha @ eigvec
c2 = bra_beta @ eigvec
You can then reconstruct the eigenvector using:
reconstructed = c1 * ket_alpha + c2 * ket_beta
If your reconstruction is numerically close to the original eigenvector, then your basis decomposition is correct!
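As a sanity check of the projection-and-reconstruction idea on a hypothetical vector (using the standard basis rather than the spin eigenvectors):

```python
import numpy as np

e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
v = np.array([[0.6], [0.8]])  # an arbitrary normalized vector

# Project v onto each basis vector to extract the expansion coefficients
c1 = (e1.conj().T @ v).item()
c2 = (e2.conj().T @ v).item()

# Rebuild v from the coefficients and check the reconstruction
reconstructed = c1 * e1 + c2 * e2
print(np.allclose(v, reconstructed))  # True
```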
# TODO: Choose one of the eigenvectors from Sx
eigvec = vecs_x[:, 0] # for example
# TODO: Compute components (projections onto alpha and beta)
c1 = ...
c2 = ...
# TODO: Reconstruct the eigenvector from the z-basis
reconstructed = c1 * ket_alpha + c2 * ket_beta
# Print and compare
print("Original eigenvector:")
print(eigvec)
print("Reconstructed eigenvector:")
print(reconstructed)
# Are they close?
print("Are they equal?", np.allclose(eigvec, reconstructed))
📈 Expectation Values#
In quantum mechanics, an expectation value is the average result you’d expect to obtain if you repeatedly measured a quantum observable on a system in a given state.
It is written as a bra-ket sandwich around an operator:
If the bra and ket correspond to the same state (i.e. \( |n\rangle = |m\rangle \)), this gives the expectation value of operator \( \hat{O} \) in that state. The result will depend on what \(\hat{O}\) does to \(|m\rangle\), and how the resulting ket projects upon \(\langle n|\).
We can use the different eigenvectors from our last block as kets, and their adjoints as bras, along with the matrix form of the operators, to compute these operations. In Python, with the bra stored as a row vector and the ket as a column vector, this looks like:
expectation_value = bra @ Operator @ ket
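To make the mechanics concrete, here is a sketch with a simple, hypothetical Hermitian matrix and a normalized state (deliberately not one of the spin operators):

```python
import numpy as np

Op = np.array([[1.0, 0.0],
               [0.0, -1.0]])  # a simple Hermitian matrix with outcomes +1 and -1

ket = np.array([[1.0], [1.0]]) / np.sqrt(2)  # an equal superposition, normalized
bra = ket.conj().T

expectation = (bra @ Op @ ket).item()
print(expectation)  # 0.0 -- the +1 and -1 outcomes average out
```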
🧪 Function to Compute Expectation Value#
Let’s write a function that computes an expectation value of the form:
This function will take:
a bra (row vector),
an operator (2×2 matrix),
a ket (column vector),
and return the resulting scalar expectation value.
✅ Follow These Design Recipe Steps#
Header
What should we name the function?
What arguments should it take, and what types should they be?
Purpose
Write one sentence describing what this function does.
Examples
Add at least 2 examples that show different inputs and expected outputs.
Body
Implement the formula for the expectation value:
$\( \langle \text{bra} | \hat{O} | \text{ket} \rangle \)$
Test
Run your examples to check if the function returns the correct values.
Debug / Iterate
Did your function pass your tests?
If not, revise it and try again until it works!
# TODO: Write a function to compute expectation value <bra|Op|ket>
def compute_expectation(bra, Operator, ket):
"""
Compute the quantum mechanical expectation value ⟨bra|Operator|ket⟩.
Arguments
---------
bra : 1D or 2D numpy array (row vector)
Operator : 2D numpy array
ket : 1D or 2D numpy array (column vector)
Returns
-------
expectation : complex or float scalar
"""
expectation = ...
return expectation
Question 5: If we associate \(|\alpha^{x}\rangle\) with vecs_x[:, 1], what is the expectation value corresponding to \(\langle \alpha^{x} | \hat{S}_x | \alpha^{x} \rangle \)?
Question 6: If we associate \(|\alpha^{y}\rangle\) with vecs_y[:, 1], what is the expectation value corresponding to \(\langle \alpha^{y} | \hat{S}_z | \alpha^{y}\rangle \)?
# TODO: Assign specific eigenvectors
ket_x_alpha = vecs_x[:, 1]
bra_x_alpha = ket_x_alpha.conj().T
ket_y_alpha = vecs_y[:, 1]
bra_y_alpha = ket_y_alpha.conj().T
# TODO: Compute expectation values
expect_x = ...
expect_z = ...
print("⟨x_alpha| Sx |x_alpha⟩ =", expect_x)
print("⟨y_alpha| Sz |y_alpha⟩ =", expect_z)
🔁 Commutators and Compatibility#
Later in PHYS 3141, we’ll explore the generalized uncertainty principle. One of the most important mathematical tools in that formulation is the commutator.
🧮 What Is a Commutator?#
For two operators \( \hat{A} \) and \( \hat{B} \), the commutator is defined as:
This same formula applies directly to matrices representing those operators.
🧠 Key Facts About Commutators#
If \([\hat{A}, \hat{B}] = 0 \), we say that \( \hat{A} \) and \( \hat{B} \) commute.
If \( [\hat{A}, \hat{B}] \ne 0 \), the operators do not commute.
Commuting operators share the same set of eigenstates (their matrices share the same eigenvectors).
Commuting observables are called compatible observables: you can simultaneously know their values with unlimited precision.
Non-commuting operators correspond to incompatible observables: there is a strict limit on how precisely you can know both values at the same time.
📐 Commutation Relations of Spin Operators#
The spin operators obey the following famous commutation relations:
These relations tell us that no pair of spin components is compatible — there’s inherent uncertainty in knowing any two simultaneously.
🧠 Conceptual Questions#
Q7: Are the observables corresponding to \(\hat{S}_x \) compatible with those corresponding to \( \hat{S}_y \)?
Explain your reasoning using the commutator.
Q8: Verify numerically that the matrix versions of \( \hat{S}_x, \hat{S}_y, \hat{S}_z \) obey the same commutation relations.
You can compute matrix products using either of the following equivalent syntaxes:
SxSy = Sx @ Sy # recommended
# or
SxSy = np.dot(Sx, Sy)
# 🚧 TODO: Compute the commutators for the spin matrices
# [Sx, Sy] = Sx Sy - Sy Sx
comm_Sx_Sy = ...
expected_Sx_Sy = 1j * hbar * Sz
# [Sy, Sz] = Sy Sz - Sz Sy
comm_Sy_Sz = ...
expected_Sy_Sz = 1j * hbar * Sx
# [Sz, Sx] = Sz Sx - Sx Sz
comm_Sz_Sx = ...
expected_Sz_Sx = 1j * hbar * Sy
# 🚧 TODO: Compare results using np.allclose
print("Does [Sx, Sy] = iħ Sz?", np.allclose(comm_Sx_Sy, expected_Sx_Sy))
print("Does [Sy, Sz] = iħ Sx?", np.allclose(comm_Sy_Sz, expected_Sy_Sz))
print("Does [Sz, Sx] = iħ Sy?", np.allclose(comm_Sz_Sx, expected_Sz_Sx))
# Optional: Print the raw commutators to inspect
print("\n[Sx, Sy] =\n", comm_Sx_Sy)
print("Expected:\n", expected_Sx_Sy)
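For reference, a self-contained sketch of the full check (with ħ = 1), assuming the standard spin-½ matrices in the z-basis; the helper name `commutator` is just for illustration:

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]])

def commutator(A, B):
    """Return the commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Each commutator should equal i*hbar times the remaining spin matrix
print(np.allclose(commutator(Sx, Sy), 1j * hbar * Sz))  # True
print(np.allclose(commutator(Sy, Sz), 1j * hbar * Sx))  # True
print(np.allclose(commutator(Sz, Sx), 1j * hbar * Sy))  # True
```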