Chapter 19: Generalised Space Curves, Coupled Waveguides and Quantum State Preparation

We conclude our study of elementary Lie Theoretic Systems Theory with an exposition of a class of problems to do with the generalised Frenet-Serret equations for space curves in $\mathbb{C}^N$, and of other systems where the differential equation for the system's state evolution involves only tridiagonal matrices, even though the system's state is transformed by a Lie group whose algebra is much bigger than the class of tridiagonal matrices.

19.1 The Frenet-Serret Equations and Motions of Space Curves

Three Dimensional Space Curves

We recall the basic differential geometry of a space curve, i.e. a one-dimensional manifold embedded in Euclidean 3-space. Wontedly this path is specified by parametric equations, with the position vector $\vec{R}$ of the point on the curve being specified as a function of the arclength parameter $s$ along the curve. Sometimes one uses a general “time” parameter $t$ rather than the real arclength. A unit vector tangent to the space curve is $\d_t\,\vec{R}(t) / |\d_t\,\vec{R}(t)|$; if we use the genuine arclength as the parameter then $|\d_s\,\vec{R}(s)|$ is of course equal to one length unit per length unit, so that $\d_s\,\vec{R}(s)$ is automatically normalised. Thus we have the unit tangent $\hat{\vec{T}}= \d_t\,\vec{R}(t) / |\d_t\,\vec{R}(t)|$. The “speed” of the endpoint is $v(t) = |\d_t\,\vec{R}(t)| = \d_t s$ and so the velocity of the endpoint is $v(t)\,\hat{\vec{T}}$.

Now we consider the rate of change $\d_t\hat{\vec{T}}$ of $\hat{\vec{T}}$. Since $\hat{\vec{T}}\cdot \hat{\vec{T}} = 1$ owing to normalisation, we have $2\,\d_t \hat{\vec{T}} \cdot \hat{\vec{T}} = 0$. Thus we have straight away that $\d_t \hat{\vec{T}}$ is orthogonal to $\hat{\vec{T}}$. Now we normalise this vector too to get a unit normal $\hat{\vec{N}} = \d_t \hat{\vec{T}}/|\d_t \hat{\vec{T}}|$; when the parameter is the arclength $s$, the magnitude $|\d_s \hat{\vec{T}}|$ is the curvature $\kappa$ of the space curve in question. Thus we have $\d_t\hat{\vec{T}} = v(t)\,\kappa(t)\, \hat{\vec{N}}$. It is not hard to show that $\kappa$ is the rate of change of heading angle with respect to arclength. For a circle of radius $r$, we naturally have $\kappa=1/r$.

So we have now defined (unless the section has curvature nought) two orthogonal vectors attached to the space curve. We can define a third $\hat{\vec{B}} = \hat{\vec{T}}\times\hat{\vec{N}}$. The triplet $\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}} $ is a right handed orthonormal basis, and indeed the orthogonal matrix $R = \left(\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}}\right)^T$ (wherein $\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}}$ are written as columns) is the rotation matrix that resolves a general vector into its $\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}} $ components. $R^T = \left(\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}}\right)$ takes the $\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}} $ components of a vector and rotates them back into Cartesian components.

Now, since $\hat{\vec{N}} $ is of unit magnitude, we know that $\d_s\hat{\vec{N}} $ must be orthogonal to $\hat{\vec{N}} $. That is, $\d_s\hat{\vec{N}} $ must comprise only $\hat{\vec{T}} ,\,\hat{\vec{B}}$ components and so $\d_s\hat{\vec{N}} =-\alpha\,\hat{\vec{T}} + \beta\,\hat{\vec{B}}$. But now we note that, because $R = \left(\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}}\right)^T$ is an orthogonal matrix, i.e. belongs to $SO(3)$, we know that $\d_t\,R = H\,R$ where $H$ must belong to the Lie algebra $\mathfrak{so}(3)$ of $SO(3)$, that is, $H$ must be skew-Hermitian, real and traceless. Since we reasoned above that $\d_t\hat{\vec{T}} = v(t)\,\kappa(t)\, \hat{\vec{N}}$, this together with the skew-Hermitianhood of $H$ uniquely defines the Frenet-Serret Equations insofar that (i) the co-efficient $\alpha$ in $\d_s\hat{\vec{N}} =-\alpha\,\hat{\vec{T}} + \beta\,\hat{\vec{B}}$ must be equal to the curvature $\kappa$ and (ii) if we call the remaining unknown parameter $\beta$ the torsion $\tau$, then the equation for the time derivative $\d_t\hat{\vec{B}}$ must be $\d_t\hat{\vec{B}} = -v(t)\,\tau(t)\,\hat{\vec{N}}$.

\begin{equation}\label{FrenetSerret}\d_t\left(\begin{array}{c}\hat{\vec{T}} \\\hat{\vec{N}} \\\hat{\vec{B}} \end{array}\right) =v(t)\,\, H(t)\,\left(\begin{array}{c}\hat{\vec{T}} \\\hat{\vec{N}} \\\hat{\vec{B}} \end{array}\right);\quad H(t) = \left(\begin{array}{ccc}0&+\kappa(t)&0\\-\kappa(t)&0&+\tau(t)\\0&-\tau(t)&0 \end{array}\right)\end{equation}
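As a numerical aside, $\eqref{FrenetSerret}$ is easy to march along a curve: each step multiplies the frame by the exponential of a real skew-symmetric matrix, i.e. by a rotation, so the frame stays in $SO(3)$ no matter what curvature and torsion profiles we feed in. A minimal Python sketch (the profiles, step size and step count are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

def frenet_serret_H(kappa, tau):
    """The matrix H of equation (FrenetSerret)."""
    return np.array([[0.0,   kappa, 0.0],
                     [-kappa, 0.0,  tau],
                     [0.0,  -tau,   0.0]])

# March the frame along a unit-speed curve (v = 1) with made-up
# curvature and torsion profiles.
F = np.eye(3)            # rows are T, N, B; start axis-aligned
h = 1e-3                 # arclength step
for n in range(2000):
    s = n * h
    kappa, tau = 1.0 + 0.5*np.sin(s), 0.3*np.cos(s)
    F = expm(h * frenet_serret_H(kappa, tau)) @ F

# Each step is a rotation, so F must still lie in SO(3):
print(np.allclose(F @ F.T, np.eye(3)), np.isclose(np.linalg.det(F), 1.0))
```

The check at the end is the numerical counterpart of the statement that $H\in\mathfrak{so}(3)$ forces $R\in SO(3)$.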

There are two rather convenient formulas for the curvature and torsion as follows; if a curve has a parametric equation specified by $\vec{R}(t)$, then:

\begin{equation}\label{CurvatureTorsion}\kappa(t) = \frac{|\d_t\vec{R}\times \d_t^2\vec{R}|}{|\d_t\vec{R}|^3};\;\tau(t) = \frac{(\d_t\vec{R}\times \d_t^2\vec{R})\cdot \d_t^3\vec{R}}{|\d_t\vec{R}\times\d_t^2\vec{R}|^2}\end{equation}

The reason for the name of the torsion $\tau$ is a little unclear, but it is well motivated as follows. Suppose we want to set up an orthogonal co-ordinate system for the neighbourhood of the space curve; such things are done, e.g. for the analysis of the mechanics of bending and twisting bars and also so that Maxwell’s equations can be written in a convenient frame for the study of an optical fibre or other translationally invariant waveguide. So, we want the transverse plane of the fibre to have basis vectors $\hat{\vec{P}},\,\hat{\vec{Q}}$ whose heads always move parallel to the tangent vector, so that the planes $\hat{\vec{P}}\wedge \hat{\vec{T}}$, $\hat{\vec{Q}}\wedge \hat{\vec{T}}$ and $\hat{\vec{P}}\wedge \hat{\vec{Q}}$ are always mutually orthogonal. That is, we seek $c(s),\,s(s)$ such that (i) $\hat{\vec{P}} = c(s)\,\hat{\vec{N}} -s(s)\,\hat{\vec{B}}$ and $\hat{\vec{Q}} = s(s)\,\hat{\vec{N}} +c(s)\,\hat{\vec{B}}$ (note that the co-efficients are elements of a rotation matrix in $SO(2)$ so that $\hat{\vec{P}}$ and $\hat{\vec{Q}}$ are orthogonal and $c(s)^2+s(s)^2 = 1$) and (ii) $\d_s\hat{\vec{P}}=p(s) \hat{\vec{T}};\,\d_s \hat{\vec{Q}} = q(s) \hat{\vec{T}}$. On writing down equations for $\d_s\hat{\vec{P}} = \d_s\,c(s)\,\hat{\vec{N}} -\d_s\,s(s)\,\hat{\vec{B}} + c(s) (-\kappa(s)\,\hat{\vec{T}} + \tau(s)\,\hat{\vec{B}}) + s(s)\, \tau(s)\,\hat{\vec{N}}$ (the last term being $-s(s)\,\d_s\hat{\vec{B}} = +s(s)\,\tau(s)\,\hat{\vec{N}}$) and likewise for $\d_s\hat{\vec{Q}}$ and requiring that the $\hat{\vec{N}},\,\hat{\vec{B}}$ components all vanish, we are left with the equations $\d_s c(s) =-\tau(s)\,s(s)$ and $\d_s\,s(s) = +\tau(s)\,c(s)$ which means that:

\begin{equation}\label{PipeCoordinateDefinition}\begin{array}{lcl}\left(\begin{array}{c}c\\s\end{array}\right) &=& \exp\left(\theta\,\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\right)\,\left(\begin{array}{c}c(0)\\s(0)\end{array}\right)\\ &=& \left(\begin{array}{cc}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{array}\right)\,\left(\begin{array}{c}c(0)\\s(0)\end{array}\right)\\&&\\ \theta&=& \int_{u=0}^s\,\tau(u)\,\d u\end{array}\end{equation}

so that $\hat{\vec{P}},\,\hat{\vec{Q}}$ twist at an angular speed of $-\tau(s)$ radians per unit arclength relative to $\hat{\vec{N}}$ and $\hat{\vec{B}}$. Actually, it makes more sense to say this backwards: the vectors $\hat{\vec{N}}$ and $\hat{\vec{B}}$ twist at angular speed $\tau(s)$ radians per unit arclength relative to a pair of orthonormal vectors whose heads move parallel to $\hat{\vec{T}}$. If we define the notion of parallel transport on our fibre as being motion of a vector in the fibre’s transverse plane so that the vector’s head always moves in the direction of $\hat{\vec{T}}$, then the Frenet-Serret frame twists at $\tau(s)$ radians per unit arclength relative to the parallel transported frame. In an optical fibre with no stress-induced refractive index changes wrought by its coiling (a condition that is extremely hard to set up), the electromagnetic field distribution is parallel-transported in the above way to successive transverse planes of the fibre. If the fibre makes a loop so that its end is brought back to its beginning so that the field at its input can be compared with that at the end of the loop, then the angle of rotation of the field configuration is the Berry phase [Tomita, 1986], equal to $\theta= \oint\,\tau(s)\,\d s$ around the loop.

The helix with parametric equations $\vec{R}(\phi)=(r\,\cos\phi,\,r\,\sin\phi,\,k\,\phi)^T$ is the standard example for illustrating these ideas. In this case we get:

\begin{equation}\label{HelixFrenetSerret}\begin{array}{l}\vec{R}(\phi)=\left(\begin{array}{c}r\,\cos\phi\\r\,\sin\phi\\k\,\phi\end{array}\right)\\\hat{\vec{T}}(\phi)=\frac{1}{\sqrt{r^2+k^2}}\left(\begin{array}{c}-r\,\sin\phi\\r\,\cos\phi\\k\end{array}\right);\;\hat{\vec{N}}(\phi)=\left(\begin{array}{c}-\cos\phi\\-\sin\phi\\0\end{array}\right);\;\hat{\vec{B}}(\phi)=\frac{1}{\sqrt{r^2+k^2}}\left(\begin{array}{c}k\,\sin\phi\\-k\,\cos\phi\\r\end{array}\right)\\\kappa = \frac{r}{r^2+k^2};\,\tau=\frac{k}{r^2+k^2}\end{array}\end{equation}
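One can check the closed forms above against $\eqref{CurvatureTorsion}$ by coding the helix's $\phi$-derivatives directly; a small Python sketch (the values of $r$, $k$ and $\phi$ are arbitrary test choices):

```python
import numpy as np

r, k = 2.0, 0.5   # helix radius and pitch parameter (arbitrary values)

def derivs(phi):
    """First three phi-derivatives of R(phi) = (r cos phi, r sin phi, k phi)."""
    d1 = np.array([-r*np.sin(phi),  r*np.cos(phi), k])
    d2 = np.array([-r*np.cos(phi), -r*np.sin(phi), 0.0])
    d3 = np.array([ r*np.sin(phi), -r*np.cos(phi), 0.0])
    return d1, d2, d3

phi = 0.7  # any parameter value; kappa and tau are constant for a helix
d1, d2, d3 = derivs(phi)
c = np.cross(d1, d2)
kappa = np.linalg.norm(c) / np.linalg.norm(d1)**3      # eq (CurvatureTorsion)
tau   = np.dot(c, d3) / np.linalg.norm(c)**2

print(np.isclose(kappa, r/(r**2 + k**2)), np.isclose(tau, k/(r**2 + k**2)))
# prints: True True
```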

To illustrate the Berry phase, the rational angle self-closing line on the torus that we met in Example 12.3 is about the simplest nonplanar loop one can imagine to which the foregoing theory can be applied. Even so, the equations resulting from the above are really messy; they are reproduced in a downloadable Mathematica notebook. The line on the torus has parametric equations:

\begin{equation}\label{LineOnTorus}\vec{R}(\phi)=\left(\begin{array}{c}\cos(\alpha\,\phi) (1 + r\,\cos\phi)\\\sin(\alpha\,\phi) (1 + r\,\cos\phi)\\r\,\sin\phi\end{array}\right)\end{equation}

where the torus is generated by the circle of radius $r<1$, lying in a plane orthogonal to the unit circle and centred on the unit circle, as the former’s plane is swept around the unit circle. The slope of the line is $\alpha$. Below in Figure 19.1 are shown the unit normal and binormal vectors $\hat{\vec{N}}$ and $\hat{\vec{B}}$ (purple arrows) together with the “pipe vectors” $\hat{\vec{P}}$ and $\hat{\vec{Q}}$ (green arrows) at the points on the line defined by $\phi = 0,\,\pi,\,2\,\pi,\,\cdots,\,8\,\pi$ when $r = \frac{1}{2}$ and $\alpha=\frac{1}{4}$.


Figure 19.1: Parallel Transport of the Electromagnetic Field around an Optical Fiber Loop wound on a Torus with $r = \frac{1}{2},\,\alpha=\frac{1}{4}$. Blue vectors are the tangent vectors $\hat{\vec{T}}$. The purple vectors show the unit normal $\hat{\vec{N}}$ and binormal $\hat{\vec{B}}$ to the optical fibre whose path is the rational slope loop with $\alpha=\frac{1}{4}$ on a torus $\mathbb{T}^2$ with $r=\frac{1}{2}$. The green vectors $\mathbf{e},\,\mathbf{h}$ are the electric and magnetic field vectors.

The Berry phase, i.e. the rotation of the $\hat{\vec{N}},\,\hat{\vec{B}}$ vectors relative to the $\hat{\vec{P}},\,\hat{\vec{Q}}$ vectors in traversing a whole loop, is about 11.2 radians. If a bound electromagnetic field were propagating in a perfectly stress free optical fibre, the electric and magnetic field vectors would be parallel-transported as the pipe vectors $\hat{\vec{P}},\,\hat{\vec{Q}}$ are. The expression for the Berry phase $\phi_B=-\int_0^{\frac{2\pi}{\alpha}}\tau(\phi,\,\alpha)\,\d_\phi\,s(\phi,\,\alpha)\,\d\phi$ is intractable, but good approximations for small $\alpha$ can be made by expanding the integrand $\tau(\phi,\,\alpha)\,\d_\phi\,s(\phi,\,\alpha)$ as a Taylor series and integrating this instead. To sixth order in $\alpha$ we get:

\begin{equation}\label{BerryPhaseTorus}\begin{array}{lcl}\phi_B&=&\frac{2 \pi }{r}-\frac{\pi \left(64 r^6+128 r^4\right)}{128 r^7} \, \alpha^2-\frac{\pi \left(-36 r^6-288 r^4-96 r^2\right)}{128 r^7}\,\alpha ^4 -\\&&\quad\frac{\pi \left(25 r^6+450 r^4+600 r^2+80\right)}{128 r^7} \,\alpha ^6\end{array}\end{equation}

which shows that $\phi_B\to\frac{2 \pi }{r}$ as the number of winds through the torus increases without bound. In the present case, this limiting figure is $4\,\pi$, and if the number of windings is 100, then the numerical integration of the exact expression yields $12.5635$ radians, i.e. it falls short of $4\,\pi$ by 0.003 radians. The approximate formula in $\eqref{BerryPhaseTorus}$ yields the same answer, $12.5635$ radians, to within $4\times10^{-12}$ radians. For a four-wind torus, $\eqref{BerryPhaseTorus}$ yields 11.12 radians, i.e. different from the numerical integral of $11.2$ radians by 0.06 radians.
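The series $\eqref{BerryPhaseTorus}$ is straightforward to evaluate; the short Python sketch below reproduces the four-wind and hundred-wind figures quoted above:

```python
import numpy as np

def berry_phase_series(r, alpha):
    """Sixth order small-alpha series of equation (BerryPhaseTorus)."""
    return (2*np.pi/r
            - np.pi*(64*r**6 + 128*r**4)/(128*r**7) * alpha**2
            - np.pi*(-36*r**6 - 288*r**4 - 96*r**2)/(128*r**7) * alpha**4
            - np.pi*(25*r**6 + 450*r**4 + 600*r**2 + 80)/(128*r**7) * alpha**6)

print(round(berry_phase_series(0.5, 1/4), 2))    # four winds: 11.12 radians
print(round(berry_phase_series(0.5, 1/100), 4))  # hundred winds: 12.5635, near 4 pi
```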

Pointing a Garden Hose in Complex Euclidean Space

The Frenet-Serret equations are an illustration of Theorem 13.4 because, if we think of the matrix $H(s) = \tau(s)\,\hat{S}_x+\kappa(s)\,\hat{S}_z$ in $\eqref{FrenetSerret}$, we only have $\tau(s)$ and $\kappa(s)$ as our “control levers” steering the end of the space curve through space. So $H(s)$ lives in a two dimensional subspace of the Lie algebra $\mathfrak{so}(3)$ and contains no $\hat{S}_y$ component. Yet it is an obvious truth that there exists a smooth sequence of steering settings $\tau(s),\, \kappa(s)$ to steer the end of the space curve so that its $\hat{\vec{T}} ,\,\hat{\vec{N}},\, \hat{\vec{B}} $ vectors are in an arbitrary orientation relative to these vectors at the space curve’s beginning. All transformations in $SO(3)$ can be realised by a smooth sequence of steering settings $\tau(s),\, \kappa(s)$.

In readiness for the next section, we look at the Frenet-Serret equations in complex Euclidean space of an arbitrary number of dimensions. At the first step of the three dimensional reasoning, we defined a curvature, which showed that only the second element in the first row of $H$ was nonzero, and equal to $\kappa$. By the skew-Hermitian symmetry of $H$, this means that only the second element of the first column is nonzero, and indeed equal to $-\kappa^*$ in the complex Euclidean space. We proceed inductively to show that only the element at position $(n,\,n+1)$ is nonzero in the $n^{th}$ row, and equal to the $n^{th}$ generalised curvature parameter $\chi_n$. Likewise, only the element at position $(n+1,\,n)$ in the $n^{th}$ column is nonzero, and equal to $-\chi_n^*$. That is, the matrix $H$ in the $N$-dimensional complex case is the following tridiagonal matrix:

\begin{equation}\label{GeneralFrenetSerret}H=\left(\begin{array}{ccccc}0&\chi_1&0&0&\cdots\\-\chi_1^*&0&\chi_2&0&\cdots\\0&-\chi_2^*&0&\chi_3&\cdots\\0&0&-\chi_3^*&0&\cdots\\\vdots&\vdots&\vdots&\vdots&\vdots\end{array}\right)\end{equation}

Now we prove:

Lemma 19.1: (General Frenet-Serret Lie Algebra)

The smallest Lie algebra over the reals containing the set of all $N\times N$ complex, skew-Hermitian, tridiagonal matrices with all noughts along the leading diagonal (i.e. of the form in $\eqref{GeneralFrenetSerret}$) is $\mathfrak{su}(N)$. Thus by Theorem 13.4, there is a smooth space curve in $N$ dimensional complex Euclidean space that takes a Frenet-Serret frame to any other Frenet-Serret frame, i.e. with the latter given by the former transformed by an arbitrary unimodular, unitary matrix, i.e. an arbitrary member of $SU(N)$.

More generally, the smallest Lie algebra containing the set of all $N\times N$ complex, skew-Hermitian, tridiagonal matrices is the Lie algebra $\mathfrak{u}(N)$ of the general $N\times N$ unitary group $U(N)$.

Proof:

We prove the last statement, i.e. for $\mathfrak{u}(N)$ and $U(N)$ and then specialise it to $\mathfrak{su}(N)$.

This is readily proven by induction on the dimension $N$. The proposition is trivially true for $N=1$: all skew-Hermitian $1\times1$ matrices are tridiagonal. This is our induction basis. Now we prove the induction step. Given that any $H_N\in\mathfrak{u}(N)$ can by assumption be derived from $N\times N$ skew-Hermitian tridiagonal matrices by a finite sequence of linear operations and Lie brackets, the following two $(N+1) \times (N+1)$ matrices can be derived from such tridiagonal matrices too:

\begin{equation}\label{FrenetSerretLieAlgebraLemma_1}\begin{array}{c}\tilde{X}_1=\left(\begin{array}{c|c}i\,\beta_1&\begin{array}{cccc}\chi_1&0&0&\cdots\end{array}\\\hline\begin{array}{c}-\chi_1^*\\0\\0\\\vdots\end{array}&H_{N,\,1}\end{array}\right);\;\tilde{X}_2=\left(\begin{array}{c|c}i\,\beta_2&\begin{array}{cccc}\chi_2&0&0&\cdots\end{array}\\\hline\begin{array}{c}-\chi_2^*\\0\\0\\\vdots\end{array}&H_{N,\,2}\end{array}\right)\\\left[\tilde{X}_1,\,\tilde{X}_2\right]=\left(\begin{array}{c|c}\chi_2\,\chi_1^*-\chi_1\,\chi_2^*&-P^\dagger\\\hline P&H_N\end{array}\right)\end{array}\end{equation}

where $H_{N,\,1} ,\;H_{N,\,2}$ are arbitrary $N\times N$ skew-Hermitian matrices, $\beta_1,\,\beta_2$ are real, we have partitioned the matrices in the obvious way and:

\begin{equation}\label{FrenetSerretLieAlgebraLemma_2}\begin{array}{lcl}P&=&\left(\begin{array}{c}i\,(\beta_1\,\chi_2^*-\beta_2\,\chi_1^*)\\\hline0\\0\\\vdots\end{array}\right) + H_{N,\,2}\,\left(\begin{array}{c}\chi_1^*\\\hline0\\0\\\vdots\end{array}\right)-H_{N,\,1}\,\left(\begin{array}{c}\chi_2^*\\\hline0\\0\\\vdots\end{array}\right)\\&&\\H_N&=&\left[H_{N,\,1},\,H_{N,\,2}\right]+\left(\begin{array}{c|c}\chi_1\,\chi_2^*-\chi_2\,\chi_1^*&\begin{array}{ccc}0&0&\cdots\end{array}\\\hline\begin{array}{c}0\\0\\\vdots\end{array}&0_{(N-1)\times (N-1)}\end{array}\right)\end{array}\end{equation}

Since $H_{N,\,1} ,\;H_{N,\,2}$ are arbitrary $N\times N$ skew-Hermitian matrices, we can choose them and complex $\chi_1 ,\,\chi_2 $ so that $P$ is any $N\times 1$ complex vector and $\chi_2\,\chi_1 ^* -\chi_1\, \chi_2^*$ is any pure imaginary number. This we do, leaving the $N\times N$ partition $H_N$ in the lower right corner. This is an $N\times N$ skew-Hermitian matrix, so we can subtract it away and add in its place any other $N\times N$ skew-Hermitian matrix and the result will still be in the smallest Lie algebra. Thus we have proven the induction step for $\mathfrak{u}(N)$ and so the smallest Lie algebra containing the set of all complex, skew-Hermitian tridiagonal matrices is the whole of $\mathfrak{u}(N)$.

Now we specialise to the smallest Lie algebra containing the set of all complex, skew-Hermitian tridiagonal matrices with all noughts along the leading diagonal. Again, the induction basis holds true: the only skew-Hermitian traceless $1\times1$ matrix is the zero $1\times1$ matrix. So now we use the results above for the special case, noting that now $\beta_1=\beta_2 = 0$ and the assumption is that $H_{N,\,1} ,\;H_{N,\,2}$ are arbitrary $N\times N$ skew-Hermitian and traceless matrices. We can still choose arbitrary $N\times N$ skew-Hermitian traceless matrices $H_{N,\,1} ,\;H_{N,\,2}$ and complex $\chi_1 ,\,\chi_2 $ so that $P$ is any $N\times 1$ complex vector and $\chi_2\,\chi_1 ^* -\chi_1\, \chi_2^*$ is any pure imaginary number. Now note the second matrix term on the right of the second equation in $\eqref{FrenetSerretLieAlgebraLemma_2}$. Its only nonzero element, to wit, $\chi_1\,\chi_2^*-\chi_2\,\chi_1^*$, is the negative of the term $\chi_2\,\chi_1^*-\chi_1\,\chi_2^*$ in the top left partition of the matrix on the right of the second equation in $\eqref{FrenetSerretLieAlgebraLemma_1}$. Thus the matrix on the right of the second equation in $\eqref{FrenetSerretLieAlgebraLemma_1}$ is traceless, given that $H_{N,\,1} ,\;H_{N,\,2}$ are. Thus we have proven the induction step and the smallest Lie algebra containing the set of all complex, skew-Hermitian tridiagonal matrices with all noughts along the leading diagonal is the whole of $\mathfrak{su}(N)$. $\quad\square$
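Lemma 19.1 can also be checked by brute force for small $N$: seed a real basis of the tridiagonal, zero-diagonal, skew-Hermitian matrices and keep adjoining Lie brackets until the real span stops growing; the span's dimension should reach $\dim\mathfrak{su}(N) = N^2-1$. A Python sketch for $N=3$ (the closure loop and rank bookkeeping are illustrative choices, not anything canonical):

```python
import numpy as np
from itertools import product

N = 3  # check the lemma for su(3); the same sketch works for other small N

def tridiag(chis):
    """Skew-Hermitian tridiagonal, zero diagonal: eq (GeneralFrenetSerret)."""
    H = np.zeros((N, N), dtype=complex)
    for n, chi in enumerate(chis):
        H[n, n+1] = chi
        H[n+1, n] = -np.conj(chi)
    return H

# A real basis of the curvature matrices: chi_n = 1 and chi_n = i in turn.
seed = [tridiag([x if m == n else 0 for m in range(N-1)])
        for n in range(N-1) for x in (1.0, 1j)]

def real_rank(mats):
    """Dimension of the real span of a list of complex matrices."""
    V = [np.concatenate([M.real.ravel(), M.imag.ravel()]) for M in mats]
    return np.linalg.matrix_rank(np.array(V))

algebra = list(seed)
grown = True
while grown:                       # close up under Lie brackets
    grown = False
    for A, B in product(list(algebra), seed):
        C = A @ B - B @ A
        if real_rank(algebra + [C]) > real_rank(algebra):
            algebra.append(C)
            grown = True

print(real_rank(algebra))   # 8 = 3**2 - 1: the closure is all of su(3)
```

Right-normed brackets with the seed suffice here because, by the Jacobi identity, they span the whole generated Lie algebra.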

Think of the following situation. You have a garden hosepipe, tethered to its supplying spout, and its position and tangent vector at the spout are fixed. Can you bend it smoothly so that (i) it points in any direction you wish and (ii) the plane of bending is any plane in which the end pointing tangent lies? By the above, the answer is yes, something which is obvious from anyone’s experience with a garden hosepipe. What about if you live in $N$ Euclidean dimensions? Can you point the hose in any direction and still choose the $N-1$ orthogonal planes that define the hosepipe’s bending arbitrarily, as long as one of them contains the tangent vector? Still the answer is yes by the above. Even if we move to complexified Euclidean $N$-dimensional space, the answer is still yes.

19.2 Weakly Coupled Lossless Optical Waveguides

The problem of realising different transfer matrices by coupled optical waveguides with nearest neighbour coupling only, and the fact that this restricted coupling can indeed realise any lossless (unitary) transfer matrix, is precisely the generalisation of the geometrical question of whether the Frenet-Serret equations for an arbitrary number of dimensions and for the complex vector space $\mathbb{C}^N$ (rather than simply for $\R^3$) can steer the end of the space curve in this space into any orientation.


Figure 19.2: Schematic Coupled Optical Waveguide System

Figure 19.2 schematically shows a coupled waveguide system as made in planar optical waveguide technology. An optical waveguide normally is a straight “thread” of high refractive index material embedded in a surrounding lower refractive index material, or substrate. Typical dimensions are shown in Figure 19.2; note the great difference between horizontal and vertical scales. If the right dimensions are chosen, such a waveguide is one-moded, meaning that only one eigenfield electromagnetic field configuration can propagate along it with low loss. The phase delay per unit length propagated that the field undergoes is called the propagation constant $\beta$; thus the field configuration at a position $z$ along the waveguide is simply the configuration at position $z=0$ scaled by the phase factor $e^{i\,\beta\,z}$. Owing to the one-modedness of the waveguide, the electromagnetic field is wholly defined by a single complex amplitude.

Now we think about the situation in the middle of the plan view of Figure 19.2. Here a number of waveguides are brought near each other so that their electromagnetic fields interact. If a number $N$ of waveguides run parallel to one another and near enough that their fields can couple, then the electromagnetic field configuration is specified by a vector of $N$ complex amplitudes. The vector $X$ of these complex amplitudes is transformed by some linear map $X\mapsto\gamma\,X$ where $\gamma$ is some $N\times N$ matrix. If the waveguides run parallel to one another, let the transformation wrought by a length $z$ of the coupled system be $\gamma(z)$. The transformation wrought by a waveguide system of length $z_1$ followed by the transformation wrought by length $z_2$ is computed by the composition of the linear maps, i.e. by multiplying the matrices and so $\gamma(z_1)\, \gamma(z_2) = \gamma(z_1+z_2)$ and, given $\gamma$ is a continuous function of $z$, it follows that $\gamma(z) = \exp(H\,z)$ for some constant $N\times N$ matrix $H$. Now, if the system is lossless (or approximately so), then $\gamma(z)$ must be unitary, so that it follows that $H\in\mathfrak{u}(N)$, the Lie algebra of $N\times N$ skew-Hermitian matrices. If the parallel waveguides are far apart so that there is no coupling, then $H = i\,\operatorname{diag}[\beta_1,\,\beta_2,\,\cdots,\,\beta_N]$. If the waveguides are nearer together so that coupling arises, then there are off-diagonal elements in $H$. If the waveguide separation varies with $z$ slowly enough that losses can be taken to be nought, then:

\begin{equation}\label{WaveguideEvolutionEquation}\d_z\,\gamma = H(z)\,\gamma\end{equation}

where $H(z)\in\mathfrak{u}(N);\,\forall\,z\in\R$.

In practical waveguides, the electromagnetic field is concentrated tightly around one of the cores of the $N$ waveguides, and the field dwindles exponentially swiftly with distance from its respective waveguide, as schematically shown in the end view of Figure 19.2. This means that there is, from a practical standpoint, only nearest neighbour coupling between the waveguides. Therefore, the matrix $H(z)$ is both skew-Hermitian and tridiagonal. If the waveguides are all the same so that their propagation constants $\beta_j = \beta$ are all the same, then one can make the substitution $\gamma(z)\, \leftarrow\,\gamma(z)\,e^{-i\,\beta\,z}$. When this is done, the matrix $H(z)$ in $\eqref{WaveguideEvolutionEquation}$ has all noughts along the leading diagonal. That is, the problem is mathematically precisely analogous to the evolution of the Frenet-Serret frame along a space curve in $N$-dimensional complex Euclidean space $\mathbb{C}^N$.
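As a numerical illustration of $\eqref{WaveguideEvolutionEquation}$, the sketch below propagates a made-up tridiagonal, zero-diagonal coupling profile for four identical guides (the coupling profile, length and step count are arbitrary assumptions) and confirms that the accumulated transfer matrix lands in $SU(4)$:

```python
import numpy as np
from scipy.linalg import expm

N = 4                 # four coupled guides
L, steps = 1.0, 400
dz = L / steps

def H_of_z(z):
    """Tridiagonal skew-Hermitian coupling matrix with zero diagonal
    (identical guides, common phase factored out); made-up couplings."""
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        c = 1j * (1.0 + 0.5*np.cos(2*np.pi*z + n))   # i times a real coupling
        H[n, n+1] = c
        H[n+1, n] = -np.conj(c)                       # equals c here, but explicit
    return H

gamma = np.eye(N, dtype=complex)
for k in range(steps):            # step eq (WaveguideEvolutionEquation)
    gamma = expm(H_of_z((k + 0.5) * dz) * dz) @ gamma

# Lossless: gamma stays unitary; zero diagonal in H (traceless) gives det = 1,
# i.e. gamma lies in SU(N).
print(np.allclose(gamma.conj().T @ gamma, np.eye(N)),
      np.isclose(np.linalg.det(gamma), 1.0))
```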

As we showed in Lemma 19.1, the smallest Lie algebra containing the set of all tridiagonal, skew-Hermitian matrices with all noughts along the leading diagonal is precisely $\mathfrak{su}(N)$. It therefore follows from Theorem 13.4 that any transfer matrix $\gamma\in SU(N)$ can be realised by some finite length of planar coupled identical waveguides, by appropriate control of the coupling as a function of $z$. If we can have different waveguides and thus different propagation constants as well, then $H(z)$ can be any skew-Hermitian, tridiagonal matrix. The smallest Lie algebra containing this set is, from Lemma 19.1, the whole of $\mathfrak{u}(N)$; therefore, from Theorem 13.4, any transfer matrix $\gamma\in U(N)$ can be realised by some finite length of planar coupled waveguides by appropriate control of the coupling and propagation constants.

Note that these results are rather stronger than those quoted elsewhere ([Vance 1996], [Vance 1994]) where it was believed that the transformation realised by finite planar structures could only come arbitrarily near to an arbitrary unitary $\gamma\in U(N)$.

So the reason why nearest neighbour coupled waveguides can realise any unitary transfer matrix and the reason why you can point a flexible garden hosepipe in any direction, with arbitrary bending in each of the $N$ planes defining complexified Euclidean space, are one and the same, as noted in the last section!

Another “trick” for realising different transfer matrices with coupled planar waveguides is illustrated in Figure 19.3.


Figure 19.3: Alternating Coupling / Phase Preparation Regions in Coupled Waveguide System

The coupled regions are fixed and have unitary transfer matrices $\gamma_j$, whereas in the regions where the waveguides are far apart and where there is no coupling, the individual waveguide propagation constants are manipulated so that each waveguide imparts a different phase delay on its propagating field. We now have the following lemma:

Lemma 19.2: (Realisation of $U(N)$ by finite Waveguide System Concatenations)

Let there be $N+1$ cycles of the system shown in Figure 19.3 and let $\gamma_1,\,\gamma_2,\,\cdots,\,\gamma_{N+1}$ be the transfer matrices of the coupled regions. Let the $N$ phase delays through the waveguides in each uncoupled region be arbitrarily chosen. If, further, the matrices:

\begin{equation}\label{WaveguideConcatenationLemma_1}\begin{array}{llll}D_1&D_2&\cdots&D_{N-1}\\\gamma_1\,D_1\,\gamma_1^{-1}&\gamma_1\,D_2\,\gamma_1^{-1}&\cdots&\gamma_1\,D_{N-1}\,\gamma_1^{-1}\\\gamma_1\,\gamma_2\,D_1\,\gamma_2^{-1}\,\gamma_1^{-1}&\gamma_1\,\gamma_2\,D_2\,\gamma_2^{-1}\,\gamma_1^{-1}&\cdots&\gamma_1\,\gamma_2\,D_{N-1}\,\gamma_2^{-1}\,\gamma_1^{-1}\\\vdots&\vdots&\vdots&\vdots\\\prod\limits_{k=1}^{N+1}\gamma_k\,D_1\,\prod\limits_{k=0}^{N}\gamma_{N+1-k}^{-1}&\prod\limits_{k=1}^{N+1}\gamma_k\,D_2\,\prod\limits_{k=0}^{N}\gamma_{N+1-k}^{-1}&\cdots&\prod\limits_{k=1}^{N+1}\gamma_k\,D_{N-1}\,\prod\limits_{k=0}^{N}\gamma_{N+1-k}^{-1}\end{array}\end{equation}

are all linearly independent members of $\mathfrak{u}(N)$ when $D_j;\,j=1\,\cdots,\,N-1$ are the following traceless diagonal matrices:

\begin{equation}\label{WaveguideConcatenationLemma_2}\begin{array}{lcl}D_1&=&\operatorname{diag}\left[\begin{array}{ccccccc}-i&i&0&0&\cdots&0&0\end{array}\right]\\D_2&=&\operatorname{diag}\left[\begin{array}{ccccccc}-i&0&i&0&\cdots&0&0\end{array}\right]\\D_3&=&\operatorname{diag}\left[\begin{array}{ccccccc}-i&0&0&i&\cdots&0&0\end{array}\right]\\&\vdots&\\D_{N-2}&=&\operatorname{diag}\left[\begin{array}{ccccccc}-i&0&0&0&\cdots&i&0\end{array}\right]\\D_{N-1}&=&\operatorname{diag}\left[\begin{array}{ccccccc}-i&0&0&0&\cdots&0&i\end{array}\right]\end{array}\end{equation}

Then some finite concatenation of basic $N+1$ block systems of the form in Figure 19.3 can realise any transfer matrix in the group $U(N)$ of $N\times N$ unitary matrices.

Proof:

Given the proposed control of the uncoupled waveguide phases, the transfer matrix of a long enough uncoupled region can be controlled to be any matrix of the form $e^{i\,\Lambda}=\operatorname{diag}[e^{i\,\phi_1},\,e^{i\,\phi_2},\,\cdots,\,e^{i\,\phi_N}]$, where the $\phi_j$ are arbitrary real phases and $\Lambda=\operatorname{diag}[\phi_1,\,\phi_2,\,\cdots,\,\phi_N]$. Therefore, the whole system’s transfer matrix, if there are $N+1$ basic sections, is:

\begin{equation}\label{WaveguideConcatenationLemma_3}\begin{array}{ll}&\gamma_1\,e^{i\,\Lambda_1}\,\gamma_2\,e^{i\,\Lambda_2}\,\cdots\,\gamma_{N+1}\,e^{i\,\Lambda_{N+1}}\\=&\exp\left(i\,\gamma_1\,\Lambda_1\,\gamma_1^{-1}\right)\,\exp\left(i\,\gamma_1\,\gamma_2\,\Lambda_2\,\gamma_2^{-1}\,\gamma_1^{-1}\right)\,\exp\left(i\,\gamma_1\,\gamma_2\,\gamma_3\,\Lambda_3\,\gamma_3^{-1}\,\gamma_2^{-1}\,\gamma_1^{-1}\right)\,\cdots\,\\&\quad\exp\left(i\,\left(\prod\limits_{k=1}^{N+1}\,\gamma_k\right)\,\Lambda_{N+1}\,\left(\prod\limits_{k=0}^{N}\,\gamma_{N+1-k}^{-1}\right)\right)\,\prod\limits_{k=1}^{N+1}\,\gamma_k\end{array}\end{equation}

Now each term of the form $i\,\left(\prod\limits_{k=1}^r\,\gamma_k\right)\,\Lambda_r\,\left(\prod\limits_{k=0}^{r-1}\,\gamma_{r-k}^{-1}\right)$ belongs to the Lie algebra $\mathfrak{u}(N)$ and indeed, since the diagonal elements in $\Lambda_r$ can be freely chosen, such terms span an $N$ dimensional, commutative Lie subalgebra over $\R$. This algebra is in fact the direct sum of the one dimensional algebra of matrices of the form $\lambda_0\,\id$ and an $N-1$ dimensional commutative algebra of traceless matrices. Let this $N-1$ dimensional traceless algebra be $\h_r$ for the $r^{th}$ such term. Therefore, by choosing the diagonal elements of $\Lambda_r$ freely, the $r^{th}$ such term can be made equal to any member of the Abelian group $\{e^{i\,\lambda_r}\,\exp(X_r)|\,X_r\in\h_r;\,\lambda_r\in\R\}$. Now if all the basis members of all the $\h_r,\,r=1\,\cdots,\,N+1$ in the $N+1$ such terms in $\eqref{WaveguideConcatenationLemma_3}$ are linearly independent, then together they must span $\mathfrak{u}(N)$, because the dimension of this latter Lie algebra is $N^2$ and the number of linearly independent vectors is $(N+1)\,(N-1) + 1 = N^2$; the last $1$ in this sum is owing to the fact that we can freely choose the common factors $e^{i\,\lambda_r}$ to yield any unit magnitude determinant of the overall product $e^{i\,\sum_r\lambda_r}\,\prod\limits_{r=1}^{N+1} e^{X_r}$ where $X_r\in\h_r$ is any matrix in the Lie subalgebra $\h_r$.

So if we can show the linear independence of all the basis vectors of the $\h_r$, then the product in $\eqref{WaveguideConcatenationLemma_3}$ aside from the final term $\prod\limits_{k=1}^{N+1}\,\gamma_k$ defines a system of canonical co-ordinates of the second kind where the diagonal elements of the $\Lambda_r$ are the canonical co-ordinates, so that the product can be made equal to any member of some nucleus $\mathcal{K}\subset U(N)$. Without loss of generality we can choose the nucleus to be $\exp(\mathcal{B})$, where $\mathcal{B}$ is some ball centred on $\Or\in\mathfrak{u}(N)$ of the form $\mathcal{B}=\{X|\,X\in\mathfrak{u}(N);\,\left|X\right|<\epsilon\}$ for some $\epsilon>0$ and the norm $\left|\cdot\right|$ is the standard Euclidean norm defined for vectors in $\mathfrak{u}(N)$. Crucially, the operation $\mathcal{B}\to \mathfrak{u}(N);\,X\mapsto\,\Ad(\gamma)\,X = \gamma\,X\,\gamma^{-1}$ where $\gamma\in U(N)$ is an isometry with respect to this norm.

So now we imagine concatenating some number $M$ of subsystems in Figure 19.3. By the above discussion, we can choose the $\Lambda_j$ so that the transfer matrix of the whole concatenation can realise any matrix in the set $\bigcup\limits_{k=1}^M\left(\exp(\mathcal{B})\,\gamma_0\right)^k$, where $\gamma_0 = \prod\limits_{k=1}^{N+1}\,\gamma_k$. Take heed that we can rewrite the terms in this union as $\exp(\mathcal{B})\,\exp(\Ad(\gamma_0)\,\mathcal{B})\,\exp(\Ad(\gamma_0)^2\,\mathcal{B})\,\cdots\,\exp(\Ad(\gamma_0)^{k-1}\,\mathcal{B})\,\gamma_0^k$. But, by the isometry discussed above, this set is simply $\exp(\mathcal{B})^k\,\gamma_0^k$. Now we know that $U(N) = \bigcup\limits_{k=1}^\infty \exp(\mathcal{B})^k$, and since (i) $k>j\Rightarrow\exp(\mathcal{B})^j\subset\exp(\mathcal{B})^k$ and (ii) $U(N)$ is a compact group, we know that some finite subcover of the cover $\bigcup\limits_{k=1}^\infty \exp(\mathcal{B})^k$ covers $U(N)$, i.e. $\exists M_0\,\ni\,U(N)=\exp(\mathcal{B})^{M_0}$. This means that after $M_0$ concatenations of the basic $N+1$ section system of Figure 19.3 we can realise any member of $U(N)\,\gamma_0^{M_0} = U(N)$. $\quad\square$

It is experimentally found, through numerical simulations to compute the coupled region transfer matrices $\gamma_1,\,\gamma_2,\,\cdots,\,\gamma_{N+1}$, that the linear independence criterion of Lemma 19.2 is readily fulfilled. Therefore, the concatenation of a finite number of systems of the form shown in Figure 19.3 is an effective way to realise any transfer matrix in the Lie group $U(N)$. Indeed, the phase delays of the uncoupled waveguides can be arbitrarily set, e.g. by electro-optic phase modulation, so that the transfer matrix of the whole system can be dynamically set to be any member of $U(N)$. This idea leads into our last example of Lie Theoretic Systems Theory.

19.3 Pure Quantum State Preparation

Quantum Mechanical Overview

Very like the Planar Waveguide example is the problem of pure quantum state preparation. I’ll say just a few words about the quantum framework. If you don’t have a background in quantum mechanics and are interested, I recommend the first eight chapters of the third volume of [Feynman, 2011] as a gentle introduction. In particular, let’s think of the Stern-Gerlach Apparatus as a good experimental prototype for the whole quantum framework, because it illustrates most of the physical content of quantum theory and it is very simple: the state space is only two-dimensional (spin-up / spin-down). The Stern-Gerlach experiment is described in detail in [Feynman, 2011]. We make a strict distinction in the following between system properties and quantum measurements. Then:

  1. A quantum system is modelled by a quantum state, which is a vector living in a Hilbert space whose components represent “probability amplitudes” for the entity to be in a certain eigenstate (read here: unit basis vector of the Hilbert space). In the Stern-Gerlach experiment, it is a two-dimensional complex-valued vector of the form $\left(\begin{array}{c}\psi_{up}\\\psi_{down}\end{array}\right)$ holding the probability amplitudes for an electron to be spin-up or spin-down;
  2. We model quantum measurements by observables, which are Hermitian operators on the Hilbert space together with a special recipe that tells us how to interpret these operators as measurements;
  3. Time, in non-relativistic quantum theories, is a definite, real-valued parameter – in theory there is no uncertainty in it: it’s just the reading on your clock when you do the quantum measurement;
  4. Other properties (as opposed to measurements) of the system are also parameters, but they are parameters in, say, Schrödinger’s equation, which models how the quantum system evolves with time. They, like time, are assumed to have no uncertainty; they are often postulated by theoretical models and can be adjusted to fit a theoretical model to the results of quantum measurements gathered over many experiments. Things like length and mass fit into this category.

Lastly, by way of contrast to the parameters, let me define the notion of observable. As I said, this is an Hermitian operator, say $\hat{E}$, together with a recipe:

  1. Straight after the measurement, the state vector $\psi$ is in one of the observable’s eigenstates and the measurement outcome is the real eigenvalue corresponding to that eigenstate;
  2. If the quantum system has quantum state $\psi$, the $m^{th}$ moment of the probability distribution $p(\lambda)$ for the measurement $\lambda$ modelled by $\hat{E}$ is $\psi^\dagger \hat{E}^m \psi$ in matrix notation.

One can do any unitary transformation on the Hilbert space one likes and still keep all the information about the problem (the observables undergo corresponding transformations too, of course). So it is convenient, when talking about a particular measurement, to transform the Hilbert space so that the measurement’s observable becomes a diagonal matrix. In these co-ordinates, the probability that the state collapses to a particular unit eigenvector $\psi_0$, and thus the probability to observe a measurement equal to the corresponding eigenvalue $\lambda_0$, is particularly simple, to wit $|\psi_0^\dagger\,\psi|^2$, the squared magnitude of the corresponding component of the state $\psi$.

So, in the Stern-Gerlach experiment, we express the quantum states in “spin” eigen co-ordinates, so that the quantum state, as said above, is represented by the two complex element matrix:

\begin{equation}\label{SternGerlachPsi}\psi = \left(\begin{array}{c}\psi_{up}\\\psi_{down}\end{array}\right)\end{equation}

and the spin observable is then

\begin{equation}\label{SpinObservable}\hat{S} = \left(\begin{array}{cc}+1 & 0\\0 & -1\end{array}\right)\end{equation}

which, when combined with the state through the “recipe” above, defines the probability distribution that the spin will take on its two allowed values $\pm 1$.
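For instance, picking an arbitrary normalised state (the numbers below are purely illustrative), the recipe yields the spin statistics directly:

```python
import numpy as np

# Spin observable in its diagonal ("spin") co-ordinates, eigenvalues +1 / -1.
S = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary normalised quantum state (psi_up, psi_down).
psi = np.array([3, 4j]) / 5.0

# Probabilities of the two outcomes are the squared component magnitudes.
p_up, p_down = np.abs(psi) ** 2           # 0.36 and 0.64

# m-th moment of the measurement distribution: psi^dagger S^m psi.
mean = (psi.conj() @ S @ psi).real        # 0.36 - 0.64 = -0.28
second = (psi.conj() @ S @ S @ psi).real  # always 1, since S^2 = identity
```

Note that the second moment is always unity for this observable, so the variance of the spin measurement is $1-\langle S\rangle^2$.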

It is well to take heed that the only place where “randomness” and “uncertainty” come into the quantum framework is in a measurement by an observable. The quantum state at this point “chooses” to be a random eigenvector of the observable, with probabilities as calculated above. What exactly happens at this point in time is still an open question, the famous quantum measurement problem: the question of whether the state truly “chooses” or “collapses” into an eigenvector, or whether indeed the quantum framework, being a complexified generalisation of classical probability theory, is simply modelling the quantum equivalent of the switch one makes from an unconditional to a conditional probability distribution given the gleaning of new data. When we learn more about a problem, its statistics change, conditioned on the new information.

At all other times, quantum mechanics is utterly deterministic. The quantum state evolves with time following the Schrödinger equation. Quantum mechanics assumes that linear transformations model a quantum state’s time evolution. Since the squared complex Euclidean length of the quantum state vector represents the total probability that the system is in some state, this length must always be unity. It follows that any linear mapping modelling the quantum state’s time evolution must be unitary. In the finite ($N$) dimensional quantum state case, the time evolution mapping must therefore belong to the group $U(N)$, i.e. $\psi(t) = \gamma(t)\,\psi(0)$ where $\gamma(t)\in U(N)$. This is equivalent to $\d_t\psi(t) = \d_t \gamma(t)\,\psi(0) = H(t)\,\gamma(t)\,\psi(0) = H(t)\,\psi(t)$ where $H(t)\in\mathfrak{u}(N)$. Physicists generally like to think about Hermitian rather than skew-Hermitian mappings, so we pull some constants out of the $H(t)$ and arrive at the general Schrödinger equation:

\begin{equation}\label{SchroedingerEquation}i\,\hbar\,\d_t\,\psi = H(t)\,\psi\end{equation}

where now $H$ is an $N\times N$ Hermitian matrix. If the quantum system is time invariant, then $H(t) = H_0$ is constant, and the evolution is described by $\psi(t) = \exp\left(-\frac{i}{\hbar}\,H_0\,t\right)\,\psi(0)$. This is analogous to the evolution with distance along a translationally invariant coupled waveguide system. Another reason for working with Hermitian rather than skew-Hermitian matrices is that the matrix $H$ is itself an observable (i.e. it has real eigenvalues and must therefore be Hermitian). It is the quantum Hamiltonian and is the system energy observable.
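For a constant Hamiltonian the evolution operator is a one-parameter subgroup of $U(N)$, which is easy to verify numerically; a sketch with an arbitrary $2\times2$ Hermitian $H$ and $\hbar=1$ (natural units, purely for illustration):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                               # natural units, for illustration only
H = np.array([[1.0, 0.5 - 0.5j],
              [0.5 + 0.5j, -1.0]])       # an arbitrary Hermitian Hamiltonian

t = 0.7
gamma = expm(-1j * H * t / hbar)         # the time evolution operator

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = gamma @ psi0

# gamma is unitary, so the state's norm (total probability) is conserved.
print(np.linalg.norm(psi_t))             # 1.0, up to rounding
```

Diagonalising $H$ and exponentiating the eigenvalues gives the same $\gamma$, since $H$ is Hermitian; `expm` is simply the more direct route here.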

Quantum State Preparation

As in the discussion of the Stern-Gerlach experiment, the quantum state of a spin-½ quantum system, such as a lone electron, can be described by a $2\times1$ column vector, with each element being the complex probability amplitude to find the system in a “spin-up” or “spin-down” state, relative to the particular axis (or rotation plane) we choose for measuring spin. If this axis / plane is the $z$-axis / $x\wedge y$-plane, then, as discussed above, when we use these particular co-ordinates (with amplitudes for the up/down spin along the $z$ axis / in the $x\wedge y$ plane) the observable to measure spin along this axis is $\hat{s}_z$. If we want to measure spin up / down along the $x$ or $y$-axes ($y\wedge z$ and $z\wedge x$ planes, respectively), then the observables to use in these co-ordinates are $\hat{s}_x$ and $\hat{s}_y$, respectively. These observables are of course the basis of the Lie algebra $\mathfrak{su}(2)$ we met in Example 1.4. We are about to give a physical interpretation of the Lie group $SU(2)$ that it belongs to.

If our lone electron is steeped in a (classical) magnetic field $\vec{B} = (b_x,\,b_y,\,b_z)^T$, then the quantum spin state “precesses”; the Hamiltonian observable is then:

\begin{equation}\label{SpinHamiltonian}H = -i\,\hbar\,g_e\,\left(b_x\,\hat{s}_x+b_y\,\hat{s}_y+b_z\,\hat{s}_z\right)\end{equation}

so that, by the Schrödinger equation $\eqref{SchroedingerEquation}$, the quantum state’s evolution with time if the magnetic field is constant is:

\begin{equation}\label{LoneSpinEvolution}\psi(t) = \exp\left(-g_e\,(b_x\,\hat{s}_x+b_y\,\hat{s}_y+b_z\,\hat{s}_z)\,t\right)\,\psi(0)\end{equation}

Here $g_e$ is a real constant, characterising the electron’s so-called magnetic moment, called the electron’s gyromagnetic ratio in inverse Tesla-seconds if SI units are used. So, by choosing different magnetic field directions and by waiting long enough, we can impart any transformation $\gamma\in SU(2)$ we like to the lone electron’s quantum spin state. Since $SU(2)$ is locally isomorphic to $SO(3)$ and the latter is the former’s image under the adjoint representation, we can build a good, concrete thought picture for the spin-½ quantum state as follows. For the general quantum state:

\begin{equation}\label{SpinQuantumState}\psi=e^{i\,\phi_0}\left(\begin{array}{c}a\,e^{+i\,\frac{\phi}{2}}\\b\,e^{-i\,\frac{\phi}{2}}\end{array}\right)\end{equation}

we form the following real parameters, each transforming as shown when some $\gamma\in SU(2)$ transforms the quantum state if we switch our magnetic field on for a while:

\begin{equation}\label{StokesParameters}\begin{array}{lcllcllcl} x&=&i\,\psi^\dagger\,\hat{s}_x\,\psi &y&=&i\,\psi^\dagger\,\hat{s}_y\,\psi &z&=&i\,\psi^\dagger\,\hat{s}_z\,\psi;\\x&\mapsto&\,i\,\psi^\dagger\,\gamma^\dagger\,\hat{s}_x\,\gamma\,\psi & y&\mapsto&i\, \psi^\dagger\,\gamma^\dagger\,\hat{s}_y\,\gamma\,\psi& z&\mapsto&i\, \psi^\dagger\,\gamma^\dagger\,\hat{s}_z\,\gamma\,\psi\end{array}\end{equation}

In optics, these are called the Stokes Parameters after Sir George Gabriel Stokes (1819-1903), who defined them in 1852; in optics, a $2\times1$ vector is used to represent the complex amplitudes of the two polarisation eigenfields in a beam of light. So we’ll call them the Stokes parameters here. For a quantum state, $x^2+y^2+z^2=1$; for a polarised beam of light, $x^2+y^2+z^2$ is the beam’s total power. From our discussion of quantum observables above, the Stokes parameters are the statistical means of the measurements that would be made if three separate experiments measured the $x,\, y,\,z$ spin observables on separate electrons all prepared in the same way. They are also the $x$, $y$ and $z$ components of the classical angular momentum of a big ensemble of electrons all in the same quantum state.

It is not hard to check that the Stokes parameters uniquely define the parameter $\phi$ in $\eqref{SpinQuantumState}$ and define the parameters $a,\,b$ to within a common sign, i.e. both of $\pm\psi$ beget the same Stokes parameters. The phase $e^{i\,\phi_0}$ common to both components in $\eqref{SpinQuantumState}$ does not bear on the Stokes parameters, so its information is lost; for many (but not all) applications this is not a problem. So now we visualise the quantum state as a unit vector in Euclidean 3-space; a point on the unit sphere. Ultimately, we shall think of this as the Riemann sphere, but hold off on this thought for now.

So now we represent the Stokes parameters as a vector, the Stokes Vector in the Lie algebra $\mathfrak{su}(2)\cong\mathfrak{so}(3)$, namely the vector $x\,\hat{s}_x+y\,\hat{s}_y+z\,\hat{s}_z$. By the transformation laws in $\eqref{StokesParameters}$, if a $\gamma\in SU(2)$ is imparted to the quantum state, the Stokes parameters transform by the image of $\gamma^{-1}=\gamma^\dagger$ under the Adjoint representation, i.e. $X = \left(\begin{array}{ccc}x&y&z\end{array}\right)^T\mapsto \Ad(\gamma^{-1})\,X$. So the Stokes vector is rotated by a rotation matrix. When our magnetic field is switched on, the Stokes vector rotates at a uniform angular speed about the axis through the origin of $\mathfrak{su}(2)\cong\mathfrak{so}(3)$ defined by the magnetic field’s direction, as in Figure 19.4. The stronger the field, the faster the rotation.


Figure 19.4: $\mathfrak{so}(3)$ Quantum Spin State Vector Representation Precessing about Magnetic Field $\vec{B}$
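The rotation picture of Figure 19.4 can be replicated numerically. A sketch follows, assuming the normalisation $\hat{s}_j = -i\,\sigma_j$ in terms of the Pauli matrices, which makes $x^2+y^2+z^2=1$ as stated above (a different normalisation of the $\hat{s}_j$ only rescales the Stokes parameters):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the assumed su(2) basis s_j = -i sigma_j, chosen so that
# the Stokes parameters x = i psi^† s_x psi etc. satisfy x² + y² + z² = 1.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
s = [-1j * p for p in sigma]

def stokes(psi):
    """Stokes vector (x, y, z) of a normalised spinor psi."""
    return np.array([(1j * psi.conj() @ sj @ psi).real for sj in s])

psi = np.array([1, 1]) / np.sqrt(2)      # Stokes vector (1, 0, 0)

# An SU(2) element generated by s_z; the Stokes-sphere rotation angle is
# twice the spinor angle, so this is a quarter turn about the z axis.
gamma = expm(s[2] * np.pi / 4)
v0, v1 = stokes(psi), stokes(gamma @ psi)

print(np.round(v0, 3), np.round(v1, 3))  # (1, 0, 0) rotated to (0, 1, 0)
```

The factor of two between the spinor angle and the sphere rotation angle is the hallmark of $SU(2)$ double-covering $SO(3)$.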

We might want to simplify the experimental kit needed to prepare our quantum state, so that we only need to use two different magnetic field directions. The two directions let us impart transformations of the form $e^{X_1\,t}$ and $e^{X_2\,t}$, where $X_1,\,X_2\in\mathfrak{su}(2)$ are the two different transformation generators corresponding to the two different magnetic field directions. It is a great deal easier to build apparatus that can switch the magnetic field between two different directions with, say, $10^\circ$ separation than to build kit that can impart a magnetic field in an arbitrary direction. If $X_1$ and $X_2$ are linearly independent, then $X_1,\,X_2,\,\left[X_1,\,X_2\right]$ span $\mathfrak{so}(3)$ and, by Theorem 13.4, some finite product of transformations of the form $\prod\limits_{k=1}^M\,e^{\pm\,X_1\,t_{1,k}}\,e^{\pm\,X_2\,t_{2,k}}$ can realise any transformation in $SU(2)$. We can readily reverse a magnetic field in an electromagnet design, thus realising the $\pm$ signs in this expression, and we impart the transformation by switching on the magnetic field in directions alternating between those of $X_1$ and $X_2$, pulsing them for times $t_{1,k},\,t_{2,k}$ at the $k^{th}$ cycle, respectively.
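The linear independence condition is mild indeed; here is a sketch checking that two field directions only $10^\circ$ apart already give $X_1,\,X_2,\,\left[X_1,\,X_2\right]$ spanning the whole three-dimensional algebra (again assuming, for illustration, the normalisation $\hat{s}_j = -i\,\sigma_j$):

```python
import numpy as np

# Assumed su(2) basis s_j = -i sigma_j.
s = [-1j * np.array(m) for m in ([[0, 1], [1, 0]],
                                 [[0, -1j], [1j, 0]],
                                 [[1, 0], [0, -1]])]

def field_generator(theta):
    """Generator for a field in the x-z plane at angle theta from the z axis."""
    return np.sin(theta) * s[0] + np.cos(theta) * s[2]

X1 = field_generator(0.0)                # field along z
X2 = field_generator(np.deg2rad(10.0))   # field only 10 degrees away

bracket = X1 @ X2 - X2 @ X1
stack = np.array([X1.flatten(), X2.flatten(), bracket.flatten()])

print(np.linalg.matrix_rank(stack))      # 3: the triple spans su(2) ≅ so(3)
```

The bracket term supplies the direction out of the plane spanned by the two fields; only as the separation angle shrinks to zero does the span collapse.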

When two spin-½ quantum systems are coupled together, the power of Theorem 13.4 can be essential to the ability to prepare arbitrary quantum states. A good example of a two spin-½ quantum system is the hydrogen atom, with one spin-½ electron bound to one spin-½ proton. The quantum state space is the tensor product of the two individual two-dimensional quantum state spaces, and thus is four dimensional. There is at present no practicable technology that can impart different magnetic fields to the electron and proton bound together in a hydrogen atom. So, in the absence of coupling between the two spins, the Hamiltonian is:

\begin{equation}\label{CoupledSpinHamiltonian}H = -i\,\hbar\,\left(g_e\,\left(b_x\,\hat{s}_x+b_y\,\hat{s}_y+b_z\,\hat{s}_z\right)\otimes \id + g_p\,\id\otimes \left(b_x\,\hat{s}_x+b_y\,\hat{s}_y+b_z\,\hat{s}_z\right)\right)\end{equation}

where $g_e,\,g_p$ are the gyromagnetic ratios for the electron and proton, respectively, and here $\id$ is the $2\times2$ identity. They are set by the particles themselves and are not under the control of the experimenter. There is a further term in the $4\times4$, traceless Hamiltonian that defines the coupling between the two spins, i.e. the tendency of one spin to bear on the other. The time evolution transformation in the presence of a magnetic field is $\gamma=\exp\left(-\frac{i}{\hbar}\,H\,t\right)\in SU(4)$. Therefore, the vector subspace of $\mathfrak{su}(4)$ that can be directly realised by adjusting the direction and magnitude of a constant magnetic field is spanned by the following basis vectors:

\begin{equation}\label{CoupledSpinSubspace}\begin{array}{lcl}\hat{X}&=&r\,\hat{s}_x\,\otimes\,\id + r^{-1}\,\id\,\otimes\,\hat{s}_x\\\hat{Y}&=&r\,\hat{s}_y\,\otimes\,\id + r^{-1}\,\id\,\otimes\,\hat{s}_y\\\hat{Z}&=&r\,\hat{s}_z\,\otimes\,\id + r^{-1}\,\id\,\otimes\,\hat{s}_z\\\\\hat{K}&=&\left(\begin{array}{cccc}\frac{i}{2}&0&0&0\\0&-\frac{i}{2}&i&0\\0&i&-\frac{i}{2}&0\\0&0&0&\frac{i}{2}\end{array}\right)\end{array}\end{equation}

where $r=\sqrt{\frac{g_e}{g_p}}$. The form of $\hat{K}$ is justified in [Kuzmak&Tkachuk, 2014] and the whole Hamiltonian, with coupling, takes the form:

\begin{equation}\label{CoupledSpinHamiltonian_2}H = -i\,\hbar\,\left(\sqrt{g_e\,g_p}\left(b_x\,\hat{X}+b_y\,\hat{Y}+b_z\,\hat{Z}\right) + a\,\hat{K}\right)\end{equation}

where $a\neq0$ measures the coupling strength between the two spins. It is set by the hydrogen atom’s orbital geometry and is not controllable.

There are only four controllable parameters in this Hamiltonian: the products $b_j\,t$ and $a\,t$ (the last controllable only through the duration of the preparation pulse). Theorem 13.4 shows that, by pulsing the two magnetic fields for different lengths of time, some finite sequence of such pulsing operations will realise any time evolution operator of the form $e^X$, where $X\in\g$ and $\g$ is the smallest Lie algebra containing the vector space defined by $\eqref{CoupledSpinSubspace}$. But it turns out that this smallest Lie algebra is the whole of $\mathfrak{su}(4)$, and so the pulsed magnetic field sequence can impart any $\gamma\in SU(4)$ to the two spin-½ quantum system’s state. This is shown by beginning with $\hat{X},\,\hat{Y},\,\hat{Z},\,\hat{K}$ and forming repeated Lie products to compute the following basis for $\mathfrak{su}(4)$:

\begin{equation}\label{SU4Basis}\begin{array}{lclclclclclclcll}\hat{X}&&&\hat{Y}&&&\hat{Z}&&&\hat{K}&&\\\hat{L}_x &=& \left[\hat{X},\,\hat{K}\right]&\hat{L}_y &=& \left[\hat{Y},\,\hat{K}\right]&\hat{L}_z &=& \left[\hat{Z},\,\hat{K}\right]&&&\\\hat{T}_{xx} &=& \left[\hat{X},\,\hat{L}_x\right]&\hat{T}_{xy} &=& \left[\hat{X},\,\hat{L}_y\right]&\hat{T}_{xz} &=& \left[\hat{X},\,\hat{L}_z\right]&\hat{T}_{yy} &=& \left[\hat{Y},\,\hat{L}_y\right]\\&&&\hat{T}_{yz} &=& \left[\hat{Y},\,\hat{L}_z\right]&&&&&&\\\hat{V}_{xy}&=&\left[\hat{K},\,\hat{T}_{xy}\right]&\hat{V}_{xz}&=&\left[\hat{K},\,\hat{T}_{xz}\right]&\hat{V}_{yz}&=&\left[\hat{K},\,\hat{T}_{yz}\right]&&&\end{array}\end{equation}

and then showing that all fifteen $4\times4$ matrices are indeed linearly independent. The Mathematica computations that show this can be downloaded from this link here. Since the dimension of the vector space $\mathfrak{su}(4)$ is fifteen, the construction of the Lie algebra basis members in $\eqref{SU4Basis}$ shows that the smallest Lie algebra containing the vector subspace in $\eqref{CoupledSpinSubspace}$ is indeed the whole of $\mathfrak{su}(4)$. Therefore, by Theorem 13.4, there is a finite sequence of magnetic field pulses that will impart any transformation from $SU(4)$ on the two coupled spin-½ quantum system’s state, as long as there are at least two linearly independent magnetic field directions. In practice, a higher number of linearly independent magnetic field directions will make most transformations realisable in fewer steps. Moreover, in systems wherein the gyromagnetic ratios of the coupled qubits are equal, the algebra above is only nine dimensional and exponentiates to $SU(3)\rtimes U(1)$. In cases where the gyromagnetic ratios are roughly the same, three magnetic field directions are needed.
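The bracket computation can equally be scripted outside Mathematica. Here is a sketch that takes $\hat{s}_j = -\tfrac{i}{2}\,\sigma_j$ (an assumed normalisation; rescaling a generator does not change the Lie algebra it generates), an illustrative value $r=2\neq1$, and the $\hat{K}$ of $\eqref{CoupledSpinSubspace}$, then adjoins Lie brackets until the span stops growing:

```python
import numpy as np
from itertools import combinations

# Assumed su(2) basis s_j = -(i/2) sigma_j.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
s = [-0.5j * p for p in sigma]
I2 = np.eye(2)

r = 2.0   # illustrative value with r != 1, i.e. unequal gyromagnetic ratios
XYZ = [r * np.kron(sj, I2) + np.kron(I2, sj) / r for sj in s]
K = np.array([[0.5j, 0, 0, 0],
              [0, -0.5j, 1j, 0],
              [0, 1j, -0.5j, 0],
              [0, 0, 0, 0.5j]])

def lie_closure_dim(gens, tol=1e-9):
    """Dimension of the smallest Lie algebra containing the generators.

    For skew-Hermitian matrices, complex rank of the flattened matrices
    equals the real dimension of their real span, so matrix_rank suffices.
    """
    mats = list(gens)
    stack = np.array([m.flatten() for m in mats])
    rank = np.linalg.matrix_rank(stack, tol=tol)
    grew = True
    while grew:
        grew = False
        for A, B in combinations(list(mats), 2):
            C = A @ B - B @ A
            trial = np.vstack([stack, C.flatten()])
            if np.linalg.matrix_rank(trial, tol=tol) > rank:
                mats.append(C)
                stack, rank, grew = trial, rank + 1, True
    return rank

dim = lie_closure_dim(XYZ + [K])
print(dim)   # 15 = dim su(4)
```

The closure loop is generator-agnostic: it simply keeps adjoining brackets while the span grows, so it recovers the fifteen-dimensional result of $\eqref{SU4Basis}$ without naming the individual $\hat{L}$, $\hat{T}$ and $\hat{V}$ matrices.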

A paper [Vance, 2015] computing a general control sequence for this coupled system can be downloaded from arXiv as arXiv:1502.05200.

Mathematica files for the calculations within the paper can be downloaded from:

References:

  1. Akira Tomita and Raymond Y. Chiao, “Observation of Berry’s Topological Phase by Use of an Optical Fiber”, Phys. Rev. Lett. 57, 937 (1986)
  2. Rod W. C. Vance, “Matrix Lie Group-Theoretic Design of Coupled Linear Optical Waveguide Devices”, SIAM J. Appl. Math. 56 #3, pp. 765-782, 1996
  3. Rod W. C. Vance and John D. Love, “Design procedures for passive planar coupled waveguide devices”, IEE Proc. Opto-Electron. 141, 231–241, 1994
  4. Richard P. Feynman, Robert B. Leighton, Matthew Sands, “The Feynman Lectures on Physics”, Basic Books, 2011
  5. A. R. Kuzmak, V. M. Tkachuk, “Preparation of quantum states of two spin-½ particles in the form of the Schmidt decomposition”, Physics Letters A 378, p. 1469 (2014)
  6. Rod W. C. Vance, “Fine Entanglement and State Manipulation of Two Spin Coupled Qubits: A Lie Theoretic Overview”, arXiv:1502.05200 [quant-ph], 2015