09 Feb 2014

# Holograms, Diffraction and the Helmholtz Equation

A question on Physics Stack Exchange asks how three dimensional images can be read from two dimensional holograms. My basic answer is this:

*That the phase and amplitude alone on one plane is enough to wholly define a three-dimensional light field arises from the various uniqueness theorems for Maxwell’s equations within a connected volume given the solution on the volume’s boundary; otherwise put: once you know a solution on a boundary, then the values within must follow from “reasonable” physical assumptions.*

Actually, Maxwell's equations are not the only equations with this property. Holographic-like ideas arise in many branches of physics and for many different equations – it often happens that a solution to a partial differential equation within a hypervolume $V$ is uniquely defined, given reasonable assumptions, by the solution's values on the volume's boundary $\partial V$. The principle can often be broadened to half-infinite volumes and the plane surfaces on one side of them under certain conditions. Indeed, it is believed by many physicists, notably Leonard Susskind, that the laws of physics are reversible, *i.e.* that there is a one-to-one mapping between the World's state at any time and its state at any former or future time. Therefore, the World's state on any three-dimensional hyperplane of constant time uniquely determines that state at any time in the past or the future. Things get rather more complicated on cosmological scales, where we cannot in general define a universal time, and thus constant-time sections, for arbitrary solutions of the Einstein Field Equations. Nonetheless, the concept is well enough defined to beget serious debate about the Black Hole Information Paradox, and Loschmidt's paradox is still taken seriously enough that there is no widely accepted proof of the Second Law of Thermodynamics on the scale of the Universe; we can prove a weak form of the Second Law *given* the knowledge that the Universe began in an exquisitely low entropy state, but the deep mystery remains of how it got into that state in the first place.

Let’s pull back from such lofty discussions and discuss the holography idea in optics and electromagnetics. For many real-world situations, the value of the electromagnetic field on any closed surface uniquely determines the whole field, as well as its former and future history, within the surface.

For simplicity let’s stay with the scalar diffraction theory, so we are essentially talking about uniqueness theorems for the Helmholtz equation $(\nabla^2 + k^2) \psi = 0$.

Uniqueness theorems when $k^2 > 0$ or when $k^2 \in \mathbb{C}-\mathbb{R}$ are much more complicated than when $k^2\leq0$. The latter case corresponds to static solutions of the Klein–Gordon equation, or to static solutions of the Maxwell equations with or without an assumption of a massive photon; see my answer here for more details. Such cases have very strong uniqueness theorems: once a solution’s values are set on a compact volume’s boundary, there is only one possible solution within the volume. This situation even extends to semi-infinite volumes. However, the former situation includes $k^2>0$, the case for scalar diffraction in freespace or in a lossless dielectric: here uniqueness theorems need further strong assumptions about the field to make them work. Thankfully, some of these assumptions are physically reasonable.

We can restore simplicity to the solutions of the freespace Helmholtz equation (*i.e.* to the situation we have with a hologram) by making reasonable physical assumptions such as the Sommerfeld radiation condition or that the field is a tempered distribution; for more information on the latter condition, see my answers here and here. A good summary of these topics is in:

Given these assumptions, together with the assumption that the field propagates purely left-to-right, we can reconstruct a field from the hologram as follows. One begins with the Helmholtz equation in a homogeneous medium, $(\nabla^2 + k^2)\psi = 0$. If the field comprises only plane waves propagating in the positive $z$ direction, then we can represent the diffraction of any scalar field from the $z=0$ plane onto any transverse plane (of the form $z=c$) by:

$$\tag{1}\begin{array}{lcl}\psi(x,y,z) &=& \frac{1}{2\pi}\int_{\mathbb{R}^2} \left[\exp\left(i \left(k_x x + k_y y\right)\right) \exp\left(i \left(k-\sqrt{k^2 - k_x^2-k_y^2}\right) z\right)\,\Psi(k_x,k_y)\right]{\rm d} k_x {\rm d} k_y\\
\Psi(k_x,k_y)&=&\frac{1}{2\pi}\int_{\mathbb{R}^2} \exp\left(-i \left(k_x u + k_y v\right)\right)\,\psi(u,v,0)\,{\rm d} u\, {\rm d} v\end{array}$$

To understand this, let’s put carefully into words the algorithmic steps encoded in these two equations:

- Take the Fourier transform of the scalar field over a transverse plane to express it as a superposition of scalar plane waves $\psi_{k_x,k_y}(x,y,0) = \exp\left(i \left(k_x x + k_y y\right)\right)$ with superposition weights $\Psi(k_x,k_y)$;
- Note that plane waves propagating in the $+z$ direction fulfilling the Helmholtz equation vary as $\psi_{k_x,k_y}(x,y,z) = \exp\left(i \left(k_x x + k_y y\right)\right) \exp\left(i \left(k-\sqrt{k^2 - k_x^2-k_y^2}\right) z\right)$;
- Propagate each such plane wave from the $z=0$ plane to the general $z$ plane using the plane wave solution noted in step 2;
- Inverse Fourier transform the propagated waves to reassemble the field at the general $z$ plane.
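The four steps above can be sketched numerically with FFTs. This is my own minimal sketch (assumed grid and wavelength values), using the common $e^{i k_z z}$ full-field propagation factor; sign conventions for the exponent vary with the assumed time dependence:

```python
import numpy as np

def propagate(field0, wavelength, dx, z):
    """Propagate a sampled scalar field from the z = 0 plane to plane z
    via the angular spectrum (plane wave superposition) method.
    Assumes a square grid of pitch dx and forward-only propagation."""
    n = field0.shape[0]
    k = 2 * np.pi / wavelength
    # Step 1: Fourier transform -> plane-wave weights Psi(kx, ky)
    Psi = np.fft.fft2(field0)
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    # Steps 2-3: multiply each plane wave by its propagation phase.
    # kz is imaginary where kx^2 + ky^2 > k^2 (evanescent waves),
    # so those components decay exponentially with z.
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    Psi_z = Psi * np.exp(1j * kz * z)
    # Step 4: inverse transform to reassemble the field at plane z
    return np.fft.ifft2(Psi_z)
```

Because each propagating plane wave only acquires a unit-modulus phase factor, the total power is conserved, and propagating forward then backward by the same distance recovers the original field (up to the decay of any evanescent content).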

If you can understand these steps you should be able to see how the solution to Helmholtz’s equation, i.e. the full three-dimensional scalar light field, is reconstructed from its values on a plane. The latter, of course, is what a phase and intensity mask hologram encodes.

What hinders holography? I am not up to date with the latest hologram production techniques, but essentially making a hologram is a kind of interferometry, and as such it calls for low vibration and the building of an interferogram between transmitted and reference light. One can’t simply “snap” a hologram as one can with a digital camera (or even with an older style film camera). Moreover, the phase masking needed to make the equations above work is highly colour-dependent, so any kind of colour holography is even more restrictive than the making of single-colour holograms. The Holography Wiki page gives a good overview; the “rainbow” holographic technique is the nearest I know of to colour holography. Aside from this technique, most holograms need high coherence in the light source for reconstruction.

Another interesting technique is the manipulation of light by computer generated holography, where one computes, by solving Maxwell’s equations, the phase and amplitude mask needed for *e.g.* nulling out the mean aberration from a lens before analysis by an interferometer.
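A toy sketch of the nulling idea (mine, not a real CGH design method): if the aberration a lens imprints on a plane wave is a known phase profile, then a phase-only mask carrying the opposite phase cancels it, leaving a flat wavefront for the interferometer. The aberration profile below is hypothetical:

```python
import numpy as np

# Toy nulling mask: cancel a known aberration phase with its negative.
# Grid, pitch and aberration coefficients are assumed for illustration.
n, dx = 128, 1e-5
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2
# Hypothetical aberration: defocus plus primary spherical, in radians
phi_ab = 40.0 * r2 / r2.max() + 15.0 * (r2 / r2.max()) ** 2
mask = np.exp(-1j * phi_ab)               # compensating phase-only mask
corrected = np.exp(1j * phi_ab) * mask    # aberrated wave through mask
# corrected is now a uniform, flat-phase field
```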

Now we look at the asymptotic (farfield) behaviour of (1), i.e. Fraunhofer diffraction. This happens when the distance to the image screen increases without bound and we want to find an approximation to $\psi(x,y,z)$ on a screen far removed from the diffracting plane. We need the method of stationary phase to understand that the only substantial contribution to $\psi(x,y,z)$ in the first integral in (1) as $R = \sqrt{x^2+y^2+z^2}\to\infty$ comes from where the phase $k_x\,x+k_y\,y-\sqrt{k^2 - k_x^2-k_y^2}\, z$ is a stationary function of $k_x$ and $k_y$ (the $k\,z$ part of the exponent in (1), being independent of $k_x$ and $k_y$, plays no role here). At other points, the phase varies so swiftly with $k_x$ and $k_y$ that the contributions all cancel out by destructive interference. The mathematically rigorous notion to be heeded here is the Riemann–Lebesgue lemma; it confirms the intuitive idea that the swiftly varying phase components knock each other out through destructive interference.

So we find where:

$$\tag{2}\partial_{k_x} \left(k_x\,x+k_y\,y-\sqrt{k^2 - k_x^2-k_y^2}\,z\right) = \partial_{k_y} \left(k_x\,x+k_y\,y-\sqrt{k^2 - k_x^2-k_y^2}\,z\right) = 0$$

which is the point $(k_x,\,k_y)$ where:

$$\tag{3}x + \frac{k_x}{\sqrt{k^2 - k_x^2-k_y^2}} z=0;\;\quad y + \frac{k_y}{\sqrt{k^2 - k_x^2-k_y^2}} z=0$$

so that

$$\tag{4}k_x = -k\frac{x}{R};\quad k_y = -k\frac{y}{R}$$
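As a sanity check, one can verify symbolically that the point (4) indeed makes the phase stationary; here is a quick sketch with sympy, taking the $z>0$ forward branch so that all of $x$, $y$, $z$, $k$ are positive:

```python
import sympy as sp

# Verify that the point (4) solves the stationary-phase conditions (2)-(3)
x, y, z, k = sp.symbols("x y z k", positive=True)
kx, ky = sp.symbols("k_x k_y", real=True)
phase = kx * x + ky * y - sp.sqrt(k**2 - kx**2 - ky**2) * z
R = sp.sqrt(x**2 + y**2 + z**2)
stat = {kx: -k * x / R, ky: -k * y / R}  # the candidate point from (4)
dkx = sp.simplify(sp.diff(phase, kx).subs(stat))
dky = sp.simplify(sp.diff(phase, ky).subs(stat))
# both partial derivatives vanish at the stationary point
```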

So the first integral in the algorithm above winds up being approximately proportional to $\Psi\left(-k\frac{x}{R},\,-k\frac{y}{R}\right)$; the full result is:

$$\tag{5}\psi(x,\,y,\,z) \sim \mp\frac{k\,|z|}{R^2}\,e^{\mp i\,k\,R}\,\Psi\left(\pm k\frac{x}{R},\,\pm k\frac{y}{R}\right)$$

The positive root holds if $z>0$ and the point in question lies away from the source relative to the plane where the field is known (whence $\Psi$ is calculated). The negative root holds for a point towards the source relative to the plane of known fields. For a paraxial field (i.e. $x^2+y^2 \ll R^2$) the above simply becomes:

$$\tag{6}\psi(x,\,y,\,z) \approx \mp\frac{k}{R}\,e^{\mp i\,k\,R}\,\Psi\left(\pm k\frac{x}{R},\,\pm k\frac{y}{R}\right)$$

which is the wonted Fraunhofer diffraction expression.
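As a quick numerical illustration of (6) in one dimension (my own sketch, with assumed grid and slit values), the far-field intensity of a uniformly lit slit should follow the classic $\mathrm{sinc}^2$ pattern, i.e. $|\Psi|^2$ evaluated along the ray directions $x/R$:

```python
import numpy as np

# Fraunhofer check: |Psi|^2 for a top-hat (slit) aperture of half-width a
# should match the analytic sinc^2 pattern.
n, dx = 4096, 1e-6
a = 50e-6                                 # slit half-width (assumed)
# half-sample offset centres the slit symmetrically on the grid
xs = (np.arange(n) - n / 2 + 0.5) * dx
aperture = (np.abs(xs) < a).astype(float)
# Psi(kx): Fourier transform of the aperture field on the fft grid;
# only the modulus is compared, so the grid's phase offset is harmless
Psi = np.fft.fftshift(np.fft.fft(aperture)) * dx
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
I_num = np.abs(Psi) ** 2
I_num /= I_num.max()
# Analytic transform of a top-hat of half-width a is proportional to
# sinc(kx a / pi) in numpy's normalized-sinc convention
I_ana = np.sinc(kx * a / np.pi) ** 2
```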
