Chapter 8: The Campbell Baker Hausdorff Theorem and the Dynkin Formula

I have defined and investigated the properties of the Lie bracket, which, we have seen, encodes to some as yet less than fully understood degree a connected Lie group’s “non-Abelianhood”. A Lie group’s Lie algebra must be closed under the Lie bracket; the question then arises: is there any further structure to the Lie algebra, encoded as information other than either the vector space basis or the Lie product? The answer, as far as the Lie group’s local properties are concerned, is no, and the reason for this answer is the Campbell-Baker-Hausdorff theorem. This theorem states that, for small enough $X$ and $Y$ in a Lie group’s Lie algebra, we have $e^X\,e^Y = e^Z$ where $Z = \log\left(e^X\,e^Y\right)$ is a convergent infinite series whose terms are all built wholly from a finite number of linear and Lie bracket operations on $X$ and $Y$. The group product is thus fully locally encoded in the Lie algebra’s linear and Lie bracket structure. Later we shall see that the final ingredient needed to fully determine a Lie group is a simply and elegantly described global topology. So a connected Lie group is wholly determined by (i) its Lie algebra and (ii) the discrete data of its fundamental group (first homotopy group).

The key to the approach I follow is this observation: in the formula $e^X\,e^Y = e^Z$ describing the group product from the Lie algebra’s standpoint, all the important “stuff” happens in the group’s Adjoint representation, not in the group itself. As we have seen, the image $\Ad(\G)$ of the connected Lie group $\G$ under the Adjoint representation is a matrix group, and this lets us use convergent Taylor series for exponentials and logarithms of matrices. Only the “trivial”, Abelian part of the combination happens outside the Adjoint representation; we have seen (Lemma 7.17) that $e^X\,e^Y = e^{X+Y} = e^Y\,e^X$ if and only if $e^X$ and $e^Y$ commute.
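
A minimal numerical illustration of that last statement (my own sketch, not part of the main development; it assumes the Python libraries NumPy and SciPy, and Lemma 7.17 itself is the authority here):

```python
import numpy as np
from scipy.linalg import expm

# Two commuting matrices (both diagonal): e^X e^Y = e^(X+Y) holds.
X, Y = np.diag([0.3, -0.1]), np.diag([0.2, 0.5])
print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))   # True

# A non-commuting pair: the same identity now fails.
X = np.array([[0.0, 0.3], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [0.3, 0.0]])
print(np.allclose(X @ Y, Y @ X))                     # False: they do not commute
print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))   # False
```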

Theorem 8.1 (Campbell (1897) - Baker (1905) - Hausdorff (1906))

Let $\G$ be a connected Lie group, $\g$ its Lie algebra and $\{\hat{X}_j\}_{j=1}^N$ a $\g$-basis. In geodesic co-ordinates, group multiplication and inversion are both $C^\omega $, i.e. there is a $\tau_0 >0$ such that:

\begin{equation}
\label{CampbellBakerHausdorffTheorem_1}
\exp \left({\sum\limits_{j=1}^N {x_j \,\hat{X}_j } }\right)\,\exp \left( {\sum\limits_{j=1}^N {y_j \,\hat{X}_j } }\right)=\exp \left(\sum\limits_{j=1}^N\kappa_j \left(x_1 ,\,\cdots ,\,x_N ;\,y_1 ,\,\cdots ,\,y_N \right)\,\hat{X}_j \right)\end{equation}

where the $\kappa_j$ are uniquely defined, have derivatives of all orders and have convergent Taylor series for $\left({x_j }\right)_{j=1}^N ,\,\left({y_j }\right)_{j=1}^N \in [-\tau_0 ,\,\tau_0 ]^N$. Moreover, the $\kappa_j$ are given by the Campbell-Baker-Hausdorff series, to wit, $e^X\,e^Y=e^Z$ where $Z\in \g$ is the sum of a convergent infinite series comprising only the terms $X,\,Y$ and scalar-weighted entities derivable from $X,\,Y$ by a finite sequence of Lie bracket operations. That is, $Z$ lies in the smallest Lie algebra containing $X$ and $Y$.

Proof:

The proof for inversion is clear:

\begin{equation}
\label{CampbellBakerHausdorffTheorem_2}
\exp\left(\sum\limits_{j=1}^N x_j \,\hat{X}_j \right)^{-1}=\exp \left( -\sum\limits_{j=1}^N x_j \,\hat{X}_j \right)
\end{equation}

sending $(x_j)_{j=1}^N$ to $(-x_j )_{j=1}^N$, clearly a $C^\omega$ function. To prove the result for the group product, we need to find $Z(\tau)\in \g$ such that:

\begin{equation}
\label{CampbellBakerHausdorffTheorem_3}
\sigma(\tau) \stackrel{def}{=} e^{Z(\tau)} = e^{\tau \,X}\,e^{\tau \,Y}
\end{equation}

We know from the Group Product Continuity Axiom 3, the Nontrivial Continuity Axiom 4 and Theorem 6.2 that $Z(\tau)$ is uniquely defined and traces a $C^1$ path through the Lie algebra for $\left| \tau \right|<\tau_0$ for some $\tau_0 >0$; without loss of generality we can scale the Lie algebra basis so that $\tau_0 =1$ and restrict all solutions so that $\left| \tau \right|<1$. By applying Lemma 5.13 to $e^{\tau \,X}\,e^{\tau \,Y}$ and Equation (45) in Theorem 6.1 to $e^{Z(\tau)}$, we can write the following equation, in which $\ad(Z(\tau))$ is an $N\times N$ matrix acting on the co-ordinate vector of $\d_\tau Z(\tau)$:

\begin{equation}
\label{CampbellBakerHausdorffTheorem_4}
\begin{array}{rcl}
\left.\d_s \left(\sigma(\tau)^{-1}\,\sigma(\tau+s)\right)\right|_{s=0} &=& \sum\limits_{k=0}^\infty \frac{(-1)^k\,\ad(Z(\tau))^k}{(k+1)!} \,\d_\tau Z(\tau) = e^{-\tau\,\ad(Y)}\,X + Y\\
\Rightarrow\, \d_\tau Z(\tau) &=&\left(\sum\limits_{k=0}^\infty \frac{(-1)^k\,\ad(Z(\tau))^k}{(k+1)!}\right)^{-1}\,\left(e^{-\tau\,\ad(Y)}\,X + Y\right)
\end{array}
\end{equation}

Now we look at the images of $e^{\tau\,X},\,e^{\tau\,Y}$ and $e^{Z(\tau)}$ under the Adjoint representation, which give us an equation in $N\times N$ matrix exponentials: $e^{\ad(Z(\tau))} = e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}$. Therefore, for $\left\|e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}-\id_N\right\|<1$, the matrix Taylor series for the logarithm about the $N\times N$ identity matrix $\id_N$ is absolutely convergent and defines the unique inverse to the matrix exponential, so that:

\begin{equation}
\label{CampbellBakerHausdorffTheorem_5}
\ad(Z(\tau)) =\log \left( e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}\right) =\sum\limits_{j=0}^\infty {\frac{(-1)^j}{j+1}\left(e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}-\id_N\right)^{j+1}}
\end{equation}

an equation we can bring to bear to eliminate $\ad(Z(\tau))$ from the right hand side of $\eqref{CampbellBakerHausdorffTheorem_4}$, whence:

\begin{equation}
\label{CampbellBakerHausdorffTheorem_6}
\d_\tau Z(\tau) =\left(\sum\limits_{k=0}^\infty \frac{(-1)^k}{(k+1)!}\,\left(\sum\limits_{j=0}^\infty {\frac{(-1)^j}{j+1}\left(e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}-\id_N\right)^{j+1}}\right)^k\right)^{-1}\,\left(e^{-\tau\,\ad(Y)}\,X + Y\right)
\end{equation}

The above series is uniformly convergent as long as (i) the matrix logarithm Taylor series in $\eqref{CampbellBakerHausdorffTheorem_5}$ converges (i.e. when $\left\|e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}-\id_N\right\|<1$) and (ii) the matrix inverse in $\eqref{CampbellBakerHausdorffTheorem_6}$ exists. Both conditions hold when $e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}$ lies in a small enough neighbourhood of $\id_N$, i.e. there exists some nonzero radius $\tau_{max}>0$ such that $\eqref{CampbellBakerHausdorffTheorem_6}$ is a uniformly convergent function of $\tau$ for $|\tau|<\tau_{max}$. Therefore, the series can be integrated term by term to find $Z(\tau)$ for all $|\tau|<\tau_{max}$. Moreover, we see from $\eqref{CampbellBakerHausdorffTheorem_6}$ that this integral is a convergent infinite series whose terms comprise only linear operations and Lie bracket operations on $X$ and $Y$. So all the partial sums of this series comprise only linear and Lie bracket operations, that is, the partial sums for $Z(\tau)$ all lie within the smallest Lie algebra containing both $X$ and $Y$. Since a Lie algebra is a finite-dimensional vector space over (in this instance) a complete field, Cauchy sequences within the algebra converge within the algebra, thus $Z(\tau)$ lies within the smallest Lie algebra containing both $X$ and $Y$ when $|\tau|<\tau_{max}$. $\qquad\square$
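
To see the proof’s construction in action, here is a minimal numerical sketch (my own illustration, not part of the main development; it assumes the Python libraries NumPy and SciPy). I take $\g = \mathfrak{so}(3)$, for which the adjoint and defining representations coincide, so that a Lie algebra element with co-ordinate vector $v$ and its $\ad$ matrix are both the $3\times3$ cross-product matrix of $v$; the sketch integrates the differential equation $\eqref{CampbellBakerHausdorffTheorem_4}$ for $Z(\tau)$ from $Z(0)=0$ to $\tau=1$ and checks that $e^{Z(1)} = e^X\,e^Y$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

def hat(v):
    # so(3) element with co-ordinates v; for so(3), ad(V) is this same cross-product matrix
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def psi(A, terms=30):
    # the everywhere-convergent matrix series sum_{k>=0} (-1)^k A^k / (k+1)! from the proof
    out, term = np.zeros_like(A), np.eye(A.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ (-A) / (k + 2)
    return out

rng = np.random.default_rng(1)
x, y = 0.3 * rng.standard_normal(3), 0.3 * rng.standard_normal(3)

def dz_dtau(tau, z):
    rhs = expm(-tau * hat(y)) @ x + y            # e^{-tau ad(Y)} X + Y, in co-ordinates
    return np.linalg.solve(psi(hat(z)), rhs)     # the ODE for Z(tau) from the proof

sol = solve_ivp(dz_dtau, (0.0, 1.0), np.zeros(3), rtol=1e-10, atol=1e-12)
z = sol.y[:, -1]

# e^{Z(1)} should reproduce e^X e^Y (here both sides sit in SO(3)):
print(np.allclose(expm(hat(z)), expm(hat(x)) @ expm(hat(y)), atol=1e-8))   # True
```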

Take Heed: A slightly jargonish and often heard rendering of the CBH theorem is that “for small enough $X$ and $Y$, $e^X\,e^Y = e^Z$ where $Z$ is a Lie Bracket Series in $X$ and $Y$”. Obviously a “Lie Bracket Series in $X$ and $Y$” is one whose terms are all derived from $X$ and $Y$ by a finite number of linear and Lie bracket operations.

Definition 8.2 (Campbell-Baker-Hausdorff Product in the Lie Algebra):

For an abstract Lie algebra $\g$ over $\R$ or $\mathbb{C}$, we define the function $\varphi_{CBH}:\O\times\O\to\g$ by $\varphi_{CBH}(X,\,Y)=\log\left(e^X\,e^Y\right)$, where $\O\subset\g$ is a small enough neighbourhood of the origin in the Lie algebra $\g$ that the Campbell-Baker-Hausdorff series converges, so that:

\begin{equation}
\label{CampbellBakerHausdorffProductDefinition_1}
\varphi_{CBH}(X,\,Y) = \int_0^1\left(\sum\limits_{k=0}^\infty \frac{(-1)^k}{(k+1)!}\,\left(\sum\limits_{j=0}^\infty {\frac{(-1)^j}{j+1}\left(e^{\tau\,\ad(X)}\,e^{\tau\,\ad(Y)}-\id_N\right)^{j+1}}\right)^k\right)^{-1}\,\left(e^{-\tau\,\ad(Y)}\,X + Y\right)\,d\tau
\end{equation}

Take heed that the convergence and well-definedness of the CBH series is quite independent of the Lie group: all the computations were done in the image of the adjoint representation, $\ad(\g)$, which is a matrix algebra. The matrix exponential’s and matrix logarithm’s well known analyticity can then be brought to bear to study the CBH product $\varphi_{CBH}(X,\,Y)$, which is well defined whenever $X$ and $Y$ belong to a small enough neighbourhood $\O\subset\g$ of the origin (i.e. both the matrix norms $\left\|\ad(X)\right\|,\,\left\|\ad(Y)\right\|$ are small enough) that the Campbell-Baker-Hausdorff series converges.
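
To illustrate that nothing beyond $\ad(\g)$ is needed, the following sketch (mine, assuming NumPy/SciPy) computes the CBH product of two small elements of $\mathfrak{su}(2)$ twice: once in the $2\times2$ defining representation, and once purely with the $3\times3$ adjoint matrices, reading the answer’s co-ordinates off $\ad(Z)$. The two computations agree (here $\ad$ is faithful, so $Z$ can be recovered from $\ad(Z)$):

```python
import numpy as np
from scipy.linalg import expm, logm

# su(2) with basis E_j = -i sigma_j / 2, so that [E_i, E_j] = eps_{ijk} E_k
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
E = [-0.5j * s for s in sigma]

def su2(v):       # the Lie algebra element with co-ordinates v, in the 2x2 defining rep
    return v[0] * E[0] + v[1] * E[1] + v[2] * E[2]

def ad(v):        # the same element in the adjoint rep: a 3x3 real matrix
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

x = np.array([0.20, -0.10, 0.05])
y = np.array([0.10, 0.15, -0.20])

# (a) the CBH product computed in the defining representation ...
Z_defining = logm(expm(su2(x)) @ expm(su2(y)))

# (b) ... and computed entirely inside ad(g), reading the co-ordinates off ad(Z)
adZ = logm(expm(ad(x)) @ expm(ad(y)))
z = np.array([adZ[2, 1], adZ[0, 2], adZ[1, 0]]).real

print(np.allclose(su2(z), Z_defining, atol=1e-10))    # True: the same element of su(2)
```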

Definition 8.3 (Smallest Lie Algebra containing a Set):

Given a “global” Lie algebra $\g$ over a field $\mathbb{K}$, we write $\left<\mathcal{A}\right>_{\text{Lie}}$ for the smallest Lie algebra containing the set $\mathcal{A}\subseteq\g$. That is, $\left<\mathcal{A}\right>_{\text{Lie}}$ comprises all objects derivable from members of $\mathcal{A}$ by a finite number of linear operations (additions and scalings by any scalar $x\in\mathbb{K}$) and Lie brackets. Another characterisation is that $\left<\mathcal{A}\right>_{\text{Lie}}$ is the intersection of all Lie subalgebras of $\g$ containing $\mathcal{A}$.

This notation is not standard, but it does simplify discussions we shall be having and furthermore should be fairly clear with the “generator” angle brackets.
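
Definition 8.3 is algorithmic in spirit: for a matrix Lie algebra, a basis for $\left<\mathcal{A}\right>_{\text{Lie}}$ can be computed by repeatedly adjoining brackets until the linear span stops growing. Below is a minimal sketch of such a closure computation (assuming NumPy; the function name and the Heisenberg-algebra example are my own illustrations, not notation used elsewhere in this text):

```python
import numpy as np
from itertools import combinations

def lie_closure(generators, tol=1e-10, max_iter=20):
    # Basis (as matrices) of the smallest Lie algebra containing the given matrices:
    # keep adjoining brackets until the linear span stops growing.
    def reduce_to_basis(mats):
        basis = []
        for M in mats:
            rows = np.array([B.ravel() for B in basis + [M]])
            if np.linalg.matrix_rank(rows, tol=tol) > len(basis):
                basis.append(M)
        return basis

    basis = reduce_to_basis(list(generators))
    for _ in range(max_iter):
        brackets = [A @ B - B @ A for A, B in combinations(basis, 2)]
        new_basis = reduce_to_basis(basis + brackets)
        if len(new_basis) == len(basis):      # closed under the Lie bracket: done
            return new_basis
        basis = new_basis
    return basis

# Example: in gl(3, R), two nilpotent generators bracket-generate the
# three-dimensional Heisenberg algebra.
A = np.zeros((3, 3)); A[0, 1] = 1.0
B = np.zeros((3, 3)); B[1, 2] = 1.0
print(len(lie_closure([A, B])))   # 3: A, B and [A, B] span the generated algebra
```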

The Campbell-Baker-Hausdorff theorem is most interesting indeed, for it tells us that no more structure is needed in a Lie group’s Lie algebra, aside from the linear space structure and the “Abelianhood Detection” information encoded in the Lie bracket, to define the group product locally: $Z$ belongs to the smallest Lie algebra $\left<\{X,\,Y\}\right>_{\text{Lie}}$ containing $X$ and $Y$. The linear operations and the Lie bracket wholly determine the group product, at least locally, so we can carry out the group multiplication, at least locally, wholly within the Lie algebra through the Campbell-Baker-Hausdorff series. If $\gamma_1,\,\gamma_2\in\G$, consider any product of the form $\gamma_1\, \zeta_1\,\gamma_2\,\zeta_2$ where $\zeta_1,\,\zeta_2$ belong to a nucleus $\K_{\gamma_2}\subseteq\Nid$ small enough that:

  1. $\K_{\gamma_2}$ can be uniquely labelled by geodesic co-ordinates, i.e. $\zeta_1 = e^{X_1},\,\zeta_2 = e^{X_2},\,X_1,\,X_2\in\g$;
  2. Both $X_1$ and $X_2$ are small enough (i.e. both the matrix norms $\left\|\ad(\Ad(\gamma_2^{-1}) \,X_1)\right\|$ and $\left\|\ad(X_2)\right\|$ are small enough) that the Campbell-Baker-Hausdorff series for $\exp\left(\Ad(\gamma_2^{-1}) X_1\right)\,e^{X_2}$ converges;

Then the product $\gamma_1\, \zeta_1\,\gamma_2\,\zeta_2$ of elements $\gamma_1\, \zeta_1$ and $\gamma_2\, \zeta_2$ “near” $\gamma_1$ and $\gamma_2$ is uniquely defined by the CBH series and thus by linear and Lie bracket operations on $X_1$ and $X_2$. So the CBH series makes the analyticity ($C^\omega$-nature) of the group operations, when we use geodesic co-ordinates for sets in the neighbourhood of any $\gamma_1,\, \gamma_2\in\G$, obvious in a very elegant way.

Now, in the interests of concreteness, I derive some bounds that give an idea of what a neighbourhood small enough for the CBH series to converge looks like.

Lemma 8.4 (Neighbourhood for Convergence of CBH Series):

In any abstract Lie algebra $\g$ over $\R$ or $\mathbb{C}$, the CBH series $\varphi_{CBH}(X,\,Y)$ converges for $X,\,Y\in\g$ whenever $\left\|\ad(X)\right\|,\,\left\|\ad(Y)\right\|<\frac{1}{4}$. Therefore, any open neighbourhood of the origin will work as the neighbourhood $\O$ in Definition 8.2 as long as $\left\|\ad(X)\right\|<\frac{1}{4},\,\forall\,X\in\O$. Indeed, the CBH series $\varphi_{CBH}(X,\,Y)$ converges whenever $\left\|\ad(X)\right\| + \left\|\ad(Y)\right\|<\frac{1}{2}$.

Proof:

The above is only a sufficient condition for convergence. There are indeed tighter, but more complicated bounds; see for example Equation 1.13 in [Casas and Murua, 2009] or, for the group $SO(3)$, [Engø, 2001].

The matrix:

\begin{equation}
\label{CBHConvergenceLemma_1}
\sum\limits_{k=0}^\infty \frac{(-1)^k\,\ad(Z(\tau))^k}{(k+1)!} = \id_N-\frac{1}{2!}\ad(Z) + \frac{1}{3!}\ad(Z)^2 + \cdots
\end{equation}

in $\eqref{CampbellBakerHausdorffTheorem_4}$ of Theorem 8.1 is nonsingular so long as the nonconstant part of the series fulfills:

\begin{equation}
\label{CBHConvergenceLemma_2}
\left\|\frac{1}{2!}\ad(Z) - \frac{1}{3!}\ad(Z)^2 + \frac{1}{4!}\ad(Z)^3+\cdots\right\|<1
\end{equation}

in the trace (Frobenius) norm. We can bound the left hand side of $\eqref{CBHConvergenceLemma_2}$ above by:

\begin{equation}
\label{CBHConvergenceLemma_3}
\begin{array}{rl}&\left\|\frac{1}{2!}\ad(Z) - \frac{1}{3!}\ad(Z)^2 + \cdots\right\|\\
\leq &\frac{1}{2!}\left\|\ad(Z)\right\| + \frac{1}{3!}\left\|\ad(Z)\right\|^2 + \frac{1}{4!}\left\|\ad(Z)\right\|^3+ \cdots \\
= & \frac{e^{\left\|\ad(Z)\right\|}-1-\left\|\ad(Z)\right\|}{\left\|\ad(Z)\right\|}
\end{array}
\end{equation}

which is a monotonically increasing function of $\left\|\ad(Z)\right\|$ with a removable singularity at $\left\|\ad(Z)\right\|=0$ so that, from $\eqref{CBHConvergenceLemma_2}$, if:

\begin{equation}
\label{CBHConvergenceLemma_4}
\left\|\ad(Z)\right\| = \left\|\log(e^{\ad(X)}\,e^{\ad(Y)})\right\|<\mathscr{z}_1= 1.25643\cdots
\end{equation}

where $\mathscr{z}_1 = 1.25643\cdots$ is the unique positive real solution to the transcendental equation $z = \log(1+2\,z)$, then we can be sure that we can invert the matrix. Now:

\begin{equation}
\label{CBHConvergenceLemma_5}
\begin{array}{lcl}
\left\|\log(U)\right\| &=& \left\|(U-\id_N) - \frac{(U-\id_N)^2}{2} + \frac{(U-\id_N)^3}{3}-\cdots\right\|;\;\left\|U-\id_N\right\|<1\\
&\leq& \left\|U-\id_N\right\|+\frac{\left\|U-\id_N\right\|^2}{2} +\frac{\left\|U-\id_N\right\|^3}{3} +\cdots\\
&=& -\log(1-\left\|U-\id_N\right\|);\;\left\|U-\id_N\right\|<1
\end{array}\end{equation}

and, by using this to simplify $\eqref{CBHConvergenceLemma_4}$ and by taking heed that convergence of the logarithm series ($\eqref{CampbellBakerHausdorffTheorem_5}$ of Theorem 8.1) is assured so long as $\left\|e^{\ad(X)}\,e^{\ad(Y)}-\id_N\right\|<1$, we see that fulfilment of both of the following conditions will ensure convergence of the CBH series:

\begin{equation}
\label{CBHConvergenceLemma_6}
\begin{array}{lcl}
-\log(1-\left\|e^{\ad(X)}\,e^{\ad(Y)}-\id_N\right\|) &<& \mathscr{z}_1= 1.25643\cdots\\
\left\|e^{\ad(X)}\,e^{\ad(Y)}-\id_N\right\|&<&1
\end{array}
\end{equation}

but the second is automatically fulfilled if the first is (which translates to $\left\|e^{\ad(X)}\,e^{\ad(Y)}-\id_N\right\| < 1-e^{-\mathscr{z}_1}$). Now:

\begin{equation}
\label{CBHConvergenceLemma_7}
\begin{array}{rl}
&\left\|e^{\ad(X)}\,e^{\ad(Y)}-\id_N\right\| = \left\|\sum\limits_{\begin{array}{c}m,\,n\geq0\\ m+n\geq1\end{array}}\frac{\ad(X)^m\,\ad(Y)^n}{m!\,n!}\right\|\\
\leq &\sum\limits_{\begin{array}{c}m,\,n\geq0\\ m+n\geq1\end{array}}\frac{\left\|\ad(X)\right\|^m\,\left\|\ad(Y)\right\|^n}{m!\,n!}\\
= &\exp(\left\|\ad(X)\right\|+\left\|\ad(Y)\right\|)-1
\end{array}
\end{equation}

so that convergence of the CBH series is assured if:

\begin{equation}
\label{CBHConvergenceLemma_8}
\exp(\left\|\ad(X)\right\|+\left\|\ad(Y)\right\|)<2 - e^{-\mathscr{z}_1}
\end{equation}

or:

\begin{equation}
\label{CBHConvergenceLemma_9}
\left\|\ad(X)\right\|+\left\|\ad(Y)\right\|<\log(2 - e^{-\mathscr{z}_1}) = 0.539607\cdots
\end{equation}

Hence the stated conditions are sufficient for convergence of the CBH series, since $\frac{1}{4}+\frac{1}{4}=\frac{1}{2}<0.539607\cdots$.$\qquad\square$
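
The two numerical constants in the proof are easy to reproduce; a small sketch (assuming NumPy/SciPy):

```python
import numpy as np
from scipy.optimize import brentq

# z_1 is the nonzero solution of z = log(1 + 2 z), i.e. of exp(z) - 1 - 2 z = 0
z1 = brentq(lambda z: np.exp(z) - 1.0 - 2.0 * z, 0.5, 3.0)
print(z1)                          # 1.25643...

# the sufficient condition of the lemma: ||ad X|| + ||ad Y|| < log(2 - exp(-z_1))
print(np.log(2.0 - np.exp(-z1)))   # 0.539607..., comfortably above 1/4 + 1/4 = 1/2
```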

The Campbell-Baker-Hausdorff theorem is often (quite understandably) confused with the Dynkin formula; it is common to call the latter the Campbell-Baker-Hausdorff formula. Whilst this is unlikely to lead to incorrect mathematics, it does give a slightly misleading picture of the history. The Campbell-Baker-Hausdorff theorem is the fact that the group product, pulled back into the Lie algebra, can be represented by the binary operation $\varphi_{CBH}(X,\,Y)$, which lies in the smallest Lie algebra generated by $\{X,\,Y\}$. This knowledge is mostly all we need in Lie theory and is the grounding of the idea that the group is almost wholly determined by its Lie algebra. This was essentially the work of [Campbell, 1897/1898], [Baker, 1905] and [Hausdorff, 1906]. However, a formula is needed, for numerical work especially, and the full formula was almost wholly the contribution of [Dynkin, 1947].

Lemma 8.5 (Dynkin’s Formula, 1947):

Given an abstract Lie algebra $\g$ over $\R$ or $\mathbb{C}$, and $X,\,Y\in\g$ small enough that the Campbell-Baker-Hausdorff series converges:

\begin{equation}
\label{DynkinFormulaLemma_1}
\varphi_{CBH}(X,\,Y) = \sum\limits_{k=1}^\infty\sum\limits_{\begin{array}{c}(i_1,\,\cdots,\,i_k,\,j_1,\,\cdots,\,j_k)\ni\\ i_r,\,j_r\geq0,\,i_r+j_r\geq1\end{array}}\frac{(-1)^{k-1}}{k}\frac{1}{\sum\limits_{r=1}^k \left(i_r+j_r\right)}\frac{[X^{(i_1)}\,Y^{(j_1)}\cdots X^{(i_k)}\,Y^{(j_k)}]}{\prod\limits_{r=1}^k i_r!\,j_r!}
\end{equation}

The first few terms of this series are:

\begin{equation}
\label{DynkinFormulaLemma_2}
\varphi_{CBH}(X,\,Y) = X+Y+\frac{1}{2}[X,\,Y] + \frac{1}{12}[X,[X,Y]] - \frac{1}{12}[Y,[X,Y]] - \frac{1}{24}[Y,[X,[X,Y]]] +\cdots
\end{equation}

Proof:

As detailed in [Rossmann, §1.3 Theorem 1], this is “simply” an expansion of $\eqref{CampbellBakerHausdorffTheorem_6}$ in my proof of Theorem 8.1 (which shows how what is essentially Rossmann’s proof for matrix groups generalises to arbitrary Lie groups). In my experience, multiplying this mess out and grouping the terms is not “simple”; the notation, at least, needs explanation. The notation $[X^{(i_1)}\,Y^{(j_1)}\cdots X^{(i_k)}\,Y^{(j_k)}]$ is a generalisation of

\begin{equation}
\label{DynkinFormulaLemma_3}
[X_1,\,X_2,\,\cdots,\,X_k] = [X_1,\,[X_2,\,[\cdots,\,[X_{k-1},\,X_k]\,\cdots]]]
\end{equation}

that is, a right-nested sequence of $k-1$ Lie brackets. When superscripts are added, the relevant operands are repeated the number of times stated by the superscript. Thus, for example:

\begin{equation}
\label{DynkinFormulaLemma_4}[X^2\,Y^3\,X^2\,Y] = [X,\,[X,\,[Y,\,[Y,\,[Y,\,[X,\,[X,\,Y]]]]]]] \end{equation}

$\square$
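
As a sanity check on the notation and coefficients, the following sketch (mine, assuming NumPy/SciPy) implements the right-nested bracket just described and compares the partial sum $\eqref{DynkinFormulaLemma_2}$ with $\log\left(e^X\,e^Y\right)$ computed directly for a pair of small random matrices; the two agree up to the omitted fifth-order terms:

```python
import numpy as np
from scipy.linalg import expm, logm

def br(*ops):
    # right-nested bracket [A1, A2, ..., Ak] = [A1, [A2, [..., [A_{k-1}, Ak]...]]]
    out = ops[-1]
    for A in reversed(ops[:-1]):
        out = A @ out - out @ A
    return out

rng = np.random.default_rng(0)
X, Y = 0.05 * rng.standard_normal((3, 3)), 0.05 * rng.standard_normal((3, 3))

# the partial sum of the Dynkin series displayed above
Z4 = (X + Y + 0.5 * br(X, Y)
      + br(X, X, Y) / 12 - br(Y, X, Y) / 12 - br(Y, X, X, Y) / 24)

Z_exact = logm(expm(X) @ expm(Y))
print(np.linalg.norm(Z4 - Z_exact))    # tiny: the discrepancy is of fifth order in X, Y
```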

For any serious computations, there is a rather technical but ultimately much smoother and wieldier method of treating the CBH series wherein each of the Lie bracket terms is associated one-to-one with a “two-coloured tree” (a graph-theoretic tree, i.e. a connected, cycle-free graph, whose vertices are either “black” or “white”) and one can then calculate the terms by iterating systematically over a lexicographical ordering of the two-coloured trees. This is the ingenious and intriguing approach of [Casas and Murua, 2009].

Another highly worthwhile proof of the Campbell-Baker-Hausdorff theorem is that given by [Eichler, 1968]; good, clear retellings of Eichler’s tale are to be found in [Stillwell, 2008] and [Sagle and Walde, 1973]. The proof, sadly, is rather long-winded, but its length belies its underlying elegance and simplicity: the idea is simply to set up a mathematical induction showing that, once the terms of the series built by up to $n$ induction steps have been proven to be Lie bracket series terms in $X$ and $Y$, then those built by up to $n+1$ steps must be too. This is all that is needed: as with my proof here, the uniform convergence of the logarithm series, together with the everywhere-convergent matrix series:

\begin{equation}
\label{LittleAdDerivativeSeries}
\sum\limits_{k=0}^\infty \frac{(-1)^k\,\ad(Z(\tau))^k}{(k+1)!}
\end{equation}

which is moreover invertible on a neighbourhood of the identity, shows that the series’ convergence, at least on a neighbourhood of the identity, is not in question.

Definition 8.6 (Continuous Local Isomorphism)

A local isomorphism between the connected Lie groups $\G$ and $\H$ is a continuous bijective mapping $\phi: \K\subset \G \to \mathcal{L}\subset \H$ from a nucleus $\K\subset\G$ of $\G$ onto a nucleus $\mathcal{L} = \phi(\K)\subset\H$ such that $\phi(\gamma\,\zeta) = \phi(\gamma)\,\phi(\zeta)$ for all $\gamma,\,\zeta\in \K$. Naturally, connected Lie groups $\G$ and $\H$ are said to be locally isomorphic iff there are nuclei $\K\subset \G,\, \mathcal{L}\subset \H$ bijectively mapped onto one another by a local isomorphism and its inverse (which is necessarily a local isomorphism the other way).

We shall simply call a continuous local isomorphism a “local isomorphism”, since the notion of continuity is needed to bring sense to the notion of “local” anyway. So a local isomorphism is a partial isomorphism between the groups: the mapping is not defined on the whole group, but it is an isomorphism where it is defined. Clearly:

Lemma 8.7 (Local Isomorphism Lemma)

Two connected Lie groups $\G$ and $\H$ are locally isomorphic if and only if they have the same Lie algebra.

Proof:

If $\G$ and $\H$ have the same Lie algebra, then choose a nucleus $\K\subset \G$ small enough that the CBH theorem holds within it, let $\phi:\K\to\H$ be $\phi = \exp_\H\circ\log_\G$ and set $\mathcal{L}=\phi(\K)$ (here $\exp_\H$, $\log_\G$ have their obvious meanings as the respective functions in the Lie group / algebra named by the subscript). Then the group product is wholly and uniquely defined by the CBH product of Definition 8.2 and, since the two Lie algebras are the same, we get $\phi(\gamma\,\zeta) = \phi(\gamma)\,\phi(\zeta),\,\forall\,\gamma,\,\zeta\in\K$. Conversely, let $\G$ and $\H$ be locally isomorphic, i.e. injectively homomorphic near the identity. Then the reasoning of Theorem 7.12 applies here, even though the isomorphism is defined only locally; check the steps of Theorem 7.12 to see that this is so. Since the homomorphism is injective, the kernel of $d\phi$ is $\{\Or\}$, so the Lie algebras are isomorphic, hence the same.$\qquad\square$
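
The forward half of the proof is easy to exercise numerically. The sketch below (my own illustration, assuming NumPy/SciPy; the name `phi` simply mirrors the proof’s $\phi=\exp_\H\circ\log_\G$) takes $\G = SU(2)$ and $\H = SO(3)$, which share the Lie algebra with structure constants $\epsilon_{ijk}$, builds $\phi$ near the identity and checks the homomorphism property $\phi(\gamma\,\zeta)=\phi(\gamma)\,\phi(\zeta)$:

```python
import numpy as np
from scipy.linalg import expm, logm

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def su2_exp(v):      # exp_SU(2) of the Lie algebra element with geodesic co-ordinates v
    return expm(-0.5j * (v[0] * sigma[0] + v[1] * sigma[1] + v[2] * sigma[2]))

def so3(v):          # the same Lie algebra element written as an so(3) matrix
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def phi(U):
    # exp_SO(3) o log_SU(2): read the shared co-ordinates off log U, then
    # re-exponentiate them in SO(3); only defined for U near enough the identity
    W = logm(U)
    v = np.array([(1j * np.trace(W @ s)).real for s in sigma])
    return expm(so3(v))

rng = np.random.default_rng(3)
U = su2_exp(0.2 * rng.standard_normal(3))
V = su2_exp(0.2 * rng.standard_normal(3))
print(np.allclose(phi(U @ V), phi(U) @ phi(V), atol=1e-10))   # True: a local homomorphism
```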

It follows straight away that the group isomorphism (indeed automorphism) $\G\to\G$ defined by $\zeta\mapsto\gamma^{-1}\,\zeta\,\gamma$ is a (continuous) local isomorphism: choose a nucleus small enough that unique geodesic co-ordinates can be defined therein; then $e^X\mapsto e^{\Ad(\gamma^{-1})\,X}$ and the Lie algebra is reversibly transformed by the Lie-bracket-respecting, nonsingular $N\times N$ matrix $\Ad(\gamma^{-1})$. Now, for any $\gamma,\,\zeta\in\G$ we can write all $\G$-members in neighbourhoods of $\gamma,\,\zeta$ as $\gamma\,e^X$ and $\zeta\,e^Y$ where $X,\,Y\in\g$, so that the product is:

\begin{equation}\label{GeneralProductNeighbourhood}\gamma\, e^X\,\zeta\,e^Y=\gamma\,\zeta\,\zeta^{-1}\,e^X\,\zeta\,e^Y=\gamma\,\zeta\,\exp(\Ad(\zeta^{-1})\,X)\,\exp(Y)\end{equation}

$\zeta^{-1}\,\G\,\zeta$ is locally isomorphic to $\G$ and $\Ad(\zeta^{-1})\,\g\cong\g$, so that the local isomorphism concept is meaningful for neighbourhoods of points in $\G$ other than $\id$.
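
Equation $\eqref{GeneralProductNeighbourhood}$ is easy to check numerically; a small sketch (mine, assuming NumPy/SciPy) with $\G = SO(3)$, where $\Ad(\zeta^{-1})\,X$ is simply the conjugation $\zeta^{-1}\,X\,\zeta$ of the corresponding matrices:

```python
import numpy as np
from scipy.linalg import expm

def hat(v):      # an so(3) matrix from its co-ordinate vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(7)
gamma, zeta = expm(hat(rng.standard_normal(3))), expm(hat(rng.standard_normal(3)))
X, Y = hat(0.1 * rng.standard_normal(3)), hat(0.1 * rng.standard_normal(3))

lhs = gamma @ expm(X) @ zeta @ expm(Y)
rhs = gamma @ zeta @ expm(zeta.T @ X @ zeta) @ expm(Y)   # Ad(zeta^{-1}) X = zeta^{-1} X zeta
print(np.allclose(lhs, rhs))   # True
```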

The Campbell-Baker-Hausdorff theorem and the other tools explored so far show that, when we begin with an abstract group whose elements are connected by what seem to be mere “threads”, the group laws ultimately weave those threads into the finest $C^\omega$ cloth. We have so far concentrated on the neighbourhood $\Nid$ and its local topology and alluded to the definition of the whole group’s point set topology, defined by deeming the system of neighbourhoods (see [Mendelson]) of each point $\gamma\in\G$ to be the sets of the form $\gamma\,\mathcal{N}$ where $\mathcal{N}\subseteq\Nid$ and $\lambda(\mathcal{N})$ is open in $\V$. It is now time, helped by the CBH theorem and the tools forged so far, to study that global topology in earnest.

References:

  1. F. Casas and A. Murua, “An efficient algorithm for computing the Baker–Campbell–Hausdorff series and some of its applications”, J. Math. Phys. 50, number 3, 2009, pp. 033513-1–033513-23
  2. K. Engø, “On the BCH-formula in so(3)”, BIT Numerical Mathematics 41, number 3, 2001, pp. 629–632
  3. J. E. Campbell, “On a Law of Combination of Operators Bearing on the Theory of Continuous Groups”, Proc. London Math. Soc. 28, 1897, pp. 381–390
  4. J. E. Campbell, “On a Law of Combination of Operators Bearing on the Theory of Continuous Groups”, Proc. London Math. Soc. 29, 1898, pp. 14–32
  5. H. F. Baker, “Alternants and Continuous Groups”, Proc. London Math. Soc. 3, 1905, pp. 24–47
  6. F. Hausdorff, “Die symbolische Exponentialformel in der Gruppentheorie”, Berichte der Sächsischen Akademie der Wissenschaften (Math. Phys. Klasse) 58, 1906, pp. 19–48
  7. E. Dynkin, “Calculation of the Coefficients of the Campbell-Hausdorff Formula”, Dokl. Akad. Nauk. 57, 1947, pp. 323–326; English translation (the only one I’ve read as I don’t ken Russian) in E. B. Dynkin, “Selected Papers of E. B. Dynkin with Commentary”, American Mathematical Society and International Press, 2000
  8. Wulf Rossmann, “Lie Groups: An Introduction through Linear Groups (Oxford Graduate Texts in Mathematics)”
  9. M. Eichler, “A new proof of the Baker–Campbell–Hausdorff formula”, J. Math. Soc. Japan 20, 1968, pp. 23–25
  10. John Stillwell, “Naïve Lie Theory”, Springer Science + Business Media, New York, 2008, §2.8, Chapter 7
  11. A. A. Sagle and R. E. Walde, “Introduction to Lie Groups and Lie Algebras”, Academic Press, New York, 1973, §5.3, Theorem 5.18, Chapter 5
  12. Bert Mendelson, “Introduction to Topology: Third edition”, Dover Publications, New York, 1990, Chapter 3, §3: “Neighbourhood Spaces”.