## Symmetric spaces

This subsection will be a bit technical, but will hopefully become clearer in the next subsection. Consider a (semi-simple) Lie algebra $\mathfrak g$ associated to a Lie group $G$. The Lie bracket $\left[X,Y\right]$ can be used to define the adjoint representation $\ad X : Y \mapsto \left[X,Y\right]$, whose elements are square matrices of size $\dim(\mathfrak g)\times \dim(\mathfrak g)$. The symmetric bilinear form \begin{equation} \kappa \left( X, Y \right) \doteq \tr \left( \ad X \ad Y \right) \end{equation} is known as the Killing form, and it can be used as an inner product on the Lie algebra by setting $\langle X, Y\rangle \doteq \kappa \left( X, Y \right)$. Now look for involutive automorphisms $\sigma$ of the Lie algebra, i.e. maps satisfying $\sigma^2 =1$. The Lie algebra then decomposes according to the eigenvalues $\pm 1$ as $\mathfrak g = \mathfrak p \oplus \mathfrak k$ with $\sigma (\mathfrak p) = - \mathfrak p$ and $\sigma (\mathfrak k) = \mathfrak k$. It's easy to show that the subspaces are orthogonal, i.e. $\langle \mathfrak p, \mathfrak k\rangle=0$, and that they satisfy the commutation rules \begin{equation}\left\{ \begin{aligned} \left[\mathfrak p,\mathfrak p \right] &\subseteq \mathfrak k \\ \left[\mathfrak p,\mathfrak k \right] &\subseteq \mathfrak p\\ \left[\mathfrak k,\mathfrak k \right] &\subseteq \mathfrak k. \end{aligned} \right. \end{equation} This procedure is known as the Cartan decomposition. Note that $\mathfrak k$ is a subalgebra, but $\mathfrak p$ is not. If $K$ is the Lie group corresponding to $\mathfrak k$, then the homogeneous space $G/K$ is a symmetric space. Elements of the symmetric space can be denoted by left cosets $p K$, where $p$ is the exponential of an element of the subspace $\mathfrak p$. Left actions on the symmetric space are then well defined: $ L_g :p K \mapsto g p K = p' k K = p' K$ for some $p'$ and $k$ ($L_g$ is just left multiplication). 
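As a concrete sanity check (jumping ahead to the $sl(2;\mathbb R)$ example of the next subsection, where $\sigma(X) = -X^T$), the commutation rules and the orthogonality $\langle \mathfrak p, \mathfrak k\rangle = 0$ can be verified numerically. A minimal sketch in Python, with `ad` and `killing` built from the definitions above:

```python
import numpy as np

# Basis of sl(2;R): X1, X2 are symmetric, so sigma(X) = -X^T sends them to
# minus themselves (they span p); X3 is antisymmetric and is fixed (it spans k).
X1 = np.array([[0., 1.], [1., 0.]])
X2 = np.array([[-1., 0.], [0., 1.]])
X3 = np.array([[0., 1.], [-1., 0.]])
basis = [X1, X2, X3]

def bracket(X, Y):
    return X @ Y - Y @ X

def ad(X):
    """Matrix of ad X: column j holds the expansion of [X, basis_j] in the basis."""
    M = np.stack([Z.ravel() for Z in basis], axis=1)
    return np.stack([np.linalg.lstsq(M, bracket(X, Y).ravel(), rcond=None)[0]
                     for Y in basis], axis=1)

def killing(X, Y):
    return np.trace(ad(X) @ ad(Y))

print(bracket(X1, X2))                   # equals 2*X3: [p, p] lands in k
print(killing(X1, X3), killing(X2, X3))  # both 0: <p, k> = 0
```

The same skeleton works for any matrix Lie algebra: only the `basis` list changes.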
There's an elegant formula for the Maurer-Cartan form (the expression for the differential is originally due to Helgason, I think). Suppose that the elements of the Lie algebra are sorted as $X_i, X_j, \ldots \in \mathfrak g$, $Y_\mu, Y_\nu, \ldots \in \mathfrak p$ and $Z_a, Z_b, \ldots \in \mathfrak k$. Then an element of the symmetric space can be written as $p = \exp\left( Y (\omega)\right) \doteq \exp\left( \omega_\mu Y_\mu\right)$, and the Maurer-Cartan form takes the nontrivial form (in a slightly less obscure notation than Helgason's, I hope; note that everything can of course also be done in the right invariant formalism) \begin{equation} p^{-1} dp = \left( \frac{1-e^{-\ad Y (\omega)}}{\ad Y (\omega)} \right) \left( Y(d\omega) \right). \end{equation} In components, \begin{equation} p^{-1} \partial_\mu p = \left( \frac{1-e^{-\ad Y (\omega)}}{\ad Y (\omega)} \right) \left( Y_\mu \right) = X_k \left( \frac{1-e^{-\ad Y (\omega)}}{\ad Y (\omega)} \right)_{k \mu} . \end{equation} It's also common to write $p^{-1} dp = X_k \otimes \Omega_k$, and call the one-forms \begin{equation} \Omega_k = \left( \frac{1-e^{-\ad Y (\omega)}}{\ad Y (\omega)} \right)_{k \mu} d\omega^\mu \doteq \mathcal A_{k \mu} d\omega^\mu \label{matrix.gauge} \end{equation} the (left invariant) Maurer-Cartan forms. The left invariant metric is then \begin{equation} ds^2 \doteq \langle p^{-1} dp, p^{-1} dp \rangle = \Omega_i \kappa_{ij} \Omega_j = d\omega^\mu \left( \mathcal A^T \kappa \mathcal A \right)_{\mu\nu} d\omega^\nu, \label{metric.explicit}\end{equation} where $\kappa_{i j} = \langle X_i, X_j\rangle$. The above formula may not be very useful in practice, since it involves exponentiating the adjoint representation; the point is rather that the metric can be defined and calculated by purely algebraic operations.
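In practice the matrix $\mathcal A$ can simply be evaluated by truncating its defining power series. A minimal Python sketch (the function name `mc_matrix` is mine; it takes the matrix of $\ad Y(\omega)$ in some basis):

```python
import numpy as np

def mc_matrix(adY, nterms=40):
    """(1 - exp(-ad Y)) / (ad Y), defined by its power series
    sum_{n>=0} (-ad Y)^n / (n+1)!, which makes sense even though
    ad Y is always singular (ad Y annihilates Y itself)."""
    term = np.eye(adY.shape[0])          # n = 0 term: identity, 1/1! = 1
    A = term.copy()
    for n in range(1, nterms):
        term = term @ (-adY) / (n + 1)   # builds (-ad Y)^n / (n+1)!
        A = A + term
    return A

# sanity check on a 1x1 toy case: the series must reproduce (1 - e^{-a})/a
print(mc_matrix(np.array([[1.0]]))[0, 0])  # ~ (1 - exp(-1)) ≈ 0.6321
```

Forty terms is overkill for small $\|\ad Y\|$, but the factorials make the truncation error negligible either way.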

## $\mathbb H^2 \simeq P_2$ from $sl(2;\mathbb R)$

The Lie algebra $\mathfrak g \doteq sl(2;\mathbb R)$ can be defined as the set of traceless $2\times 2$ matrices with elements \begin{equation} X(\omega) \doteq \left( \begin{array}{cc} -\omega_2 & \omega_1 + \omega_3 \\ \omega_1-\omega_3 & \omega_2 \end{array} \right) \doteq \sum\limits_{i=1}^{3} \omega_i X_i. \label{definingrep} \end{equation} In this particular basis the matrices $X_i$ in fact satisfy the commutation relations of $so(1,2;\mathbb R)$, \begin{equation} \left\{ \begin{aligned} \left[X_1,X_2 \right]&= 2 X_3 \\ \left[X_2,X_3 \right]&= -2 X_1\\ \left[X_3,X_1 \right]&= -2 X_2, \end{aligned}\label{comms.so21} \right. \end{equation} which is just a realization of the well known isomorphism $sl(2;\mathbb R) \simeq so(1,2;\mathbb R)$ (note how flipping the sign in e.g. the first commutator brings the algebra to the familiar $so(3;\mathbb R)$ Lie algebra). Now try to find all involutive automorphisms $\sigma$ of the Lie algebra. It can be shown that in this case there is essentially only one such mapping, \begin{equation} \sigma : X \mapsto -X^T. \end{equation} It is then straightforward to see that $\mathfrak p$ is spanned by $\left\{ X_1, X_2 \right\}$ and $\mathfrak k$ by $\left\{ X_3 \right\}$. Note that $K=\exp (\mathfrak k) = SO(2;\mathbb R)$. Using radial coordinates $\left(\omega_1, \omega_2 \right) = \left( r \sin \phi, r \cos \phi \right)$, we have \begin{equation} \small{p \doteq \exp \left(\omega_1 X_1 + \omega_2 X_2 \right) = \left( \begin{array}{cc} \cosh r -\cos \phi \sinh r & \sin \phi \sinh r \\ \sin \phi \sinh r & \cosh r +\cos \phi \sinh r \end{array} \right)}, \end{equation} which is identical to the matrix in eq. (10) in part I. The matrix $\mathcal A$ in eq. \eqref{matrix.gauge} can be computed explicitly as follows. The adjoint representation can be written analogously to the defining representation in eq. 
\eqref{definingrep}, \begin{equation} \ad X(\omega) \doteq 2\left( \begin{array}{ccc} 0 & \omega _3 & -\omega _2 \\ -\omega _3 & 0 & \omega _1 \\ -\omega _2 & \omega _1 & 0 \end{array} \right) = \sum\limits_{i=1}^{3} \omega_i \ad X_i. \end{equation} It is straightforward to verify that e.g. $\ad X_1 (X_2) = 2 X_3$, where now $X_2 = \left(0,1,0\right)^T$, $X_3 = \left(0,0,1\right)^T$ and the $\ad$ representation acts in the usual way. The matrix $\mathcal A$ can be expanded in a power series (in fact the power series is the definition), \begin{equation} \mathcal A = \left( \frac{1-e^{-\ad Y (\omega)}}{\ad Y (\omega)} \right) = \sum\limits_{n=0}^{\infty} \frac{1}{(n+1)!}\left(-\ad Y(\omega) \right)^n ,\end{equation} remembering that $Y(\omega) \in \mathfrak p$, i.e. $\omega_3 = 0$. By the Cayley-Hamilton theorem, the sum on the right-hand side can be rewritten using only finitely many powers of the matrix, \begin{equation} c_0 (\omega) +c_1 (\omega) \ad Y(\omega) + c_2 (\omega) \left(\ad Y(\omega)\right)^2. \end{equation} The task of finding the coefficients $c_n (\omega)$ is straightforward, although slightly tedious (NOTE TO SELF: I once wrote a Mathematica function, similar to the MatrixExp function, that does the trick automatically... must remember to post it somewhere and insert a link here). Anyway, the result is \begin{equation} \scriptsize{ \mathcal A = \left( \begin{array}{ccc} \frac{\sinh (2 r) \cos ^2(\phi )}{2 r}+\sin ^2(\phi ) & \frac{\left(r-\frac{1}{2} \sinh (2 r)\right) \sin (\phi ) \cos (\phi )}{r} & \frac{\sinh ^2(r) \cos (\phi )}{r} \\ \frac{\left(r-\frac{1}{2} \sinh (2 r)\right) \sin (\phi ) \cos (\phi )}{r} & \frac{\sinh (2 r) \sin ^2(\phi )}{2 r}+\cos ^2(\phi ) & -\frac{\sinh ^2(r) \sin (\phi )}{r} \\ \frac{\sinh ^2(r) \cos (\phi )}{r} & -\frac{\sinh ^2(r) \sin (\phi )}{r} & \frac{\sinh (2 r)}{2 r} \end{array} \right)}. \end{equation} Using the definition in eq. 
\eqref{metric.explicit} will then yield the same metric as in part I (up to a factor of 2): \begin{equation}ds^2 = dr^2 + \sinh^2 (r) d\phi^2.\label{metricP2} \end{equation}