# Lemma 1
^49716f
A [[Matrix]] which [[Commutator|commutes]] with all matrices of an [[Irreducible Representation]] is a constant matrix, i.e., a multiple of the identity. That is, if a non-constant commuting matrix exists, the [[Group Representation|representation]] is [[Irreducible Representation|reducible]]; if none exists, the representation is irreducible.
## Proof
Let $M$ be a matrix which commutes with all the matrices $D_1, D_2, \ldots, D_h$ of the [[Irreducible Representation]]:
$
M D_\alpha = D_\alpha M.
$
Then, taking the adjoint of both sides,
$
M^\dagger D_\alpha^\dagger = D_\alpha^\dagger M^\dagger
$
Without loss of generality, we can take the $D_\alpha$ to be [[Unitary Operator|unitary]] by the theorem on [[Theorem on the Unitarity of Representations|unitarity of representations]], so that $D_\alpha^\dagger = D_\alpha^{-1}$. Multiplying $M^\dagger D_\alpha^{-1} = D_\alpha^{-1} M^\dagger$ by $D_\alpha$ on both the left and the right, we get:
$
M^\dagger D_\alpha = D_\alpha M^\dagger,
$
i.e., if $M$ commutes with all the $D_\alpha$, then so does $M^\dagger$, and hence so do the Hermitian matrices $H_1$ and $H_2$:
$
\begin{align}
H_1 &= M + M^\dagger,\\
H_2 &= i(M - M^\dagger), \\
H_j D_\alpha &= D_\alpha H_j, \quad j = 1, 2
\end{align}
$
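For later use, note that $M$ can be recovered from these two Hermitian combinations:
$
M = \frac{1}{2}\left(H_1 - i H_2\right), \qquad M^\dagger = \frac{1}{2}\left(H_1 + i H_2\right),
$
so if both $H_1$ and $H_2$ turn out to be constant matrices, then so is $M$.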
Now, we perform a [[Similarity Transformation|unitary transformation]] on the matrices of the irrep to get $\hat{D}_\alpha = U^{-1} D_\alpha U$, where $U$ is a unitary matrix that [[Diagonalizable Matrix|diagonalizes]] $H_j$ (such a $U$ exists because $H_j$ is Hermitian; take $j = 1$, say), giving the diagonal matrix $d$:
$
d = U^{-1}H_j U
$
Multiplying the equation $H_j D_\alpha = D_\alpha H_j$ by $U^{-1}$ on the left and $U$ on the right, and inserting $U U^{-1} = I$ between the two factors on each side, yields
$
d \hat{D}_\alpha = \hat{D}_\alpha d
$
So now we have a diagonal matrix which commutes with all matrices of the transformed representation. Let us take the $ij$ element of both sides:
$
d_{ii} (\hat{D}_\alpha)_{ij} = (\hat{D}_\alpha)_{ij} d_{jj}
$
or
$
(\hat{D}_\alpha)_{ij} (d_{ii} - d_{jj}) = 0
$
Suppose $d_{ii} \neq d_{jj}$ for some $i \neq j$, i.e., the matrix $d$ is not constant. Then $(\hat{D}_\alpha)_{ij}$ must be $0$ for all $\alpha$, and the unitary transformation has brought all the matrices of the representation into the same block-diagonal form, i.e., the representation is reducible. But this contradicts our assumption from the start that the representation is an irrep. Thus $d_{ii} = d_{jj}$ for all $i, j$, and $d = cI$ is a constant matrix. It follows that $H_j = U d U^{-1} = cI$ is constant too; the same argument applied to each of $H_1$ and $H_2$ (with its own diagonalizing $U$) shows both are constant, and hence $M = \frac{1}{2}(H_1 - iH_2)$ is itself constant.
In conclusion, if the representation is irreducible and there exists a matrix which commutes with all representation matrices, then the matrix is a constant matrix $\square$.
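As a numerical illustration of Lemma 1 (a minimal sketch in NumPy; the choice of the dihedral group $D_3$ and all variable names are illustrative, not from the text), we can solve $M D_\alpha = D_\alpha M$ as a linear system in the entries of $M$ and check that its solution space is spanned by the identity:
```python
import numpy as np

# Generators of the 2-dimensional unitary (real orthogonal) irrep of the
# dihedral group D3: a rotation by 2*pi/3 and a reflection about the x-axis.
# Commuting with the generators implies commuting with every group element.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
generators = [
    np.array([[c, -s], [s, c]]),          # rotation
    np.array([[1.0, 0.0], [0.0, -1.0]]),  # reflection
]

# M D = D M is linear in the entries of M. With column-stacking vec,
# vec(D M - M D) = (kron(I, D) - kron(D.T, I)) vec(M).
A = np.vstack([np.kron(np.eye(2), D) - np.kron(D.T, np.eye(2))
               for D in generators])

# The null space of A is exactly the space of commuting matrices M.
_, sing_vals, Vt = np.linalg.svd(A)
print("commutant dimension:", int(np.sum(sing_vals < 1e-10)))  # -> 1

# The single null vector, reshaped back into a matrix, is a multiple of I.
M = Vt[-1].reshape((2, 2), order="F")
print(np.allclose(M, M[0, 0] * np.eye(2)))  # -> True
```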
# Lemma 2
Consider two [[Irreducible Representation|irreducible]] matrix representations of the [[group]] [[Group Element|elements]] $a_1, a_2, \ldots, a_h$: representation $D^{(1)}(a_\alpha)$ of [[dimension|dimensionality]] $\ell_1$ and representation $D^{(2)}(a_\alpha)$ of [[dimension|dimensionality]] $\ell_2$. If there is an $\ell_2 \times \ell_1$ matrix ($\ell_1$ columns and $\ell_2$ rows) $M$ such that:
$
M D^{(1)}(a_\alpha) = D^{(2)}(a_\alpha)M
$
for $\alpha = 1, \ldots, h$, then:
1. If $\ell_1 \neq \ell_2$, $M$ must be the null matrix $\mathbb{0}$. ^fb3244
2. If $\ell_1 = \ell_2$, either
1. $M = \mathbb{0}$
2. The representations $D^{(1)}(a_\alpha)$ and $D^{(2)}(a_\alpha)$ differ from each other by a [[similarity transformation]], i.e., they are equivalent representations. ^fa8c62
## Proof
### Preliminaries
Using the [[Theorem on the Unitarity of Representations]], we will assume that the matrices of both representations $D^{(1)}(a_\alpha)$ and $D^{(2)}(a_\alpha)$ have already been made [[Unitary Operator|unitary]].
We assume $\ell_1 \leq \ell_2$ and take the adjoint of the equation above:
$
[D^{(1)}(a_\alpha)]^\dagger M^\dagger = M^\dagger [D^{(2)}(a_\alpha)]^\dagger
$
By the unitarity of the [[Group Representation|representation]], we have:
$
[D^{(j)}(a_\alpha)]^\dagger = [D^{(j)}(a_\alpha)]^{-1} = [D^{(j)}(a_\alpha^{-1})], \quad j = 1,2, \quad \alpha = 1,\ldots, h
$
Thus (the second line follows by multiplying the first on the left by $M$),
$
\begin{align}
D^{(1)}(a_\alpha^{-1})M^\dagger &= M^\dagger D^{(2)}(a_\alpha^{-1}),\\
M D^{(1)}(a_\alpha^{-1})M^\dagger &= M M^\dagger D^{(2)}(a_\alpha^{-1})
\end{align}
$
Since $a_\alpha^{-1}$ is also an element of the [[group]], the hypothesis applies to it as well:
$
\begin{align}
M D^{(1)}(a_\alpha^{-1}) = D^{(2)}(a_\alpha^{-1})M
\end{align}
$
Substituting this into the left-hand side of the previous equation, we conclude:
$
D^{(2)}(a_\alpha^{-1}) M M^\dagger = M M^\dagger D^{(2)}(a_\alpha^{-1})
$
To summarize what we have done so far: if $M D^{(1)}(a_\alpha) = D^{(2)}(a_\alpha)M$, then $M M^\dagger$ commutes with all matrices of the representation $D^{(2)}$. By [[#Lemma 1|Lemma 1]], it is then a constant matrix of [[dimension]] $\ell_2 \times \ell_2$:
$
M M^\dagger = c I
$
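As a quick observation (not needed for the proof, but it fixes the nature of $c$), taking the trace of both sides gives
$
\ell_2\, c = \operatorname{tr}(M M^\dagger) = \sum_{i,k} |M_{ik}|^2 \geq 0,
$
so $c$ is real and non-negative, and $c = 0$ precisely when $M = \mathbb{0}$.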
### Case 1: $\ell_1 = \ell_2$
In this case, $M$ is a [[square matrix]]. Suppose first that $c \neq 0$. Then $M (M^\dagger / c) = I$, so $M$ has the inverse
$
M^{-1} = M^{\dagger}/c
$
and we can write:
$
D^{(1)}(a_\alpha) = M^{-1} D^{(2)}(a_\alpha)M
$
and the representations differ only by a [[similarity transformation]] (case [[Schur's Lemma (Group Theory)#^fa8c62|(2b)]], $\square$).
However, if $c = 0$, we consider $M M^\dagger = \mathbb{0}$ directly.
$
\begin{align}
(M M^\dagger)_{ij} &= \sum_k M_{ik} M^{\dagger}_{kj} = \sum_k M_{ik}M^{*}_{jk} = 0, \\
(M M^\dagger)_{ii} &= \sum_k M_{ik}M^{*}_{ik} = \sum_k |M_{ik}|^2 = 0
\end{align}
$
Thus, $M_{ik} = 0$ for all $i, k$ and the matrix $M$ is a null matrix (case [[Schur's Lemma (Group Theory)#^fa8c62|(2a)]], $\square$).
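This case can also be illustrated numerically (a minimal sketch in NumPy; the unitary change of basis $S$ and the $D_3$ irrep are illustrative choices, not from the text): the space of matrices $M$ satisfying $M D^{(1)}(a_\alpha) = D^{(2)}(a_\alpha) M$ is one-dimensional and spanned by the similarity transformation itself.
```python
import numpy as np

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
D2_gens = [np.array([[c, -s], [s, c]]),           # rotation
           np.array([[1.0, 0.0], [0.0, -1.0]])]   # reflection

# A fixed unitary (real orthogonal) change of basis: rotation by 45 degrees.
t = np.pi / 4
S = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
D1_gens = [S.T @ D @ S for D in D2_gens]  # D1 = S^{-1} D2 S, still unitary

# Solve M D1(a) = D2(a) M for the 2x2 intertwiner M (column-stacking vec).
A = np.vstack([np.kron(np.eye(2), D2) - np.kron(D1.T, np.eye(2))
               for D1, D2 in zip(D1_gens, D2_gens)])
_, sing_vals, Vt = np.linalg.svd(A)
print("intertwiner space dimension:", int(np.sum(sing_vals < 1e-10)))  # -> 1

# The solution is proportional to S, recovering the similarity transformation.
M = Vt[-1].reshape((2, 2), order="F")
print(np.allclose(M / M[0, 0], S / S[0, 0]))  # -> True
```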
### Case 2: $\ell_1 \neq \ell_2$
Without loss of generality, we can take $\ell_1 < \ell_2$. Then $M$ has $\ell_1$ columns and $\ell_2$ rows. We can make it into a square $\ell_2 \times \ell_2$ matrix $N$ by padding it with $(\ell_2 - \ell_1)$ columns of zeros.
$
N =
\begin{pmatrix}
& & & 0 & 0 & \ldots & 0 \\
& & & 0 & 0 & \ldots & 0 \\
&M& & 0 & 0 & \ldots & 0 \\
& & & \vdots & \vdots & \ddots & \vdots \\
& & & 0 & 0 & \ldots & 0 \\
\end{pmatrix}
\quad (\text{square } \ell_2 \times \ell_2 \text{ matrix})
$
And we have the adjoint as well:
$
N^\dagger =
\begin{pmatrix}
& & & & \\
& & M^\dagger & & \\
& & & & \\
0 & 0 & 0 & \ldots & 0 \\
0 & 0 & 0 & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & 0 \\
\end{pmatrix}
$
Note that the padded zeros contribute nothing to the product, so
$
N N^\dagger = M M^\dagger = c I \quad (\text{dimension } \ell_2 \times \ell_2)
$
In particular, for the diagonal entries,
$
(N N^\dagger)_{ii} = \sum_k N_{ik} N_{ki}^\dagger = \sum_k |N_{ik}|^2 = c
$
So *all* the diagonal entries of $N N^\dagger$ are equal to $c$, and it remains to show that $c = 0$.
For this, note that $N$ contains $(\ell_2 - \ell_1)$ columns of zeros, so $\det N = 0$. Taking the [[determinant]] of both sides of $N N^\dagger = cI$ gives
$
|\det N|^2 = \det N \det N^\dagger = \det(N N^\dagger) = c^{\ell_2} = 0,
$
and we conclude that $c = 0$. The diagonal equation above then reads $\sum_k |N_{ik}|^2 = 0$ for every $i$. Thus, we get $N_{ik} = 0$ for all $i,k$ and the matrix $N$ (and consequently $M$) is a null matrix ([[#^fb3244|case 1]], $\square$).
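Finally, the same numerical setup illustrates this case (a minimal sketch in NumPy; pairing the 1-dimensional "sign" irrep of $D_3$ against its 2-dimensional irrep is an illustrative choice, not from the text): the only intertwiner between irreps of different dimensionality is the null matrix.
```python
import numpy as np

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
rot2 = np.array([[c, -s], [s, c]])          # 2-dim irrep: rotation
ref2 = np.array([[1.0, 0.0], [0.0, -1.0]])  # 2-dim irrep: reflection
rot1 = np.array([[1.0]])    # 1-dim "sign" irrep: rotations -> +1
ref1 = np.array([[-1.0]])   # 1-dim "sign" irrep: reflections -> -1

# Solve M D1(a) = D2(a) M for the 2x1 intertwiner M (column-stacking vec):
# vec(D2 M - M D1) = (kron(I_1, D2) - kron(D1.T, I_2)) vec(M).
A = np.vstack([np.kron(np.eye(1), D2) - np.kron(D1.T, np.eye(2))
               for D1, D2 in [(rot1, rot2), (ref1, ref2)]])

_, sing_vals, _ = np.linalg.svd(A)
print("intertwiner space dimension:", int(np.sum(sing_vals < 1e-10)))  # -> 0
```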