2x2 Real Matrix: Matrix Division and Finding the Square Root (positive determinant)

  Matrix A   Matrix B  
  (a) (b)   (e) (f)  
  (c) (d)   (g) (h)  
       (K1)   (K2)   Frobenius Norm:  
Entering values for K1, K2 is optional. K1 and K2 are required for the matrix equation AX = K, where X = (x, y) and K = (K1, K2).

       
  Δ(det): tr:   Δ(det): tr:  
  anti-det :   anti-tr:   anti-det :   anti-tr:  
  (tr/2)²   (tr/2)²  
  b/d: a/c:   f/h: e/g:  
  c-b:     g-f:    
  c/b:   g/f:  
  (d-a) / b   (h-e) / f  
  (d+a) / b   (h+e) / f  
  a/d:   h/e:  
  Grey: c/b = g/f, the common condition for commutative and anti-commutative.   Pink: (d-a)/b = (h-e)/f, the condition for commutative. Blue: (d-a)/b = -(h-e)/f, the condition for anti-commutative.  
  a/d = h/e is the specific condition for anti-commutative.  
  normcol1   normcol1  
  normcol2   normcol2  
  <col1,col2>   <col1,col2>  
  angle between col1 & col2 in degree   angle between col1 & col2 in degree  
  angle of col1 wrt x-axis(-a/c)in deg   angle of col1 wrt x-axis(-e/g)  
  angle of col2 wrt x-axis(-b/d)in deg   angle of col2 wrt x-axis(-f/h)  
        trB/trA  
    anti-tr/2b     anti-tr/2f  
    tr/2b     tr/2f  
  (anti-tr/2b)²+c/b   (anti-tr/2f)²+g/f  
         
           
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  λ1 +i   λ1 +i  
  |λ1|   |λ1|  
  λ2 +i   λ2 +i  
  |λ2|   |λ2|  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
  <evector1,evector2> (1-|c/b|)   <evector1,evector2> (1-|g/f|)  
  ||evA1||   ||evB1||  
  ||evA2||   ||evB2||  
 

  Angle of vector 1 w.r.t x-axis in degree   Angle of vector 1 w.r.t x-axis in degree  

 
  Angle of vector 2 w.r.t x-axis in degree   Angle of vector 2 w.r.t x-axis in degree  
  Angle of vector 2 & 1 (in degree)   Angle of vector 2 & 1 (in degree)  
             
  normalize A & B  

*After normalization, the imaginary part is omitted and the calculation is done on the real part only.

   
  Matrix C=A/B BC=A   Matrix D=B/A AD=B  
  (m) (n)   (q) (r)  
  (o) (p)   (s) (t)  
  Δ(det): tr:   Δ(det): tr:  
  (tr/2)²   (tr/2)²  
  anti-tr: anti-tr/2n   anti-tr: anti-tr/2r  
  o/n: tr/2n   s/r: tr/2r  
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  p: m:   t: q:  
  θ=tan⁻¹ √(p/m) in °   θ=tan⁻¹ √(t/q) in °  
  (anti-tr/2n)²+o/n   (anti-tr/2r)²+s/r  
  λ1 +i   λ1 +i  
  |λ1|   |λ1|  
  λ2 +i   λ2 +i  
  |λ2|   |λ2|  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
  <evector1,evector2> (1-|o/n|)   <evector1,evector2> (1-|s/r|)  
  ||evC1||   ||evD1||  
  ||evC2||   ||evD2||  
 

  Angle of vector 1 w.r.t x-axis in degree   Angle of vector 1 w.r.t x-axis in degree  

 
  Angle of vector 2 w.r.t x-axis in degree   Angle of vector 2 w.r.t x-axis in degree  
  Angle of vector 2 & 1 (in deg)   Angle of vector 2 & 1 (in deg)  
             
          Link- 1,  
  Matrix C1=A/B C1B=A   Matrix D1=B/A D1A=B  
  (m1) (n1)   (q1) (r1)  
  (o1) (p1)   (s1) (t1)  
  Δ(det): tr:   Δ(det): tr:  
  (tr/2)²   (tr/2)² -Δ  
  anti-tr: anti-tr/2n1   anti-tr: anti-tr/2r1  
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  p1: m1:        
  θ=tan⁻¹ (p1/m1) in °        
  Δθ=(θC1 -θC) in °        
  RC =Rotation Matrix for Δθ        
  (RC11) (RC12)        
  (RC21) (RC22)        
             
  λ1 +i   λ1 +i  
  λ2 +i   λ2 +i  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
 

If C and C1 are the same, then B and C are commuting matrices.

    If D and D1 are the same, then A and D are commuting matrices.    
             
  Matrix X CX=C1        
  (x11) (x12)        
  (x21) (x22)        
  Δ(det): tr:        
             
             
             
  Matrix E=A+B     Matrix F=A-B    
  (E11) (E12)   (F11) (F12)  
  (E21) (E22)   (F21) (F22)  
  Δ(det): tr:   Δ(det): tr:  
  (tr/2)²   (tr/2)²  
  anti-tr: anti-tr/2E12   anti-tr: anti-tr/2F12  
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  λ1 +i   λ1 +i  
  λ2 +i   λ2 +i  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
             
  Matrix G=√A     Matrix H=√B    
  (G11) (G12)   (H11) (H12)  
  (G21) (G22)   (H21) (H22)  
  Δ(det): tr:   Δ(det): tr:  
  (tr/2)²   (tr/2)²  
  anti-tr: anti-tr/2G12   anti-tr: anti-tr/2H12  
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  λ1 +i   λ1 +i  
  λ2 +i   λ2 +i  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
             
  Matrix G=√A          
  (G11b) (Gd12)        
  (Gd21) (G22b)        
  Δ(det): tr:        
  (tr/2)²        
  anti-tr: anti-tr/2Gd12        
  √[(tr/2)² -Δ] +i        
  λ1 +i        
  λ2 +i        
  (y/x)1: +i        
  (y/x)2: +i        
             
  Matrix AB     Matrix BA    
     
     
  Δ(det): tr:   Δ(det): tr:  
  (tr/2)²   (tr/2)²  
  anti-tr(a): anti-tr(b):   anti-tr(a): anti-tr(b):  
  anti-tr(a+b): anti-tr/2AB12   anti-tr(a+b): anti-tr/2BA12  
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  λ1 +i   λ1 +i  
  λ2 +i   λ2 +i  
  (y/x)1a: (y/x)1b:   (y/x)1a: (y/x)1b:  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
             
  Matrix[A,B]     Matrix{A,B}    
     
     
  Δ(det): tr:   Δ(det): tr:  
  (tr/2)²   (tr/2)²  
  anti-tr(a): anti-tr(b):   anti-tr(a): anti-tr(b):  
  anti-tr(a+b): anti-tr/2AcB12   anti-tr(a+b): anti-tr/2AaB12  
  √[(tr/2)² -Δ] +i   √[(tr/2)² -Δ] +i  
  λ1 +i   λ1 +i  
  λ2 +i   λ2 +i  
  (y/x)1a: (y/x)1b:   (y/x)1a: (y/x)1b:  
  (y/x)1: +i   (y/x)1: +i  
  (y/x)2: +i   (y/x)2: +i  
             
  vector X vectorAX=AX   vector Y vectorBY=BY  
     
     
  ||X||: ||AX||:   ||Y||: ||BY||:  
  Lorentz ratio Lx: ||AX|| / ||X||   Lorentz ratio Ly: ||BY|| / ||Y||  
  lorentz ratio part A:   lorentz ratio part A:  
  lorentz ratio part B:   lorentz ratio part B:  
  lorentz ratio part C:   lorentz ratio part C:  
  LorenzRatio(A+B+C)   LorenzRatio(A+B+C)  
  L2A/L2B   L2A/L2B  
         
 

Clicking above, matrix A becomes a−1, b, c, d−1.

  Clicking above, matrix B becomes e−1, f, g, h−1.  
             
             
  rotomatrix X     rotomatrix Y    
  (rx11) (rx12)   (ry11) (ry12)  
  (rx21) (rx22)   (ry21) (ry22)  
  roto-angle in degree anti-clockwise(b-a)   roto-angle in degree anti-clockwise(b-a)  
  roto-angle a   roto-angle a  
  roto-angle b   roto-angle b  
  det: tr:   det: tr:  
             
  roto  matrix Xa     roto  matrix Ya    
  (rxa11) (rxa12)   (rya11) (rya12)  
  (rxa21) (rxa22)   (rya21) (rya22)  
  det: tr:   det: tr:  
             
  roto  matrix Xb     roto  matrix Yb    
  (rxb11) (rxb12)   (ryb11) (ryb12)  
  (rxb21) (rxb22)   (ryb21) (ryb22)  
  det: tr:   det: tr:  
             
  trans-rotomatrix X   trans-rotomatrix Y  
  (trx11) (trx12)   (try11) (try12)  
  (trx21) (trx22)   (try21) (try22)  
  det: tr:   det: tr:  
             
             
             
  (a1)x² + (b1)y² + 2(f1)y + 2(g1)x + 2(h1)xy + c1 = 0

This is the general equation of a conic, with a1 = ac, b1 = bd, h1 = (anti-Δ)/2, g1 = −(aK2+cK1)/2, f1 = −(bK2+dK1)/2, c1 = K1K2.

δ = a1b1c1 + 2f1g1h1 − a1f1² − b1g1² − c1h1²

  = abcd·K1K2 + (1/4)(bK2+dK1)(aK2+cK1)(anti-Δ) − (ac/4)(bK2+dK1)² − (bd/4)(aK2+cK1)² − (K1K2/4)(anti-Δ)²

  = [(1/4)(bK2+dK1)(aK2+cK1)(anti-Δ) − (K1K2/4)(anti-Δ)²] + [abcd·K1K2 − (ac/4)(bK2+dK1)² − (bd/4)(aK2+cK1)²] = partA + partB

The first bracket (partA) is the component containing the anti-determinant; partB is the rest.

partA = (anti-Δ/4)·[(bK2+dK1)(aK2+cK1) − K1K2·(anti-Δ)]

If anti-Δ = 0, partA vanishes. The condition (bK2+dK1)(aK2+cK1) = 0 can be written as

[a  c]   [K2]   [0]
[b  d] · [K1] = [0]

The matrix above is Aᵀ.

If anti-Δ = 0, then ac ≠ bd when a, b, c, d are real numbers.

If at all ac = bd, then 2 out of the 4 matrix elements become imaginary. For convenience, we take (b, c) to become imaginary, with b = −a², c = −d².

Construction of a 2x2 idempotent matrix M:

 
  4bc < 1     4bc >1  
     +i  
     +i  
  det   det  
  trace   trace  
  M2          
         
         
             
  matrix M   matrix N  
 

Enter the figures in matrix A. M will appear at matrix B upon pressing Submit beside matrix M. Then press Submit above matrix A; BA will be the transpose of A.

   

Enter the figures in matrix A. N will appear at matrix B upon pressing Submit beside matrix N. Then press Submit above matrix A; BA will be the adjoint of A.

   
  matrix O        
 

Enter the figures in matrix A. O will appear at matrix B upon pressing Submit beside matrix O. Then press Submit above matrix A; BA will be the inverse of A.

         
             
             
           
 

*We assume that, as happens for spin-1/2 particles, rotational invariance is achieved upon a 720-degree rotation instead of the usual 360 degrees.

* Ax, Ay in terms of angle φ

       
  Initial state vector | A>

(normalized)

(Ax)

(Ay)

(normA)

 

  Initial state vector | A> (Ax)

(Ay)

(normA)

 
  φ in degree   Rotation angle θ in degree  
  Final state vector

 | A1>

(A1x1)

(A1y1)

(normA1)  

  Final state vector | A1> (A1x)

(A1y)

(normA1) 

 
             
  state vector | A2> (A2bx1)

(A2by1) 

(normA2) 

  state vector | A2> (A2bx)

(A2by) 

(normA2)

 
  When θ = (2n+1)π, where n is any integer, | A2> = 0, so at θ = 180°, 540° only A3 survives   When θ = 2nπ, where n is any integer, | A3> = 0, so at θ = 360°, 720° only A2 survives  
 

state vector | A3>

(A3bx1)

(A3by1) 

(normA3) 

  state vector | A3> (A3bx)

(A3by) 

(normA3)

 
  angle(A2,A3) in degree

A2.A3=

  angle(A2,A3) in degree

A2.A3=

 
   | A4>=| A2>+| A3> =

| A1>

(A4bx1)

(A4by1) 

(normA4) 

   | A4>=| A2>+| A3> =

| A1>

(A4bx)

(A4by) 

(normA4) 

 
   | A4>=| A2> -| A3> =

| A1>

(A4bxm1)

(A4bym1) 

(normA4) 

   | A4>=| A2>-| A3> =

| A1>

(A4bxm)

(A4bym) 

(normA4) 

 
             
             

 

To find A/B = C, i.e. BC = A:

em + fo = a ....(1)

gm + ho = c ....(2)

en + fp = b ....(3)

gn + hp = d ....(4)

From equns (1) & (2), we get m = (ah−cf)/(eh−fg) = (ah−cf)/detB

o = (a−em)/f

From equns (3) & (4), we get n = (bh−df)/detB

p = (b−en)/f
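As a sanity check, the elimination above can be sketched in Python (the helper names `matmul2` and `divide_right` are mine, not from the page):

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def divide_right(A, B):
    """Return C with B @ C = A (i.e. C = A/B), using equns (1)-(4) above.

    Assumes det(B) != 0 and f != 0, since the back-substitution divides by f.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    detB = e * h - f * g
    m = (a * h - c * f) / detB   # from (1) & (2)
    o = (a - e * m) / f
    n = (b * h - d * f) / detB   # from (3) & (4)
    p = (b - e * n) / f
    return [[m, n], [o, p]]
```

Multiplying B by the recovered C reproduces A, which is the defining property BC = A.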

To find the square root of the matrix A:

Let the matrix be G, so that G² = A.

Then detG = G11·G22 − G21·G12 = √(detA) .......(1)

G11² + G12·G21 = a ........(2)

(1)+(2): G11(G11+G22) = a + √(detA) ........(2a)

Similarly G22² + G12·G21 = d .........(3)

G11·G22 − G12·G21 = √(detA) ....(4)

Adding (3) & (4), we get

G22(G11+G22) = d + √(detA) ......(4a)

Dividing (2a) by (4a), we get [a+√(detA)] / [d+√(detA)] = G11/G22 = k ...(5)

Since k is known, G11 = k·G22 ......(6)

Now a−d = G11² − G22² = k²G22² − G22² = G22²(k²−1), or G22 = √[(a−d)/(k²−1)] .....(7)

G11 = k·G22

b/c = G12/G21 = m, or G12 = m·G21 .........(8)

d = G12·G21 + G22² .........(9)

or d − G22² = G21·m·G21 = G21²·m

or G21 = √[(d − G22²)/m]

From equn (8), G12 = m·G21 .......(10)
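A hedged Python sketch of the same back-substitution, assuming det A > 0, a ≠ d and b, c nonzero with the intermediate quantities positive so that every root stays real (the function name is mine):

```python
import math

def sqrt2x2(A):
    """Square root G of a 2x2 matrix A via equations (5)-(10) above."""
    (a, b), (c, d) = A
    s = math.sqrt(a * d - b * c)            # det G = sqrt(det A)
    k = (a + s) / (d + s)                   # (5): G11/G22
    G22 = math.sqrt((a - d) / (k * k - 1))  # (7)
    G11 = k * G22                           # (6)
    m = b / c                               # (8): G12/G21
    G21 = math.sqrt((d - G22 * G22) / m)    # from (9)
    G12 = m * G21                           # (10)
    return [[G11, G12], [G21, G22]]
```

The other sign branches of the roots give the remaining square roots of A.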

Frobenius Norm of a Matrix:

It is the square root of the sum of the squares of all the elements of the matrix.
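In code this is a one-liner; a small Python sketch (function name is mine):

```python
import math

def frobenius_norm(M):
    """Square root of the sum of the squares of all matrix elements."""
    return math.sqrt(sum(x * x for row in M for x in row))
```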

Idempotent Matrix: if A is such a matrix, then A² = A.

If matrix A = [a b; c d], then A² = [a²+bc  b(a+d); c(a+d)  d²+bc] = [a b; c d].

Equating, we get b(a+d) − b = 0, or b(a+d−1) = 0 .......(1)

c(a+d) − c = 0, or c(a+d−1) = 0 .......(2)

a²+bc−a = 0, or a²−a+bc = 0 .....(3), so a = [1 ± √(1−4bc)]/2 ......(5)

d²+bc−d = 0, or d²−d+bc = 0 ....(4), so d = [1 ± √(1−4bc)]/2 .....(6)

Then, from (1) & (2), the conditions are:

(a) b = c = 0, a+d−1 ≠ 0: matrix A is a diagonal matrix, and a²−a = 0 from (3), or a(a−1) = 0, so a = 0 or a = 1. Similarly d = 0 or d = 1. The combinations with a+d ≠ 1 are a = 1, d = 1 and a = 0, d = 0:

[1 0; 0 1]   tr 2, det 1, ev: 1, 1

[0 0; 0 0]   tr 0, det 0, ev: 0, 0

(b) b = c = 0, a+d−1 = 0: matrix A is diagonal with trace a+d = 1. Then a = 1, d = 0 or a = 0, d = 1:

[1 0; 0 0]   tr 1, det 0, ev: 1, 0

[0 0; 0 1]   tr 1, det 0, ev: 0, 1

(c) b = 0, c ≠ 0, then a+d−1 = 0; trace = a+d = 1, A is a lower triangular matrix, and a+d = 1 means a = 0, d = 1 or vice versa:

[1 0; c 0]   tr 1, det 0, ev: 0, 1

[0 0; c 1]   tr 1, det 0, ev: 0, 1

(d) b ≠ 0, c = 0, then a+d−1 = 0; trace = a+d = 1, A is an upper triangular matrix, and a+d = 1 means a = 0, d = 1 or vice versa:

[1 b; 0 0]   tr 1, det 0, ev: 1, 0

[0 b; 0 1]   tr 1, det 0, ev: 0, 1

(e) b ≠ 0, c ≠ 0, then a+d−1 = 0; trace = a+d = 1.

From (5) & (6), a = [1 ± √(1−4bc)]/2 and d = [1 ∓ √(1−4bc)]/2, so that a+d = 1:

[1 ± √(1−4bc)]/2            b

     c                      [1 ∓ √(1−4bc)]/2

trace = 1, det = 0, ev: 0, 1

eigen vector

1        0

0        0
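A quick numerical check of case (e), assuming 4bc < 1 so that the root is real (function names are mine):

```python
import math

def idempotent_from_bc(b, c):
    """Case (e): a = [1 + sqrt(1-4bc)]/2, d = [1 - sqrt(1-4bc)]/2,
    so trace = a + d = 1 and the matrix is idempotent."""
    r = math.sqrt(1 - 4 * b * c)
    return [[(1 + r) / 2, b], [c, (1 - r) / 2]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For any admissible b, c, the resulting matrix satisfies M² = M with trace 1 and determinant 0.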

 

The red matrices are also Projection Operators.                                                                 

If a+d = −1, then A² = −A.

The matrix can be rewritten as [a  b; a(1−a)/b  1−a]. An example is [3  −6; 1  −2].

The eigen values can be found from the characteristic equation, and they are 0 and 1. The matrix is positive semi-definite, since one eigen value is positive and the other is zero.

Ratio of y,x component of eigen vector is y/x = 1/2b  ± 1/2b = 1/b, 0

* The 2x2 idempotent matrices are all singular barring the identity matrix. Their eigen values are either 0 or 1.

A = (1/2)[1+cos x    sin x;  sin x    1−cos x] = [cos² x/2    sin x/2 · cos x/2;  sin x/2 · cos x/2    sin² x/2]

is idempotent.

In the above idempotent matrix, I·A = A (trivial),

A·A = A (by definition of an idempotent matrix),

B·A = A (by definition of a projection matrix),

where B = [sec² x/2    −tan x/2;  −cot x/2    cosec² x/2] and Δ = 4cosec² x − 1.

To find B, let B = [e f; g h]. Then B·A = A gives

(e − 1)(1+cos x) + f sin x = 0 ........(1)

(e − 1) sin x + f(1−cos x) = 0 ......(2)

(h − 1) sin x + g(1+cos x) = 0 .....(3)

(h − 1)(1−cos x) + g sin x = 0 ......(4)

From equns (1)-(4), we get gf = (e−1)(h−1) .......(5); we can set equn (5) to zero.

We take equation (2): e = 1 − f · 2sin² x/2 / (2 sin x/2 cos x/2) = 1 − f tan x/2.

If we put f = −tan x/2, then e = sec² x/2.

Now we take equation (3). By analogy, h = 1 − g · 2cos² x/2 / (2 sin x/2 cos x/2) = 1 − g cot x/2.

If we put g = −cot x/2, then h = cosec² x/2.
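The relations A·A = A and B·A = A can be checked numerically; a small Python sketch (helper names are mine):

```python
import math

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def A_of(x):
    """A = (1/2)[[1+cos x, sin x], [sin x, 1-cos x]], the idempotent matrix."""
    return [[(1 + math.cos(x)) / 2, math.sin(x) / 2],
            [math.sin(x) / 2, (1 - math.cos(x)) / 2]]

def B_of(x):
    """B = [[sec^2(x/2), -tan(x/2)], [-cot(x/2), cosec^2(x/2)]]."""
    h = x / 2
    return [[1 / math.cos(h) ** 2, -math.tan(h)],
            [-1 / math.tan(h), 1 / math.sin(h) ** 2]]
```

For a generic angle (avoiding multiples of π, where cot and cosec blow up), B·A reproduces A exactly.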

If A = [0 0; 0 1], then B = [e 0; g 1].

If A = [1 0; 0 0], then B = [1 f; 0 h].

In case of a projection matrix A,

BA=A where B is a representation. In this case, the representation is reducible.

If in addition,  B (I - A ) =I -A where I is the unit matrix, representation B is fully reducible.

With 0 and 1, we can construct 4·4 = 16 2x2 matrices. These are:

(A) [0 0; 0 0], [1 1; 1 1] (0 or 4 ones). If one interchanges 1 and 0, the matrices remain in (A). Self-reflecting.

(B) [0 1; 1 1], [1 0; 1 1], [1 1; 1 0], [1 1; 0 1] (3 ones each). If one interchanges 1 and 0, the matrices go into (D). Not self-reflecting.

(C) [0 0; 1 1], [1 1; 0 0], [1 0; 1 0], [0 1; 0 1], [1 0; 0 1], [0 1; 1 0] (2 ones each). If one interchanges 1 and 0, the matrices remain in (C). Self-reflecting.

(D) [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] (1 one each). If one interchanges 1 and 0, the matrices go into (B). Not self-reflecting.

The number of self-reflecting matrix groups is 2, containing 8 matrices; the number of non-self-reflecting matrix groups is 2, also containing 8 matrices. Look at the symmetry!

There are 10 singular matrices and 6 non-singular matrices. Out of the 6 non-singular matrices, 2 are involutory.
                  singular   non-singular   Total   Involutory
idempotent           07          01           08       01
non-idempotent       03          05           08       01
TOTAL                10          06           16       02

Blue (2) are the involutory matrices; pink (7) are the idempotent matrices, which are singular; the red matrix is singular and can be made idempotent by multiplication by 1/2. If we include the unit matrix, which is idempotent, then the total number of idempotent matrices is 8 and of non-idempotent matrices is 8. Look at the symmetry again!

No. of idempotent matrices = 8 (A = 1, B = 0, C = 5, D = 2; total = 8)

No. of non-idempotent matrices = 8 (A = 1, B = 4, C = 1, D = 2; total = 8)

Among A, B, C, D, the two idempotent matrices in (D) are projection matrices.
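The census above can be verified by brute force over all sixteen 0/1 matrices; a short Python sketch using plain integer arithmetic:

```python
from itertools import product

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# All 16 matrices with entries 0 or 1
mats = [[[a, b], [c, d]] for a, b, c, d in product((0, 1), repeat=4)]
singular   = [M for M in mats if det2(M) == 0]
idempotent = [M for M in mats if matmul2(M, M) == M]
involutory = [M for M in mats if matmul2(M, M) == [[1, 0], [0, 1]]]
```

The counts come out as in the table: 10 singular, 6 non-singular, 8 idempotent (7 of them singular), 2 involutory.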

D1 = [1 1; 0 1] is the representation matrix for P1 = [1 0; 0 0], as D1·P1 = P1.

D2 = [1 0; 1 1] is the representation matrix for P2 = [0 0; 0 1], as D2·P2 = P2.

D1, D2 are reducible but not fully reducible. In general, we can express D1 = [1 1; 0 c] and D2 = [c 0; 1 1].

Inverses of the 4 non-involutory, non-idempotent matrices:

[1 1; 0 1] has inverse [1 −1; 0 1]

[1 1; 1 0] has inverse [0 1; 1 −1]

[1 0; 1 1] has inverse [1 0; −1 1]

[0 1; 1 1] has inverse [−1 1; 1 0]

The non-idempotent matrix [1 1; 1 1] becomes idempotent on multiplication by 1/2, becoming [1/2 1/2; 1/2 1/2]. This matrix is then a projection matrix with respect to the representation [1/2 1/2; 0 1].

 

Projection operator is represented by a matrix which is IDEM Potent and the projection is either orthogonal (if the projection matrix is symmetric, it is orthogonal) or oblique.

An oblique 2x2 projection matrix is given by [0 0; x 1].

If x = 0, it is an orthogonal projection. The condition of orthogonality for a projection matrix is P² = P = Pᵀ.

The orthogonal projection of the vector X = (x, y, z) onto the x-y plane is x = (x, y, 0).

If T is the linear transformation that maps every vector in ℝ³ to its orthogonal projection in the x-y plane, the corresponding matrix A is [1 0 0; 0 1 0; 0 0 0], and AX = x.

*A representation is completely reducible if all the matrices D(Ai) in the representation can be simultaneously brought into block-diagonal form by the same similarity transformation matrix S, i.e. by S·D(Ai)·S⁻¹. In other words, all the group actions can be realized in some subspace.

It is interesting to note that every representation of a finite group is completely reducible and also equivalent to a unitary representation.

Periodic Matrix:

A square matrix A is called a periodic matrix if, for some positive integer m, Aᵐ = A. The periodicity of such a matrix is m−1.

Example :

[1 −2 −6; −3 2 9; 2 0 −3]   here m = 3 and the periodicity is 2.

An n-periodic matrix is diagonalizable over the complex field.

A = [0 0 1; 1 0 0; 0 1 0] satisfies A³ = I. Hence A⁴ = A; the periodicity is 3.
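Both examples can be checked by direct multiplication; a small Python sketch (helper names are mine):

```python
def matmul(X, Y):
    """Multiply two square matrices of the same size."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(M, p):
    """M raised to the positive integer power p."""
    R = M
    for _ in range(p - 1):
        R = matmul(R, M)
    return R

P = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]       # cyclic permutation: P^3 = I
M = [[1, -2, -6], [-3, 2, 9], [2, 0, -3]]   # the example with m = 3
```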

Heisenberg Matrix:

H = [1 a c; 0 1 b; 0 0 1]

This is an upper triangular matrix. All such matrices form a group, called the Heisenberg group, under matrix multiplication. The center of the group is the set whose elements commute with all the elements of the group. The center of the Heisenberg group consists of matrices of the form

C = [1 0 z; 0 1 0; 0 0 1]

Nilpotent Matrix:

A square matrix A is said to be nilpotent of index k (where k is a positive integer) if Aᵏ = O and Aᵏ⁻¹ ≠ O, where O is the null matrix.

[1 1 3; 5 2 6; −2 −1 −3] is a nilpotent matrix of index 3.

All the eigen values are zero.

The smallest such k is the index of the matrix.

Any n×n triangular matrix with zeros along the main diagonal is nilpotent with index at most n.

The determinant and trace of a nilpotent matrix are zero.
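A quick check of the example above in Python (the helper name is mine):

```python
def matmul(X, Y):
    """Multiply two square matrices of the same size."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The nilpotent example above: N^2 != 0 but N^3 = 0, so its index is 3
N = [[1, 1, 3], [5, 2, 6], [-2, -1, -3]]
N2 = matmul(N, N)
N3 = matmul(N2, N)
```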

We show that any complex singular square matrix T is a product of two nilpotent matrices A and B with rank A = rank B = rank T except when T is a 2 X 2 nilpotent matrix of rank one.

Hessian Matrix :

is a square matrix of second-order partial derivatives of a scalar-valued function f. It is a symmetric matrix. It describes the local curvature of a function of many variables. The Hessian matrix of a convex function is positive semi-definite.

If the Hessian is positive definite at x (all eigen values positive), f attains an isolated local minimum at x.

If the Hessian is negative definite at x (all eigen values negative), f attains an isolated local maximum at x.

If the Hessian has both positive and negative eigen values, x is a saddle point of f.

The second-derivative test for functions of one and two variables is simpler than the general case. In one variable, the Hessian contains exactly one second derivative; if it is positive, then x is a local minimum, and if it is negative, then x is a local maximum; if it is zero, the test is inconclusive. In two variables, the determinant can be used, because the determinant is the product of the eigen values. If it is positive, then the eigen values are both positive or both negative. If it is negative, then the two eigen values have different signs. If it is zero, then the second-derivative test is inconclusive.

The Hessian matrix of a function f  is the Jacobian matrix of the gradient of the function 

Suppose f : ℝⁿ → ℝ is a function taking as input a vector x ∈ ℝⁿ and outputting a scalar f(x) ∈ ℝ. If all second partial derivatives of f exist and are continuous over the domain of the function, then the Hessian matrix of f is an n×n square matrix.

A 2x2 Hessian matrix is

[∂²f/∂x1²        ∂²f/∂x1∂x2]

[∂²f/∂x1∂x2      ∂²f/∂x2²]

 

Jacobian Matrix :

In vector calculus, the Jacobian matrix  of a vector-valued function in several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian

A 2x2 Jacobian D is represented as

[∂f1/∂x1      ∂f1/∂x2]

[∂f2/∂x1      ∂f2/∂x2]

Some represent the Jacobian as the transpose of the above. The Jacobian can also be used to determine the stability of equilibria for systems of differential equations by approximating behavior near an equilibrium point.

Degrees of Freedom of n x n Matrix :

The dimension of the matrix is n and so also the dimension of eigen vector.

Number of Degrees of freedom = n! / [(n-2)! * 2!]

For a vector of dimension n, corresponding bivector has dimension =n! / [(n-2)! * 2!]

So an orthogonal matrix or bivector may be able to represent

(a) plane in n-dimensions.

(b) rotation in n-dimensions.

Commutative Property of matrix [A,B] i.e. AB-BA :

Condition 1: If (anti-trace/2b) of matrix A = (anti-trace/2f) of matrix B, or (y/x)1 of A = (y/x)1 of B, then

(y/x)2 of AB =(y/x)2 of BA . Also AB12=BA12;

Condition 2: If (b/c) of Matrix A =(f/g) of Matrix B, then AB11=BA11, AB22=BA22. But (y/x)1 and (y/x)2 of AB may not have any relation with that of BA .

Condition 1 + Condition 2 : If both conditions are fulfilled, then [A,B] =0

anti-trace of AB & BA: It consists of  2 parts, part a and part b. Both the parts are same in AB as well as BA. But in AB total anti-trace is sum of 2 parts whereas in BA, it is difference of 2 parts.

anti-trace part a = (dh−ae), the same for AB and BA; anti-trace part b = (cf−bg), also the same for both.

Total anti-trace of AB = part a + part b; total anti-trace of BA = part a − part b.
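A numerical check of the part a / part b split (helper names are mine):

```python
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def antitrace(M):
    """anti-trace = bottom-right element minus top-left element."""
    return M[1][1] - M[0][0]

def antitrace_parts(A, B):
    """part a = dh - ae, part b = cf - bg, as in the text."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return d * h - a * e, c * f - b * g
```

For any A, B, the anti-trace of AB is part a + part b, and the anti-trace of BA is part a − part b.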

(y/x) part 1 of AB and BA: Due to splitting of anti-trace , the (y/x) 1 is also split into part a and part b. Both are identical in AB and BA. In AB, both parts are added whereas in BA, these are subtracted. Their value not only depends on anti-trace of originators i.e matrix A & B but also on AB12, BA12.  If AB12=BA12 =k,

then they have the form (X +Y) /k and (X-Y)/k respectively for AB, BA.

Possible Relation With Uncertainty Principle: Heisenberg's uncertainty principle states that two attributes or observables of a physical system cannot be measured simultaneously with exact precision if they are canonical conjugates of each other, i.e one is a Fourier transform of another. We find that for 2 observables to interact , their (y/x)1 of Eigen vectors of A and B should be same. This ensures that (y/x)2 of the commuting products AB and BA remains same. Now coming to (y/x)1 of AB,BA, their components remain the same in magnitude but differ by a minus sign. It may so happen that in case of non-commuting observables, when the corresponding operators interact in 2 different ways i.e. AB and BA, there is a reflection of y component of part 1 by 180 degree whereas for commuting operators, this phase shift does not occur.

Another Approach to Commutative Property of matrix [A,B] i.e. AB-BA :

[A,B] = AB − BA = X =

[bg − cf               −f(d−a) + b(h−e)]

[g(d−a) − c(h−e)       −(bg − cf)]

Trace of X = [A,B] = 0, which implies that any square matrix with zero trace can be expressed as the commutator of a pair of square matrices of the same order. It may also be noted that the trace of a product of equal-sized matrices functions similarly to the dot product of vectors. Trace is also a map from the Lie algebra gl(n) to k (k a scalar; n the order of the matrix), i.e. a mapping from operators to scalars.

Condition a: Now we put b/c = f/g, condition 2 as above. Then the matrix becomes

[0                     −f(d−a) + b(h−e)]

[g(d−a) − c(h−e)       0]

det[A,B] = det X = fg(d−a)² + bc(h−e)² − (d−a)(h−e)(bg+fc) = fg(d−a)² + bc(h−e)² − 2(d−a)(h−e)bg = [√(fg)·(d−a) − √(bc)·(h−e)]²

and the determinant is either zero or positive, since it is a perfect square; hence X is a positive semi-definite matrix.

Condition a1: If d = a and h = e, then det X = 0; or if √(fg)·(d−a) − √(bc)·(h−e) = 0, i.e. (h−e)/√(fg) = (d−a)/√(bc), then det X = 0 (see link 1). If X is a positive semi-definite matrix, then its square root is a unique semi-definite matrix.

Condition a2: If fg = bc, then det X = bc·[(d−a) − (h−e)]². This condition implies that g = ±c, f = ±b.

Condition b: If d−a = h−e, then

det X = −(bg−cf)² + (d−a)²(b−f)(c−g) = −(bg−cf)² + (h−e)²(b−f)(c−g)

Condition c:

If d = a, then det X = −(bg−cf)² + (h−e)²·bc

If h = e, then det X = −(bg−cf)² + (d−a)²·fg

When do matrix A and matrix B commute? When

b/c = f/g .....(a)

and (anti-trace of A)/2b = (anti-trace of B)/2f, or (d−a)/(h−e) = b/f .......(b)

i.e. when both conditions (a) and (b) are satisfied.
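A small numerical check: the matrices below are chosen to satisfy both (a) and (b), and they do commute (the particular entries are my own illustration, not from the page):

```python
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A = [a b; c d], B = [e f; g h] with
#   (a) b/c = f/g:           2/4 = 3/6
#   (b) (d-a)/(h-e) = b/f:   (3-1)/(4-1) = 2/3
A = [[1, 2], [4, 3]]
B = [[1, 3], [6, 4]]
```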

Anti-Commutative property of Matrix {A,B} i.e. AB+BA:

{A,B} = AB + BA = Y =

[2ae + (bg+cf)         b(h+e) + f(d+a)]

[g(d+a) + c(e+h)       2dh + (bg+cf)]

Condition for A,B to anti commute

a/d=h/e  ....(1)

trA/2b + trB/2f=0 ....(2)

(b/c) + (f/g) +2(a/c)(e/g) =0 .......(3)

More transparent condition:

a/d=h/e  ....(1)    [Choose a,d,h,e accordingly]

bg + cf = -2ae .......(2)

f/b = g/c = −trB/trA = −k ....(3)

choose b

f=-bk

c=-ae/f

g=-ck

It will be observed that bg=cf=-ae

proof: bg+cf=-2ae; or b(-ck) +c(-bk) =-2ae which implies that bg=cf=-ae;

With the above conditions, it is found that both AB & BA are null matrices and hence A,B commute as well as anti-commute at this value. Moreover, both A and B are singular matrices.

General condition for A & B to commute or anti-commute

b/c  = f/g

Other Condition :-

(d ± a) / (h ± e) = ∓ (b/f )    In LHS, + for anti- commutative & - for commutative and reverse for RHS.

Specific condition for anti-commutative property :

a /d =h/e

When matrix A is divided by matrix B, the results are a matrix C such that BC = A and a matrix C1 such that C1B = A. Only if the matrices B and C commute is C1 = C; otherwise not. In the general case, multiplying C on the right side gives A; to get A with multiplication on the left side, C1 is used instead of C. Thus, depending on the choice of direction, C is modified to C1, with CX = C1, where X is the modification matrix. CX and XC do not necessarily commute.

X21=(m*o1-m1*o)/detC   ; X11=(m1-nX21)/m; X22=(m*p1-n1*o)/detC; X12=(n1-n*X22)/m ;

The equn are

[m n; o p] · [X11 X12; X21 X22] = [m1 n1; o1 p1]

i.e. [m n; o p] · [X11; X21] = [m1; o1] and [m n; o p] · [X12; X22] = [n1; p1]

* If A,B are both symmetric matrices, then {A,B} is a symmetric matrix and [A,B] is skew-symmetric matrix.
* While finding the eigen vector of the 2x2 matrix [a b; c d]:

(eigen vector y-component) / (eigen vector x-component) = (anti-trace/2b) ± (1/b)√[(tr/2)² − det] = (anti-trace/2b) ± √[(anti-trace/2b)² + c/b]

We already know that (y/x)part2 = (1/b)√[(tr/2)² − det] = (1/b)√[a²/4 + d²/4 + ad/2 − ad + bc] = √[((d−a)/2b)² + c/b] = (1/2)√[((d−a)/b)² + 4c/b]. The (anti-trace/2b) portion is (y/x)part1.

If ((d−a)/2b)² > c/b, (y/x)part2 is +ve

   ((d−a)/2b)² = c/b, (y/x)part2 is +ve

   ((d−a)/2b)² < c/b, (y/x)part2 is −ve

If the matrix is hermitian, c/b = 1:

   ((d−a)/2b)² > 1: (y/x)part2 is +ve and greater than √2

   ((d−a)/2b)² = 1: (y/x)part2 is √2

   0 < ((d−a)/2b)² < 1: (y/x)part2 is greater than 1, less than √2

   ((d−a)/2b) = 0: (y/x)part2 is +1 or −1

(y/x) = (d−a)/2b ± (1/b)√[((a+d)/2)² − (ad−bc)], if we take x = 1;

then y = (d−a)/2b ± (1/b)√[((a+d)/2)² − (ad−bc)] = (d−a)/2b ± (1/2)√[((d−a)/b)² + 4c/b]

which is the solution of the quadratic equation

Ay² − By − C = 0, or y² − By − C = 0,

where A = 1, B = (d−a)/b, C = c/b

If B = 1, C = 1 (as in the case of hermitian matrices):

y² − y − 1 = 0   (y² = 2.618; y = 1.618)

This is a polynomial of degree 2 having roots y1 = 1.6180, y2 = −0.6180.

For y = 0.6180 (= |y2|), 1/y = y + 1, where 1/y is called the Golden Ratio φ.

φ = 2cos 36°

These correspond to the fact that the length of the diagonal of a regular pentagon is φ times the length of its side, and to similar relations in a pentagram.

The number φ=y turns up frequently in geometry, particularly in figures with pentagonal symmetry. The length of a regular pentagon's diagonal is φ times its side. The vertices of a regular icosahedron are those of three mutually orthogonal golden rectangles.

The right triangle with sides 1, √φ, φ is called the Kepler triangle and is the only right triangle whose sides are in geometric progression.

* A linear fractional transformation is, roughly speaking, a transformation of the form

Z-> (az+b) / (cz+d) where ad-bc not equal to zero..  In other words, a linear fractional transformation is a transformation that is represented by a fraction whose numerator and denominator are linear.

*A′·A = A·A′ = Δ·I, where Δ = determinant of the square matrix A and A′ is the adjoint (adjugate) of A. The proof lies with the fact that A⁻¹A = I and A⁻¹ = A′/Δ.

* A symmetric matrix has real eigen values. We prove it for a 2x2 matrix. Let A = [a b; c d].

eigen value λ = (a+d)/2 ± √[(a+d)²/4 − (ad−bc)] = (a+d)/2 ± √[(d−a)²/4 + bc] = tr/2 ± √[(anti-tr/2)² + bc]

For a symmetric matrix, b = c, hence bc = b², so that the quantity under the square root is non-negative. Hence the λs are real.

For non-symmetric matrices, if bc is negative and the absolute value of bc is > (anti-tr/2)², then the eigen values are complex.

For non-symmetric matrices, if bc is negative and the absolute value of bc is < (anti-tr/2)², then the eigen values are real.

* Now for a 2x2 matrix:

(a) If (tr/2)² > [(anti-tr/2)² + bc], then ad > bc, or Δ > 0, irrespective of whether ad, bc are positive, negative or zero. The matrix is non-singular.

(b) If (tr/2)² = [(anti-tr/2)² + bc], then ad = bc, or Δ = 0, irrespective of whether ad, bc are positive, negative or zero. The matrix is singular.

(c) If (tr/2)² < [(anti-tr/2)² + bc], then ad < bc, or Δ < 0, irrespective of whether ad, bc are positive, negative or zero. The matrix is non-singular.

        

Nature of Eigen Values (2x2 real matrix)

trace    det     eigen values    remark

 > 0     > 0     +, +            positive definite

 = 0     > 0     X               inconsistent for real eigen values (they are purely imaginary)

 < 0     > 0     −, −            negative definite

 > 0     = 0     +, 0            positive semi-definite

 = 0     = 0     0, 0

 < 0     = 0     −, 0            negative semi-definite

 > 0     < 0     +, −            indefinite

 = 0     < 0     +, −            indefinite

 < 0     < 0     −, +            indefinite

 

*A is called a normal matrix if AAᵀ = AᵀA, i.e. [A, Aᵀ] = 0. For a real 2x2 matrix A = a  b

                                                                                      c  d        this happens when b = c (A symmetric) or when a = d and b = −c (A of rotation type).

* Normalization of the matrix is done by dividing each matrix element by square root of determinant.
Properties of different types of matrices:

matrix                         eigen values                          eigen vectors

Hermitian                      real                                  orthogonal

Anti-Hermitian                 purely imaginary or zero              orthogonal

real but not symmetric         real or a complex-conjugate pair

real and symmetric             real                                  orthogonal

unitary                        unit magnitude                        orthogonal

Normal                         A has eigen value λ ⟹ A† has eigen value λ* ;  A and A† have the same eigen vectors

Eigen Value of 2x2 square matrices

λ = tr/2 ± √[(tr/2)² − Δ] = tr/2 ± √[(anti-tr/2)² + bc]

(y/x) = anti-tr/(2b) ± (1/b)√[(anti-tr/2)² + bc] = anti-tr/(2b) ± √[(anti-tr/(2b))² + c/b]

EV1 = Eigen vector 1 = i + (anti-tr/(2b) + √[(anti-tr/(2b))² + c/b]) j

EV2 = Eigen vector 2 = i + (anti-tr/(2b) − √[(anti-tr/(2b))² + c/b]) j

<EV1,EV2> =1 - |(c/b)|

The absolute value of c/b is taken because, if c/b is negative and/or the square root in the coefficient of j is an imaginary number, say

EV1 = 1i' + i·j'

EV2 = 1i' − i·j'

then multiplication of the coefficients including i yields 1 − i² = 1 + 1 = 2, which is incorrect; taking only the real coefficients yields 1 − 1 = 0. So the absolute value of c/b is taken.

||EV1|| = √(1 + [anti-tr/(2b) + √((anti-tr/(2b))² + c/b)]²) ;

||EV2|| = √(1 + [anti-tr/(2b) − √((anti-tr/(2b))² + c/b)]²) ;
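As a sanity check, the eigenvector slope formula above can be verified against a direct matrix-vector product; a sketch with arbitrary sample entries (numpy assumed):

```python
import numpy as np

# arbitrary sample matrix with b != 0
a, b, c, d = 2.0, 1.0, 3.0, 5.0
tr, anti = a + d, d - a

# slopes (y/x) = anti-tr/(2b) +/- sqrt((anti-tr/(2b))^2 + c/b)
s = np.sqrt((anti / (2*b))**2 + c / b)
slope1 = anti / (2*b) + s
slope2 = anti / (2*b) - s

A = np.array([[a, b], [c, d]])
lam1 = tr/2 + np.sqrt((anti/2)**2 + b*c)
lam2 = tr/2 - np.sqrt((anti/2)**2 + b*c)

# EV = (1, slope) must satisfy A v = lambda v
v1 = np.array([1.0, slope1])
v2 = np.array([1.0, slope2])
assert np.allclose(A @ v1, lam1 * v1)
assert np.allclose(A @ v2, lam2 * v2)
```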

2x2 Orthogonal Matrix:

Orthogonality condition:

Let A=

a     b

c     d

 AT=

a     c

b    d

AT =A-1 ;

 AAT=

a2 +b2        ac+bd

ac+bd         c2 +d2

Since AA-1 =I, so I=ATA=AAT

which implies from AAT=I

ac+bd=0

 a2 +b2  =c2 +d2    =1

which leads to

b=c and a=−d , a² + b² = 1.......(1) The matrices with these conditions are reflection matrices.

or

b=−c and a=d , a² + b² = 1.......(2) The matrices with these conditions are rotation matrices.

(1) leads to 4 types of matrices

A1=

     a            √(1-a2)

√(1-a2)             -a       and  Δ= -1

A2=

     a            -√(1-a2)

-√(1-a2)            - a    and  Δ= -1

A3=

     -a            √(1-a2)

√(1-a2)             a       and  Δ= -1

A4=

     -a            -√(1-a2)

-√(1-a2)             a       and  Δ= -1

 

(2) leads to 4 types of matrices

A5=

     a            √(1-a2)

-√(1-a2)             a     and  Δ= 1

A6=

     a            -√(1-a2)

√(1-a2)             a      and  Δ= 1

A7=

     -a            √(1-a2)

-√(1-a2)             -a     and  Δ= 1

A8=

     -a            -√(1-a2)

√(1-a2)             -a      and  Δ= 1

 

This gives 8 types of orthogonal matrices, 4 rotation and 4 reflection. You get another 8 orthogonal matrices by just interchanging a with √(1−a²).

Out of the total 16 orthogonal matrices, 8 are rotation and 8 are reflection. All matrix elements are functions of one variable, i.e. a.
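The 8 families above can be generated and checked for orthogonality and determinant sign; a sketch (numpy assumed; `families` is an illustrative helper name):

```python
import numpy as np

def families(a):
    """Reflection (b=c, a=-d) and rotation (b=-c, a=d) families A1..A8."""
    r = np.sqrt(1 - a*a)
    refl = [np.array(m) for m in (
        [[ a,  r], [ r, -a]], [[ a, -r], [-r, -a]],
        [[-a,  r], [ r,  a]], [[-a, -r], [-r,  a]])]
    rot  = [np.array(m) for m in (
        [[ a,  r], [-r,  a]], [[ a, -r], [ r,  a]],
        [[-a,  r], [-r, -a]], [[-a, -r], [ r, -a]])]
    return refl, rot

refl, rot = families(0.6)          # arbitrary sample value of a
for M in refl + rot:
    assert np.allclose(M @ M.T, np.eye(2))          # orthogonal
assert all(np.isclose(np.linalg.det(M), -1) for M in refl)   # reflections
assert all(np.isclose(np.linalg.det(M),  1) for M in rot)    # rotations
```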

 

2x2 Involutory Matrix:

Let A=

a     b

c     d

 A-1=

a     b

c    d

A =A-1 ;

 AA-1=A2 =

a2 +bc        ab+bd

ac+cd         bc +d2

Since AA-1 =I, so I=A2

which implies

ab+bd = ac+cd = 0, i.e. b(a+d) = c(a+d) = 0,

so either b = c = 0 ..........(1)

or

a+d = 0 ...........................(2)

Also a² + bc = bc + d² = 1, so a² = d²; taking b = c, this implies a² + bc = a² + b² = 1 or b = ±√(1−a²)

If condition (1) is satisfied, then the matrices are

A1=

1  0

0  1         Δ= 1

A2=

1  0

0 -1       Δ=- 1

A3=

-1   0

0    1    Δ=- 1

A4=

-1  0

0  -1     Δ= 1

If condition (2) is satisfied

b=c, a=-d, a2 +b2 =1. This is the condition for orthogonal, reflection matrices.

The matrices are

A5=

    a            √(1-a2)

√(1-a2)             -a     and  Δ=- 1

A6=

    a              -√(1-a2)

-√(1-a2)             -a     and  Δ=- 1

A7=

    -a              -√(1-a2)

-√(1-a2)             a     and  Δ=- 1

A8=

    -a            √(1-a2)

√(1-a2)             a     and  Δ=- 1

One gets another 4 reflection matrices by just interchanging a with √(1-a2) .

Thus for condition (2), we get 8 reflection matrices which are involutory.

For condition (1), A2 and A3 are reflection matrices which are special cases of A6 and A7 when a=1

Only A1,A4 emerge as unique rotational matrices which are involutory and satisfy condition (1) uniquely.

Thus we get in total 10 involutory matrix types, 8 reflection and 2 rotation (one the identity and the other the negative identity matrix).

Thus the symmetric involutory matrices in 2-D are a subset of the orthogonal matrices (for a+d = 0, involutory only requires bc = 1−a², so non-symmetric involutory matrices exist which are not orthogonal). Since all orthogonal matrices preserve the vector norm, in 2-D Euclidean space the state vector traces a circle. Reflection matrices are traceless. They are functions of 1 variable. They do not constitute a group under multiplication, since the closure law is violated.
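A quick numerical check that the listed types square to the identity (numpy assumed; a = 0.8 is an arbitrary sample value):

```python
import numpy as np

a = 0.8
r = np.sqrt(1 - a*a)

# condition (1): diagonal solutions; condition (2) with b=c: reflections
mats = [np.diag([1.0, 1.0]), np.diag([1.0, -1.0]),
        np.diag([-1.0, 1.0]), np.diag([-1.0, -1.0]),
        np.array([[ a,  r], [ r, -a]]), np.array([[ a, -r], [-r, -a]]),
        np.array([[-a, -r], [-r,  a]]), np.array([[-a,  r], [ r,  a]])]

for M in mats:
    assert np.allclose(M @ M, np.eye(2))   # A^2 = I: involutory
```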

In 2x2 matrices, there are 3 special cases:

AT  =A-1 which implies Δ2 =1 and Δ=+1 or -1 (orthogonality case)

A' = A⁻¹ which, since A' = Δ·A⁻¹ (A' being the adjugate), implies Δ = 1

A =A-1  which implies  Δ2 =1 and Δ=+1 or -1. This case has 3 conditions

condition 1: a=-d and d=± √[1-bc]

Example: ∓√(1−bc)        b

                      c        ±√(1−bc)

Δ = −d² − bc = −(1−bc) − bc = −1; trace = 0 ; eigen values = ±1

condition 2a: b=c=0 ; a=d and d=± √[1-bc]

Example: ± √[1-bc]          0      =   1   0     or  -1    0

                        0      ± √[1-bc]        0   1           0    -1

Δ =1 ; trace= 2 or -2 ;

condition 2b: b = c ≠ 0 ; a=-d and d=± √[1-bc]

Example: ∓ √[1-bc]          b        

                         c      ± √[1-bc]       

   Δ =-(1-bc) - bc =-1 ; trace=0

condition 3: b=c=0 ; a=-d and d=± √[1-bc]

Example:  ∓√[1-bc]          0      =   -1   0     or   1    0

                        0      ± √[1-bc]          0   1           0    -1

Δ =-1 ; trace=0;

In all 3 conditions, the common criterion is d=± √[1-bc] and in majority cases, Δ =-1

Suppose the matrix is

a            b

c        ± √[1-bc]

case 1: b=0 , c =0

Δ = ±a ; trace= a ± 1

case 2: b=0 , c ≠ 0

Δ = ± a ; trace= a ± 1

* Quaternions are represented as   a+bi+cj+dk  where a,b,c,d are real numbers.

*Quaternions / Pauli matrices can be used to represent vectors in 3-D and also rotations in 3-D space because SO(3) and SU(2) share the same Lie Algebra.

A = Σ ai (iσi) where i = 1, 2, 3

But they are not generalizable to rotations in space of more than 3-D as wedge product of 2 vectors is not the same as cross product in more than 3-D.

Quaternion Group (Q or Q8) & their Matrix Representation:

* It is a non-abelian group of order 8 under multiplication. (All groups of order 5 or less are abelian and every sub group of abelian group is normal)

* It is isomorphic to the 8 element subset {1,i,j,k,-1,-i,-j,-k} of quaternions

* Q8 = <ē, i, j, k | ē² = e, i² = j² = k² = ijk = ē> where e is the identity element, ē = −e, and ē commutes with all the elements of the group. It has the same order as the dihedral group D4 but a different structure.

* Matrix representation

−e =

−1   0

 0  −1

e =

1   0

0   1

i =

i   0

0  −i

−i =

−i   0

 0   i

j =

0  −1

1   0

−j =

 0   1

−1   0

k =

 0  −i

−i   0

−k =

0   i

i   0

* All the above matrices have unit determinant. Hence the representation of Q8  is in the Special Linear Group SL2(C)
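The defining relations and the unit-determinant claim can be verified directly on this matrix representation; a sketch (numpy assumed):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
i_ = np.array([[1j, 0], [0, -1j]])
j_ = np.array([[0, -1], [1, 0]], dtype=complex)
k_ = np.array([[0, -1j], [-1j, 0]])

# defining relations: i^2 = j^2 = k^2 = ijk = -e
for M in (i_, j_, k_):
    assert np.allclose(M @ M, -I2)
assert np.allclose(i_ @ j_ @ k_, -I2)
assert np.allclose(i_ @ j_, k_) and np.allclose(j_ @ i_, -k_)   # non-abelian

# all eight elements lie in SL2(C): unit determinant
for M in (I2, -I2, i_, -i_, j_, -j_, k_, -k_):
    assert np.isclose(np.linalg.det(M), 1)
```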

* Another representation of Q8 is <a, b | a⁴ = e, a² = b², ba = a⁻¹b>

*The quaternion group Q8 has five conjugacy classes: { e }, { −e }, { i, −i }, { j, −j }, { k, −k }

In mathematics, especially group theory, two elements a and b of a group are conjugate if there is an element g in the group such that b = g⁻¹ag. This is an equivalence relation whose equivalence classes are called conjugacy classes.

* Q8 has 4 proper non-trivial subgroups:

Z(Q8) = {1, −1} , <i> = {1, −1, i, −i} , <j> = {1, −1, j, −j} , <k> = {1, −1, k, −k}.

* Every proper subgroup of Q8 is a normal subgroup.

*SU(2) is the group of all unit quaternions. It is an infinite group.

* Q8 is not a subgroup of the isometries of 3-D Euclidean space, and hence there is no 3-D shape whose symmetry group is Q8. There are lots of finite groups that are not subgroups of the isometries of 3-D Euclidean space and hence have no corresponding 3-D shape, and Q8 is the smallest such group. However, Hart and Segerman found a 4-D object that does have that symmetry. Follow this LINK. The symmetric group Sn, a permutation group, has order n!.

* There are 5 finite groups of order 8 out of which 2 are non-abelian and one of these is  Q8.

Pauli Matrices: The total number of such matrices in n dimensions is n² − 1, excluding the identity matrix (they are the generators of SU(n)). These are self-adjoint matrices.

σ0 =  1  0       σ1 =  0  1      σ2 =  0  −i        σ3 =  1   0        σ*2 =  0  −1

          0  1                  1  0              i    0                    0  −1                  1    0

σ0:  eigen value +1, eigen vector (1, 0)ᵀ ; eigen value +1, eigen vector (0, 1)ᵀ

σ1:  eigen value +1, eigen vector ψx+ = (1, 1)ᵀ ; eigen value −1, eigen vector ψx− = (1, −1)ᵀ

σ2:  eigen value +1, eigen vector ψy+ = (1, i)ᵀ ; eigen value −1, eigen vector ψy− = (1, −i)ᵀ

σ3:  eigen value +1, eigen vector ψz+ = (1, 0)ᵀ ; eigen value −1, eigen vector ψz− = (0, 1)ᵀ

We can also write |0> = (1, 0)ᵀ and |1> = (0, 1)ᵀ

and |0,1> = |0> ⊗ |1>

      |1,0> = |1> ⊗ |0>

σ*2:  eigen value +i, eigen vector ψ*y+ = (1, −i)ᵀ ; eigen value −i, eigen vector ψ*y− = (1, i)ᵀ

σ+ = (σ1 + iσ2) / 2, known as the raising operator =

0  1

0  0

σ− = (σ1 − iσ2) / 2, known as the lowering operator =

0  0

1  0

Together, these are known as ladder operators.
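The ladder-operator definitions can be checked by applying them to the basis states |0> = (1, 0)ᵀ and |1> = (0, 1)ᵀ; a sketch (numpy assumed):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])

s_plus  = (s1 + 1j*s2) / 2
s_minus = (s1 - 1j*s2) / 2
assert np.allclose(s_plus,  [[0, 1], [0, 0]])
assert np.allclose(s_minus, [[0, 0], [1, 0]])

up   = np.array([1, 0], dtype=complex)   # |0>, spin up
down = np.array([0, 1], dtype=complex)   # |1>, spin down
assert np.allclose(s_plus @ down, up)    # raising: |1> -> |0>
assert np.allclose(s_minus @ up, down)   # lowering: |0> -> |1>
assert np.allclose(s_plus @ up, 0)       # annihilates the top state
```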

Raising and Lowering operators are mathematically completely analogous to the Creation and Annihilation operators respectively for bosons.

Creation/annihilation operators are different for bosons (integer spin) and fermions (half-integer spin). This is because their wave functions have different symmetry properties. In the context of quantum mechanics, we define

annihilation operator a:  a|n> = √n |n−1>

creation operator a†:  a†|n> = √(n+1) |n+1>

a cannot be a Hermitian operator: if a = a†, the same operator would raise as well as lower the quantum number of a state, spoiling its very definition, so a is necessarily non-Hermitian. Ladder operators arise in the context of the harmonic oscillator and angular momentum (e.g. the number of quanta in a harmonic oscillator, the angular momentum for spins, etc.). In quantum mechanics particle number is conserved; in quantum field theory these operators create and destroy particles such as photons.

Let us assume that a = a†, i.e. amn* = anm, or <m|a|n>* =? <n|a|m>

LHS = √n <m|n−1> = √n δm,n−1 = √n δm+1,n ;

RHS = <n|a|m> = √m <n|m−1> = √(n+1) δm,n+1 ;

LHS ≠ RHS. Hence a is not hermitian.

If you wrote them out in matrix notation, LHS would have entries one slot above the diagonal, RHS would have entries one slot below.

* The creation operator for any state α of a fermion, denoted a†α,

satisfies (a†α)² = 0, which implies that the creation operators for 2 different states (say α and β) of the same fermion anti-commute. In the analogous situation for bosons, they commute.

Casimir Operator for Pauli basis:

* It is the matrix which commutes with all the generators of the group.

  C = Σi σi²

    = 3I

C here is a multiple of identity matrix.
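A direct check that C = Σ σi² = 3I and that C commutes with each generator (numpy assumed):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

C = sum(m @ m for m in s)
assert np.allclose(C, 3 * np.eye(2))           # C = 3I
for m in s:
    assert np.allclose(C @ m - m @ C, 0)       # commutes with every generator
```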

We know

(a)

0  1   * a =  0

1  0     0     a

This is the special case of transformation

cosθ    sinθ    when θ = π/2 and this is reflection by angle π/4

sinθ   -cosθ

The same effect can be achieved by

0  -1  * a  = 0

1  0     0     a

This is the special case of transformation

cosθ    -sinθ    when θ = π/2 and this is rotation by  angle π/2 anti-clockwise

sinθ     cosθ

So rotation by π/2 is equivalent to reflection about the line at angle π/4 when the starting point is on the x-axis

We know

(b)

1  0   * 0 =  0

0 -1     a    -a

This is the special case of the transformation

sinθ    cosθ    when θ = π/2 and this is reflection about the x-axis.

cosθ   -sinθ

The same effect can be achieved by

-1  0  * 0  = 0

 0 -1    a    -a

This is the special case of the transformation

-sinθ      cosθ    when θ = π/2 and this is rotation by angle π

-cosθ    -sinθ

So for a vector lying on the y-axis, rotation by π is equivalent to reflection about the x-axis.

(c)  Let us find the generalized matrix for 

0  -i

i   0

 σ2 = iσ1σ3 =

i *  0  1  *  1   0

     1  0     0  −1

Generalizing this, we get

cosθ   i*sinθ     * sinθ'    cosθ'     ( when we put θ =θ' =π/2 , we get 1 and 3 )( Part A is equivalent to the boost matrix along x-axis & Part B rotation about y-axis)

isinθ    cosθ       cosθ'   -sinθ'

Part  A      *     Part B

Multiplying and simplifying,

cosθ * sinθ'   cosθ'    +       i*sinθ *  cosθ'   -sinθ'

           cosθ'  -sinθ'                             sinθ'     cosθ'

         (reflection, y-axis)                                  (rotation, x-axis)

               Part C                                                       Part D

det part C = cos²θ · (−1) ;  det part D = i² · sin²θ · (1) = −sin²θ ; their sum = −1, which is the determinant of σ2 .

When θ =θ' =π/2 , Part C disappears, and Part D becomes the σ2 matrix.

 ----------------------------------------------------------------------------------

Bijan Matrices

σ0' =  −1   0       σ1' =   0  −1     σ2' =   0   i        σ3' =  −1  0          σ*2' =   0  1

            0  −1                  −1   0              −i   0                    0  1                     −1  0

σ0':  eigen value −1, eigen vector (1, 0)ᵀ ; eigen value −1, eigen vector (0, 1)ᵀ

σ1':  eigen value +1, eigen vector (1, −1)ᵀ ; eigen value −1, eigen vector (1, 1)ᵀ

σ2':  eigen value +1, eigen vector (1, −i)ᵀ ; eigen value −1, eigen vector (1, i)ᵀ

σ3':  eigen value +1, eigen vector (0, 1)ᵀ ; eigen value −1, eigen vector (1, 0)ᵀ

σ*2':  eigen value +i, eigen vector (1, i)ᵀ ; eigen value −i, eigen vector (1, −i)ᵀ

These 8 matrices, excluding σ2' and σ2, form a non-abelian group under the binary operation of matrix multiplication. The subset of rotation matrices (σ0, σ0', σ*2, σ*2') forms an abelian subgroup, and (σ0, σ0') forms another abelian subgroup; (σ0, σ0') is also the center of the group. Barring σ*2 and σ*2', all other Pauli and Bijan matrices are involutory. σ*2 and σ*2' are inverses of each other, and the square of each of these matrices is σ0'.
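Closure and (non-)commutativity of this 8-element set can be verified by brute force over all products; a sketch (numpy assumed; `in_G` is an illustrative helper):

```python
import numpy as np

s0  = np.eye(2)
s1  = np.array([[0., 1.], [1., 0.]])
s3  = np.array([[1., 0.], [0., -1.]])
s2s = np.array([[0., -1.], [1., 0.]])          # sigma*_2, the 90-degree rotation

G = [s0, -s0, s1, -s1, s3, -s3, s2s, -s2s]

def in_G(M):
    return any(np.allclose(M, g) for g in G)

# closure under multiplication
assert all(in_G(g @ h) for g in G for h in G)
# non-abelian: s1 and s3 anti-commute rather than commute
assert np.allclose(s1 @ s3, -(s3 @ s1))
# the rotation subset {s0, -s0, s2s, -s2s} is an abelian subgroup
R = [s0, -s0, s2s, -s2s]
assert all(np.allclose(g @ h, h @ g) for g in R for h in R)
```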

det(σ·A) = 1 − x² + y² − z² where σ = (σ0, σ1, σ*2, σ3)

det(σ'·A) = 1 − x² + y² − z² where σ' = (σ0', σ1', σ*2', σ3')

det(iσ·A) = −(1 − x² + y² − z²) where σ = (σ0, σ1, σ*2, σ3)

det(iσ'·A) = −(1 − x² + y² − z²) where σ' = (σ0', σ1', σ*2', σ3')

det(σ·A) = 1 − x² − y² − z² where σ = (σ0, σ1, σ2, σ3)

det(σ'·A) = 1 − x² − y² − z² where σ' = (σ0', σ1', σ2', σ3')

det(iσ·A) = −(1 − x² − y² − z²) where σ = (σ0, σ1, σ2, σ3)

det(iσ'·A) = −(1 − x² − y² − z²) where σ' = (σ0', σ1', σ2', σ3')

 

The eigen vectors of  σ0 and σ0'   can remain the same where as for others they just change places i.e. the eigen vector of     + σ1'  becomes the eigen vector of - σ1and vice versa.

By applying 180 degree rotational transformation σ0' to Pauli matrices, we get corresponding Bijan matrices and vice versa..

If σ = σ0 + σ1î + σ2ĵ + σ3k̂ and

A = t + xi + yj + zk, then

σ·A = t·σ0 + σ1x(i·i) + σ2y(j·j) + σ3z(k·k)

If i·i = j·j = k·k = 1 or i·i = j·j = k·k = −1,

and the scalar is +1 or −1, then

det(σ·A) = det(σ'·A) = t² − x² − y² − z²  in all cases.

This means that, irrespective of whether the 3-D axes are all real or all imaginary, and whether the scalar is a positive or negative number, the determinant of σ·A is t² − x² − y² − z²

and is invariant with respect to real / imaginary 3-D axes.

The nature of determinant is decided by the Pauli matrices and not by the nature of 3-D.

If it is iσ, then iσ.A= iσ'.A = x2 +y2+z2  -1 in all cases.

If in the 3-D Hilbert Space where all the 3 axes are imaginary,

vector At = i' + i'xt i + i'yt j + i'zt k   (i' is the square root of −1)

            σ = σ0 + σ1i + σ2j + σ3k

σ·At = i'(σ0 + σ1xt + σ2yt + σ3zt), so det(σ·At) = xt² + yt² + zt² − 1 in all cases.

X= x

      y

σ0 X =  1  0 *  x   =   x        σ0' X =  −1   0 *  x   =  −x

            0  1    y        y                       0  −1    y        −y

σ1 X =  0  1 *  x   =   y        σ1' X =   0  −1 *  x   =  −y

            1  0    y        x                     −1   0    y        −x

σ2 X =  0 −1 *  x   =  −y        σ2' X =   0   1 *  x   =   y      here we have removed the i.

            1  0    y        x                     −1   0    y        −x

σ3 X =  1  0 *  x   =   x        σ3' X =  −1   0 *  x   =  −x

            0 −1    y       −y                      0   1    y         y

The effect of σi  and σi'  on the vectors are inverse to each other.

The commutation algebra remains the same. But σ0' = σ1 σ1' = σ1 · Adj σ1 = I · det σ1 = −I; or σ0' = −I = −σ0, since

σ1' = Adj σ1 ;

σ2' = Adj σ2 ;

σ3' = Adj σ3 ;

The vector formed by applying the operator σ3 to the original vector is the inversion of the vector formed by applying the operator σ3' to the original vector. So also is the case for σ2.

 σ3  can be construed as a flipping operator which changes

ψx+ to ψx-    and ψx-   to ψx+

ψy+ to ψy-    and ψy-   to ψy+

σ1  can also  be construed as a flipping operator which changes

ψz+ to ψz-    and ψz-   to ψz+

The Chief Characteristic of above matrices are

(1) They are all traceless .

 Tr(σi)=0

Tr(σiσj)=Tr(σjσi)=2δij

 Tr(σiσjσk)=2iεijk

(1a) Any 2x2 Hermitian matrix t·I + δa·σ can be expressed as

  t + δa3                            δa1 − iδa2

  δa1 + iδa2                         t − δa3

(2) determinant = −1 (for σ1, σ2, σ3 ; det σ0 = +1)

(3) They are all orthogonal, involutory (except σ0 , all else are reflection matrices and hence traceless)

A 2-d rotation matrix

cos θ/2        -sin θ/2

sin θ/2          cos θ/2

and

x'    = cos θ/2        -sin θ/2 *    x

y'       sin θ/2          cos θ/2      y

or

x'    =  cos θ/2        -i sin θ/2 *    x      =    I*cos θ/2  +iσx *  sin θ/2   *   x

iy'     i sin θ/2            cos θ/2      iy                                                            iy

where rotation matrix  Q(θ) =I*cos θ/2  + iσx *  sin θ/2...... for rotation around x-axis.

or  Q(θ) =

cos θ/2       i sin θ/2 

i sin θ/2     cos θ/2

If we take

Q(θ) =I*cos θ/2  + iσy *  sin θ/2

then

Q(θ) =

cos θ/2       sin θ/2 

-sin θ/2       cos θ/2

If we take

Q(θ) =I*cos θ/2  + iσz *  sin θ/2

then

Q(θ) =

e^(iθ/2)       0

0          e^(−iθ/2)

Here we have taken  θ/2 instead of θ because of the fact that original configuration is restored after 720 degree instead of 360 degree. I here is the identity matrix.
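The three Q(θ) forms above can be reproduced numerically, along with the sign flip at a 2π rotation; a sketch (numpy assumed; θ = 0.7 is an arbitrary sample angle):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

th = 0.7
Qx = I2*np.cos(th/2) + 1j*s1*np.sin(th/2)
assert np.allclose(Qx, [[np.cos(th/2), 1j*np.sin(th/2)],
                        [1j*np.sin(th/2), np.cos(th/2)]])

Qy = I2*np.cos(th/2) + 1j*s2*np.sin(th/2)
assert np.allclose(Qy, [[np.cos(th/2), np.sin(th/2)],
                        [-np.sin(th/2), np.cos(th/2)]])

Qz = I2*np.cos(th/2) + 1j*s3*np.sin(th/2)
assert np.allclose(Qz, np.diag([np.exp(1j*th/2), np.exp(-1j*th/2)]))

# a 2*pi rotation gives -I; only 4*pi restores the identity
full = I2*np.cos(np.pi) + 1j*s1*np.sin(np.pi)   # theta = 2*pi
assert np.allclose(full, -I2)
```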

To put it simply

Q(θ) =

cos θ/2       −sin θ/2

sin θ/2        cos θ/2  =

cos(θ − θ/2)       −sin(θ − θ/2)

sin(θ − θ/2)        cos(θ − θ/2)  =

  cosθ cos θ/2 + sinθ sin θ/2        cosθ sin θ/2 − sinθ cos θ/2

−(cosθ sin θ/2 − sinθ cos θ/2)       cosθ cos θ/2 + sinθ sin θ/2  =

cos θ/2 *  cosθ     −sinθ     +     sin θ/2 *  sinθ      cosθ

                sinθ      cosθ                          −cosθ      sinθ  =

cos θ/2 *  cosθ     −sinθ     +     sin θ/2 *  cos(π/2 − θ)       sin(π/2 − θ)

                sinθ      cosθ                        −sin(π/2 − θ)       cos(π/2 − θ)

 

Q(θ) = cos θ/2 * R1(θ)  +  sin θ/2 * R2(−[π/2 − θ]) ............(1)

For R1, the angle is rotated anti-clockwise around positive x-axis where as for R2, the angle is rotated clockwise around positive x-axis

Let |A> =Ax i + Ay j ;

  |A> =cosφ i + sinφ j   When normalized

Then |A1> = Q(θ)|A> =  A1x    where A1x = Ax cos θ/2 − Ay sin θ/2

                                     A1y    where A1y = Ax sin θ/2 + Ay cos θ/2

when normalized

|A1> =Q(θ)|A> =         A1x   where  A1x =cos( φ +  θ/2)

                                    A1y  where   A1y =sin( φ +  θ/2)

|A1> = |A2> + |A3>, where |A2> is the first (cos θ/2) part and |A3> is the second (sin θ/2) part of equation (1).

||A1|| = 1

|A2> = cos θ/2  * cos θ   -sin θ   *   Ax

                            sinθ     cos θ       Ay

A2x=cosθ/2   *(Ax cosθ - Ay sinθ)

A2y=cosθ/2   *(Ax sinθ + Ay cosθ)

when normalized,

A2x=cosθ/2   *cos (θ +φ)

A2y=cosθ/2   *sin (θ +φ)

||A2|| =cosθ/2

AND

|A3> = sin θ/2  * sin θ     cos θ   *   Ax

                         -cos θ     sin θ        Ay

A3x=sin θ/2   *(Ax sin θ + Ay cos θ)

A3y=sin θ/2   *(-Axcos θ + Aysin θ)

when normalized,

A3x=sinθ/2   *sin (θ +φ)

A3y=sinθ/2   *-cos (θ +φ)

||A3|| =sin θ /2

A2.A3 = 0 => A2 is perpendicular to A3

|A1>=|A4>=|A2> + |A3>

A4x=A2x+A3x =cos( φ+θ -θ/2 ) =cos( φ +  θ/2)

A4y=A2y+A3y =sin( φ+θ -θ/2 ) =sin( φ +  θ/2)

If

A2x=cosθ/2   *cos (θ/2 +φ) OR  A2x=cosθ/2   *cos (-θ/2 +φ)

A2y=cosθ/2   *sin (θ/2 +φ)  OR  A2y=cosθ/2   *sin (-θ/2 +φ)

AND

A3x=sinθ/2   *sin (θ/2 +φ)   OR  A3x=sinθ/2   *-sin (-θ/2 +φ)

A3y=sinθ/2   *-cos (θ/2 +φ) OR A3y=sinθ/2   *cos (-θ/2 +φ)

then

|A>=|A4>=|A2> + |A3>

Above, the state vectors |A2> and |A3> are perpendicular to each other. Hence, irrespective of value of    φ ,

||A2|| = cos θ/2  

||A3|| = sin θ/2

and

||A4||=√(square of ||A2||+square of ||A3||)= 1
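The decomposition |A1> = |A2> + |A3> into perpendicular components of norms cos θ/2 and sin θ/2 can be checked numerically; a sketch (numpy assumed; θ, φ are arbitrary sample angles):

```python
import numpy as np

th, phi = 0.9, 0.3
A = np.array([np.cos(phi), np.sin(phi)])           # normalized |A>

R1 = np.array([[np.cos(th), -np.sin(th)], [ np.sin(th), np.cos(th)]])
R2 = np.array([[np.sin(th),  np.cos(th)], [-np.cos(th), np.sin(th)]])

A2 = np.cos(th/2) * (R1 @ A)                       # |A2>
A3 = np.sin(th/2) * (R2 @ A)                       # |A3>
A1 = A2 + A3

assert np.isclose(A2 @ A3, 0)                      # perpendicular components
assert np.isclose(np.linalg.norm(A2), np.cos(th/2))
assert np.isclose(np.linalg.norm(A3), np.sin(th/2))
# the sum is |A> rotated by theta/2
assert np.allclose(A1, [np.cos(phi + th/2), np.sin(phi + th/2)])
```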

If |A>=|A4>=|A2> - |A3>, then

Q(θ) =R(3θ/2)

cos  3θ/2       -sin 3θ/2

sin 3θ/2        cos 3θ/2

and original configuration is restored after rotation of 4π/3 =240 degree.

But let us suppose that

A2x=cosθ/2   *cos (θ +φ)

A2y=cosθ/2   *sin (θ +φ)

A3x=sinθ/2   *sin (θ +φ)

A3y=sinθ/2   *cos (θ +φ)

|A2> and  |A3> are say, at an angle γ with each other, then

norm |A4> = √(1 + sinθ·cosγ) = √(1 + sinθ·sin 2(θ+φ))

norm |A4> = 1 when θ+φ = 90° ;  γ = ±90°

||A4|| = 0 when θ+φ = 135° ; φ = 45° ; θ = 90°

||A4|| = √2 when θ+φ = 225° ; φ = 135° ; θ = 90°

A4 maximum will be √2 and minimum will be zero.

Since R(θ1)*R(θ2)=R(θ2)*R(θ1)=R(θ1+θ2),

R(3θ/2)*R(θ/2)=R(2θ)
R(3θ/2)*R(-θ/2)=R(θ)

In complex variables this is very common, like mapping the whole space into a half space or even a small sector of the plane. Indeed, the number of turns needed to return to the origin in QM is given as the inverse of the number representing the spin, so that spin 1/2 requires two turns, while spin 1 requires only one turn. Exactly the same happens in the complex plane: if you map the plane into 1/8, then you get the whole of the plane by repeating (rotating) the sector 8 times. So spin 2 (the graviton) requires only half a turn to go back to where it was. People would be hard put to reproduce this using ribbons! The graviton is like two vectors set back to back, or two of the same half of the complex plane stuck together, due to gravity being always attractive of course.

The turn of the electron is supposed to correspond to the mutually exclusive states of spin up and down. If the arrow is pointing up, then one rotation is expected to bring it back to 'up' again. But spin is represented instead using complex numbers, wherein up and down are the real and imaginary parts. This works well for finding the various results of spin manipulations (e.g. conjugation), and that is why it was used. But the angle between the real and imaginary axes is only 90 degrees, whereas that between up and down is 180 degrees. That is why you need two flips/rotations: the first brings the up down, and the second returns it up again, achieving a full turn. There is no magic.

The concept of returning back to original configuration by rotation of 720 degree instead of 360 degree can be explained as under-

Any vector with its beginning point at the origin, upon rotation of 360 degrees, completes a circle and comes back to the starting point. Here there is only 1 recurring point. In the case of waves, there are 2 recurring points at zero, one while moving towards the trough and the other while moving towards the peak. Hence the situation can be described as completing 2 circles in one complete cycle.

(4) Eigen value : ±1

(5) Vector norm is preserved.

(6) The inner product of the column vectors is zero, which means they are orthogonal, and the norm of each column vector is 1 (real, or the real part if complex);

      hence they form an ortho-normal basis. The same is true for the row vectors.

they can, therefore , be characterized as Reflection matrices.

(7) Tr(σi σj)=Tr(σj σi) applicable for any 2 square matrices. But for 3 square matrices, Tr(ABC)=Tr(BCA)=Tr(CAB)

(8) Pauli Vector  σ= σ1i  + σ2 j  +σ3 k =  k     i - ij

                                                                 i+ij     -k

where i is square root of -1 and i ,j, k are unit vectors in x, y, z axis respectively. The determinant is -(i2 + j2 + k2 )=-1 or square of length of unit vector.

(9) if A = xi+yj+zk, then σ·A =  z       x − iy    and det(σ·A) = −(x² + y² + z²) = −(square of the norm of a 3-D vector)

                                              x+iy     −z

(10) Pauli matrices in exponentiation form correspond to SU(2) symmetry.    

(11) Out of 3 Pauli matrices,σ3  is in diagonal form.  We can diagonalize σ1 and σ2  by similarity transformation            

 Sσ1 =             Sσ1⁻¹ =                   Sσ2 =                   Sσ2⁻¹ =

1   1                    1/2       1/2            1     1                         1/2      −i/2

1  −1                    1/2      −1/2            i      −i                        1/2        i/2

 Sσ13=( σ3*Sσ1  )' =  1  -1

                                        1    1

Where ' represents the adjoint.

trace = determinant=2

if matrix is of the form  a     b

                                     c      d

then a2 + b 2= c2 + d 2= a2 + c2 =b2 + d 2=2

       ab=-cd  and ac=-bd

It has all the characteristics of an orthogonal matrix except

(a) the sum of squares along each row/column is 2 instead of 1, and so also is the determinant.

(b) the trace is 2 and equals the determinant.

(c) 2-D rotation matrices commute; here, instead of AB − BA = 0, we have AB − (BA)' = AB − A'B' = 0

(12) Sσ1 =

1   1

1  -1

σ1d =

1   0

0  -1 =σ3

here Δ =-2 and  tr=0

dot product row / column matrices is zero.

square of norm of each row/col vector is 2.

We can normalize this matrix and

normal Sσ1 =                                                                              Sσ1⁻¹ =

1/√2        1/√2                                                                           1/√2        1/√2

1/√2       −1/√2                                                                           1/√2       −1/√2

here Δ =-1 and  tr=0

dot product row / column matrices is zero.

square of norm of each row/col vector is 1.

Hence Sσ1  is orthogonal (reflection), involutory, traceless  matrix

If Sσ1

1/√2   1/√2

-1/√2  1/√2

S-1σ1

1/√2    -1/√2

1/√2     1/√2

σ1d =

-1  0

0   1

Hence Sσ1 is an orthogonal (rotation) matrix with trace = √2 and Δ = 1

dot product row / column matrices is 0

square of norm of each row/col vector is 1.

If Sσ1

1/√2   1/√2

-1/√2  1/√2

similarity matrix of  Sσ1 is

1/√2        1/√2

i/√2        -i/√2 

and the Sσ1d =

(1+i)/√2           0   =   1/√2     0       +  i/√2        0          =  I/√2  +  (i·σ3)/√2

   0            (1−i)/√2       0     1/√2           0        −i/√2

(12a) Sσ2 =

1/√2   1/√2

i/ √2   -i/√2

here Δ =-i and  tr=(1-i)/√2

dot product of the row vectors is 0 ; of the column vectors it is 1

square of norm of each row/col vector is 1.

σ2d =

1   0

0  -1 =σ3

We can normalize this matrix and

normal Sσ2 =                                                                              Sσ2-1 =  

1/√2        1/√2                                                                             1/√2        -i/√2

i/√2        -i/√2                                                                             1/√2         i/√2

here Δ =-i and  tr=(1-i)/ √2

dot product of the row vectors is 0 ; of the column vectors it is 1.

square of norm of each row/col vector is 1.

Hence Sσ2  is non-orthogonal (akin to reflection)  matrix

The similarity matrix can also be

1/√2   1/√2

-i/√2   i /√2 

and inverse is

1/√2    i/√2

1/√2  -i/√2

and

σ2d =

-1   0

0    1 =σ3'

 

(12b) The non-diagonal Pauli matrices σ1, σ2 can both be diagonalized to σ3, and hence in an equivalent Hilbert space we have σ0 and σ3 only. Since the double reflection of σ0 is σ0' and the double reflection of σ3 is σ3', the set (σ0, σ3, σ0', σ3') forms an abelian group under matrix multiplication; all are involutory, 2 being traceless and 2 having non-zero trace, with (σ0, σ0') being an abelian subgroup as well as the center of the group. Here the rotation and reflection matrices are equal in number, unlike the Pauli matrices, where there are 3 reflection matrices and 1 rotation matrix.

(13) The commutation algebra of iσ1 , iσ2 , iσ3   are isomorphic with the commutation algebra of Quaternions

(14) The three Pauli matrices along with the identity matrix form a basis of the space of 2x2 Hermitian matrices, which means any 2x2 Hermitian matrix can be expressed as a linear combination of these 4 matrices with real coefficients.

(15) Pauli matrices occur in the partial differential equation called the Pauli equation, which describes the interaction of spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation. They represent the interaction of charged spin-1/2 particles with an external electromagnetic field.

(16) Exponential form of a Pauli vector: e^(ia(n̂·σ)) = I cos a + i(n̂·σ) sin a, where ā = a·n̂, |n̂| = 1 and a is a scalar, i.e. n̂ is the unit vector in the direction of ā.

        determinant[ia(n̂·σ)] = a² ;

        f(a(n̂·σ)) = I·[f(a) + f(−a)]/2 + (n̂·σ)·[f(a) − f(−a)]/2 ;  this is Sylvester's formula.
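Both the exponential form and Sylvester's formula can be verified via an eigendecomposition (avoiding any external matrix-exponential routine); a sketch (numpy assumed; n̂ = (1, 2, 2)/3 is an arbitrary unit vector):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

n = np.array([1.0, 2.0, 2.0]) / 3.0                 # unit vector n-hat
ns = sum(ni * si for ni, si in zip(n, s))           # n . sigma, squares to I
I2 = np.eye(2, dtype=complex)
a = 0.8

def func_of_matrix(M, f):
    """Apply f to a diagonalizable matrix via its eigendecomposition."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(f(w)) @ np.linalg.inv(V)

# exponential form: exp(i a (n.sigma)) = I cos a + i (n.sigma) sin a
lhs = func_of_matrix(1j * a * ns, np.exp)
assert np.allclose(lhs, I2 * np.cos(a) + 1j * ns * np.sin(a))

# Sylvester's formula with f = exp:
# f(a(n.sigma)) = I [f(a)+f(-a)]/2 + (n.sigma) [f(a)-f(-a)]/2
lhs2 = func_of_matrix(a * ns, np.exp)
rhs2 = I2 * (np.exp(a) + np.exp(-a)) / 2 + ns * (np.exp(a) - np.exp(-a)) / 2
assert np.allclose(lhs2, rhs2)
```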

(17)Any 2 of the 3 Pauli matrices form an irreducible set of 2x2 matrices.

(18) Pauli Identity: If a,b are 2 vectors, then (a.σ)(b.σ) =(a.b) +(iσ).(axb)

       LHS = aiσi bjσj = σiσj aibj = (δijI + iΣi,j,k εijk σk) aibj = aibi I + iεijk aibj σk = (a·b)I + (iσ)·(a×b) = RHS

Another Proof:

(a.σ) = a3       a1-ia2

          a1+ia2    -a3

(b.σ) = b3       b1-ib2

          b1+ib2    -b3

 

(a·σ)(b·σ) =

a1b1 + a2b2 + a3b3 + i(a1b2 − a2b1)                    (a3b1 − a1b3) − i(a3b2 − a2b3)

−(a3b1 − a1b3) − i(a3b2 − a2b3)                        a1b1 + a2b2 + a3b3 − i(a1b2 − a2b1)

= (a·b)I +

 i(a1b2 − a2b1)                                        (a3b1 − a1b3) − i(a3b2 − a2b3)

−(a3b1 − a1b3) − i(a3b2 − a2b3)                        −i(a1b2 − a2b1)

We have to prove that

  i(a1b2-a2b1)                  (a3b1-a1b3)-i(a3b2-a2b3)
 -(a3b1-a1b3)-i(a3b2-a2b3)     -i(a1b2-a2b1)                =  iΣεijk aibj σk

RHS = i[(ε123a1b2σ3 + ε213a2b1σ3) + (ε132a1b3σ2 + ε312a3b1σ2) + (ε231a2b3σ1 + ε321a3b2σ1)] =

i[(a1b2σ3 - a2b1σ3) + (a3b1σ2 - a1b3σ2) + (a2b3σ1 - a3b2σ1)] = i[(a1b2 - a2b1)σ3 + (a3b1 - a1b3)σ2 + (a2b3 - a3b2)σ1] = LHS =

  iz          x-iy
 -(x+iy)     -iz         where z = (a1b2-a2b1) ; x = (a3b1-a1b3) ; y = (a3b2-a2b3)

determinant = x² + y² + z²
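The Pauli identity and the determinant of the traceless part can both be verified numerically. A minimal sketch with numpy, where the two vectors a, b are arbitrary test data:

```python
import numpy as np

# Check (a.sigma)(b.sigma) = (a.b) I + i (a x b).sigma for arbitrary real a, b.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def dot_sigma(v):
    return sum(v[k] * s[k] for k in range(3))

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 1.1, -4.0])

lhs = dot_sigma(a) @ dot_sigma(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * dot_sigma(np.cross(a, b))
assert np.allclose(lhs, rhs)

# determinant of the traceless part [iz, x-iy; -(x+iy), -iz] is x^2 + y^2 + z^2
z = a[0]*b[1] - a[1]*b[0]   # a1 b2 - a2 b1
x = a[2]*b[0] - a[0]*b[2]   # a3 b1 - a1 b3
y = a[2]*b[1] - a[1]*b[2]   # a3 b2 - a2 b3
M = np.array([[1j*z, x - 1j*y], [-(x + 1j*y), -1j*z]])
assert np.isclose(np.linalg.det(M).real, x*x + y*y + z*z)
```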

Ambidextrous Features :

(1) signature ++xx , +-xx , x symbolizing signatures which are neither + nor -

(2) ac= bd and ac=-bd.

(3) {σ1, σ3} = 0, whereas in general reflection matrices neither commute nor anti-commute. In fact, all Pauli matrices anti-commute with one another.

X =  x
     y

σ3 *σ2 X =  1  0  *  0 -1  *  x   =   0 -1  *  x  =  -y   = σ3' *σ2' X
            0 -1     1  0     y      -1  0     y     -x

σ3 σ3' X =  1  0  *  -1  0  *  x   =  -1  0  *  x  =  -x
            0 -1      0  1     y       0 -1     y     -y

*σ2 *σ2' X = X, since *σ2 *σ2' = I and σ3 σ3' = -I

We formulate the spin 1/2 representation as

J1=(1/2)σ1    J2=(1/2)σ2       J3=(1/2)σ3

With j,k,l =1,2,3 we have [Jj , Jk] = i εjkl Jl
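The spin-1/2 commutation relation can be checked directly. A short numpy sketch verifying [Jj, Jk] = i εjkl Jl over all index pairs:

```python
import numpy as np

# Check [J_j, J_k] = i eps_{jkl} J_l with J_i = sigma_i / 2 (0-indexed below).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
J = [m / 2 for m in s]

def eps(j, k, l):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (j - k) * (k - l) * (l - j) / 2

for j in range(3):
    for k in range(3):
        comm = J[j] @ J[k] - J[k] @ J[j]
        expected = sum(1j * eps(j, k, l) * J[l] for l in range(3))
        assert np.allclose(comm, expected)
```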

* In classical physics we have seen that an object (a vector, too) requires a rotation of 2π or 360° to return to its original configuration. But spin-1/2 particles have to be rotated by 4π or 720° to return to the original configuration, because the particle is represented not as a vector but as a spinor, whose coefficients are complex numbers instead of real numbers. Rotation of a spinor in the 2-d complex plane by 360° results in half a turn.

Characteristic Features of Pauli & Bijan Matrices:

* σ0, σ1, σ3 & σ0', σ1', σ3' are involutory matrices: 6 in number; *σ2 & *σ2' are inverses of each other.

* (σ0, *σ2) & (σ0', *σ2') are rotation matrices;

   (σ1, σ3) & (σ1', σ3') are reflection matrices.

* [σ0,σ1] = [σ0,*σ2] = [σ0,σ3] = 0 ...... 6 in number, trivial commutation

  [σ0,σ1'] = [σ0,*σ2'] = [σ0,σ3'] = 0 ...... 6 in number, trivial commutation

  [σ0',σ1] = [σ0',*σ2] = [σ0',σ3] = 0 ...... 6 in number, commutation

  [σ0',σ1'] = [σ0',*σ2'] = [σ0',σ3'] = 0 ... 6 in number, commutation

  [σ0,σ0'] = [σ1,σ1'] = [*σ2,*σ2'] = [σ3,σ3'] = 0 ...... 8 in number, commutation

 Total: 32 vanishing commutators out of 64 pairs.

 [σ1, *σ2] = σ3 - σ3' ;  [σ1', *σ2'] = σ3 - σ3' ;

  [*σ2, σ3] = σ1 - σ1' ;  [*σ2', σ3'] = σ1 - σ1' ;

  [σ1, σ3] = *σ2 - *σ2' ;  [σ1', σ3'] = *σ2 - *σ2' ; ........... 12 in number

 [σ1, *σ2'] = σ3' - σ3 ;  [σ1', *σ2] = σ3' - σ3 ;

  [*σ2, σ3'] = σ1' - σ1 ;  [*σ2', σ3] = σ1' - σ1 ;

  [σ3, σ1'] = *σ2 - *σ2' ;  [σ3', σ1] = *σ2 - *σ2' ; ............ 12 in number

* σ0σ0 = σ1σ1 = σ3σ3 = σ0'σ0' = σ1'σ1' = σ3'σ3' = σ0

   *σ2*σ2 = *σ2'*σ2' = σ0'       ---                           total 08 in number

σ1σ3 = *σ2'

σ3σ1 = *σ2

σ1*σ2 = σ3'

*σ2σ1 = σ3

*σ2σ3 = σ1

σ3*σ2 = σ1'

For easy remembrance, follow the following steps:

(1) if the right-hand indices are, say, 1,2, the left-hand index will be the other one, i.e. 3

(2) If σ1 is on the extreme left, the RHS will be primed, otherwise it will be non-primed. For example, *σ2σ1 = ? Here σ1 is not on the extreme left, hence the RHS is non-primed; on the LHS the indices are 2,1, so the RHS index is 3. So it is σ3.

 

If A = am + xi + yj + zk, where m is the unit vector along the time t axis (or along any 4th direction) and a is its magnitude.

   σ = σ0m + σ1i + σ2j + σ3k

A.σ = σ0a + σ1x + σ2y + σ3z =   a+z     x-iy      determinant = a² - (x² + y² + z²)
                                x+iy    a-z

A.iσ = iσ0a + iσ1x + iσ2y + iσ3z =   ia+iz    ix+y      determinant = (x² + y² + z²) - a²
                                     ix-y     ia-iz

 

With a = 1:

A.σ' = σ0' + σ1'x + σ2'y + σ3'z =   -(1+z)     -(x-iy)    = σ0'(A.σ)
                                    -(x+iy)    -(1-z)

determinant = 1² - (x² + y² + z²)  (for a 2x2 matrix, det(-M) = det M)

A.iσ' = iσ0' + iσ1'x + iσ2'y + iσ3'z =   -(i+iz)    -(ix+y)     determinant = (x² + y² + z²) - 1²
                                         -(ix-y)    -(i-iz)

The relation of 4th vector M (whether it is time or something else) with one of the space axis , here z-axis is very significant because this coupling has interesting ramifications.

In Friedmann's equation in cosmology,

ds² = a(t)²ds3² - c²dt², where ds3 is the 3-dimensional metric, which can be either (a) flat, (b) a closed sphere with constant positive curvature, or (c) hyperbolic with constant negative curvature; a(t) is the scale factor, which relates to the pressure and energy of matter in the universe, and c is the velocity of light.

Compare this with

(x² + y² + z²) - c²dt²

where 1 has been replaced by c²dt².

* The most interesting matrix is *σ2, i.e. σ2 with i taken out. It is not involutory, as its square is -I. The only way to make it involutory is to multiply it by i or -i. Moreover, by this process it becomes a reflection matrix, being converted from a rotation matrix. What is the necessity? If we keep it rotational, the set of 8 matrices forms a non-abelian group. But by making it Hermitian through multiplication by i or -i, the set no longer forms a group. The role of σ2 is very interesting, as it transforms a real vector to a vector on the imaginary axis.

* We also find that σ1 and σ3 are reflection matrices, as both have determinant -1; in the former the counter-diagonal elements are equal and of the same sign, and in the latter the diagonal elements are equal but of opposite sign. Not so for *σ2: it is a rotation matrix. We know that the matrix product of 2 rotation matrices is rotational, and we also know that in a 2-D Euclidean space two consecutive reflections are equivalent to a rotation. However, multiplication by i makes *σ2 a reflection matrix.

* The structure of Table I is similar to Table IV; these tables are pure in the sense that the multiplication is between either Pauli or Bijan matrices alone.

   The structure of Table II is similar to Table III; these tables are mixed because the multiplication is between Pauli AND Bijan matrices.

       Cayley Table for Pauli Matrices & Bijan Matrices

(these 8 matrices form a non-abelian group under the binary operation of matrix multiplication)

(*σ2 & *σ2' are the Pseudo Pauli & Pseudo Bijan matrices)

 I          σ0    σ1    *σ2   σ3     II    σ0'   σ1'   *σ2'  σ3'
 e  = σ0    σ0    σ1    *σ2   σ3           σ0'   σ1'   *σ2'  σ3'
-iK = σ1    σ1    σ0    σ3    *σ2          σ1'   σ0'   σ3'   *σ2'
 J  = *σ2   *σ2   σ3'   σ0'   σ1           *σ2'  σ3    σ0    σ1'
-iI = σ3    σ3    *σ2'  σ1'   σ0           σ3'   *σ2   σ1    σ0'
 III                                 IV
 e' = σ0'   σ0'   σ1'   *σ2'  σ3'          σ0    σ1    *σ2   σ3
-iK'= σ1'   σ1'   σ0'   σ3'   *σ2'         σ1    σ0    σ3    *σ2
 J' = *σ2'  *σ2'  σ3    σ0    σ1'          *σ2   σ3'   σ0'   σ1
-iI'= σ3'   σ3'   *σ2   σ1    σ0'          σ3    *σ2'  σ1'   σ0

* Out of the above group of 8 matrices, 4 have determinant +1 (σ0, *σ2, σ0', *σ2') and 4 have determinant -1 (σ1, σ3, σ1', σ3').

* Those (σ0, σ0', *σ2, *σ2') with determinant +1 (the rotation matrices) form a sub-group w.r.t. matrix multiplication, and this sub-group is abelian. Rotation matrices form an abelian group in 2 dimensions. They have a sub-group (σ0', σ0) which is also the center of the group. σ0 and σ0' are self-conjugate.
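The group-theoretic claims above are easy to check numerically. A minimal numpy sketch (using one sign convention for *σ2; the document uses both at different points) that verifies closure, non-commutativity, and the abelian rotation sub-group:

```python
import numpy as np
from itertools import product

# The 8 matrices: sigma0, sigma1, *sigma2 (sigma2 with i taken out), sigma3
# and their negatives (the primed / Bijan matrices).
s0 = np.eye(2)
s1 = np.array([[0., 1.], [1., 0.]])
ps2 = np.array([[0., -1.], [1., 0.]])   # one sign convention for *sigma2
s3 = np.array([[1., 0.], [0., -1.]])

G = [s0, s1, ps2, s3, -s0, -s1, -ps2, -s3]

def index_of(m):
    for i, g in enumerate(G):
        if np.allclose(m, g):
            return i
    return None

# closed under multiplication, i.e. a group
assert all(index_of(a @ b) is not None for a, b in product(G, G))
# non-abelian
assert any(not np.allclose(a @ b, b @ a) for a, b in product(G, G))

# the four determinant-+1 members (rotations) form an abelian sub-group
R = [g for g in G if np.isclose(np.linalg.det(g), 1.0)]
assert len(R) == 4
r_idx = [index_of(g) for g in R]
assert all(index_of(a @ b) in r_idx for a, b in product(R, R))
assert all(np.allclose(a @ b, b @ a) for a, b in product(R, R))
```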

 *σ2 and *σ2' are self-conjugate with respect to similarity transformation under σ0'.

*σ2 and *σ2' are conjugates of each other with respect to similarity transformation under σ1, σ3.

* σ0'σi   = σi'.

(In the above cycle, multiplying any 2 Pauli matrices clockwise gives i times the third one;

multiplying any 2 Pauli matrices anti-clockwise gives -i times the third one.)

Cayley Table

            (e)σ0   (J)*σ2   (e')σ0'   (J')*σ2'
(e)σ0       σ0      *σ2      σ0'       *σ2'
(J)*σ2      *σ2     σ0'      *σ2'      σ0
(e')σ0'     σ0'     *σ2'     σ0        *σ2
(J')*σ2'    *σ2'    σ0       *σ2       σ0'

* We take Ψ0 = iσ0 ; Ψ1 = iσ1 ; σ2 = *Ψ2 = i*σ2 ; Ψ3 = iσ3 ;

                Ψ0' = iσ0' ; Ψ1' = iσ1' ; σ2' = *Ψ2' = i*σ2' ; Ψ3' = iσ3' ;

                                          (Ψ1, Ψ3, Ψ1', Ψ3', σ0, *σ2, σ0', *σ2') form a non-abelian group, the quaternion group Q8, under matrix multiplication, of which

                                            Part-I (σ0, *σ2, σ0', *σ2') is an abelian sub-group;

                                            (σ0, σ0') is a sub-group of order 2

                                            (σ0, σ1) is a sub-group of order 2

                                            (σ0, σ1') is a sub-group of order 2

                                            (σ0, σ3) is a sub-group of order 2

                                            (σ0, σ3') is a sub-group of order 2

                                            (σ0, σ0', σ3, σ3') is a group

* The square of every Pauli matrix and every Bijan matrix is σ0. However, the squares of the Pseudo Pauli and Pseudo Bijan matrices (*σ2, *σ2') are σ0'. Similarly, all Pauli and Bijan matrices barring (*σ2, *σ2') are involutory and are their own inverses. *σ2 and *σ2' are not involutory, and one is the inverse of the other. These anomalies are removed when *σ2 and *σ2' are made Hermitian instead of real.

* Any matrix that commutes with all the 3 Pauli matrices is a multiple of the unit matrix.

* There is no matrix apart from zero matrix that anti-commutes with all the 3 Pauli matrices.

Cayley Table

 I               σ0    *σ2   σ0'   *σ2'    II    Ψ1    Ψ3    Ψ1'   Ψ3'
 e  = σ0         σ0    *σ2   σ0'   *σ2'          Ψ1    Ψ3    Ψ1'   Ψ3'
 J  = *σ2        *σ2   σ0'   *σ2'  σ0            Ψ3'   Ψ1    Ψ3    Ψ1'
 e' = σ0'        σ0'   *σ2'  σ0    *σ2           Ψ1'   Ψ3'   Ψ1    Ψ3
 J' = *σ2'       *σ2'  σ0    *σ2   σ0'           Ψ3    Ψ1'   Ψ3'   Ψ1
 III                                       IV
 K' = Ψ1 = iσ1   Ψ1    Ψ3    Ψ1'   Ψ3'           σ0'   *σ2'  σ0    *σ2
 I  = Ψ3 = iσ3   Ψ3    Ψ1    Ψ3'   Ψ1'           *σ2'  σ0    *σ2   σ0'
 K  = Ψ1' = iσ1' Ψ1'   Ψ3'   Ψ1    Ψ3            σ0    *σ2   σ0'   *σ2'
 I' = Ψ3' = iσ3' Ψ3'   Ψ1'   Ψ3    Ψ1            *σ2   σ0'   *σ2'  σ0

A singular n x n matrix collapses Rⁿ into a sub-space of dimension less than n. Some information is destroyed on the way; it is not a one-to-one transformation, so there is no inverse. Consider a projection from 3-D space to the 2-D x,y plane, i.e. mapping (x,y,z) to (x,y,0), so that (1,1,2) maps to (1,1,0). Now, reversing it, one does not know whether the original was (1,1,1) or (1,1,2) or (1,1,5). Visually, it is like a paint program where you resize a picture by dragging its corners and flatten the whole shape into a straight line. How well can you recover a 3-D shape given a picture of its shadow from a single angle?

* A singular matrix A =

 a    b
 c    d

given by the matrix equation

 a    b   *   x   =   0
 c    d       y       0

represents the equation of a straight line passing through the origin. Its other features are

(1) slope or inclination w.r.t x-axis is given by m=tanθ =-a/b =-c/d

(2) the column and row vectors are linearly dependent.

(3) rank of the matrix is 1.

(4) Eigen value are λ1 =0 ;  λ2= trace=a+d;

(5) Eigen Vector 1 = 1*i + (c/a)j

      Eigen Vector 2=  1*i  -(c/d)j

Proof: e.vector (y/x) = anti-tr/2b ± √[(anti-tr/2b)² + c/b] = (d-a)/2b ± √[((d-a)/2b)² + c/b]. Since ad = bc, (d-a)/2b = (c/2)(1/a - 1/d)

e.vector (y/x) = (c/2)(1/a - 1/d) ± √[(c/2)²(1/a - 1/d)² + 4c²/4ad] = (c/2)(1/a - 1/d) ± (c/2)√[(1/a + 1/d)²] = (c/2)(1/a - 1/d) ± (c/2)(1/a + 1/d)

e.vector(y/x)1= c/a

e.vector(y/x)2=-c/d

angle between 2 eigen vectors = cos⁻¹(<A,B> / (||A||*||B||)) = cos⁻¹[(1 - |c/b|) / (√[1+(c/a)²] * √[1+(c/d)²])]

in case of imaginary numbers, take only the real parts ignoring i.
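The eigenstructure of a singular matrix claimed above can be checked numerically. A minimal numpy sketch, with entries chosen arbitrarily so that ad = bc:

```python
import numpy as np

# A singular 2x2 matrix (ad = bc) has eigenvalues 0 and trace = a+d, with
# eigenvector slopes y/x = -c/d (for lambda = 0) and y/x = c/a (for lambda = a+d).
a, b, c, d = 2.0, 4.0, 3.0, 6.0            # ad = 12 = bc
A = np.array([[a, b], [c, d]])
assert np.isclose(np.linalg.det(A), 0.0)

w, v = np.linalg.eig(A)
assert np.isclose(sorted(w)[0], 0.0) and np.isclose(sorted(w)[1], a + d)

for lam, slope in [(0.0, -c/d), (a + d, c/a)]:
    vec = np.array([1.0, slope])
    assert np.allclose(A @ vec, lam * vec)
```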

When the matrix transforms the vector (x, y) to (0, 0), the norm of the new vector is zero irrespective of the norm of the old vector. The new vector is a null vector.

(6)  b    a   *   x   =   0
     d    c       y       0

represents a straight line passing through the origin with slope -b/a, the mirror image of the above straight line in the line y = x (not, in general, perpendicular to it).

(7)  d   -b   *   x   =   0
    -c    a       y       0

where the matrix, the adjugate of matrix A, represents the equation of a straight line passing through the origin making an angle θ with the straight line represented by the first matrix equation, and

θ = tan⁻¹[b(d+a) / (b² - ad)]

(8) The transpose matrix

 a    c   *   x   =   0
 b    d       y       0

represents a straight line passing through the origin with slope -a/c = -b/d.

* A non-singular matrix A =

 a    b
 c    d

given by the matrix equation

 a    b   *   x   =   0
 c    d       y       0

represents the equation of a pair of different straight lines passing through the origin. Its other features are

(1a) 1st line: slope or inclination w.r.t x-axis is given by m1=tanθ =-a/b

(1b) 2nd line: slope or inclination w.r.t x-axis is given by m2=tanφ =-c/d

(2) the column and row vectors are linearly independent.

(3) rank of the matrix is 2.

(4) Eigen values are λ1 = tr/2 + √[(tr/2)² - Δ] ;  λ2 = tr/2 - √[(tr/2)² - Δ] ;

(5) Eigen Vector 1 = 1*i + { (d-a)/2b + √[((d-a)/2b)² + c/b] } * j

     Eigen Vector 2 = 1*i + { (d-a)/2b - √[((d-a)/2b)² + c/b] } * j

(6) ||EV1|| = √( 1 + { (d-a)/2b + √[((d-a)/2b)² + c/b] }² ) ;

     ||EV2|| = √( 1 + { (d-a)/2b - √[((d-a)/2b)² + c/b] }² ) ;

(7) <EV1,EV2> = (1 - |c/b|). The angle between the 2 eigen vectors is given by cos⁻¹(<EV1,EV2> / (||EV1||*||EV2||))
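The closed forms for the eigenvalues and eigenvector slopes can be verified numerically. A minimal numpy sketch with arbitrary test entries (chosen so the eigenvalues are real):

```python
import numpy as np

# Check lambda = tr/2 +- sqrt((tr/2)^2 - det) and the eigenvector slope
# y/x = (d-a)/(2b) +- sqrt(((d-a)/(2b))^2 + c/b).
a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])
tr, det = a + d, a*d - b*c

lam1 = tr/2 + np.sqrt((tr/2)**2 - det)
lam2 = tr/2 - np.sqrt((tr/2)**2 - det)
assert np.allclose(sorted([lam1, lam2]), sorted(np.linalg.eigvals(A)))

root = np.sqrt(((d - a)/(2*b))**2 + c/b)
for lam, slope in [(lam1, (d - a)/(2*b) + root), (lam2, (d - a)/(2*b) - root)]:
    vec = np.array([1.0, slope])
    assert np.allclose(A @ vec, lam * vec)
```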

(8) Angle between the 2 straight lines:

     tan(φ - θ) = (tanφ - tanθ) / (1 + tanφ tanθ) = Δ / (ac+bd); if ac = -bd, (φ - θ) = 90°. If the matrix is singular, the angle between the 2 lines is 0°,

     tan(θ - φ) = (tanθ - tanφ) / (1 + tanφ tanθ) = -Δ / (ac+bd),

     which means there is only one straight line. If the anti-determinant is zero, Δ = 2ad = -2bc, and then

     tan(θ - φ) = -Δ / (ac+bd) = 2ab / (a² - b²)

     tan(φ - θ) = Δ / (ac+bd) = 2ab / (b² - a²)

     tan(φ + θ) = (tanφ + tanθ) / (1 - tanφ tanθ) = anti-Δ / (ac-bd), where anti-Δ = ad+bc. If ac = bd, (φ + θ) = 90°. If the matrix is anti-singular, i.e. anti-Δ = 0, the sum of the slope angles of the 2 lines is 0.

Suppose tanθ = m1 = -a/b and tanφ = m2 = -b/a.

Now we construct the new matrix elements by mapping. Since m1 is as per the formula, a --> a and b --> b; and since m2 = -c/d = -b/a, c --> b and d --> a.

The matrix is

 a    b
 b    a

The sum of the angles of the 2 straight lines w.r.t. the x-axis is given by tan(φ + θ) = anti-Δ / (ac - bd) = (a² + b²)/(ab - ba) = infinity (we have mapped old elements to new elements), hence φ + θ = 90° if tanφ tanθ = 1. Similarly, it can be proved that

                            φ - θ = 90° if tanφ tanθ = -1.

(9) The general equation of a pair of straight lines passing through the origin is given by

   a1x² + b1y² + 2h1xy = 0, where a1 = ac, b1 = bd, h1 = anti-Δ/2.

If m1 is the slope of the first straight line and m2 is the slope of the 2nd straight line, then

    m1 + m2 = -2h1/b1 = -anti-Δ / bd = -a/b - c/d; if anti-Δ = 0, m1 + m2 = 0 & a/b = -c/d

    m1m2 = a1/b1 = ac/bd

    m1 - m2 = √[(m1+m2)² - 4m1m2] = (ad-bc)/bd = Δ/bd

    m2 = -a/b , m1 = -c/d

(10) If anti-Δ = 0, the equation reduces to a1x² + b1y² = 0, which represents 2 straight lines passing through the origin whose slopes sum to zero.
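The slope relations above follow from the factorization (ax+by)(cx+dy) = 0 and can be checked directly. A short numpy sketch with arbitrary matrix entries:

```python
import numpy as np

# (ac)x^2 + (ad+bc)xy + (bd)y^2 = 0 factors as (ax+by)(cx+dy) = 0, a pair of
# lines through the origin with slopes m2 = -a/b and m1 = -c/d.  Check the
# sum / product / difference formulas.
a, b, c, d = 1.0, 2.0, 3.0, 5.0
det = a*d - b*c              # Delta
anti_det = a*d + b*c         # anti-Delta
m2, m1 = -a/b, -c/d

assert np.isclose(m1 + m2, -anti_det / (b*d))
assert np.isclose(m1 * m2, (a*c) / (b*d))
assert np.isclose(m1 - m2, det / (b*d))
```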

* A singular matrix A =

 a    b
 c    d

given by the matrix equation

 a    b   *   x   =   k       =  k *   1
 c    d       y       kc/a             c/a

represents the equation of a straight line not passing through the origin, where k is any real number. Its other features are

(1) slope or inclination w.r.t x-axis is given by m=tanθ =-a/b =-c/d

(2) y-intercept = k/b

      x-intercept = k/a

(3) the column and row vectors are linearly dependent.

(4) rank of the matrix is 1.

(5) Eigen values are λ1 = 0 ;  λ2 = trace = a+d ;

(6) Eigen Vector 1 = 1*i + (c/a)j

      Eigen Vector 2 = 1*i - (c/d)j

angle between 2 eigen vectors = cos⁻¹(<A,B> / (||A||*||B||)) = cos⁻¹[(1 - |c/b|) / (√[1+(c/a)²] * √[1+(c/d)²])]

in case of imaginary numbers, take only the real parts ignoring i.

When the matrix transforms the vector (x, y) to k*(1, c/a), the norm of the new vector is k√[1+(c/a)²]. If the old vector is an eigen vector with norm √[1+(c/a)²], the norm gets amplified by k; otherwise, irrespective of the status of the old vector, the norm of the new vector is constant at k√[1+(c/a)²]. Moreover, the intercepts create a vector M = (k/a)i + (k/b)j with norm k√(1/a² + 1/b²) = (k/ab)√(a² + b²), i.e. the hypotenuse of the right-angled triangle with sides a, b, amplified by k and divided by twice that triangle's area (ab). Equivalently, the right-angled triangle with sides 1/a, 1/b is transformed to a similar triangle with sides k/a, k/b, k being the amplification factor.

* A non-singular matrix A =

 a    b
 c    d

given by the matrix equation

 a    b   *   x   =   k1
 c    d       y       k2

represents the equation of a pair of different straight lines not passing through the origin, where k1, k2 are any real numbers. (If k2 = ck1/a, the matrix has to be singular for the equation to be consistent.) Its other features are

(1a) 1st line: slope or inclination w.r.t x-axis is given by m1=tanθ =-a/b

(1b) 2nd line: slope or inclination w.r.t x-axis is given by m2=tanφ =-c/d

(2a) 1st line:y-intercept = k1/b

                    x-intercept=  k1/a

(2b)2nd line:y-intercept = k2/d

                    x-intercept=  k2/c

 

(3) the column , row vectors are linearly independant.

(4) rank of the matrix is 2.

(5) Eigen values are λ1 = tr/2 + √[(tr/2)² - Δ] ;  λ2 = tr/2 - √[(tr/2)² - Δ] ;

(6) Eigen Vector 1 = 1*i + { (d-a)/2b + √[((d-a)/2b)² + c/b] } * j

     Eigen Vector 2 = 1*i + { (d-a)/2b - √[((d-a)/2b)² + c/b] } * j

(7) ||EV1|| = √( 1 + { (d-a)/2b + √[((d-a)/2b)² + c/b] }² ) ;

     ||EV2|| = √( 1 + { (d-a)/2b - √[((d-a)/2b)² + c/b] }² ) ;

(8)<EV1,EV2> =(1 - |c/b| )

(9) Angle between the 2 st.lines given by

     tan(φ - θ) = (tanφ - tanθ) / (1 + tanφ tanθ) = Δ / (ac+bd) ;

(10) Point of intersection of 2 st.lines (x1,y1) given by

       x1=(dk1-bk2)/Δ ;

       y1=-(ck1-ak2)/Δ ;

(11) Norm of vector V, connecting (0,0) & (x1,y1) is given by

||V|| = (1/Δ)√[(dk1-bk2)² + (ck1-ak2)²] = (1/Δ)√[k1²(c²+d²) + k2²(a²+b²) - 2k1k2(ac+bd)]
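Both the Cramer's-rule intersection point and the two forms of ||V|| can be checked numerically. A short numpy sketch with arbitrary non-singular test entries:

```python
import numpy as np

# Solve A[x;y] = [k1;k2] by Cramer's rule and compare the two equivalent
# expressions for the norm of the vector to the intersection point.
a, b, c, d = 1.0, 2.0, 3.0, 5.0
k1, k2 = 4.0, -1.0
det = a*d - b*c

x1 = (d*k1 - b*k2) / det
y1 = -(c*k1 - a*k2) / det
assert np.allclose(np.array([[a, b], [c, d]]) @ np.array([x1, y1]), [k1, k2])

norm1 = (1/det) * np.sqrt((d*k1 - b*k2)**2 + (c*k1 - a*k2)**2)
norm2 = (1/det) * np.sqrt(k1**2*(c**2 + d**2) + k2**2*(a**2 + b**2)
                          - 2*k1*k2*(a*c + b*d))
assert np.isclose(norm1, norm2)
assert np.isclose(abs(norm1), np.hypot(x1, y1))
```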

(a) If a² + b² = 1, c² + d² = 1, ac = -bd, then Δ = ±1 and

||V|| = (1/Δ)√(k1²+k2²) = ±√(k1²+k2²) (here there is no trace of the matrix elements)

If V is normalized, ||V|| = [1/√(k1²+k2²)][k1+k2]

(b) If a² + b² = 1, c² + d² = 1, ac = bd, then

||V|| = (1/Δ)√(k1² + k2² - 4k1k2bd)

If |b| = sinθ, |d| = cosθ (or vice versa), then

||V|| = (1/cos2θ)√(k1² + k2² - 2k1k2 sin2θ) if b, d are of the same sign

        = (1/cos2θ)√[k1² + k2² + 2k1k2 cos(π/2 + 2θ)]

||V|| = -(1/cos2θ)√(k1² + k2² + 2k1k2 sin2θ) if b, d are of different sign.

       = -(1/cos2θ)√[k1² + k2² - 2k1k2 cos(π/2 + 2θ)]

Thus if we take k1, k2 as two vectors with angle (π/2 + 2θ) between them, the resultant vector V1 and difference vector V2 are given by

 |V1| = √[k1² + k2² + 2k1k2 cos(π/2 + 2θ)] as per the parallelogram law of vector addition

 |V2| = √[k1² + k2² - 2k1k2 cos(π/2 + 2θ)] as per the parallelogram law of vector subtraction

 

thus ||V|| = (1/cos2θ)*|V1| if b,d are of same sign

thus ||V|| =-(1/cos2θ)*|V2| if b,d are of opposite sign

Nature of |V1| & |V2|

When θ = 0° ,     |V1| = |V2|,

When θ = 45° ,   |V1| < |V2|,

When θ = 90° ,   |V1| =|V2|,

When θ = 135° , |V1|  >|V2|,       

(12) if a = d, then (y/x) of the eigen vectors is ±√(c/b) and the ratio of the components of the resultant is (k2/k1). If √(c/b) = k2/k1, then what happens??

* A singular matrix A =

 a    b
 c    d

given by the matrix equations

 a    b   *   x   =   0      and    a    b   *   x   =   k
 c    d       y       k             c    d       y       0

represents the equations of two straight lines, one passing through the origin and the other not through the origin, parallel to the first straight line. Its other features are-

(a) 1st line slope=-a/b; 2nd line slope=-c/d =-a/b since matrix is singular.

(b) intercept of 2nd line on x-axis =k/c

                         intercept on y-axis=k/d

* A non-singular matrix A =

 a    b
 c    d

given by the matrix equations

 a    b   *   x   =   0      and    a    b   *   x   =   k
 c    d       y       k             c    d       y       0

represents the equations of two straight lines, one passing through the origin and the other not through the origin. Its other features are-

(a) 1st line slope=-a/b; 2nd line slope=-c/d 

(b) intercept of 2nd line on x-axis =k/c

                         intercept on y-axis=k/d

( c ) point of intersection between 2 lines

       x=-bk/ Δ

       y=ak/ Δ

(d) angle between 2 lines given by tanθ =- Δ / (ac+bd)

      if slope of first line is m1, 2nd line is m2, then tanθ = (m1-m2) / (1+m1m2)

* How to construct the equation of a pair of straight lines passing through the origin from a 2x2 matrix ?

Let A=

a    b

c    d

then (ac)x² + (bd)y² + (ad+bc)xy = 0 represents a pair of straight lines passing through the origin.

(a) when ad = bc, ac = -bd, the equation is b(y² - x²) + 2axy = 0 & the matrix is singular. Slope dy/dx = (bx - ay)/(ax + by)

      when ad = -bc, ac = bd, the equation is (y² + x²) = 0; if either of b, d is non-zero and one of (x, y) is real, the other is imaginary. Slope is dy/dx = -x/y

(b) when ad = bc, ac = bd, b(y² + x²) + 2axy = 0 & the matrix is singular. Slope is dy/dx = -(bx + ay)/(ax + by)

      when ad = -bc, ac = -bd, the equation is (y² - x²) = 0 if either of b, d is non-zero. Then either both x, y are real or both are imaginary. Slope is dy/dx = x/y

We have seen that (ac)x² + (ad+bc)xy + (bd)y² = 0 represents a pair of straight lines passing through the origin.

The usual equation of a pair of straight lines through the origin is a1x² + 2hxy + b1y² = 0, where

a1 = ac, 2h = ad+bc, b1 = bd

Now 2h = ad + bc = ad + (b1/d)(a1/a), or a²d² - 2had + a1b1 = 0, or ad = h ± √(h² - a1b1) = k

Hence, given the equation a1x² + 2hxy + b1y² = 0, one can construct the 2x2 matrix A, where a can be chosen arbitrarily:

A =     a              b1a/k

           a1/a         k/a         and determinant Δ = k - (a1b1/k). If k² = a1b1, the matrix is singular.
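This reconstruction can be tested by round-tripping: start from a known matrix, form the conic coefficients, rebuild a matrix, and compare. A minimal numpy sketch with arbitrary test entries:

```python
import numpy as np

# Given a1 x^2 + 2h xy + b1 y^2 = 0, ad and bc are the roots of
# t^2 - 2ht + a1*b1 = 0, and a may be chosen freely.
a, b, c, d = 1.0, 2.0, 3.0, 5.0
a1, b1, h = a*c, b*d, (a*d + b*c)/2

k = h + np.sqrt(h**2 - a1*b1)          # one of the two roots (ad or bc)
A = np.array([[a, b1*a/k], [a1/a, k/a]])

# the rebuilt matrix yields the same conic coefficients
ra, rb = A[0]
rc, rd = A[1]
assert np.isclose(ra*rc, a1) and np.isclose(rb*rd, b1)
assert np.isclose(ra*rd + rb*rc, 2*h)
# determinant as stated: Delta = k - a1*b1/k
assert np.isclose(np.linalg.det(A), k - a1*b1/k)
```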

The equation a1x² + 2hxy + b1y² = 0 represents a pair of straight lines passing through the origin, where a1 = ac, b1 = bd, h1 = (anti-Δ)/2. When anti-Δ = 0, it reduces to a pair of straight lines the sum of whose angles w.r.t. the x-axis is zero, since m1 = -m2, i.e. m1 + m2 = 0, where m1, m2 are the slopes of lines 1 and 2 respectively, and the equation is

a1x² + b1y² = 0, or a²x² - b²y² = 0,

or (ax+by)(ax-by) = 0

 

                                       

* matrix A =

a      b

c      d

and the conditions are

1. a² + b² = 1 .....(1)

2. c² + d² = 1 .....(2)

3. ac = ±bd .......(3)

then it follows that

(a) a=±d

(b) b=±c

Proof: ac = ±bd, so a²c² = b²d², or (1-b²)(1-d²) = b²d², or b² + d² = 1 ....(4). Similarly, it can be proved that a² + c² = 1 ....(5). From equations (1) & (5) it follows

that b = ±c. Similarly, from equations (1) & (4), it follows that a = ±d.

(c) |a|, |b|, |c|, |d| each lie in [0,1]

(d) the determinant lies in [-1,1]

(e) A = f(any one of a, b, c, d), i.e. a function of a single variable.

1st case: if a = d, b = -c, then ac = -bd, Δ = b² + d² = 1 ; Aᵀ = A⁻¹, hence the matrices are orthogonal. Eigen values are complex numbers for real a, b, c, d.

If a = -d, b = c, then ac = -bd, Δ = -(b² + d²) = -1 ; Aᵀ = A⁻¹, hence the matrices are orthogonal. Eigen values ±1.

Here |a|, |b|, |c|, |d| are each less than or equal to 1.

2nd case: but if a = d, b = c, ac = bd, Δ = d² - b² = 1 - 2b² ; eigen values are d ± b. Since d, b are both numerically ≤ 1, Δ lies between -1 and 1. Aᵀ = A. (y/x) of the eigen vectors = ±1 (++++, ----, +-+-, -+-+ cases)

But if a = -d, b = -c, ac = bd, Δ = -(d² - b²) = -(1 - 2b²) ; eigen values ±√(d² - b²). Since d, b are both numerically ≤ 1, Δ again lies between -1 and 1. (y/x) of the eigen vectors = d/b ± √[(d/b)² - 1]; when |d| < |b|, (y/x) is a complex number. (++--, +-+-, -+-+, --++ cases). Orange patterns are observed in both sub-groups of the 2nd case.

If we take a = d = cosθ and b = c = sinθ, then Δ = ±(cos²θ - sin²θ) = ±cos2θ. Its periodicity is 180° & its pattern is similar to cosθ. At θ = 0° & θ = 180°, Δ = +1, and midway in between, i.e. at θ = 90°, it is -1. Plotting Δ = f(cos2θ) gives the blue graph; plotting Δ = f(cosθ) gives the red graph.

Remark: the main difference between the 1st case and the 2nd case is that in the first case the determinant is +1 or -1, a constant irrespective of the values of a, b. In the 2nd case the determinant varies with a, b and lies in [-1,0] or [0,+1], so the overall range is [-1,+1]. Typical examples for the 2nd case are:-

 1  0                      1/√2    1/√2
 0  1   has Δ = 1, and     1/√2    1/√2   has Δ = 0

similarly

 0  1                      -1/√2   -1/√2
 1  0   has Δ = -1, and    -1/√2   -1/√2   has Δ = 0

* matrix A =

a      b

c      d

and the conditions are

1. a² + b² = 1 .....(1)

2. c² + d² = 1 .....(2)

3. ac = ±bd .......(3)

From equation (3), √(1-d²)*a = ±√(1-a²)*d, or

 (1-d²)*a = ±√(1-d²)*√(1-a²)*d ....(6)

Δ = ad - bc = ad - √(1-d²)*√(1-a²), or

dΔ = ad² - d*√(1-d²)*√(1-a²) = ad² ± (1-d²)*a, which implies that

dΔ = ad² + a - ad² = a .....(7), or

dΔ = ad² - a + ad² = 2ad² - a, i.e. 2ad² - Δd - a = 0 ......(8). This is a quadratic equation in d, and the solution can be found if a, Δ are known.

if a = d, from (7), Δ = +1 (rotation matrix)

if a = -d, from (7), Δ = -1 (reflection matrix)

if a = d, from (8), Δ = 2d² - 1 (non-orthogonal matrix akin to rotation)

                          d = ±√[(1+Δ)/2] .....(9)

if a = -d, from (8), Δ = -(2d² - 1) (non-orthogonal matrix akin to reflection)

                           d = ±√[(1-Δ)/2] ......(10)

Please remember that, to find either d or Δ given the other, |d| ≤ 1 and Δ ∈ [0,1] for equation (9),

and                                                          |d| ≤ 1 and Δ ∈ [-1,0] for equation (10).

Accordingly, the appropriate value is to be put in to get the correct result.

Dividing equation (8) by a and putting x = d√2, Δ = a√2, equation (8) becomes x² - x - 1 = 0 ......(11)

Solutions are

φ1 = (1+√5)/2 = 1.6180

φ2 = (1-√5)/2 = -0.6180

φ1φ2 =-1

φ1 + φ2=1

φ1 - φ2=√5

φ - 1/φ =1

The matrix M =  1   1
                1   0

has eigen values φ1 and φ2. The corresponding eigen vectors are

μ =  φ1        ν =  φ2
      1              1

M can be diagonalized by the similarity matrix

S =  φ1   φ2
      1    1

and the diagonal matrix is

Λ =  φ1    0
      0    φ2

Fibonacci numbers are represented by the matrix equation

 Fk+2   =  M *  Fk+1
 Fk+1           Fk

If a + b = 0 and φ1*a + φ2*b = 1, then a = -b = 1/√5
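Both the eigenvalues of M and the Fibonacci recursion it generates can be checked numerically:

```python
import numpy as np

# M = [[1,1],[1,0]] has eigenvalues phi1, phi2, and M^n applied to
# (F1, F0) = (1, 0) gives (F_{n+1}, F_n).
M = np.array([[1, 1], [1, 0]])
phi1 = (1 + np.sqrt(5)) / 2
phi2 = (1 - np.sqrt(5)) / 2
assert np.allclose(sorted(np.linalg.eigvals(M)), sorted([phi1, phi2]))

v = np.array([1, 0])          # (F1, F0)
fib = [0, 1]
for _ in range(10):
    v = M @ v
    fib.append(fib[-1] + fib[-2])
assert v[1] == fib[10] == 55  # F10
```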

* We take up the case of matrix A=

x  y

y  x

with the condition that x² + y² = 1; the matrix is akin to a rotation matrix.

If we write the matrix equation

A * x = 1

      y     1, then it represents a pair of conics , one a unit circle and the other a unit rectangular hyperbola ( A rectangular hyperbola is a special case of general hyperbola whose asymptotes are perpendicular to each other. The circle touches the vertex of the hyperbola. On Expanding,

x  y  * x =1

y  x     y   1

or  x2 + y2 =1 (equation of a circle with center at origin and unit radius)

and 2xy= 1 (equation of rectangular hyperbola with unit semi-major axis) & foci at (√2,0) & (-√2,0) , eccentricity √2, and directrix x=1/√2

Rectangular hyperbola is the locus of the point M such that difference of its distance from two foci is √2  times the distance between the foci.

We can rewrite the above matrix equation as

 cosθ   sinθ  *  cosθ  =  1
 sinθ   cosθ     sinθ     1

If we take vector X = cosθ*i + sinθ*j with norm 1 and vector Y = i + j with norm √2, this means the operator acting on the vector rotates it from inclination θ (w.r.t. the x-axis) to 45° and stretches it by √2, irrespective of the initial angle. If we apply a scaling factor K, the equation becomes

 K² *  cosθ   sinθ  *  cosθ  =  K² *  1
       sinθ   cosθ     sinθ           1

The eigen value of the matrices are λ1 = cosθ + sinθ , λ2 = cosθ - sinθ

to find maximum value of λ1, dλ / dθ =0 or -sinθ +cosθ =0 or θ=45°

to find minimum value of λ2 , dλ / dθ =0, or sinθ +cosθ =0 or θ=135°

so maximum value is √2 and minimum value is -√2.

(y/x) of the eigen vectors is ±1, so the vectors are

 1   or    1
 1        -1

Hence the eigen equations are

 cosθ  sinθ  *  1  =  (cosθ + sinθ) *  1
 sinθ  cosθ     1                      1    and

 cosθ  sinθ  *   1  =  (cosθ - sinθ) *   1
 sinθ  cosθ     -1                      -1

with the eigen values hovering between √2 and -√2.
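The fixed eigenvectors (1,1), (1,-1) and the θ-dependent eigenvalues can be checked for a sweep of angles:

```python
import numpy as np

# [[cos t, sin t],[sin t, cos t]] has eigenvectors (1,1) and (1,-1) for every t,
# with eigenvalues cos t + sin t and cos t - sin t, both lying in [-sqrt(2), sqrt(2)].
for t in np.linspace(0, 2*np.pi, 17):
    A = np.array([[np.cos(t), np.sin(t)], [np.sin(t), np.cos(t)]])
    assert np.allclose(A @ [1, 1], (np.cos(t) + np.sin(t)) * np.array([1, 1]))
    assert np.allclose(A @ [1, -1], (np.cos(t) - np.sin(t)) * np.array([1, -1]))
    assert abs(np.cos(t) + np.sin(t)) <= np.sqrt(2) + 1e-12
```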

The matrix

-x  -y

-y  -x

is similar in behavior to the above matrix.

If we take the adjugate matrix, i.e.

  x  -y  *  x  =  1
 -y   x     y     0

or x² - y² = 1 (equation of a hyperbola with center at the origin and unit semi-major axis)

 cosθ  -sinθ  *  1  =  (cosθ - sinθ) *  1
 -sinθ  cosθ     1                      1    and

 cosθ  -sinθ  *   1  =  (cosθ + sinθ) *   1
 -sinθ  cosθ     -1                       -1

It shall be seen that eigen value of the adjoint one is λ2 if that of the original one was λ1 and vice versa.

 

 

Figure below showing 2 rectangular hyperbolas 2xy = ±a², 2 hyperbolas x² - y² = ±a², and the circle x² + y² = a².

 

* matrix A =

a      b       

c      d

and the conditions are

1. a² + b² = 1 .....(1)

2. c² + d² = 1 .....(2)

3. ac = ±bd .......(3)

then it follows that

(a) a=±d

(b) b=±c

(c) A = f(any one of a, b, c, d), i.e. a function of a single variable

(d) a, b, c, d ∈ [-1,1], subject to conditions (1), (2)

(e) Δ ∈ [-1,1]

When ac = bd, it is a non-orthogonal matrix;

when ac = -bd, it is an orthogonal matrix.

Comparison of Orthogonal & Non-Orthogonal Matrices

Orthogonal                                                                         Non- Orthogonal

(1) Δ = ±1 irrespective of the a / θ value                         (1) Δ = ±(a² - b²) = ±cos2θ, dependent on the a / θ value, the limiting values being +1 or -1

(2) signature is +++- or ---+                                      (2) signature is ++++, ----, ++--

(3) vector norm is preserved                                       (3) vector norm is not preserved

Norm before application of the operator                            Norm before application of the operator

√(x² + y²)                                                         √(x² + y²)

Norm after application of the operator                             Norm after application of the operator

√[(xcosθ-ysinθ)² + (xsinθ+ycosθ)²] = √(x² + y²)                    √[(xcosθ+ysinθ)² + (xsinθ+ycosθ)²] = √[(x² + y²) + 2xy sin2θ]

                                                                   minimum value = |x-y| at θ = 135° ;
                                                                   maximum value = |x+y| at θ = 45° ; the phase difference is 90°
                                                                   only when θ = 0° or θ = 90° is it √(x² + y²)

(4) vector anti-norm is not preserved                              (4) vector anti-norm is not preserved

Anti-norm before application of the operator                       Anti-norm before application of the operator

√(y² - x²)                                                         √(y² - x²)

Anti-norm after application of the operator                        Anti-norm after application of the operator

√[(xsinθ+ycosθ)² - (xcosθ-ysinθ)²] =                               √[(xsinθ+ycosθ)² - (xcosθ+ysinθ)²] =

√[cos2θ(y² - x²) + 2xy sin2θ]                                      √[cos2θ(y² - x²)]

(5) Rotational Matrix1*Rotational Matrix2=                      (5) akin to Rotational Matrix1*akin to Rotational Matrix2=

     Rotational Matrix3                                                              akin to Rotational Matrix3

a1   -b1  * a2   -b2 = a1a2-b1b2    -a1b2-a2b1                     a1   b1  * a2    b2 = a1a2+b1b2    a1b2+a2b1

b1    a1     b2    a2    a1b2+a2b1    a1a2 - b1b2                     b1   a1     b2   a2    a1b2+a2b1    a1a2 + b1b2

= X1    -Y1                                                                             = X2    Y1

   Y1     X1                                                                                 Y1    X2

(6) Reflection Matrix1*Reflection Matrix2=                      (6) akin to Reflection Matrix1*akin to Reflection Matrix2=

      Rotational Matrix                                                                akin to Rotational Matrix

a1   b1  * a2   b2 = a1a2+b1b2     a1b2-a2b1                          a1   b1  * a2   b2 = a1a2-b1b2     a1b2-a2b1

b1 -a1     b2  -a2    -a1b2+a2b1   a1a2+b1b2                          -b1 -a1   -b2  -a2    a1b2-a2b1     a1a2-b1b2

=X2    Y2                                                                                  =X1    Y2

-Y2     X2                                                                                   Y2     X1

(7) Rotation matrices commute .                                         (7) Akin to Rotation matrices commute .

(8) Reflection matrices do not commute/anti-commute      (8) Akin to Reflection matrices do not commute/anti-commute

a1 b1 * a2   -b2  = a1a2-b1b2   -(a1b2+a2b1)                         One can work out in similar manner

b1-a1  -b2  -a2      a1b2+a2b1     a1a2-b1b2 

a2   -b2 *a1   b1  =a1a2-b1b2     (a1b2+a2b1)

-b2 -a2    b1 -a1  -(a1b2+a2b1)     a1a2-b1b2

(9) Rotation * Reflection =Reflection                                  (9)akin to Rotation *akin to  Reflection =akin to Reflection

(10) Reflection  * Rotation =Reflection                               (10) akin to Reflection  *akin to  Rotation =akin to Reflection
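Items (5)-(10) can be verified numerically. A small Python sketch, using the parametrizations rotation = [[cosθ, -sinθ], [sinθ, cosθ]] and reflection = [[cosθ, sinθ], [sinθ, -cosθ]]; the test angles 0.4 and 1.1 are arbitrary:

```python
import math

def mul(A, B):
    # 2x2 matrix product
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def rot(t):   # orthogonal rotation
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

def refl(t):  # orthogonal reflection
    return ((math.cos(t), math.sin(t)), (math.sin(t), -math.cos(t)))

def close(A, B):
    return all(math.isclose(A[i][j], B[i][j], abs_tol=1e-12)
               for i in range(2) for j in range(2))

t1, t2 = 0.4, 1.1
# (5)/(7): product of rotations is a rotation, and rotations commute
assert close(mul(rot(t1), rot(t2)), rot(t1 + t2))
assert close(mul(rot(t1), rot(t2)), mul(rot(t2), rot(t1)))
# (6): product of two reflections is a rotation
assert close(mul(refl(t1), refl(t2)), rot(t1 - t2))
# (8): reflections do not commute in general
assert not close(mul(refl(t1), refl(t2)), mul(refl(t2), refl(t1)))
# (9)/(10): rotation*reflection is a reflection (symmetric and traceless)
P = mul(rot(t1), refl(t2))
assert math.isclose(P[0][1], P[1][0])
assert math.isclose(P[0][0] + P[1][1], 0.0, abs_tol=1e-12)
```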

(11) Rotation matrices do not commute with Reflection matrices           (11) akin to Rotation matrices do not commute with akin to Reflection matrices

a1 -b1 * a2 b2  = a1a2-b1b2      a1b2+a2b1                                                 a1 b1 * a2 b2  = a1a2-b1b2      a1b2-a2b1                                                  

b1 a1    b2 -a2     a1b2+a2b1     -a1a2+b1b2                                                b1 a1  -b2 -a2     -a1b2+a2b1     -a1a2+b1b2

a2   b2 *    a1 -b1  =  a1a2+b1b2     a1b2 -a2b1                                           a2   b2 *   a1 b1 = a1a2+b1b2     a1b2 +a2b1

b2 -a2       b1  a1       a1b2-a2b1    -(a1a2+b1b2)                                        -b2 -a2    b1  a1    -(a1b2+a2b1)    -(a1a2+b1b2)

(12) Commutation of above rotation & reflection matrices is               (12) Commutation of above akin-to-rotation & akin-to-reflection matrices is

-2b1b2   2a2b1                                                                                            -2b1b2   -2a2b1

2a2b1    2b1b2 which is reflection                                                              2a2b1    2b1b2 which is akin to reflection

(13) anti- commutation  of above rotation & reflection is                       (13) anti- commutation  of above rotation & reflection is

2a1a2   2a1b2                                                                                             2a1a2   2a1b2

2a1b2  -2a1a2 which is reflection                                                             -2a1b2  -2a1a2   which is akin to reflection

(14) AT = A-1                                                                                          (14)AT ≠ A-1;

 In addition, for reflection matrices A=AT                                                    However, for akin to rotation matrices,  A=AT

hence A-1   = A, so these matrices are involutory.                                        For akin-to-reflection matrices, A*A = ±cos2θ*I where I is the identity matrix. Hence A*A is a scalar matrix,

                                                                                                                                 which means that repetition of the reflection scales the identity matrix by a factor of cos2θ.

(15) E.value Rotation Matrix:           x ± iy                           (15) E.value Akin to Rotation Matrix:           x ± y

                     Reflection Matrix:        ±1                                                   Akin to Reflection Matrix:        ±√(x2 - y2)

(16) (y/x)E.vector Rotation Matrix:  ± i                                (16) (y/x) E.vector Akin to Rotation Matrix:     ±1

                     Reflection Matrix:  -x/y ±√[ x²/y² +1]                                           Akin to Reflection Matrix:  -x/y ±√[ x²/y² -1] 

(17) Reflection matrices are traceless.                                   (17) Akin to Reflection matrices are traceless.

        Rotation matrices are anti-traceless.                                      Akin to Rotation matrices are anti-traceless. 

(18) All orthogonal matrices form a group with respect to     (18) All non-orthogonal matrices as defined above do not form any group under matrix multiplication ,

       matrix multiplication. n x n matrices form the O(n) group.            neither of the akin-to-rotation nor the akin-to-reflection type. Example:-

       All rotational matrices form special Orthogonal group            A=   cosθ1    sinθ1    B= cosθ2      -sinθ2     C=A*B=   cos(θ1-θ2)      -sin(θ1+θ2)

       called SO(n). Group is abelian when n=even.                                  sinθ1    cosθ1          sinθ2       -cosθ2                     -sin(θ1+θ2)      cos(θ1-θ2)  

                                                                                                         Here, in the C matrix, the following group rule is violated: a² + b² ≠ 1 and c² + d² ≠ 1. The

                                                                                                          situation becomes different if the angle is replaced by a hyperbolic angle. However, as θ → 0,

                                                                                                          the matrices can be approximated as forming a group w.r.t. matrix multiplication. In the

                                                                                                          above case, a² + b² = c² + d² = ± [cos²(θ1-θ2) + sin²(θ1+θ2)]. If we replace the conditions

                                                                                                          a² + b² = 1 and c² + d² = 1 with the above, the other 2 conditions remaining the same, the matrices

                                                                                                          form a pseudo-group with matrix multiplication. However, they are not an abelian pseudo

                                                                                                          group because

                                                                                                         C1=B*A =  cos(θ1-θ2)           sin(θ1+θ2)

                                                                                                                             sin(θ1+θ2)           cos(θ1-θ2) and hence C1  ≠ C.

                                                                                                          This group can be termed a Pseudo Group w.r.t. the Orthogonal Group because only 1

                                                                                                           condition out of the 3 group-formation conditions is different, i.e. a² + b² ≠ 1 and

                                                                                                                                 c² + d² ≠ 1 for the product matrix.

                                                                                                          = ± [cos²(θ1-θ2) + sin²(θ1+θ2)] ∈ [1,2] or [-2,-1], i.e. between [-2,+2]. The value of

                                                                                                          Δ of A,B  as well as C,C1 hover between [-1,+1]

(19) Orthogonal matrices of dimension n form group                   (19) Matrices akin to rotation form the Lorentz Group.

       O(n) which is abelian in even no. of dimensions.

       Only rotations form a Special orthogonal Group

       SO(n)

Definition of Pseudo Group (2x2 matrix): A set of elements together with a binary operation forms a pseudo group w.r.t. a main group if 1. the elements follow the closure law and the associativity law, and 2. corresponding to each element there is an adjoint element belonging to the set, such that the binary operation of the element with its adjoint produces a scalar matrix whose scalar entry is the square of the matrix determinant, and this scalar matrix also belongs to the set.

Further Analysis of Non-Preservation of Norm in case of Non-Orthogonal Matrices (item no. 3 above):

Vector Norm after application of operator is ||V|| =√[(x2 + y2)+2xysin2θ]=√[(x2 + y2)+2xycos(π/2 - 2θ)]

Let φ =π/2 - 2θ, then

||V|| =√[(x2 + y2)+2xycosφ], which is nothing but the parallelogram law of addition of vectors. Further treatment of this resultant is given in parallelogram1.htm.

When  φ = 180°  or θ=-π/4 =-45° ,  ||V|| =(x - y)--- it contracts up to minimum value  upon being multiplied by the operator.( Vector contraction)

When  φ = 90°  or θ=0° ,  ||V|| =√(x2 + y2)--- it remains unchanged upon being multiplied by the operator.( It is identity transformation)

When  φ = 0°  or θ=π/4 =45° ,  ||V|| =(x + y)--- it expands up to maximum value  upon being multiplied by the operator.( Vector dilation)

The cycle is 180° for θ and 360° for φ.

Matrix Equation of the Vector V is given by

1     cosφ    *   x    =  x+ycosφ

0     sinφ         y           ysinφ

where the non-orthogonal matrix

cosθ     sinθ   gets mapped to   1     cosφ     where      φ =(π/2 - 2θ)

sinθ     cosθ                               0     sinφ

The matrix M= 1    cosφ

                         0     sinφ    which is an upper triangular matrix whose eigen values are the diagonal elements. It also continuously evolves from a singular matrix to the identity matrix I as φ changes from 0 degrees to 90 degrees. The matrix behaves in a similar manner if it is a lower triangular matrix, which it can be.
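The claims about the triangular matrix M can be checked numerically; a minimal Python sketch (the test angle 30° is arbitrary):

```python
import math

def M(phi):
    # upper-triangular mapped matrix [[1, cos phi], [0, sin phi]]
    return ((1.0, math.cos(phi)), (0.0, math.sin(phi)))

def eig_2x2(A):
    # eigenvalues from trace and determinant: (tr/2) +/- sqrt((tr/2)^2 - det)
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = math.sqrt((tr/2)**2 - det)
    return (tr/2 + disc, tr/2 - disc)

# for a triangular matrix the eigenvalues are the diagonal entries
phi = math.radians(30)
l1, l2 = eig_2x2(M(phi))
assert math.isclose(max(l1, l2), 1.0)
assert math.isclose(min(l1, l2), math.sin(phi))

# M evolves from a singular matrix (phi = 0) to the identity (phi = 90 deg)
M0 = M(0.0)
assert math.isclose(M0[0][0]*M0[1][1] - M0[0][1]*M0[1][0], 0.0)
M90 = M(math.pi/2)
assert math.isclose(M90[0][0], 1.0) and math.isclose(M90[1][1], 1.0)
assert math.isclose(M90[0][1], 0.0, abs_tol=1e-12)
```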

Is the Representation of the SO(2) Group Fully Reducible?

Let Rotation matrix R(x) =

cos x   -sinx

sin x    cosx

Character of the group representation =ξ(x) = Tr[R(x)] = 2cosx = e ix +e -ix ;

Over the real numbers, the representation is irreducible. Over the complex numbers, the representation is completely reducible, being a direct sum of 2 one-dimensional irreducible representations.

ξ(x) =ξ1(x)   + ξ-1(x)

The representations of su(2) correspond to the representations of so(3), the two Lie algebras being isomorphic.

Identity Matrix: those square matrices whose diagonal elements are all the same, either 1 or -1, and whose off-diagonal elements are zero. The matrix with every diagonal element -1 is the negative identity matrix -I.

If the matrix order n is even, i.e. n = 2, 4, 6, ..., the determinant is +1 whether each diagonal element is +1 or -1; if the order is odd, i.e. n = 3, 5, 7, ..., and the diagonal elements are all -1, the determinant is -1.

In other words, -I = (-1)^(2n+1) * I where n is an integer.

Then if

A=  1  0   and B= -1   0 =-A

       0  1                 0  -1

then B= -1  0  =(-1) * 1   0  =-A = -I

               0  -1               0   1

 

Operating A on a 2-d vector with components (x,y) yields (x,y), whereas operating B on the same vector yields (-x,-y), which is the inversion of the former. Both become the same only when both reference axes are also inverted (that is what the power n=2 signifies) or one reference axis is inverted twice while the other reference axis is unchanged.

Let us go deeper into the nature of matrix B.

B=

(a) cos π   sinπ    ..... akin to translation, i.e. Lorentz boost ; determinant Δ = cos²π - sin²π = 1 or cosh²π - sinh²π = 1

      sinπ    cosπ

(b) cos π   -sinπ   ..... rotation anti-clockwise ; determinant Δ = cos²π + sin²π = 1

      sinπ    cosπ

(c) cos π   sinπ    ...... rotation clockwise ; determinant Δ = cos²π + sin²π = 1

     -sinπ    cosπ

(d) cos π   -sinπ

     -sinπ    cosπ   ..... akin to translation, i.e. Lorentz boost ; determinant Δ = cos²π - sin²π = 1 or cosh²π - sinh²π = 1

The operator B acting on a 2-d state vector can affect it in 4 different ways, all leading to the same result when θ=π, which is analogous to the triple point of water, a point where all 3 phases, i.e. the gaseous, liquid and solid phases, meet while otherwise charting their individual behavioral curves.

Involutory Matrices:

Matrix A is involutory if A² = I, i.e. A is its own inverse.

* If A,B are 2 involutory matrices and commute with each other i.e. [A,B]=0, then AB is also involutory.

* A square matrix A is involutory, i.e. A² = I, iff (A+I)/2 is an idempotent matrix.

* Determinant of an involutory matrix is either +1 or -1.

* Signature matrices are involutory. Signature matrices are diagonal matrices whose diagonal elements are either +1 or -1.

* A 2x2 real matrix   a     b      is involutory if a2 +bc =1

                                  c    -a

* Pauli matrices  are involutory.

*The following matrices are also involutory, where ω is a complex cube root of unity, ω³ = 1 (discovered while studying the similarity matrix of Z3):

1    0           0       ω      1     0 

ω  -1  ,       ω2      0   ,  ω2  -1

* All involutory matrices are square root of identity matrix.

* A symmetric involutory matrix is an orthogonal matrix and it represents an isometry i.e. preserves Euclidean distance upon linear transformation.

* If A is an involutory matrix of order n, its eigenvalues are all ±1, so its trace is an integer with the same parity as n:

   trace is odd if n is odd

   trace is even if n is even.
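Several of the bullet points above can be verified with exact integer arithmetic; a minimal Python sketch (the sample values a = 3, b = 4 are arbitrary):

```python
def mul(A, B):
    # 2x2 matrix product
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

I = ((1, 0), (0, 1))

def is_involutory(A):
    return mul(A, A) == I

# [[a, b], [c, -a]] is involutory exactly when a^2 + bc = 1
a, b = 3, 4
c = (1 - a*a) // b          # pick c so that a^2 + bc = 1  (here c = -2)
A = ((a, b), (c, -a))
assert a*a + b*c == 1 and is_involutory(A)

# signature matrices and the real Pauli matrices are involutory
assert is_involutory(((1, 0), (0, -1)))   # sigma_3 (also a signature matrix)
assert is_involutory(((0, 1), (1, 0)))    # sigma_1
assert is_involutory(((-1, 0), (0, -1)))  # -I

# two commuting involutory matrices have an involutory product
B = ((-1, 0), (0, -1))
assert mul(A, B) == mul(B, A) and is_involutory(mul(A, B))
```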

Exchange Matrix J: the matrices whose counter-diagonal elements are all 1 and whose remaining elements are zero. Trace is 1 if the order n is odd and 0 if it is even.

Any matrix A satisfying the condition AJ=JA is centrosymmetric

                                                             AJ=JAT is persymmetric

* 2x2 centrosymmetric matrices are    a      b

                                                             b      a

   3x3 centrosymmetric matrices are    a    b    c

                                                             d    e    d

                                                             c    b    a

 

Lorentz Transformation Boost Matrix :

L=

γ      βγ

βγ    γ     and trace=2γ   , determinant=Δ = 1

where  β = v/c     and  γ =1 / √( 1-  v2/c2 )

β = (-1, 1 ) ;   γ = (1, ∞ ), (-1, -∞ )and βγ = [-∞,  ∞]

When β > 1 or  β < -1 ,  γ=± i(β2 -1)-1  and range of γ is   γ=i(±∞,0) and βγ = i(±∞  0)

When  β =± i |v| / |c|, γ= (-1, +1)

when |v|=0,      γ=±1

when v=±∞,     γ= 0

irrespective of value of c.

Eigen values are

λ1 =(tr/2) +√[(tr/2)2 -Δ] = γ +√[γ2 -1]

λ2=(tr/2) -√ [(tr/2)2 -Δ]=  γ -√[γ2 -1]

Range of

λ1 ∈ [1, ∞)

λ2 ∈ (0, 1] , with λ1λ2 = Δ = 1
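The eigen value formulas can be checked numerically; a Python sketch assuming the sample value β = 0.6 (so γ = 1.25), and using the standard fact that the boost eigen values are e^(±ξ) with rapidity ξ = artanh β:

```python
import math

beta = 0.6
gamma = 1 / math.sqrt(1 - beta**2)           # = 1.25 for beta = 0.6
L = ((gamma, beta*gamma), (beta*gamma, gamma))

tr = L[0][0] + L[1][1]
det = L[0][0]*L[1][1] - L[0][1]*L[1][0]
assert math.isclose(det, 1.0)                # the boost has unit determinant

l1 = tr/2 + math.sqrt((tr/2)**2 - det)       # gamma + sqrt(gamma^2 - 1)
l2 = tr/2 - math.sqrt((tr/2)**2 - det)       # gamma - sqrt(gamma^2 - 1)
assert math.isclose(l1, gamma + math.sqrt(gamma**2 - 1))
assert math.isclose(l1 * l2, 1.0)            # lambda1 * lambda2 = det = 1

# the eigenvalues are e^(+xi) and e^(-xi), xi = artanh(beta) being the rapidity
xi = math.atanh(beta)
assert math.isclose(l1, math.exp(xi))
assert math.isclose(l2, math.exp(-xi))
```

Since λ1λ2 = 1 with λ1 ≥ 1, the second eigen value lies in (0, 1].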

L is a matrix of the form

a    b

c   d   where

(a) determinant =1                     orthogonal determinant is +1 or -1

(b) a2 - b2=a2 - c2   =1              orthogonal   a2 + b2=a2 + c2   =  1

(c) d2 - b2=d2 - c2   =1             orthogonal   d2 + b2=d2 + c2   =  1

(d) L = LT ≠ L-1 ;                         orthogonal   LT = L-1 ;  

 ab=cd and ac=bd                      orthogonal ab=-cd and ac=-bd

In special theory of relativity, the transformation of space and time co-ordinates is given by

γ      βγ  *  x  =  x'

βγ    γ       t        t'

since the dimension of space and time are different, we bring both x and t to same dimension and rewrite the equation as

γ      βγ  *  x  =     x'

βγ    γ       ct       ct'

If we can recast L as a rotational matrix , it shall be easier to analyse and fortunately there is a way to do it by multiplying off diagonal elements by i and equation becomes

γ      -iβγ  *  x  =     x'

iβγ    γ       ict       ict'

But this brings about an interesting facet of the equation. Time is cast in the imaginary axis and L becomes a Hermitian orthogonal matrix. The eigen values remain the same as above real matrix since there is no change of trace and determinant.

λ1 =(tr/2) +√[(tr/2)2 -Δ] = γ +√[γ2 -1]

λ2=(tr/2) -√ [(tr/2)2 -Δ]=  γ -√[γ2 -1]

γ      -iβγ  =   γ *  1   0   + βγ *  0    -i  =  γσ0  +  βγ * σ2  = γ [σ0 + βσ2 ]

iβγ    γ                 0   1               i     0

Hence T =γ [σ0 + βσ2 ] ..........(1)

If  A is a 3-D vector with components (x,y,z), then A=xi +yj+zk

and A.σ = σ1x + σ2y + σ3z.

When A is a 1-D vector having component y, K =A.σ=σ2y ..........(2)

Comparing (1) & (2) , T = γ [σ0 + yσ2 ] = γ [σ0 + K ]   where y=β

[σ0 + K ] is a  1-D quaternion type structure with σ0 as  a scalar component of a 1-D quaternion.

Matrices of the form

1  0   0  0           where R ∊ SO(3)

0      R

0

0

represent a class of Lorentz transformations whose action on a 4-vector is a rotation of the spatial components while keeping the time component intact. These matrices form a subgroup of the Lorentz group. The subgroup is a faithful, reducible representation of SO(3).

The second class of Lorentz transformations consists of a boost, or spatial Lorentz transformation. For such a boost along a fixed axis, say the x-axis, with reduced velocity β (= v/c) ∊ ]-1,1[, the transformation matrix is

Λ =

coshξ      sinhξ   0     0

sinhξ      coshξ   0     0

0               0       1     0

0              0        0     1    where ξ (rapidity)= artanhβ ∊ R

The Lorentz boosts do not form a group: successive Lorentz boosts along non-parallel directions do not yield a boost but a combination of a boost and a spatial rotation. However, Lorentz boosts along any fixed direction do form a subgroup of the Lorentz group which is isomorphic to (R, +).
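The fixed-direction subgroup property amounts to rapidity addition; a minimal Python check (the β values 0.5 and 0.6 are arbitrary):

```python
import math

def boost(xi):
    # 2x2 boost block [[cosh xi, sinh xi], [sinh xi, cosh xi]]
    return ((math.cosh(xi), math.sinh(xi)), (math.sinh(xi), math.cosh(xi)))

def mul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def close(A, B):
    return all(math.isclose(A[i][j], B[i][j]) for i in range(2) for j in range(2))

xi1 = math.atanh(0.5)    # beta1 = 0.5
xi2 = math.atanh(0.6)    # beta2 = 0.6
# boosts along one axis compose by adding rapidities -> isomorphic to (R, +)
assert close(mul(boost(xi1), boost(xi2)), boost(xi1 + xi2))
# equivalently, velocities combine by the relativistic addition law
assert math.isclose(math.tanh(xi1 + xi2), (0.5 + 0.6) / (1 + 0.5*0.6))
```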

Let us mention 3 particular Lorentz transformations :--

1) The Space Inversion transformation

Λp =

1  0   0   0

0 -1  0   0

0  0  -1  0

0  0   0  -1

2) Time Reversal transformation

ΛT =

-1  0   0   0

0   1  0    0

0  0   1   0

0  0   0   1

3) Combination of 1) * 2)

 ΛpΛT = ΛTΛp = -I4   ;

All the above 4 transformations (Λp , ΛT , ΛpΛT , I4) are involutory matrices and form a subgroup which is isomorphic to the Klein group V4 . Lorentz transformations with determinant +1 are called proper Lorentz transformations and form a subgroup of the Lorentz group denoted by SO(3,1). Lorentz transformations with determinant -1 are called improper Lorentz transformations.
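The Klein-group structure of {I4, Λp, ΛT, ΛpΛT} can be checked directly; a Python sketch:

```python
from itertools import product

def diag4(*d):
    # diagonal 4x4 matrix as nested tuples
    return tuple(tuple(d[i] if i == j else 0 for j in range(4)) for i in range(4))

def mul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4))
                 for i in range(4))

I4 = diag4(1, 1, 1, 1)
Lp = diag4(1, -1, -1, -1)    # space inversion
Lt = diag4(-1, 1, 1, 1)      # time reversal
LpLt = mul(Lp, Lt)
assert LpLt == diag4(-1, -1, -1, -1)   # = -I4

group = {I4, Lp, Lt, LpLt}
for A, B in product(group, repeat=2):
    assert mul(A, B) in group          # closure
    assert mul(A, B) == mul(B, A)      # abelian
for A in group:
    assert mul(A, A) == I4             # every element is involutory
```

Closure, commutativity, and "every element squares to the identity" are exactly the Klein four-group axioms.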

The introduction of a Pseudo - Euclidean metric with signature (-,+,+,+) corresponding to a metric tensor η  with components

 ημν= -1 for μ=ν=0

=+1 for μ=ν ∊ {1,2,3}

=0 for  μ ≠ ν

turns R4 into a pseudo-metric space called Minkowski spacetime, denoted by M4 . It is convenient to associate with the metric tensor a 4x4 matrix η defined as

η =diag(-1,1,1,1) such that its entries are precisely the components of  ημν .

 

Dedekind Groups:

Groups all of whose subgroups are normal are called Dedekind groups. Abelian groups are Dedekind groups and are also referred to as quasi-Hamiltonian groups. Non-abelian Dedekind groups are called Hamiltonian groups. The smallest Hamiltonian group is the quaternion group Q8, of order 8. Subgroups of Hamiltonian groups are self-conjugate. Every Hamiltonian group contains a subgroup isomorphic to Q8 .

Algebraic & Geometric Multiplicity of Eigen Values:

* The number of times an eigen value occurs as a root of the characteristic polynomial of a matrix is called its algebraic multiplicity. For an nxn matrix, the algebraic multiplicity cannot exceed n.

* Geometric multiplicity is the number of linearly independent eigenvectors associated with an eigen value. Geometric multiplicity cannot exceed algebraic multiplicity & its minimum value is 1.

* If, for a matrix, the geometric multiplicity equals the algebraic multiplicity for each eigen value, then the matrix is non-defective and hence can be diagonalized.

Kernel of a Matrix:

The solution of a set of simultaneous linear equations consists of a specific solution + a homogeneous solution.

The kernel of the co-efficient matrix of the simultaneous linear equations gives the homogeneous part of the solution. It is a rough measure of how much of the domain vector space is shrunk to the zero vector, i.e. how much collapsing or condensation of information takes place. An extreme example is the zero matrix, which annihilates everything, obliterating any useful information the system of equations might reveal to us (it is all kernel and no range).

Example : A

1 1 0

1 0 2

2 1 2

and B=

x

y

z

such that AB=Z where Z=

2

3

5

First we put AB=0 (homogeneous equation )where A is a singular matrix

here x /Δ1 = -y / Δ2 = z / Δ3

Δ1 = 0  2    Δ2 =  1   2     Δ3 = 1   0

        1  2               2  2              2   1

hence  x / -2 = -y/-2 = z/1  or x=-2z, y=2z

Kernel is {t(-2,2,1); t ∊ R} . This is a sub-space of R3 of dimension 1 ( a line passing through the origin)

Put z=1,then  x=1, and y=1 (specific solution)

The Solution is - specific solution (1,1,1) +  t(-2,2,1); t ∊ R
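The kernel example can be verified numerically; a Python sketch checking that the kernel direction (-2, 2, 1) is annihilated by A and that the whole solution family solves AB = Z:

```python
def matvec(A, v):
    # 3x3 matrix times 3-vector
    return tuple(sum(A[i][j]*v[j] for j in range(3)) for i in range(3))

A = ((1, 1, 0),
     (1, 0, 2),
     (2, 1, 2))   # singular: row3 = row1 + row2
Z = (2, 3, 5)

kernel_dir = (-2, 2, 1)
specific = (1, 1, 1)

# the kernel direction is annihilated by A
assert matvec(A, kernel_dir) == (0, 0, 0)
# every specific + t*kernel_dir is a solution of AB = Z
for t in (-3, 0, 2, 7):
    v = tuple(s + t*k for s, k in zip(specific, kernel_dir))
    assert matvec(A, v) == Z
```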

* The matrix M which transforms A matrix into its transpose AT   :

Let A=  a   b   and M=  e  f     such that MA=AT

              c  d                  g  h

then e = (ad-c²)/(ad-bc) ;    f = -a(b-c)/(ad-bc) ;   g = d(b-c)/(ad-bc) ;   h = (ad-b²)/(ad-bc). If ad-bc = Δ = determinant, then

M = ( 1/Δ )   * ad-c²         -a(b-c)

                       d(b-c)          ad-b²
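Since MA = AT forces M = AT A-1, the entry formulas can be verified with exact rational arithmetic; a Python sketch (the sample entries 2, 3, 5, 7 are arbitrary):

```python
from fractions import Fraction as F

def mul(A, B):
    # 2x2 matrix product
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

a, b, c, d = F(2), F(3), F(5), F(7)
A = ((a, b), (c, d))
det = a*d - b*c
Ainv = ((d/det, -b/det), (-c/det, a/det))
AT = ((a, c), (b, d))

# M = A^T A^{-1} satisfies M A = A^T
M = mul(AT, Ainv)
assert mul(M, A) == AT
# entrywise: M = (1/det) * [[ad - c^2, -a(b - c)], [d(b - c), ad - b^2]]
assert M == (((a*d - c*c)/det, -a*(b - c)/det),
             ((d*(b - c))/det, (a*d - b*b)/det))
```

The entrywise check confirms the prefactor is 1/Δ (each entry already carries exactly one factor of A-1).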

e / h= ad-c2 / ad-b2  = say n

f/g=-a /d = say n

if n=1, then M= h  f

                         -f   h and with real numbers, its eigen value is complex i.e. h±if like a rotation matrix if determinant is +1.

With M as above, matrix A can have either of 2 forms:

A1=   d    b

          b   -d  with eigen value = ±√(d²+b²), which is similar to a reflection matrix provided the determinant is -1.

A2=  d    -b

         b    -d with eigen value ±√(d²-b²), which can be real or imaginary depending on the values of d, b. This is not orthogonal even if the determinant is -1, and is akin to a non-orthogonal reflection matrix. A2 is involutory if its determinant is -1.

One can similarly work out the general case with an arbitrary value of n.

* Similarly matrix N which transforms matrix A  to its adjoint A' is given by

N=

( 1/Δ )   * bc+d²         -b(d+a)

                -c(a+d)          bc+a2

* Similarly matrix O which transforms matrix A  to its inverse A-1 is given by

O=

( 1/Δ² )   * bc+d²         -b(d+a)

                -c(a+d)          bc+a2

* Matrix T which transforms a typical orthogonal rotation matrix Or (where Or-1 = OrT) to the non-orthogonal akin-to-rotation matrix NOr (where NOr = NOrT) is

T = 1               0           where  Or =    cosθ    -sinθ     and   NOr =     cosθ    -sinθ

    -sin2θ      cos2θ                               sinθ     cosθ                             -sinθ     cosθ     and T *Or = NOr   .

The determinants of T and NOr are both cos2θ. This implies both are periodic functions of θ with values in the range [-1,+1], in contrast with the determinant of Or, which is fixed at +1 or -1. In the instant case, T is a lower triangular matrix; it can also be an upper triangular matrix.
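The relation T * Or = NOr and the shared determinant cos2θ can be checked numerically; a Python sketch (the angle 0.7 is an arbitrary test value):

```python
import math

def mul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def close(A, B):
    return all(math.isclose(A[i][j], B[i][j], abs_tol=1e-12)
               for i in range(2) for j in range(2))

t = 0.7
Or = ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))   # orthogonal rotation
T = ((1.0, 0.0), (-math.sin(2*t), math.cos(2*t)))                # lower-triangular transformer
NOr = mul(T, Or)

# the product is the symmetric akin-to-rotation matrix [[cos t, -sin t], [-sin t, cos t]]
assert close(NOr, ((math.cos(t), -math.sin(t)), (-math.sin(t), math.cos(t))))
# determinants of T and NOr both equal cos 2t, unlike det(Or) = 1
detT = T[0][0]*T[1][1] - T[0][1]*T[1][0]
detN = NOr[0][0]*NOr[1][1] - NOr[0][1]*NOr[1][0]
assert math.isclose(detT, math.cos(2*t)) and math.isclose(detN, math.cos(2*t))
```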

Roto -Translation Matrix:

Any 2x2 square matrix, when it operates on a 2-D vector, produces another 2-D vector whose origin remains the same but whose co-ordinates in 2-D space are different. This displacement can be broken into 2 types of movements, rotational and translational, following in succession. These 2 movements are not necessarily commutative, i.e. rotation-translation may or may not be equal to translation-rotation. The question is how to split the above matrix into rotation and translation parts.

a  b  0   * x1  = ax1+bx2 =z1

c  d  0      x2     cx1+dx2   z2

0  0  1      1         1             1

This is equivalent to

L *  cosθ    -sinθ   0  * x1  =L * x1cosθ -x2sinθ  

       sinθ      cosθ   0     x2           x1sinθ+x2cosθ

       0          0        1     1                    1

where L is translation part & the matrix M is rotational part (anti-clockwise) where M=

cosθ    -sinθ   0   

sinθ      cosθ  0    

   0          0    1   

 

Now z1=Lx1cosθ -Lx2sinθ=L(x1cosθ -x2sinθ) =Ly1

        z2=Lx1sinθ +Lx2cosθ=L(x1sinθ +x2cosθ)=Ly2

here there are 2 variables, L and θ, and 2 equations. Solving,

cosθ =(z1x1+z2x2) /L(x12 +x22) =[x1/√(x12 +x22)]*[z1/√(z12 +z22)]  +[x2/√(x12 +x22)]* [z2/√(z12 +z22)]=cosb*cosa +sinb*sina=cos(b-a)

hence, θ =b-a

Here, cos b = x1/√(x12 +x22)    cos a = z1/√(z12 +z22) = (ax1+bx2)/√[(ax1+bx2)2+(cx1+dx2)2] =  y1/√(y12 +y22)

          sin b = x2/√(x12 +x22)    sin a = z2/√(z12 +z22) = (cx1+dx2)/√[(ax1+bx2)2+(cx1+dx2)2] =  y2/√(y12 +y22)

So M=

cosθ    -sinθ   0 = cos(b-a)    -sin(b-a)   0   

sinθ      cosθ  0     sin(b-a)     cos(b-a)   0    

   0          0    1         0               0           1

M can be a product of 2 rotation matrices M=Ma * Mb where

 Ma=  cos a   sin a  0

         -sin a   cos a  0

            0        0       1

 Mb=  cos b   -sin b  0

           sin b   cos b  0

            0        0       1

[Ma , Mb] = 0 in 2 dimensions.

 

L = √{ (z12 +z22) /(x12 +x22)} which is nothing but the Lorentz ratio.

L2=x12(a2+c2) / (x12+x22)   +x22(b2+d2) / (x12+x22)   +2x1x2(ab+cd)/(x12+x22) ; ............

    =[( ||col1|| / ||vecX|| ) *x-component of vecX]2 +[( ||col2|| / ||vecX|| ) *y-component of vecX]2  +2 *(|x-component of vecX/||vecX||) *(y-component of vecX/||vecX|| )*<col1,col2>

= L2part a +L2 part b +L2part c ; ..........(1)

In real cases, the matrix may be the result of any number of discrete rotation and translation in any order or it may be a continuous process.

Now L can be L=1 ......(a) invariance of norm

                       L < 1..... (b) contraction of norm

                       L > 1......(c) dilation of norm

suppose  a    b  * x1 =  z1 = ax1+bx2

               c   d      x2     z2    cx1+dx2

Upon the action of the operator, x1 transforms to z1, and z1 is not necessarily equal to x1; so also for the transformation of x2 to z2. But what are the special conditions under which ||X|| = ||Z||, or L = 1 ?

It is not difficult to observe that if a² + c² = 1, b² + d² = 1 and ab + cd = 0 (i.e. the columns are orthonormal), then L = 1 irrespective of whether x1, x2 are both real, both imaginary, or one real & one imaginary, provided the matrix is a real matrix. Here L²part c vanishes and the other 2 parts sum up to 1. If out of x1 and x2, one is real and the other imaginary, the two parts subtract to 1: one part is a dilation and the other a contraction, the magnitude of both being greater than 1 and both differing in magnitude by 1. On the other hand, if both are real / imaginary, each part contributes towards either dilation / contraction and the two sum up to 1.
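The L = 1 condition can be checked numerically: with orthonormal columns, the norm of every real vector is preserved. A Python sketch using an orthogonal rotation as the test matrix (the angle and vectors are arbitrary):

```python
import math

def matvec(A, v):
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

t = 1.234
A = ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))
a, c = A[0][0], A[1][0]   # first column
b, d = A[0][1], A[1][1]   # second column

# orthonormal-column conditions: a^2 + c^2 = 1, b^2 + d^2 = 1, ab + cd = 0
assert math.isclose(a*a + c*c, 1.0) and math.isclose(b*b + d*d, 1.0)
assert math.isclose(a*b + c*d, 0.0, abs_tol=1e-12)

# then L = ||Z|| / ||X|| = 1 for every real vector
for x in ((1.0, 2.0), (-3.5, 0.25), (0.0, 4.0)):
    z = matvec(A, x)
    L = math.hypot(*z) / math.hypot(*x)
    assert math.isclose(L, 1.0)
```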

We know a  b  * x1 = z1

                c  d     x2     z2

If the reference frame also moves proportionally, then the new origin is given by

               a-1   b    * x1 = O1

                c    d-1     x2    O2  (this can be derived from Δx = z1 - x1 and Δy = z2 - x2)

Example: matrix A = 2  3   vector X = 1     AX = 8

                                 4  5                    2              14

For the shift of origin, A is replaced by the matrix B with entries a-1, b, c, d-1, i.e. B = 1  3 (second row 4  4). Acting with B on the same vector Y = X = (1, 2) gives BY = (7, 12), the new co-ordinates of the origin.

Generalizing , we write  a  b  * x1 ± m = AX

                                       c  d     x2 ± n               where (m,n) are the coordinates of initial origin . then coordinates of transformed origin is

                                     a-1  b    *  x1 ± m

                                      c   d-1      x2 ± n

= a-1   b     * x1  ±  a-1   b  *  m  

    c    d-1      x2       c    d-1     n

 

Various configurations of Pauli matrices

8 of the matrices have determinant +1 and the other 8 have determinant -1.

There is no better difference between σ0 & σ0' than between the right hand and the left hand:  σ0' = (-1) * σ0

 
σ0 = 1  0        σ1 = 0  1        J = -iσ2 = 0  -1        σ3 = 1   0

        0  1                 1  0                        1   0                 0  -1

  Δ = 1   Δ = -1   Δ = 1   Δ =-1
  tr=2   tr=0   tr=0   tr=0
norm*norm 1,1   1,1   1,1   1,1
inner product 0   0   0   0
similarity matrix S     1/√2  1/√2

1/√2  -1/√2

  1        1

-i       i

   
S-1     1/√2  1/√2

1/√2  -1/√2

  1/√2     i/√2

1/√2    -i/√2

   
norm*norm of S     1 , 1   0, 0    
inner product S     0   2    
diagonal     1         0

0       -1

  i         0

0       -i

   
σ0' = -σ0 = -1  0        σ1' = -σ1 = 0  -1        J' = iσ2 = 0   1        σ3' = -σ3 = -1  0

                     0  -1                          -1   0                      -1  0                        0   1

  Δ = 1   Δ = -1   Δ = 1   Δ = -1
  tr=-2   tr=0   tr=0   tr=0
norm*norm 1,1   1,1   1,1   1,1
inner product 0   0   0   0
similarity matrix S     1      1

1     -1

  1        1

i       - i

   
S-1     1/√2    1/√2

1/√2   -1/√2

  1/√2  -i/√2

1/√2   i/√2

   
norm*norm of S     2,2   0,0    
inner product S     0   0    
diagonal     -1       0

 0       1

  i         0

0       -i

   
iσ0 = i  0        K' = iσ1 = 0  i        iJ = σ2 = 0  -i        I = iσ3 = i   0

          0  i                        i  0                       i   0                     0  -i

  Δ = -1   Δ = 1   Δ =-1   Δ = 1
  tr=2i   tr=0   tr=0   tr=0
norm*norm -1,-1   -1,-1   -1,-1   -1,-1
inner product 0   0   0   0
similarity matrix S     1   1

-i   i

  1       1

i       -i

   
S-1     1/√2  -i/√2

1/√2   i/√2

  1/√2  -i/√2

1/√2   i/√2

   
norm*norm of S     0,0   0,0    
inner product S     2   2    
diagonal     1      0

0     -1

  1      0

0     -1

   
-iσ0 = -i  0        K = -iσ1 = 0  -i        iJ' = σ2' = 0   i        I' = -iσ3 = -i  0

            0  -i                        -i   0                       -i  0                         0   i

  Δ =-1   Δ =1   Δ =-1   Δ =1
  tr=-2i   tr=0   tr=0   tr=0
norm*norm -1,-1   -1,-1   -1,-1   -1,-1
inner product 0   0   0   0
similarity matrix S     1       1

i       -i

  1       1

i       -i

   
S-1     1/√2  -i/√2

1/√2   i/√2

  1/√2  -i/√2

1/√2   i/√2

   
norm*norm of S     0,0   0,0    
inner product S     2   2    
diagonal     1      0

0     -1

  -1     0

0       1

   
               

 

             
  S-1 A S = A (diagonal)  
1a 1/2    -i/2

1/2     i/2

0  i

i  0

1     1

-i    i

  1     0

0     -1

 
1b 1/2    -1/2

1/2     1/2

 

0  1

1  0

1    1

-1   1

  -1     0

0      1

 

 
             
2a 1/√2  1/√2

1/√2  -1/√2

 

0  i

i  0

 

1/√2  1/√2

1/√2  -1/√2

 

  i    0

0   -i

 
2b 1/√2  1/√2

1/√2  -1/√2

 

0  1

1  0

 

1/√2  1/√2

1/√2  -1/√2

 

  1   0

0  -1

 
             
3a 1/2  -i/2 =(1/2)* 1  -i 

1/2   i/2              1   i

 

0  - i

i    0

 

1       1

i       -i

  1      0

0     -1

 

 
3b 1/2    i/2

1/2   -i/2 

0  - 1

1    0

 
1       1

-i       i 

  i      0

0     -i 

 
             
4a 1/2  -i/2

1/2   i/2

0   i

-i  0

 

1       1

i       -i

  -1      0

0        1

 
4b 1/2    i/2

1/2   -i/2  

0    1

-1  0

1       1

-i       i  

  -i      0

0       i  

 
             
             
             
             
              

 

 

Matrix | Description | eigen value | e.vector (Ψ) | S matrix | S-1 | D matrix | e.vector (E) corresponding to D | Transformation Matrix T (E = TΨ) | Remark
σ2 0  - i

i    0

 

+1 (1/√2)* 1

            i

1       1

i       -i

 

(1/2)*1  -i

          1   i

1      0

0     -1

 

1

0

1/√2     -i/√2      Δ = +1

-i/√2      1/√2

hyperbolic rotation
    +1 (1/√2)* 1

            i

      1

0

1/√2     -i/√2      Δ = +i  ; T=S-1

1/√2      i/√2

this is the transpose conjugate of similarity matrix.

 
    -1 (1/√2)* 1

           -i

 

      0

1

i/√2       1/√2      Δ = - 1

1/√2      i/√2

 

hyperbolic reflection
    -1 (1/√2)* 1

           -i

      0

1

1/√2     -i/√2      Δ = +i ; T=S-1

1/√2      i/√2

this is the transpose conjugate of similarity matrix.

 
σ2 = [0  -i; i  0] (alternative similarity matrix):
  λ = +1:  Ψ = (1/√2)·(1, i);  S = [i  1; 1  i];  S⁻¹ = (1/2)·[-i  1; 1  -i];  D = [-1  0; 0  1];  E = i·(1, 0);  T = [i/√2  1/√2; 1/√2  i/√2],  Δ(T) = -1;  remark: hyperbolic rotation
  λ = -1:  Ψ = (1/√2)·(1, -i);  E = (1, 0);  T = [1/√2  -i/√2; -i/√2  1/√2],  Δ(T) = +1;  remark: hyperbolic reflection
  λ = -1:  Ψ = (1/√2)·(1, -i);  E = (0, 1);  T = [1/√2  -i/√2; 1/√2  i/√2],  Δ(T) = +i,  T = S⁻¹;  this is the transpose conjugate of the similarity matrix
 
                   
                   
                   
                   
Here we define the retro-determinant (Δ'), retro-adjoint (A″) and retro-inverse (A⁻¹′) of a 2x2 matrix A.

Let A = [a  b; c  d]. Then:
Δ' = bc - ad = -(ad - bc) = -Δ
A″ = [-a  c; b  -d]
A⁻¹′ = (1/Δ)·[a  -c; -b  d],  so that det(A⁻¹′) = 1/Δ
AA⁻¹′ = (1/Δ)·[a² - b²   -(ac - bd); ac - bd   d² - c²],  and det(AA⁻¹′) = 1
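The retro-determinant and retro-inverse defined above are easy to check numerically; a small sketch (the function names and the sample matrix are mine):

```python
import numpy as np

def retro_det(A):
    # retro-determinant: Δ' = bc - ad = -Δ
    return A[0, 1] * A[1, 0] - A[0, 0] * A[1, 1]

def retro_inverse(A):
    # retro-inverse: A^-1' = (1/Δ) * [[a, -c], [-b, d]], with the ordinary Δ = ad - bc
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    return np.array([[a, -c], [-b, d]]) / np.linalg.det(A)

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # an arbitrary example with Δ = 5
Ari = retro_inverse(A)
# det(A^-1') = 1/Δ and det(A A^-1') = 1, as stated above.
```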

 
σ2 = [0  -i; i  0]:  λ = +i;  Ψ = (1/√2)·(1, 1);  S = [1  1; -1  1];  retro-inverse S⁻¹′ = (1/2)·[1  1; -1  1];  S⁻¹′AS = [i  0; 0  i];  E = i·(1, 0)

   
σ2' = [0  i; -i  0]:
  λ = +1:  Ψ = (1/√2)·(1, -i);  S = [1  1; i  -i];  D = [-1  0; 0  1];  E = (0, 1);  T = [i/√2  1/√2; 1/√2  i/√2],  Δ(T) = -1;  remark: hyperbolic reflection
  λ = -1:  Ψ = (1/√2)·(1, i);  E = (1, 0);  T = [1/√2  -i/√2; -i/√2  1/√2],  Δ(T) = +1;  remark: hyperbolic rotation
*σ2 = [0  -1; 1  0]:
  λ = +i:  Ψ = (1/√2)·(1, -i);  S = [1  1; -i  i];  D = [i  0; 0  -i];  E = (i, 0);  T = [i/√2  -1/√2; -1/√2  i/√2],  Δ(T) = -1;  remark: hyperbolic reflection
  λ = -i:  Ψ = (1/√2)·(1, i);  E = (0, i);  T = [1/√2  i/√2; i/√2  1/√2],  Δ(T) = +1;  remark: hyperbolic rotation
*σ2' = [0  1; -1  0]:
  λ = +i:  Ψ = (1/√2)·(1, i);  S = [1  1; -i  i];  D = [-i  0; 0  i];  E = (0, i);  T = [1/√2  i/√2; i/√2  1/√2],  Δ(T) = +1;  remark: hyperbolic rotation
  λ = -i:  Ψ = (1/√2)·(1, -i);  E = (i, 0);  T = [i/√2  -1/√2; -1/√2  i/√2],  Δ(T) = -1;  remark: hyperbolic reflection
                   
σ1 = [0  1; 1  0]:
  λ = +1:  Ψ = (1/√2)·(1, 1);  S = [1  1; 1  -1];  D = [1  0; 0  -1];  E = (1, 0);  T = [1/√2  1/√2; 1/√2  -1/√2],  Δ(T) = -1;  remark: reflection (the E's above are the column matrices associated with eigenvalues +1, -1)
  λ = -1:  Ψ = (1/√2)·(1, -1);  E = (0, 1);  T = [1/√2  1/√2; 1/√2  -1/√2],  Δ(T) = -1;  remark: reflection
σ1' = [0  -1; -1  0]:
  λ = +1:  Ψ = (1/√2)·(1, -1);  S = [1  1; 1  -1];  D = [-1  0; 0  1];  E = (0, 1);  T = [1/√2  1/√2; 1/√2  -1/√2],  Δ(T) = -1;  remark: reflection
  λ = -1:  Ψ = (1/√2)·(1, 1);  E = (1, 0);  T = [1/√2  1/√2; 1/√2  -1/√2],  Δ(T) = -1;  remark: reflection
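The eigenvalue/eigenvector entries in the rows above can be cross-checked with numpy's eigendecomposition; a sketch for the σ1 row (numpy may return the eigenpairs in a different order and with different sign conventions):

```python
import numpy as np

s1 = np.array([[0.0, 1.0], [1.0, 0.0]])      # sigma_1
vals, vecs = np.linalg.eig(s1)
# eigenvalues are +1 and -1; the eigenvector columns span (1,1)/sqrt(2) and (1,-1)/sqrt(2),
# and the decomposition satisfies  s1 @ vecs = vecs @ diag(vals).
```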
                   
                   
                   
                   

Further analysis of σ2:

(1/2)·[-i  1; 1  -i] · [0  -i; i  0] · [i  1; 1  i] = [-1  0; 0  1]
and
(1/2)·[1  -i; i  -1] · [0  -i; i  0] · [1  -i; i  -1] = [1  0; 0  -1]

The above are two similarity transformations which diagonalize σ2 (the leading factor 1/2 normalizes the product of the two outer matrices to the identity).

 

We express the σ2 matrix as σ2 = X·Y, where X is a rotation matrix and Y is a reflection matrix:

X = [1/√2  -1/√2; 1/√2  1/√2];  Sx = [1/√2  1/√2; -i/√2  i/√2];  Sx⁻¹ = [1/√2  i/√2; 1/√2  -i/√2];  Xd = [(1+i)/√2  0; 0  (1-i)/√2]

Y = [i/√2  -i/√2; i/√2  i/√2];  Sy = [1/√2  1/√2; -i/√2  i/√2];  Sy⁻¹ = [1/√2  i/√2; 1/√2  -i/√2];  Yd = [(i-1)/√2  0; 0  (i+1)/√2]
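The factorization σ2 = X·Y can be verified directly; a minimal numpy sketch (Y = iX as noted below, so det Y = -1 while det X = +1):

```python
import numpy as np

s = 1.0 / np.sqrt(2)
X = np.array([[s, -s], [s, s]])         # rotation by 45 degrees, det = +1
Y = 1j * X                              # the reflection factor, Y = iX, det = -1
sigma2 = np.array([[0, -1j], [1j, 0]])
# X @ Y reproduces sigma_2 exactly.
```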

It will be observed that both X and Y are normalized and orthogonal, and are the same except for the fact that Y = iX. The similarity matrices are the same for both X and Y. The signature of both X and Y is +++-.

We can think of X as a space-like transformation matrix consisting of a subspace of 3 space coordinates and 1 time coordinate, and Y as a time-like transformation matrix consisting of a subspace of 3 time coordinates and 1 space coordinate, where the matrix resides on the imaginary axis.

The universe has 2 segments: a space-like segment where we live, and a time-like segment which remains imperceptible to us. So also is the case with life in the time-like segment, who (if they exist?) would perceive us on the imaginary axis.

The σ2 matrix is interesting because the column vectors of its similarity matrix cannot be normalized (their norm square is zero); only the row vectors can be normalized. After row normalization the matrix becomes Sx as well as Sy, which is the similarity matrix for the time/space-like transformation matrix.

Pauli Matrices (space-like):

σ0: Δ = +1, identity matrix
σ1: Δ = -1, reflection matrix
σ2: Δ = -1, reflection matrix
σ3: Δ = -1, reflection matrix

With σ = σ1x + σ2y + σ3z:  σ·A = [z  x-iy; x+iy  -z],  Δ(σ·A) = -(x² + y² + z²)
Space-like universe having 3 space and 1 time coordinate; x, y, z are space coordinates.

With σ = σ0 + σ1x + σ2y + σ3z:  σ·A = [z+1  x-iy; x+iy  -z+1],  Δ(σ·A) = 1² - (x² + y² + z²)
(If one puts a variable t in place of 1, it changes accordingly.)
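Both determinant identities above are quick to confirm numerically; a sketch with arbitrary sample values for x, y, z (the values are mine):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

x, y, z = 0.3, -1.2, 0.5
M = s1 * x + s2 * y + s3 * z            # = [[z, x-iy], [x+iy, -z]]
# det(M) = -(x^2 + y^2 + z^2), and det(I + M) = 1 - (x^2 + y^2 + z^2)
```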
Pauli Matrix (time-like), Model-1:

σ0t = [-1  0; 0  -1],  Δ = +1, negative of the identity matrix
σ1t = [0  i; i  0],  Δ = +1, rotation matrix
σ2t = [0  -1; 1  0],  Δ = +1, rotation matrix
σ3t = [i  0; 0  -i],  Δ = +1, rotation matrix

σ0t·σ0t = I;  σ1t·σ1t = σ2t·σ2t = σ3t·σ3t = -I = σ0t
σ1t = iσ1 = σ2σ3;  σ2t = iσ2 = σ3σ1;  σ3t = iσ3 = σ1σ2

With σ = σ1t·xt + σ2t·yt + σ3t·zt:  σ·A = [izt  ixt-yt; ixt+yt  -izt],  Δ(σ·A) = xt² + yt² + zt²
Time-like universe having 3 time and 1 space coordinate; xt, yt, zt are time coordinates.

With σ = σ0t + σ1t·xt + σ2t·yt + σ3t·zt:  σ·A = [1+izt  ixt-yt; ixt+yt  1-izt],  Δ(σ·A) = 1 + (xt² + yt² + zt²)
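The Model-1 product relations quoted above (σ1t = iσ1 = σ2σ3, cyclically, and σkt² = -I) can be checked mechanically:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
s1t, s2t, s3t = 1j * s1, 1j * s2, 1j * s3    # the Model-1 time-like matrices
# sigma_2 sigma_3 = i sigma_1, etc., and each time-like matrix squares to -I.
```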

 
Pauli Matrix (time-like), Model-2:

σ0t = [1  0; 0  1],  Δ = +1, identity matrix
σ1t = [-1  0; 0  -1],  Δ = +1, rotation matrix
σ2t = [-i  0; 0  i],  Δ = +1, rotation matrix
σ3t = [0  1; -1  0],  Δ = +1, rotation matrix

σ0t·σ0t = I;  σ1t·σ1t = I;  σ2t·σ2t = -I;  σ3t·σ3t = -I
σ0t = I = σ0;  σ1t = -I = σ0';  σ2t = -σ1σ2;  σ3t = -σ1σ3

With σ = σ1t·xt + σ2t·yt + σ3t·zt:  σ·A = [-xt-iyt  zt; -zt  -xt+iyt],  Δ(σ·A) = xt² + yt² + zt²
Time-like universe having 3 time and 1 space coordinate; xt, yt, zt are time coordinates.

With σ = σ0t + σ1t·xt + σ2t·yt + σ3t·zt:  σ·A = [1-xt-iyt  zt; -zt  1-xt+iyt],  Δ(σ·A) = 1 + (xt² + yt² + zt²) - 2xt
             
             

 

     

Pauli & Bijan Matrices which are diagonal:

Matrix | type | diagonal | eigenvalues | eigenvectors
σ0 = [1  0; 0  1]  | rotation | same as matrix | +1, +1 | (1, 0) and (0, 1)
σ0' = [-1  0; 0  -1] | rotation | same as matrix | -1, -1 | (1, 0) and (0, 1)
σ3 = [1  0; 0  -1] | reflection | same as matrix | +1, -1 | (1, 0) and (0, 1)
σ3' = [-1  0; 0  1] | reflection | same as matrix | +1, -1 | (0, 1) and (1, 0)
     

Pauli & Bijan Matrices which are not diagonal:

σ1 = [0  1; 1  0], reflection;  diagonal = [1  0; 0  -1];  eigenvalues +1, -1;  eigenvectors (1/√2)·(1, 1) and (1/√2)·(1, -1);  the eigenvectors of the diagonal are the same as those of σ3: (1, 0) and (0, 1)

σ1' = [0  -1; -1  0], reflection;  diagonal = [-1  0; 0  1];  eigenvalues +1, -1;  eigenvectors (1/√2)·(1, -1) and (1/√2)·(1, 1);  the eigenvectors of the diagonal are the same as those of σ3': (0, 1) and (1, 0)
σ2 = [0  -i; i  0], reflection;  eigenvalues +1, -1;  eigenvectors (1/√2)·(1, i) and (1/√2)·(1, -i)  (the eigenvectors are the column vectors of the similarity matrix)
Diagonal:  [1  0; 0  -1] = σ3,  with  S = [1  1; i  -i],  S⁻¹ = (1/2)·[1  -i; 1  i]  (S is orthogonal, reflecting);
or  [-1  0; 0  1] = σ3',  with  S = [i  1; 1  i],  S⁻¹ = (1/2)·[-i  1; 1  -i]  (S is non-orthogonal, akin to reflection)
The eigenvectors of the diagonal are the same as those of σ3, i.e. (1, 0) and (0, 1), or the same as those of σ3', i.e. (0, 1) and (1, 0).
   
          eigen value eigen value    
σ2' = [0  i; -i  0], reflection;  diagonal = [-1  0; 0  1];  eigenvalues +1, -1;  eigenvectors (1/√2)·(1, -i) and (1/√2)·(1, i);  the eigenvectors of the diagonal are the same as those of σ3': (0, 1) and (1, 0)
Pseudo-Pauli & Pseudo-Bijan Matrices which are not diagonal:

*σ2 = -iσ2 = [0  -1; 1  0], rotation;  eigenvalues +i, -i;  eigenvectors (1/√2)·(1, -i) and (1/√2)·(1, i)
Diagonal:  [i  0; 0  -i] = iσ3 = *σ3
With  S = [1  1; -i  i],  S⁻¹ = (1/2)·[1  i; 1  -i]  (S is orthogonal, rotating);  the eigenvectors of the diagonal are (1, 0) and (0, 1).
Alternatively  S = (1/2)·[1  1; -1  1]  with  S⁻¹ = (1/2)·[1  1; -1  1]  (S is involutory);  the eigenvectors of the diagonal are the same as those of *σ3: (0, 1) and (1, 0).

*σ2' = [0  1; -1  0], rotation;  eigenvalues +i, -i;  diagonal = [-i  0; 0  i] = -iσ3;  S = (1/2)·[1  1; -1  1],  S⁻¹ = (1/2)·[1  1; -1  1]  (S is involutory)
                 
                 
                 
     

 

         
                 
     

If one takes a reflection Pauli matrix and multiplies it by i, it becomes a rotation matrix, and vice versa. We know that a reflection in 2 dimensions is equivalent to a rotation in 3 dimensions, and also to a rotation in 2 dimensions where one dimension is imaginary.

Equivalent representations, with new state vectors and diagonalized linear operators, differ only in the choice of basis vectors.

The similarity matrix which transforms a linear operator to a diagonal one also transforms the old state vector to the new one. Thus equivalent representations are created; the new representation differs from the old one only in the choice of basis vectors.

In the case of the eigenvectors of σ2 & σ2', one component lies on the real axis and the second on the imaginary axis. On diagonalization, one component remains on the real axis (the one which was earlier on the real axis) and the component on the imaginary axis vanishes.

In the case of the eigenvectors of *σ2 & *σ2', one component likewise lies on the real axis and the second on the imaginary axis. However, on diagonalization, the component on the real axis vanishes and the component on the imaginary axis remains.

Vector Norm:

When the vector is in 3-D real space, A = ai + bj + ck with a, b, c real components, i² = j² = k² = +1 and
||A|| = √(a² + b² + c²)

When the vector is in 3-D imaginary space, A = ai + bj + ck with a, b, c real components, i² = j² = k² = -1 and
||A|| = i·√(a² + b² + c²)

For an observer in 3-D real space, the norm of A in 3-D imaginary space is imaginary and not measurable. Similarly, for an observer in 3-D imaginary space, the norm of A in 3-D real space is imaginary and not measurable.

         
Dirac matrices (Pauli-Dirac representation):

These are known as gamma matrices: 4x4 matrices expressed in block form as

γ0 = [σ0  0; 0  -σ0];  γ1 = [0  σ1; -σ1  0];  γ2 = [0  σ2; -σ2  0];  γ3 = [0  σ3; -σ3  0]

γ0 is time-like while the other 3 are space-like. They form a group known as the gamma group.

We define γ5 = iγ0γ1γ2γ3 = [0  I; I  0]

This is called the Dirac basis.

The γ matrices obey the following anti-commutation rule:

{γμ, γν} = 2ημν·I4, where ημν is the Minkowski metric with signature (+,-,-,-),

η = [1  0  0  0; 0  -1  0  0; 0  0  -1  0; 0  0  0  -1],

and I4 is the 4x4 identity matrix.
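The anticommutation rule and the γ5 block form above can be verified by building the gamma matrices from the Pauli blocks; a numpy sketch:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

g0 = np.block([[s0, Z], [Z, -s0]])                    # gamma_0 = diag(sigma_0, -sigma_0)
g = [g0] + [np.block([[Z, sk], [-sk, Z]]) for sk in (s1, s2, s3)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])                # Minkowski metric (+,-,-,-)

g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]                   # gamma_5 = [[0, I], [I, 0]]
```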

If U, V are two space-time 4-vectors,
U·V = η(U,V) = bilinear form = Uᵀ·η·V.
Assuming homogeneity of spacetime and isotropy of space, it follows that the spacetime interval between two arbitrary events is U·V.

[γμ, γν] = 2γμγν - 2ημν·I4

The matrix defined by [γμ, γν] actually has a purpose: it forms a representation of the Lorentz algebra. We can define Sμν as 1/4 of the commutator.

γμγν = (1/2){γμ, γν} + (1/2)[γμ, γν]

If we define σμν = (i/2)[γμ, γν], it is helpful in describing the action of a spin-3/2 particle, which can then be used to describe the superpartner of the graviton, namely the gravitino, thus making it necessary for supergravity theories.

In some sense, the Dirac gamma matrices can be identified with mutually orthogonal unit vectors (orts) of the Cartesian basis in 3+1 spacetime, with their anticommutators corresponding to scalar products of the orts (this approach is used, e.g., in the book "The Theory of Spinors" by Cartan, the discoverer of spinors). Then the non-vanishing commutators of gamma matrices (say, in the form σμν = (1/2)[γμ, γν]) can be identified with the so-called bi-vectors (2-dimensional planes in spacetime spanned by two orts).

The algebra of Pauli matrices, or Pauli algebra, is actually defined by the commutation relations. It can then be represented by matrices, and this representation is not unique: any transformation that preserves the commutation relations gives an equivalent representation, as is the case for a unitary transformation. The Pauli algebra is the analogue of the algebra of Dirac matrices, or gamma matrices.

In fact, the algebra of Pauli matrices is defined not only by the commutation relations but also by the rules for products of Pauli matrices (as linear combinations of the Pauli matrices and the unit matrix, i.e. as an associative algebra with unit); see the Wikipedia article on Pauli matrices. Correspondingly, the Dirac algebra is also defined as an associative algebra with unit. With this understanding, all irreducible representations of the Pauli algebra are by 2x2 matrices and all irreducible representations of the Dirac algebra are by 4x4 matrices. If one considered only the commutation relations (i.e. defined the Pauli algebra as a Lie algebra), one would get irreducible representations as nxn matrices for any integer n > 1.

If some set of 4x4 matrices γ may be obtained from an "original representation" of γ with a unitary transformation, then it is as good as the original set of γ, because it obeys the same Dirac algebra and leads to the Klein-Gordon equation, as usual.

In practice there are more or less convenient particular γ-matrix choices, but this is subjective and related to the volume of calculations for a human being.

We define the sigma matrices operating on (1x4) or (4x1) spinors as

Σk = [σk  0; 0  σk]  and  αk = [0  σk; σk  0]

 

         
                 

 

Cayley Table

(σ0, σ1), (σ0, σ3), (σ0, σ2), (σ0, σ1'), (σ0, σ3'), (σ0, σ2'), (σ0, σ0') are the smallest subgroups (all abelian).
(σ0, σ0', σ3, σ3') form a subgroup (abelian).
(σ0, σ0', σ1, σ1') form a subgroup (abelian).
(σ0, σ0', σ2, σ2') form a subgroup (abelian).

(There are 256 cells in total, of which 128 contain matrices with determinant +1 and the other 128 with determinant -1. e, e', I, I', J, J', K, K' constitute the matrices with determinant +1.)
(Here * does not mean multiplication: *σ2 is the pseudo-Pauli matrix and *σ2' is the pseudo-Bijan matrix.)
(Ψ0, Ψ1, *Ψ2, Ψ3, Ψ0', Ψ1', *Ψ2', Ψ3', σ0, σ1, *σ2, σ3, σ0', σ1', *σ2', σ3' form a non-abelian group of order 16 w.r.t. matrix multiplication.)

  e=σ0 σ1 J=*σ2 σ3   e'=σ0' σ1' J'=*σ2' σ3'   Ψ0=iσ0 Ψ1=K'=iσ1 *Ψ2=i*σ2 Ψ3=I=iσ3   Ψ0'=iσ0' Ψ1'=K=iσ1' *Ψ2'=i*σ2' Ψ3'=I'=iσ3'
e=σ0 e=σ0 σ1 J=*σ2 σ3   e'=σ0' σ1' J'=2' σ3'   0 K'=1 i*σ2 I=3   0' K=1' i*σ2' I'=3'
σ1 σ1 e=σ0 σ3 J=*σ2   σ1' e'=σ0' σ3' J'=*σ2'   K'=1 0 I=3 i*σ2   K=1' 0' I'=3' i*σ2'
J=*σ2 J=*σ2 σ3' e'=σ0' σ1   J'=*σ2' σ3 e=σ0 σ1'   i*σ2 I'=3' 0' K'=1   i*σ2' I=3 0 K=1'
σ3 σ3 J'=*σ2' σ1' e=σ0   σ3' J=2 σ1 e'=σ0'   I=3 i*σ2' K=1' 0   I'=3' i*σ2 K'=1 0'
                                       
e'=σ0' e'=σ0' σ1' J'=2' σ3'   e=σ0 σ1 J=*σ2 σ3   0' K=1' i*σ2' I'=3'   0 K'=1 i*σ2 I=3
σ1' σ1' e'=σ0' σ3' J'=2'   σ1 e=σ0 σ3 J=*σ2   K=1' 0' I'=3' i*σ2'   K'=1 0 I=3 i*σ2
J'=2' J'=2' σ3 e=σ0 σ1'   J=*σ2 σ3' e'=σ0' σ1   i*σ2' I=3 0 K=1'   i*σ2 I'=3' 0' K'=1
σ3' σ3' J=2 σ1 e'=σ0'   σ3 J'=*σ2' σ1' e=σ0   I'=3' i*σ2 K'=1 0'   I=3 i*σ2' K=1' 0
                                       
Ψ0=iσ0 0 K'=1 i*σ2 I=3   0' K=1' i*σ2' I'=3'   e'=σ0' σ1' J'=2' σ3'   e=σ0 σ1 J=*σ2 σ3
Ψ1=K'=1 K'=1 0 I=3 i*σ2   K=1' 0' I'=3' i*σ2'   σ1' e'=σ0' σ3' J'=2'   σ1 e=σ0 σ3 J=*σ2
σ2=*Ψ2=i*σ2 i*σ2 I'=3' 0' K'=1   i*σ2' I=3 0 K=1'   J'=2' σ3 e=σ0 σ1'   J=*σ2 σ3' e'=σ0' σ1
Ψ3=I=3 I=3 i*σ2' K=1' 0   I'=3' i*σ2 K'=1 0'   σ3' J=2 σ1 e'=σ0'   σ3 J'=*σ2' σ1' e=σ0
                                       
Ψ0'=iσ0' 0' K=1' i*σ2' I'=3'   0 K'=1 i*σ2 I=3   e=σ0 σ1 J=*σ2 σ3   e'=σ0' σ1' J'=2' σ3'
Ψ1'=K=1' K=1' 0' I'=3' i*σ2'   K'=1 0 I=3 i*σ2   σ1 e=σ0 σ3 J=*σ2   σ1' e'=σ0' σ3' J'=*σ2'
σ2'=*Ψ2'=i*σ2' i*σ2' I=3 0 K=1'   i*σ2 I'=3' 0' K'=1   J=*σ2 σ3' e'=σ0' σ1   J'=2' σ3 e=σ0 σ1'
Ψ3'=I'=3' I'=3' i*σ2 K'=1 0'   I=3 i*σ2' K=1' 0   σ3 J'=*σ2' σ1' e=σ0   σ3' J=*σ2 σ1 e'=σ0'
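The subgroup claims listed above the table are easy to spot-check; a sketch verifying that {σ0, σ0', σ3, σ3'} (primes denoting negation) is closed and abelian under matrix multiplication:

```python
import numpy as np

s0 = np.eye(2)
s3 = np.diag([1.0, -1.0])
group = [s0, -s0, s3, -s3]              # {sigma_0, sigma_0', sigma_3, sigma_3'}

def in_group(M):
    return any(np.allclose(M, G) for G in group)

# every pairwise product lands back in the set, and all products commute
closed = all(in_group(A @ B) for A in group for B in group)
abelian = all(np.allclose(A @ B, B @ A) for A in group for B in group)
```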

Tracing linkage of Pauli matrices to Rotation/Reflection matrices (θ = 2φ)
Matrix A = [cosθ  -i·sinθ; i·sinθ  cosθ]
  Δ(det): cos²θ - sin²θ;  tr: 2cosθ
  Eigenvalues: cosθ + sinθ and cosθ - sinθ;  eigenvectors: (1, i) and (1, -i)
  Diagonal matrix: [cosθ+sinθ  0; 0  cosθ-sinθ]
  A† = A (A is Hermitian);  A⁻¹ = (1/Δ)·[cosθ  i·sinθ; -i·sinθ  cosθ]
  This matrix is not orthogonal in general; it is orthogonal only when Δ = +1 or -1.

Matrix B = [cos2φ  sin2φ; sin2φ  -cos2φ]
  Δ(det): -1;  tr: 0
  Eigenvalues: +1 and -1;  eigenvectors: (1, cosec2φ - cot2φ) and (1, -cosec2φ - cot2φ)
  Diagonal matrix: [1  0; 0  -1]
  Bᵀ = B;  B⁻¹ = B
  This matrix is orthogonal and reflecting.
When θ = 0:    A = A† = A⁻¹ = [1  0; 0  1] = σ0;   Δ = +1
When φ = 0:    B = Bᵀ = B⁻¹ = [1  0; 0  -1] = σ3;   Δ = -1
When θ = π/2:  A = A† = A⁻¹ = [0  -i; i  0] = σ2;   Δ = -1
When φ = π/4:  B = Bᵀ = B⁻¹ = [0  1; 1  0] = σ1;   Δ = -1
When θ = π:    A = A† = A⁻¹ = [-1  0; 0  -1] = σ0';  Δ = +1
When φ = π/2:  B = Bᵀ = B⁻¹ = [-1  0; 0  1] = σ3';  Δ = -1
When θ = 3π/2: A = A† = A⁻¹ = [0  i; -i  0] = σ2';  Δ = -1
When φ = 3π/4: B = Bᵀ = B⁻¹ = [0  -1; -1  0] = σ1';  Δ = -1
 

We observe that σ0 and σ2 have originated from a non-orthogonal matrix structure with continuous matrix elements, at those special values of θ for which the matrix is orthogonal, whereas σ1 and σ3 have originated from orthogonal, reflecting matrices for the corresponding values of φ, governed by the relation θ = 2φ.
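The parametrized families A(θ) and B(φ) above, and their special values, can be checked numerically; a sketch:

```python
import numpy as np

def A(t):
    # A(theta) = [[cos t, -i sin t], [i sin t, cos t]]
    return np.array([[np.cos(t), -1j * np.sin(t)], [1j * np.sin(t), np.cos(t)]])

def B(p):
    # B(phi) = [[cos 2p, sin 2p], [sin 2p, -cos 2p]]
    return np.array([[np.cos(2 * p), np.sin(2 * p)], [np.sin(2 * p), -np.cos(2 * p)]])

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
# A(pi/2) = sigma_2 and B(pi/4) = sigma_1, as tabulated;
# det A(t) = cos^2 t - sin^2 t = cos 2t, while det B(p) = -1 for every p.
```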

       
           
*Any 2-component complex vector is a spinor of rank 1.

The vector A = (a+bi)·i + (c+di)·j, where a, b, c, d are real numbers, is a complex vector, and its complex conjugate is
A* = (a-bi)·i + (c-di)·j

Pauli Matrices as 2-component Spinors:

|Ψ> = C+·Ψz+ + C-·Ψz-,  where C+ and C- are complex numbers;
|Ψ> is the column vector (C+, C-)ᵀ.

<Ψ| = C+*·(Ψz+)ᵀ + C-*·(Ψz-)ᵀ,  where C+* and C-* are the complex conjugates of C+ and C-;
<Ψ| is the row vector {C+*, C-*}.

Let |A'> = Sk|A>, where Sk is the spin operator represented by (h/4π)σk = (ħ/2)σk.

The expectation value of Sk is <Sk> = <Ψ|Sk|Ψ>.

*As Hermitian matrices span complex space in correspondence to symmetric matrices spanning real space, unitary matrices are the counterparts of orthogonal matrices in complex space. A 2x2 unitary matrix U has the general structure

[a  b; -e^(iφ)·b*  e^(iφ)·a*]

where a, a* are complex conjugates of each other (and likewise b, b*) and |a|² + |b|² = 1.

det U = e^(iφ), so |det U| = 1. If det U = +1, U is a special unitary matrix; these form the group SU(2).
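The general 2x2 unitary structure above is straightforward to verify; a sketch with arbitrary sample values of a, b (chosen so that |a|² + |b|² = 1) and phase φ:

```python
import numpy as np

a, b = 0.6 + 0.48j, 0.64j        # |a|^2 + |b|^2 = 0.5904 + 0.4096 = 1
phi = 0.8
U = np.array([[a, b],
              [-np.exp(1j * phi) * np.conj(b), np.exp(1j * phi) * np.conj(a)]])
# U is unitary (U† U = I) and det U = e^{i phi}.
```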

*Any 2x2 matrix L of the form L = exp(iσ·θ/2 - σ·ρ/2), where ρ is the rapidity, Lorentz-transforms a spinor if |L| = 1. If L is unitary, the transformation is a rotation in space; if L is Hermitian, it is a boost.
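For σ = σ3 the exponential above is diagonal and can be written in closed form, which makes the rotation-vs-boost distinction easy to see (a sketch; the angle and rapidity values are mine):

```python
import numpy as np

theta, rho = 0.9, 0.4
# exp(i sigma_3 theta/2) = diag(e^{i theta/2}, e^{-i theta/2}): unitary, det = 1 (a rotation)
rot = np.diag([np.exp(1j * theta / 2), np.exp(-1j * theta / 2)])
# exp(-sigma_3 rho/2) = diag(e^{-rho/2}, e^{rho/2}): Hermitian, det = 1 (a boost)
boost = np.diag([np.exp(-rho / 2), np.exp(rho / 2)])
```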

 

Variance-Covariance Matrix:

* These are square, symmetric, positive semi-definite matrices.
* Diagonal elements represent variances; off-diagonal elements represent covariances, which can be positive, negative or zero.
* All eigenvalues are real and non-negative.

A = [a11  a12; a21  a22],  where a11, a22 are variances and a12, a21 represent covariances.

Example:

Age    Experience
25      2
32      6
37      9

Means:  Ā = 31.33,  Ē = 5.66
N = 3, hence N-1 = 2.

Var A = [(25-31.33)² + (32-31.33)² + (37-31.33)²] / 2 = 36.33
Var E = [(2-5.66)² + (6-5.66)² + (9-5.66)²] / 2 = 12.33
Cov(A,E) = Cov(E,A) = [(25-31.33)(2-5.66) + (32-31.33)(6-5.66) + (37-31.33)(9-5.66)] / 2 = 21.16

Matrix = [36.33  21.16; 21.16  12.33]
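The same matrix falls out of numpy's `np.cov`, which uses the same N-1 normalization as the hand computation above:

```python
import numpy as np

age = np.array([25.0, 32.0, 37.0])
experience = np.array([2.0, 6.0, 9.0])
C = np.cov(age, experience)     # 2x2 variance-covariance matrix, ddof = 1 by default
# C is approximately [[36.33, 21.17], [21.17, 12.33]]
```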

If the dataset has 2 variables, the matrix is 2x2; if it has n variables, the matrix is nxn.

It is a symmetric matrix. Covariance measures the directional relationship between the variables; it does not show the strength of the relationship.

For a 2x2 matrix:
[var x      cov(x,y);
 cov(x,y)   var y]

For a 3x3 matrix:
[var x      cov(x,y)   cov(x,z);
 cov(x,y)   var y      cov(y,z);
 cov(x,z)   cov(y,z)   var z]