2x2 Real Matrix: Matrix Division and Finding the Square Root (positive determinant)

[Calculator layout: Matrix A = (a b; c d) and Matrix B = (e f; g h), with optional inputs K1, K2. Entering K1, K2 is optional; they are required only for the matrix equation AX = K, where X = (x, y)ᵀ and K = (K1, K2)ᵀ. For each matrix the page displays: Δ (det), trace, anti-det, anti-trace, (tr/2)², the element ratios b/d, a/c, c−b, a/d, c/b (resp. f/h, e/g, g−f, h/e, g/f), the column norms and inner product <col1, col2>, the angles of the columns with each other and with the x-axis, trB/trA, anti-tr/2b, tr/2b, (anti-tr/2b)² + c/b, √[(tr/2)² − Δ], the eigenvalues λ1, λ2 with their magnitudes, the eigenvector slopes (y/x)1 and (y/x)2, <evector1, evector2> = 1 − |c/b|, the eigenvector norms, the angles of the eigenvectors with the x-axis and with each other, and a "normalize A & B" control.]

*After normalization, the imaginary part is omitted and the calculation is done on the real part only.

   
[Calculator layout: Matrix C = A/B (defined by BC = A) and Matrix D = B/A (defined by AD = B), followed by Matrix C1 = A/B (defined by C1B = A) and Matrix D1 = B/A (defined by D1A = B). Each block shows Δ (det), trace, (tr/2)², anti-trace, √[(tr/2)² − Δ], the eigenvalues with their magnitudes, the eigenvector slopes (y/x)1, (y/x)2, the eigenvector inner product and norms, and the angles of the eigenvectors. For C and D, θ = tan⁻¹√(p/m) (resp. tan⁻¹√(t/q)) in degrees is also shown; for C1, θ = tan⁻¹(p1/m1), Δθ = θC1 − θC, and RC, the rotation matrix for Δθ. A reference link (Link-1) appears alongside.]
 

If C and C1 are the same, then B and C are commuting matrices. If D and D1 are the same, then A and D are commuting matrices.
             
[Calculator layout: Matrix X with CX = C1, together with its Δ (det) and trace.]
             
             
             
[Calculator layout: Matrix E = A + B, Matrix F = A − B, Matrix G = √A and Matrix H = √B (with a second branch of G = √A), each with Δ (det), trace, (tr/2)², anti-trace, √[(tr/2)² − Δ], eigenvalues, and eigenvector slopes (y/x)1, (y/x)2.]
             
[Calculator layout: Matrix AB, Matrix BA, the commutator [A,B] and the anti-commutator {A,B}, each with Δ (det), trace, the anti-trace split into parts (a) and (b), eigenvalues, and eigenvector slopes; and vectors X, Y with their images AX and BY, the norms ||X||, ||AX||, ||Y||, ||BY||, the Lorentz ratio Lx = ||AX||/||X|| (resp. Ly) split into parts A, B, C, and the ratio L2A/L2B.]
         
 

Clicking above, the A matrix becomes a⁻¹, b, c, d⁻¹ (and, correspondingly, the B matrix becomes e⁻¹, f, g, h⁻¹).

[Calculator layout: rotomatrix X and rotomatrix Y, with the roto-angle in degrees anti-clockwise (b − a) and the roto-angles a and b; roto-matrices Xa, Ya, Xb, Yb; and trans-rotomatrices X and Y; each with det and trace.]
             
             
             
Multiplying the two component equations of AX = K, i.e. (ax + by − K1)(cx + dy − K2) = 0, gives

(a1)x² + (b1)y² + 2(f1)y + 2(g1)x + 2(h1)xy + c1 = 0

This is the general equation of a conic, with a1 = ac, b1 = bd, h1 = (anti-Δ)/2, g1 = −(aK2 + cK1)/2, f1 = −(bK2 + dK1)/2, c1 = K1K2.

The discriminant of the conic is

δ = a1b1c1 + 2f1g1h1 − a1f1² − b1g1² − c1h1²

Substituting the values above,

δ = abcdK1K2 + (1/4)(bK2 + dK1)(aK2 + cK1)(anti-Δ) − (ac/4)(bK2 + dK1)² − (bd/4)(aK2 + cK1)² − (K1K2/4)(anti-Δ)²

  = PartA + PartB

where PartA is the component containing the anti-determinant and PartB is the rest:

PartA = (1/4)(bK2 + dK1)(aK2 + cK1)(anti-Δ) − (K1K2/4)(anti-Δ)² = (anti-Δ/4)[(bK2 + dK1)(aK2 + cK1) − K1K2·(anti-Δ)]

PartB = abcdK1K2 − (ac/4)(bK2 + dK1)² − (bd/4)(aK2 + cK1)²

PartA vanishes if anti-Δ = 0, or if (bK2 + dK1)(aK2 + cK1) = K1K2·(anti-Δ). In particular, the two factors vanish together exactly when

( a  c ) ( K2 )   ( 0 )
( b  d ) ( K1 ) = ( 0 )

The matrix above is Aᵀ.

If anti-Δ = 0, then ac ≠ bd when a, b, c, d are non-zero real numbers (ad = −bc and ac = bd together would force cd(a² + b²) = 0). If ac = bd is nevertheless required, two of the four matrix elements become imaginary; for convenience, we take b and c imaginary with b² = −a² (b = ia) and c² = −d² (c = id), which gives both anti-Δ = 0 and ac = bd.

 
             
             
             
             
matrix M   matrix N

Enter the figures in Matrix A. M will appear at Matrix B upon pressing Submit at the side of matrix M. Press Submit above Matrix A; BA will then be the transpose of A.

Enter the figures in Matrix A. N will appear at Matrix B upon pressing Submit at the side of matrix N. Press Submit above Matrix A; BA will then be the adjoint of A.

matrix O

Enter the figures in Matrix A. O will appear at Matrix B upon pressing Submit at the side of matrix O. Press Submit above Matrix A; BA will then be the inverse of A.

         
             

 

To find A/B = C, i.e. the matrix C with BC = A:

em + fo = a ....(1)
gm + ho = c ....(2)
en + fp = b ....(3)
gn + hp = d ....(4)

From equations (1) & (2), we get m = (ah − cf)/(eh − fg) = (ah − cf)/detB and o = (a − em)/f.

From equations (3) & (4), we get n = (bh − df)/detB and p = (b − en)/f.
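The back-substitution above can be checked numerically. A minimal sketch (the helper names `divide_2x2` and `matmul_2x2` are illustrative, not from the original page; it assumes detB ≠ 0 and f ≠ 0):

```python
def matmul_2x2(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def divide_2x2(A, B):
    # Solve BC = A for C = [[m, n], [o, p]] using the closed-form entries above.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    detB = e * h - f * g
    m = (a * h - c * f) / detB   # from equations (1) & (2)
    o = (a - e * m) / f          # back-substitute into (1)
    n = (b * h - d * f) / detB   # from equations (3) & (4)
    p = (b - e * n) / f          # back-substitute into (3)
    return [[m, n], [o, p]]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[2.0, 1.0], [1.0, 3.0]]
C = divide_2x2(A, B)
print(matmul_2x2(B, C))   # reproduces A
```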

To find the square root of the matrix A, let the square root be G. Then

detG = G11·G22 − G21·G12 = √(detA) .......(1)
G11² + G12·G21 = a ........(2)

(1) + (2): G11(G11 + G22) = a + √(detA) ........(2a)

Similarly,

G22² + G12·G21 = d .........(3)
G11·G22 − G12·G21 = √(detA) ....(4)

Adding (3) & (4), we get

G22(G11 + G22) = d + √(detA) ......(4a)

Dividing (2a) by (4a), we get [a + √(detA)]/[d + √(detA)] = G11/G22 = k ...(5)

Since k is known, G11 = G22·k ......(6)

Now a − d = G11² − G22² = k²G22² − G22² = G22²(k² − 1), so G22 = √[(a − d)/(k² − 1)] .....(7)

and G11 = G22·k.

b/c = G12/G21 = m, so G12 = G21·m .........(8)

d = G12·G21 + G22² .........(9)

so d − G22² = G21·m·G21 = G21²·m, giving G21 = √[(d − G22²)/m],

and from equation (8), G12 = m·G21 .......(10)
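The step-by-step recipe can be sketched as follows (a minimal illustration assuming a ≠ d, b ≠ 0, c ≠ 0, a positive determinant, and radicands that stay non-negative; the other sign branches of the square roots give the other roots):

```python
import math

def sqrt_2x2(A):
    # Element-wise square-root recipe from equations (1)-(10) above.
    (a, b), (c, d) = A
    s = math.sqrt(a * d - b * c)             # detG = sqrt(detA), eq. (1)
    k = (a + s) / (d + s)                    # G11/G22, eq. (5)
    G22 = math.sqrt((a - d) / (k * k - 1))   # eq. (7)
    G11 = k * G22                            # eq. (6)
    m = b / c                                # G12/G21, eq. (8)
    G21 = math.sqrt((d - G22 * G22) / m)
    G12 = m * G21                            # eq. (10)
    return [[G11, G12], [G21, G22]]

A = [[5.0, 2.0], [2.0, 1.0]]
G = sqrt_2x2(A)
GG = [[sum(G[i][k] * G[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(GG)   # reproduces A
```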

Idempotent matrix: if A is such a matrix, then A² = A.

If matrix A =

a  b
c  d

then a + d = 1 and a(1 − a) = bc, i.e. ad = bc, which implies that the determinant is zero. Hence such a matrix is singular.

If instead a + d = −1 (with ad = bc), then A² = −A.

The matrix can be rewritten as follows (an example on the right):

a          b              3   −6
a(1−a)/b   1−a            1   −2

The eigenvalues can be found from the characteristic equation; they are 0 and 1. The matrix is said to be positive semi-definite, since one of the eigenvalues is positive and the other is zero.

The ratio of the y and x components of the eigenvectors is y/x = (1 − 2a)/2b ± 1/2b, i.e. (1 − a)/b and −a/b.
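The worked example can be verified directly (a small sketch; the check below is mine, not part of the original page):

```python
# A = [[3, -6], [1, -2]] from the example: trace 1 and ad = bc, so A*A = A.
A = [[3.0, -6.0], [1.0, -2.0]]
AA = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(AA == A)                              # True: A is idempotent
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(tr, det)                              # 1.0 0.0 -> eigenvalues 0 and 1
# eigenvector slopes (1-a)/b and -a/b:
print((1 - A[0][0]) / A[0][1], -A[0][0] / A[0][1])
```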

A projection operator is represented by a matrix which is idempotent, and the projection is either orthogonal (if the projection matrix is symmetric, it is orthogonal) or oblique.

An oblique-projection 2×2 matrix is given by

0  0
x  1

If x = 0, it is an orthogonal projection. The condition of orthogonality for a projection matrix is P² = P = Pᵀ.

The orthogonal projection of the vector X = (x, y, z) on the x–y plane is x = (x, y, 0).

If T is the linear transformation that maps every vector in R³ to its orthogonal projection in the x–y plane, the corresponding matrix A is

1  0  0
0  1  0
0  0  0

and A X = x.

*A representation is completely reducible if all the matrices in the representation D(Ai) can be simultaneously brought into block-diagonal form by the same similarity transformation matrix S, i.e. S D(Ai) S⁻¹ is block diagonal for every i. In other words, all the group actions can be realized in some sub-space.

Commutative property of matrices, [A,B] i.e. AB − BA:

Condition 1: If (anti-trace/2b) of matrix A = (anti-trace/2f) of matrix B, i.e. (y/x)1 of A = (y/x)1 of B, then (y/x)2 of AB = (y/x)2 of BA. Also AB12 = BA12.

Condition 2: If (b/c) of matrix A = (f/g) of matrix B, then AB11 = BA11 and AB22 = BA22. But (y/x)1 and (y/x)2 of AB may not have any relation with those of BA.

Condition 1 + Condition 2: If both conditions are fulfilled, then [A,B] = 0.
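A quick numerical check of the two conditions (the matrices below are my own example; B = 2A − I shares both ratios with A):

```python
def mm(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[1.0, 4.0], [6.0, 7.0]]
# Condition 2: b/c = 2/3 equals f/g = 4/6.
# Condition 1: anti-tr(A)/2b = 3/4 equals anti-tr(B)/2f = 6/8.
AB, BA = mm(A, B), mm(B, A)
comm = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]
print(comm)   # [[0.0, 0.0], [0.0, 0.0]] -> [A, B] = 0
```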

Anti-trace of AB and BA: it consists of two parts, part a and part b. Both parts are the same in AB as well as BA; but in AB the total anti-trace is the sum of the two parts, whereas in BA it is their difference.

anti-trace of AB part a = (dh − ae) = anti-trace of BA part a; for AB, total anti-trace = part a + part b
anti-trace of AB part b = (cf − bg) = anti-trace of BA part b; for BA, total anti-trace = part a − part b

(y/x) part 1 of AB and BA: due to the splitting of the anti-trace, (y/x)1 is also split into part a and part b. Both are identical in AB and BA; in AB the two parts are added, whereas in BA they are subtracted. Their value depends not only on the anti-traces of the originators, i.e. matrices A and B, but also on AB12, BA12. If AB12 = BA12 = k, then they have the form (X + Y)/k and (X − Y)/k respectively for AB, BA.

Possible relation with the Uncertainty Principle: Heisenberg's uncertainty principle states that two attributes or observables of a physical system cannot be measured simultaneously with exact precision if they are canonical conjugates of each other, i.e. one is a Fourier transform of the other. We find that for two observables to interact, the (y/x)1 of the eigenvectors of A and B should be the same. This ensures that (y/x)2 of the commuting products AB and BA remains the same. Now coming to (y/x)1 of AB and BA, their components remain the same in magnitude but differ by a minus sign. It may so happen that in the case of non-commuting observables, when the corresponding operators interact in two different orders, AB and BA, there is a reflection of the y-component of part 1 by 180 degrees, whereas for commuting operators this phase shift does not occur.

Another approach to the commutator [A,B] i.e. AB − BA:

[A,B] = AB − BA = X =

ae+bg−ae−cf    af+bh−be−df   =   bg − cf            −f(d−a)+b(h−e)
ce+dg−ag−ch    cf+dh−bg−dh       g(d−a)−c(h−e)      −(bg−cf)

Trace of X = [A,B] = 0, which implies that any square matrix with zero trace can be expressed as a linear combination of commutators of pairs of same-order square matrices. It may also be noted that the trace of a product of equal-sized matrices behaves like the dot product of vectors. The trace is also a map from the Lie algebra gl_n → k (k a scalar, n the order of the matrix), i.e. a mapping from operators to scalars.

Condition a: Now we put b/c = f/g, Condition 2 as above. Then the matrix becomes

0                  −f(d−a)+b(h−e)
g(d−a)−c(h−e)      0

det[A,B] = det X = fg(d−a)² + bc(h−e)² − (d−a)(h−e)(bg+fc) = fg(d−a)² + bc(h−e)² − 2(d−a)(h−e)bg = [√(fg)·(d−a) − √(bc)·(h−e)]²

and the determinant is either zero or positive, since it is a perfect square; hence X is a positive semi-definite matrix.

Condition a1: If d = a and h = e, then det X = 0; or if √(fg)·(d−a) − √(bc)·(h−e) = 0, i.e. (h−e)/√(fg) = (d−a)/√(bc), then det X = 0 (see Link 1). If X is a positive semi-definite matrix, then its square root is a unique positive semi-definite matrix.

Condition a2: If fg = bc, then det X = bc·[(d−a) − (h−e)]². Together with b/c = f/g, this condition implies f = ±b, g = ±c (with the same sign).

Condition b: if d − a = h − e, then

det X = −(bg − cf)² + (d−a)²(b−f)(c−g)
      = −(bg − cf)² + (h−e)²(b−f)(c−g)

Condition c:

If d = a, then det X = −(bg − cf)² + (h−e)²·bc
If h = e, then det X = −(bg − cf)² + (d−a)²·fg

When do matrix A and matrix B commute? When

b/c = f/g .....(a)
(anti-trace of A)/2b = (anti-trace of B)/2f ....(b)

i.e. both conditions (a) and (b) are satisfied.

Anti-commutative property of matrices, {A,B} i.e. AB + BA:

{A,B} = AB + BA = Y =

2ae + (bg+cf)       b(h+e) + f(d+a)
g(d+a) + c(e+h)     2dh + (bg+cf)

Conditions for A, B to anti-commute:

a/d = h/e ....(1)
trA/2b + trB/2f = 0 ....(2)
(b/c) + (f/g) + 2(a/c)(e/g) = 0 .......(3)

A more transparent set of conditions:

a/d = h/e ....(1)    [choose a, d, h, e accordingly]
bg + cf = −2ae .......(2)
f/b = g/c = −tr2/tr1 = −k ....(3)

Choose b; then

f = −bk
c = −ae/f
g = −ck

It will be observed that bg = cf = −ae.

Proof: bg + cf = −2ae, i.e. b(−ck) + c(−bk) = −2ae, which implies bg = cf = −ae.

With the above conditions, it is found that both AB and BA are null matrices, and hence A, B commute as well as anti-commute at these values. Moreover, both A and B are singular matrices.
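Following the recipe numerically (the starting values below are an arbitrary choice satisfying the conditions):

```python
# Choose a, d, e; condition (1) fixes h, condition (3) fixes f, g, and c = -ae/f.
a, d, e = 1.0, 2.0, 3.0
h = e * a / d                  # a/d = h/e
k = (e + h) / (a + d)          # k = tr2/tr1
b = 1.0
f = -b * k
c = -a * e / f
g = -c * k
A = [[a, b], [c, d]]
B = [[e, f], [g, h]]
mm = lambda X, Y: [[sum(X[i][t] * Y[t][j] for t in range(2))
                    for j in range(2)] for i in range(2)]
print(mm(A, B), mm(B, A))   # both are null matrices: AB = BA = 0
```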

When matrix A is divided by matrix B, the results are a matrix C such that BC = A and a matrix C1 such that C1B = A. C1 = C only if the matrices B and C commute, otherwise not. In the general case, multiplying by C on the right side of B gives A; to get A with multiplication on the left side, C1 is used instead of C. Thus, depending on the choice of side, C is modified to C1, with CX = C1 where X is the modification matrix. CX and XC do not necessarily commute.

X21 = (m·o1 − m1·o)/detC;  X11 = (m1 − n·X21)/m;  X22 = (m·p1 − n1·o)/detC;  X12 = (n1 − n·X22)/m

The equations are

( m  n ) ( X11  X12 )   ( m1  n1 )
( o  p ) ( X21  X22 ) = ( o1  p1 )

i.e.

( m  n ) ( X11 )   ( m1 )        ( m  n ) ( X12 )   ( n1 )
( o  p ) ( X21 ) = ( o1 )  and   ( o  p ) ( X22 ) = ( p1 )

* If A, B are both symmetric matrices, then {A,B} is a symmetric matrix and [A,B] is a skew-symmetric matrix.
* While finding the eigenvectors of the 2×2 matrix

a  b
c  d

(eigenvector y-component)/(eigenvector x-component) = (anti-trace/2b) ± (1/b)√[(tr/2)² − det]

We already know that (y/x)part2 = (1/b)√[(tr/2)² − det] = (1/b)√[a²/4 + d²/4 + ad/2 − ad + bc] = √[((d−a)/2b)² + c/b] = (1/2)√[((d−a)/b)² + 4c/b], and (y/x)part1 = anti-trace/2b = (d−a)/2b.

If ((d−a)/2b)² > −c/b, (y/x)part2 is real and non-zero;
if ((d−a)/2b)² = −c/b, (y/x)part2 is zero;
if ((d−a)/2b)² < −c/b, (y/x)part2 is imaginary.

If the matrix is Hermitian, c/b = 1, and

((d−a)/2b)² > 1:  (y/x)part2 is positive and greater than √2
((d−a)/2b)² = 1:  (y/x)part2 is √2
0 < ((d−a)/2b)² < 1:  (y/x)part2 is greater than 1, less than √2
((d−a)/2b) = 0:  (y/x)part2 is +1 or −1

(y/x) = (d−a)/2b ± (1/b)√[((a+d)/2)² − (ad − bc)]. If we take x = 1, then

y = (d−a)/2b ± (1/b)√[((a+d)/2)² − (ad − bc)] = (d−a)/2b ± (1/2)√[((d−a)/b)² + 4c/b]

which is the solution of the quadratic equation

Ay² − By − C = 0, i.e. y² − By − C = 0

where A = 1, B = (d−a)/b, C = c/b.

If B = 1 and C = 1 (as in the case of Hermitian matrices),

y² − y − 1 = 0   (for the positive root, y² = 2.618, y = 1.618)

This is a polynomial of degree 2 having roots y1 = 1.6180 and y2 = −0.6180.

y1 = φ is the Golden Ratio; it satisfies φ² = φ + 1 and 1/φ = φ − 1.

φ = 2cos36°

These correspond to the fact that the length of the diagonal of a regular pentagon is φ times the length of its side, and similar relations hold in a pentagram.

The number φ turns up frequently in geometry, particularly in figures with pentagonal symmetry. The length of a regular pentagon's diagonal is φ times its side. The vertices of a regular icosahedron are those of three mutually orthogonal golden rectangles.

The right triangle with sides 1, √φ, φ is called the Kepler triangle and is the only right triangle whose sides are in geometric progression.
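The stated identities can be confirmed numerically:

```python
import math

# Roots of y^2 - y - 1 = 0: phi = 1.6180... and -1/phi = -0.6180...
phi = (1 + math.sqrt(5)) / 2
print(round(phi, 4), round(phi - math.sqrt(5), 4))        # the two roots
print(abs(phi ** 2 - (phi + 1)) < 1e-12)                  # phi^2 = phi + 1
print(abs(1 / phi - (phi - 1)) < 1e-12)                   # 1/phi = phi - 1
print(abs(2 * math.cos(math.radians(36)) - phi) < 1e-12)  # phi = 2 cos 36 deg
# Kepler triangle 1, sqrt(phi), phi: 1^2 + sqrt(phi)^2 = phi^2
print(abs(1 + phi - phi ** 2) < 1e-12)
```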

* A linear fractional transformation is, roughly speaking, a transformation of the form z → (az + b)/(cz + d) where ad − bc ≠ 0. In other words, a linear fractional transformation is a transformation represented by a fraction whose numerator and denominator are linear.

* A'A = AA' = Δ·I, where Δ is the determinant of the square matrix A and A' is the adjoint (adjugate) of A. The proof lies with the facts that A⁻¹A = I and A⁻¹ = A'/Δ.
* A is called a normal matrix if AAᵀ = AᵀA, i.e. [A, Aᵀ] = 0. For a real 2×2 matrix

a  b
c  d

this happens when b = c, i.e. A is symmetric, or when b = −c and a = d (a rotation-type matrix).
* Normalization of the matrix is done by dividing each matrix element by the square root of the determinant.
Properties of different types of matrices:

matrix            eigenvalues                    eigenvectors
Hermitian         real                           orthogonal
Anti-Hermitian    pure imaginary or zero         orthogonal
                  (for a real anti-symmetric
                  matrix: imaginary)
Unitary           unit magnitude                 orthogonal
Normal            A has eigenvalue λ;            A, A⁺ have the same eigenvectors
                  A⁺ has eigenvalue λ*

Eigenvalues of 2×2 square matrices:

λ = tr/2 ± √[(tr/2)² − Δ] = tr/2 ± √[(anti-tr/2)² + bc]

(y/x) = (anti-tr/2b) ± √[(anti-tr/2)² + bc]/b = (anti-tr/2b) ± √[(anti-tr/2b)² + c/b]

EV1 = eigenvector 1 = i + ((anti-tr/2b) + √[(anti-tr/2b)² + c/b]) j
EV2 = eigenvector 2 = i + ((anti-tr/2b) − √[(anti-tr/2b)² + c/b]) j

<EV1, EV2> = 1 − |c/b|

The absolute value of c/b is taken because if c/b is negative and/or the square root in the second part of the j-coefficient is an imaginary number, say

EV1 = 1i' + i·j'
EV2 = 1i' − i·j'

then multiplication of the coefficients including i yields 1 − i² = 1 + 1 = 2, which is incorrect, as we take only real coefficients, yielding 1 − 1 = 0. So the absolute value of c/b is taken.

||EV1|| = √(1 + [(anti-tr/2b) + √((anti-tr/2b)² + c/b)]²)
||EV2|| = √(1 + [(anti-tr/2b) − √((anti-tr/2b)² + c/b)]²)
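A numerical sanity check of these formulas (the matrix values are arbitrary):

```python
import math

a, b, c, d = 2.0, 1.0, 3.0, 4.0
tr, det, anti = a + d, a * d - b * c, d - a
disc = math.sqrt((anti / 2) ** 2 + b * c)     # sqrt[(tr/2)^2 - det]
lam1, lam2 = tr / 2 + disc, tr / 2 - disc
# Both roots satisfy the characteristic equation lam^2 - tr*lam + det = 0:
print(abs(lam1 ** 2 - tr * lam1 + det) < 1e-9,
      abs(lam2 ** 2 - tr * lam2 + det) < 1e-9)
# Eigenvector (1, y/x): the first row of (A - lam*I) annihilates it.
for sign, lam in ((+1, lam1), (-1, lam2)):
    yx = anti / (2 * b) + sign * math.sqrt((anti / (2 * b)) ** 2 + c / b)
    print(abs((a - lam) + b * yx) < 1e-9)
```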

In 2×2 matrices, there are 3 special cases:

Aᵀ = A⁻¹, which implies Δ² = 1 and Δ = +1 or −1 (the orthogonality case)
A' = A⁻¹, which implies Δ = 1 (since A⁻¹ = A'/Δ)
A = A⁻¹, which implies Δ² = 1 and Δ = +1 or −1. This case has 3 conditions:

condition 1: a = −d and d = ±√(1 − bc)

Example:   ∓√(1−bc)      b
               c      ±√(1−bc)

Δ = −d² − bc = −(1 − bc) − bc = −1; trace = 0; eigenvalues = ∓1

condition 2a: b = c = 0; a = d and d = ±√(1 − bc) = ±1

Example:   ±√(1−bc)      0        =   1   0     or   −1    0
               0      ±√(1−bc)        0   1           0   −1

Δ = 1; trace = 2 or −2

condition 2b: b = c ≠ 0; a = −d and d = ±√(1 − bc)

Example:   ∓√(1−bc)      b
               c      ±√(1−bc)

Δ = −(1 − bc) − bc = −1; trace = 0

condition 3: b = c = 0; a = −d and d = ±√(1 − bc) = ±1

Example:   ∓√(1−bc)      0        =   −1   0     or   1    0
               0      ±√(1−bc)         0   1          0   −1

Δ = −1; trace = 0

In all 3 conditions, the common criterion is d = ±√(1 − bc), and in the majority of cases Δ = −1.

Suppose the matrix is

a         b
c     √(1−bc)

case 1: b = 0, c = 0. Then d = ±1, Δ = ±a and trace = a ± 1.
case 2: b = 0, c ≠ 0. Then d = ±1, Δ = ±a and trace = a ± 1.

 

Pauli Matrices

σ0 =  1  0      σ1 =  0  1      σ2 =  0  −i      σ3 =  1   0
      0  1            1  0            i   0            0  −1

The chief characteristics of σ1, σ2, σ3 are:

(1) They are all traceless.
(2) determinant = −1
(3) They are all Hermitian, unitary and involutory.
(4) Eigenvalues: ±1
(5) The vector norm is preserved.

They can, therefore, be characterized as reflection matrices.

Ambidextrous features:

(1) signature ++xx, +−xx, x symbolizing signatures which are neither + nor −
(2) ac = bd and ac = −bd.
(3) {σ1, σ3} = 0, whereas in general reflection matrices neither commute nor anti-commute. In fact, σ1, σ2, σ3 anti-commute pairwise.
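The listed properties of σ1, σ2, σ3 can be checked directly (in Python, `1j` is the imaginary unit):

```python
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mm(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

for s in (s1, s2, s3):
    tr = s[0][0] + s[1][1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    print(tr, det, mm(s, s))   # traceless, det -1, square = identity

for X, Y in ((s1, s2), (s1, s3), (s2, s3)):
    anti = [[mm(X, Y)[i][j] + mm(Y, X)[i][j] for j in range(2)]
            for i in range(2)]
    print(anti)   # zero matrix: distinct Pauli matrices anti-commute
```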

A singular n×n matrix collapses Rⁿ into a sub-space of dimension less than n. Some information is destroyed on the way; it is not a one-to-one transformation, so there is no inverse. Consider a projection from 3-D space to the 2-D x–y plane, i.e. mapping (x, y, z) to (x, y, 0), so that (1, 1, 2) goes to (1, 1, 0). Now reversing it, one does not know whether the original was (1, 1, 1), (1, 1, 2) or (1, 1, 5). Visually, it is like a paint program where, trying to resize a picture by dragging its corners, you flatten the whole curve into a straight line. How perfectly can you recover a 3-D shape given a picture of its shadow from a single angle?

* A singular matrix A =

a  b
c  d

given by the matrix equation

( a  b ) ( x )   ( 0 )
( c  d ) ( y ) = ( 0 )

represents the equation of a straight line passing through the origin. Its other features are:

(1) slope or inclination w.r.t. the x-axis is given by m = tanθ = −a/b = −c/d
(2) the column and row vectors are linearly dependent.
(3) the rank of the matrix is 1.
(4) the eigenvalues are λ1 = 0 and λ2 = trace = a + d
(5) Eigenvector 1 = 1·i + (c/a)j
    Eigenvector 2 = 1·i − (c/d)j

Proof: the eigenvector slope is anti-tr/2b ± √[(anti-tr/2b)² + c/b] = (d−a)/2b ± √[((d−a)/2b)² + c/b]. Since ad = bc, (d−a)/2b = (c/2)(1/a − 1/d) and c/b = c²/ad (as b = ad/c), so

eigenvector (y/x) = (c/2)(1/a − 1/d) ± √[(c/2)²(1/a − 1/d)² + 4c²/4ad] = (c/2)(1/a − 1/d) ± (c/2)√[(1/a + 1/d)²] = (c/2)(1/a − 1/d) ± (c/2)(1/a + 1/d)

eigenvector (y/x)1 = c/a
eigenvector (y/x)2 = −c/d

The angle between the two eigenvectors = cos⁻¹(<EV1,EV2>/(||EV1||·||EV2||)) = cos⁻¹[(1 − |c/b|)/(√[1+(c/a)²]·√[1+(c/d)²])]

In the case of imaginary numbers, take only the real parts, ignoring i.

When the matrix transforms the vector (x, y)ᵀ to (0, 0)ᵀ, the norm of the new vector is zero irrespective of the magnitude of the norm of the old vector; the new vector is a null vector.

(6) ( b  a ) ( x )   ( 0 )
    ( d  c ) ( y ) = ( 0 )

represents a straight line passing through the origin which is perpendicular to the above straight line.

(7) (  d  −b ) ( x )
    ( −c   a ) ( y )

where the matrix, which is the adjoint of matrix A, represents the equation of a straight line passing through the origin making an angle θ with the straight line represented by the first matrix equation A(x, y)ᵀ = (0, 0)ᵀ, where

θ = tan⁻¹[b(d + a)/(b² − ad)]

(8) The transpose matrix

( a  c ) ( x )   ( 0 )
( b  d ) ( y ) = ( 0 )

represents a straight line passing through the origin having slope −a/c = −b/d.

* A non-singular matrix A =

a  b
c  d

given by the matrix equation

( a  b ) ( x )   ( 0 )
( c  d ) ( y ) = ( 0 )

represents the equation of a pair of different straight lines passing through the origin. Its other features are:

(1a) 1st line: slope or inclination w.r.t. the x-axis is given by m1 = tanθ = −a/b
(1b) 2nd line: slope or inclination w.r.t. the x-axis is given by m2 = tanφ = −c/d
(2) the column and row vectors are linearly independent.
(3) the rank of the matrix is 2.
(4) the eigenvalues are λ1 = tr/2 + √[(tr/2)² − Δ] and λ2 = tr/2 − √[(tr/2)² − Δ]
(5) Eigenvector 1 = 1·i + ((d−a)/2b + √[((d−a)/2b)² + c/b])·j
    Eigenvector 2 = 1·i + ((d−a)/2b − √[((d−a)/2b)² + c/b])·j
(6) ||EV1|| = √(1 + {(d−a)/2b + √[((d−a)/2b)² + c/b]}²)
    ||EV2|| = √(1 + {(d−a)/2b − √[((d−a)/2b)² + c/b]}²)
(7) <EV1, EV2> = 1 − |c/b|; the angle between the two eigenvectors is given by cos⁻¹(<EV1,EV2>/(||EV1||·||EV2||))

(8) The angle between the two straight lines is given by

tan(φ − θ) = (tanφ − tanθ)/(1 + tanφ·tanθ) = Δ/(ac + bd); if ac = −bd, (φ − θ) = 90°
tan(θ − φ) = (tanθ − tanφ)/(1 + tanφ·tanθ) = −Δ/(ac + bd)

If the matrix is singular, the angle between the two straight lines is 0, which means there is only one straight line. If the anti-determinant is zero, Δ = 2ad = −2bc. Then

tan(θ − φ) = −Δ/(ac + bd) = 2ab/(a² − b²)
tan(φ − θ) = Δ/(ac + bd) = 2ab/(b² − a²)

tan(φ + θ) = (tanφ + tanθ)/(1 − tanφ·tanθ) = anti-Δ/(ac − bd), where anti-Δ = ad + bc. If ac = bd, (φ + θ) = 90°. If the matrix is anti-singular (anti-Δ = 0), the sum of the slope angles of the two lines is 0.

Suppose tanθ = m1 = −a/b and tanφ = m2 = −b/a. Now we construct the new matrix elements by mapping. Since m1 is as per the formula, a → a and b → b; since m2 = −c/d = −b/a, c → b and d → a. The matrix is

a  b
b  a

The sum of the angles of the two straight lines w.r.t. the x-axis is given by tan(φ + θ) = anti-Δ/(ac − bd) = (a² + b²)/(ab − ba) = ∞ (using the mapped elements), hence φ + θ = 90° if tanφ·tanθ = 1. Similarly, it can be proved that φ − θ = 90° if tanφ·tanθ = −1.

(9) The general equation of a pair of straight lines passing through the origin is given by

a1x² + b1y² + 2h1xy = 0, where a1 = ac, b1 = bd, h1 = anti-Δ/2.

If m1 is the slope of the first straight line and m2 is the slope of the second, then

m1 + m2 = −2h1/b1 = −anti-Δ/bd = −a/b − c/d; if anti-Δ = 0, m1 + m2 = 0 and a/b = −c/d
m1·m2 = a1/b1 = ac/bd
m1 − m2 = √[(m1 + m2)² − 4m1·m2] = (ad − bc)/bd = Δ/bd
m1 = −c/d, m2 = −a/b

(10) If anti-Δ = 0, then the equation reduces to a1x² + b1y² = 0, which represents two straight lines passing through the origin whose slopes sum to zero.
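The slope identities in (9) can be verified numerically (a, b, c, d below are arbitrary non-degenerate values):

```python
a, b, c, d = 1.0, 2.0, 3.0, 5.0
m1, m2 = -c / d, -a / b              # slopes of the two lines
anti = a * d + b * c                 # anti-determinant
det = a * d - b * c
print(abs((m1 + m2) + anti / (b * d)) < 1e-12)    # m1 + m2 = -anti/bd
print(abs(m1 * m2 - (a * c) / (b * d)) < 1e-12)   # m1*m2 = ac/bd
print(abs((m1 - m2) - det / (b * d)) < 1e-12)     # m1 - m2 = det/bd
```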

* A singular matrix A =

a  b
c  d

given by the matrix equation

( a  b ) ( x )   (  k   )       (  1  )
( c  d ) ( y ) = ( kc/a ) = k · ( c/a )

represents the equation of a straight line not passing through the origin, where k is any real number. Its other features are:

(1) slope or inclination w.r.t. the x-axis is given by m = tanθ = −a/b = −c/d
(2) y-intercept = k/b; x-intercept = k/a
(3) the column and row vectors are linearly dependent.
(4) the rank of the matrix is 1.
(5) the eigenvalues are λ1 = 0 and λ2 = trace = a + d
(6) Eigenvector 1 = 1·i + (c/a)j
    Eigenvector 2 = 1·i − (c/d)j

The angle between the two eigenvectors = cos⁻¹[(1 − |c/b|)/(√[1+(c/a)²]·√[1+(c/d)²])]. In the case of imaginary numbers, take only the real parts, ignoring i.

When the matrix transforms the vector (x, y)ᵀ to k·(1, c/a)ᵀ, the norm of the new vector is k√[1+(c/a)²]. If the norm of the old vector is √[1+(c/a)²], the norm gets amplified by k if the old vector is an eigenvector; otherwise, irrespective of the status of the old vector, the norm of the new vector is constant at k√[1+(c/a)²]. Moreover, the intercepts create a vector M = (k/a)i + (k/b)j with norm k√(1/a² + 1/b²) = (k/ab)√(a² + b²). This transforms a right-angled triangle with sides 1/a, 1/b into a similar triangle with sides k/a, k/b, k being the amplification factor. Equivalently, take a triangle whose sides were a, b, amplify it by k, and divide by twice the area (ab) of the original triangle, giving (k/ab)√(a² + b²).

* A non-singular matrix A =

a  b
c  d

given by the matrix equation

( a  b ) ( x )   ( k1 )
( c  d ) ( y ) = ( k2 )

represents the equation of a pair of different straight lines not passing through the origin, where k1, k2 are any real numbers. (If the matrix is singular, the equation is consistent only when k2 = ck1/a.) Its other features are:

(1a) 1st line: slope or inclination w.r.t. the x-axis is given by m1 = tanθ = −a/b
(1b) 2nd line: slope or inclination w.r.t. the x-axis is given by m2 = tanφ = −c/d
(2a) 1st line: y-intercept = k1/b; x-intercept = k1/a
(2b) 2nd line: y-intercept = k2/d; x-intercept = k2/c

 

(3) the column and row vectors are linearly independent.
(4) the rank of the matrix is 2.
(5) the eigenvalues are λ1 = tr/2 + √[(tr/2)² − Δ] and λ2 = tr/2 − √[(tr/2)² − Δ]
(6) Eigenvector 1 = 1·i + ((d−a)/2b + √[((d−a)/2b)² + c/b])·j
    Eigenvector 2 = 1·i + ((d−a)/2b − √[((d−a)/2b)² + c/b])·j
(7) ||EV1|| = √(1 + {(d−a)/2b + √[((d−a)/2b)² + c/b]}²)
    ||EV2|| = √(1 + {(d−a)/2b − √[((d−a)/2b)² + c/b]}²)
(8) <EV1, EV2> = 1 − |c/b|

(9) The angle between the two straight lines is given by tan(φ − θ) = (tanφ − tanθ)/(1 + tanφ·tanθ) = Δ/(ac + bd).
(10) The point of intersection (x1, y1) of the two straight lines is given by

x1 = (dk1 − bk2)/Δ
y1 = −(ck1 − ak2)/Δ

(11) The norm of the vector V connecting (0, 0) and (x1, y1) is given by

||V|| = (1/Δ)√[(dk1 − bk2)² + (ck1 − ak2)²] = (1/Δ)√[k1²(c² + d²) + k2²(a² + b²) − 2k1k2(ac + bd)]
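Items (10) and (11) can be sketched as follows (arbitrary values; |Δ| is used so the computed norm is non-negative):

```python
import math

a, b, c, d = 1.0, 2.0, 3.0, 5.0
k1, k2 = 4.0, 7.0
det = a * d - b * c
x1 = (d * k1 - b * k2) / det        # Cramer's rule
y1 = -(c * k1 - a * k2) / det
# (x1, y1) lies on both lines:
print(abs(a * x1 + b * y1 - k1) < 1e-12, abs(c * x1 + d * y1 - k2) < 1e-12)
# Norm of V from the origin to the intersection point:
V = math.hypot(x1, y1)
print(abs(V - math.sqrt((d*k1 - b*k2)**2 + (c*k1 - a*k2)**2) / abs(det)) < 1e-12)
```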

(a) If a² + b² = 1, c² + d² = 1 and ac = −bd, then Δ = 1 and

||V|| = (1/Δ)√(k1² + k2²) = √(k1² + k2²)  (here the matrix elements drop out)

If V is normalized, ||V|| = [1/√(k1² + k2²)](k1 + k2)

(b) If a² + b² = 1, c² + d² = 1 and ac = bd, then

||V|| = (1/Δ)√(k1² + k2² − 4k1k2·bd)

If |b| = sinθ and |d| = cosθ (or vice versa), then

||V|| = (1/cos2θ)√(k1² + k2² − 2k1k2·sin2θ)   if b, d are of the same sign
      = (1/cos2θ)√[k1² + k2² + 2k1k2·cos(π/2 + 2θ)]

||V|| = −(1/cos2θ)√(k1² + k2² + 2k1k2·sin2θ)  if b, d are of different sign
      = −(1/cos2θ)√[k1² + k2² − 2k1k2·cos(π/2 + 2θ)]

Thus if we treat k1, k2 as two vectors with angle (π/2 + 2θ) between them, the resultant vector V1 and difference vector V2 are given by

|V1| = √[k1² + k2² + 2k1k2·cos(π/2 + 2θ)]  as per the parallelogram law of vector addition
|V2| = √[k1² + k2² − 2k1k2·cos(π/2 + 2θ)]  as per the parallelogram law of vector subtraction

Thus ||V|| = (1/cos2θ)·|V1| if b, d are of the same sign, and ||V|| = −(1/cos2θ)·|V2| if b, d are of opposite sign.

Nature of |V1| and |V2|:

θ = 0°:   |V1| = |V2|
θ = 45°:  |V1| < |V2|
θ = 90°:  |V1| = |V2|
θ = 135°: |V1| > |V2|

(12) If a = d, then (y/x) of the eigenvectors is ±√(c/b), and the ratio of the components of the resultant vector is k2/k1. If √(c/b) = k2/k1, then what happens?

* A singular matrix  A=

a    b

c    d

given by the matrix equations

a    b    *    x    =    0         and         a    b    *    x    =    k

c    d          y          k                         c    d          y          0

represents the equations of two straight lines, one passing through the origin and the other not through the origin, the second being parallel to the first. Its other features are-

(a) 1st line slope=-a/b; 2nd line slope=-c/d =-a/b since matrix is singular.

(b) intercept of 2nd line on x-axis =k/c

                         intercept on y-axis=k/d

* A non-singular matrix  A=

a    b

c    d

given by the matrix equations

a    b    *    x    =    0         and         a    b    *    x    =    k

c    d          y          k                         c    d          y          0

represents equation of two  straight lines one  passing through the origin and the other not through the origin . Its other features are-

(a) 1st line slope=-a/b; 2nd line slope=-c/d 

(b) intercept of 2nd line on x-axis =k/c

                         intercept on y-axis=k/d

( c ) point of intersection between 2 lines

       x=-bk/ Δ

       y=ak/ Δ

(d) angle between 2 lines given by tanθ =- Δ / (ac+bd)

      if slope of first line is m1, 2nd line is m2, then tanθ = (m1-m2) / (1+m1m2)
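Both expressions for the tangent agree, as a quick numeric check shows (entries arbitrary, matrix non-singular):

```python
# m1 = -a/b, m2 = -c/d;  (m1 - m2)/(1 + m1*m2) should equal -Δ/(ac + bd)
a, b, c, d = 2.0, 3.0, 4.0, 5.0
m1, m2 = -a / b, -c / d
lhs = (m1 - m2) / (1 + m1 * m2)
rhs = -(a * d - b * c) / (a * c + b * d)
assert abs(lhs - rhs) < 1e-12
```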

* How to construct the equation of a pair of straight lines passing through the origin from a 2x2 matrix ?

Let A=

a    b

c    d

then   (ac)x² + (ad+bc)xy + (bd)y² = 0 represents a pair of st. lines passing through the origin.

(a) when ad=bc, ac=-bd, the equation is b(y²-x²) + 2axy = 0 & the matrix is singular. slope dy/dx = (bx-ay)/(ax+by)

      when ad=-bc, ac=bd, the equation is (y²+x²) = 0 if either of b,d is not zero, and out of (x,y), if one is real, the other is imaginary. slope is dy/dx = -x/y

(b) when ad=bc, ac=bd, the equation is b(y²+x²) + 2axy = 0 & the matrix is singular. Slope is dy/dx = -(bx+ay)/(ax+by)

      when ad=-bc, ac=-bd, the equation is (y²-x²) = 0 if either of b,d is not zero. Then either both x, y are real or both are imaginary. slope is dy/dx = x/y

We have seen that (ac)x² + (ad+bc)xy + (bd)y² = 0 represents a pair of st. lines passing through the origin.

The usual equation of a pair of st. lines passing through the origin is a1x² + 2hxy + b1y² = 0 where

a1 = ac , 2h = ad+bc , b1 = bd

now 2h = ad + bc = ad + (b1/d)(a1/a) = ad + a1b1/(ad), or (ad)² - 2h(ad) + a1b1 = 0, or ad = h ± √(h² - a1b1) = k

Hence given the equation a1x² + 2hxy + b1y² = 0, one can construct the 2x2 matrix A, where one can arbitrarily choose a.

A =     a              b1a/k

           a1/a         k/a         and determinant Δ = k - (a1b1/k). If k² = a1b1, then the matrix is singular.

Equation a1x² + 2hxy + b1y² = 0 represents a pair of straight lines passing through the origin where a1 = ac, b1 = bd, h = (anti-Δ)/2. When (anti-Δ) = 0, it reduces to a pair of straight lines the sum of whose angles w.r.t. the x-axis is zero, as m1 = -m2 or m1 + m2 = 0 where m1, m2 are the slopes of st. lines 1 and 2 respectively, and the equation is

a1x² + b1y² = 0  or a²x² - b²y² = 0

or (ax+by)(ax-by)=0
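The whole construction can be checked end to end: choose a1, h, b1 with h² > a1·b1, build A as above, and confirm its entries reproduce the conic coefficients (the values below are arbitrary):

```python
import math

a1, h, b1 = 2.0, 3.0, 4.0
k = h + math.sqrt(h**2 - a1 * b1)     # ad = h ± sqrt(h² - a1*b1); '+' branch
a = 1.0                                # 'a' may be chosen arbitrarily
b, c, d = b1 * a / k, a1 / a, k / a
# the matrix [[a, b], [c, d]] must reproduce the conic coefficients
assert math.isclose(a * c, a1)             # a1 = ac
assert math.isclose(b * d, b1)             # b1 = bd
assert math.isclose(a * d + b * c, 2 * h)  # 2h = ad + bc
det = a * d - b * c
assert math.isclose(det, k - a1 * b1 / k)  # Δ = k - a1*b1/k
```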

 

                                       

* matrix A =

a      b

c      d

and the conditions are

1. a² + b² = 1.....(1)

2. c² + d² = 1.....(2)

3. ac = bd .......(3)

then it follows that

(a) a=d

(b) b=c

Proof: ac = bd, so a²c² = b²d², or (1-b²)(1-d²) = b²d², or b² + d² = 1....(4). Similarly, it can be proved that a² + c² = 1....(5). From equn. (1) & (5), it follows

that b=c. Similarly from equn. (1) & (4), it follows a=d.

(c) |a|, |b|, |c|, |d| each lie in [0,1]

(d) determinant lies in  [-1,1]

(e)  A= f (either a or b or c or d) i.e.  function of a single variable.

1st  case : if a=d, b=-c, then ac=-bd, Δ = b² + d² = 1 ; Aᵀ = A⁻¹, hence the matrices are orthogonal. Eigen values are complex numbers for real a,b,c,d.

if a=-d, b=c, then ac=-bd, Δ = -(b² + d²) = -1 ; Aᵀ = A⁻¹, hence the matrices are orthogonal. Eigen values are ±1.

where |a|, |b|, |c|, |d| are each less than or equal to 1.

2nd case: But if a=d, b=c, ac=bd, Δ = d² - b² = 1-2b² ; E.values are d ± b. Since d,b are both numerically less than / equal to 1, Δ lies between 0 and 1. Aᵀ = A. (y/x) of eigen vectors = ±1 (++++, ----, +-+-, -+-+ cases)

But if a=-d, b=-c, ac=bd, Δ = -(d² - b²) = -(1-2b²) ; E.values: ±√(d² - b²). Since d,b are both numerically less than / equal to 1, Δ lies between 0 and -1. (y/x) of eigen vectors = d/b ± √[(d/b)² - 1]. When |d| < |b|, (y/x) is a complex number. (++--, +-+-, -+-+, --++ cases). Orange patterns are observed in both sub-groups of the 2nd case.

If we take a=cosθ=d, then b=sinθ=c, Δ = (cos²θ - sin²θ) = cos2θ. Its periodicity is 180 degrees & the pattern is similar to cosθ. At θ=0° & θ=180°, Δ = +1 and midway in between, i.e. at θ=90°, it is -1. If one plots Δ = f(cos2θ), one gets the blue graph. If one plots Δ = f(cosθ), one gets the red graph.
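A short parametric sweep (θ values arbitrary) confirms Δ = cos2θ for this family:

```python
import math

for theta_deg in (0, 30, 45, 60, 90, 135, 180):
    t = math.radians(theta_deg)
    a = d = math.cos(t)      # a = d = cosθ
    b = c = math.sin(t)      # b = c = sinθ
    det = a * d - b * c      # cos²θ - sin²θ
    assert math.isclose(det, math.cos(2 * t), abs_tol=1e-12)
```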

Remark: The main difference between 1st case and 2nd case is that in the first case , the determinant is +1 or -1 irrespective of value of a,b and is a constant. However, in the 2nd case, the determinant varies with variation of value of a,b and lies [-1,0] or [0,+1] . so the overall range is [-1,+1]. For 2nd case, typical examples are :-

1 0                                1/√2     1/√2

0 1  has Δ =1  and         1/√2     1/√2  has Δ =0

similarly

0 1                                  -1/√2      -1/√2

1 0  has Δ =-1  and         -1/√2     -1/√2  has Δ =0

* matrix A =

a      b

c      d

and the conditions are

1. a² + b² = 1.....(1)

2. c² + d² = 1.....(2)

3. ac = bd .......(3)

From Equn. (3), √(1-d²)*a = √(1-a²)*d, or

 (1-d²)*a = √(1-d²)*√(1-a²)*d ....(6)

Δ = ad - bc = ad ∓ √(1-d²)*√(1-a²) (sign as per the signs chosen for b, c), so

dΔ = ad² ∓ d*√(1-d²)*√(1-a²) = ad² ∓ (1-d²)*a, using (6). This implies either

dΔ = ad² + a - ad² = a .....(7), or

dΔ = ad² - a + ad² = 2ad² - a, i.e. 2ad² - Δd - a = 0 ......(8). This is a quadratic equation in d and its solution can be found out if a, Δ are known.

if a=d, from (7) Δ = +1 (rotation matrix)

if a=-d, from(7) Δ = -1 (reflection matrix)

if a=d, from (8) Δ = 2d² - 1   (non-orthogonal matrix akin to rotation)

                          d = √[(1+Δ)/2] .....(9)

if a=-d, from (8) Δ = -(2d²-1)   (non-orthogonal matrix akin to reflection)

                           d = √[(1-Δ)/2] ......(10)

Pl. remember that to find either d or Δ given the other, |d| ≤ 1 and Δ ∈ [0,1] for equn. (9)

and                                                                                  |d| ≤ 1 and Δ ∈ [-1,0] for equn. (10)

Accordingly, appropriate value to be put to get correct result.
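Equations (8)-(10) can be sanity-checked by recovering d from (9) or (10) and substituting back into the quadratic (8); the Δ values below are arbitrary:

```python
import math

delta = 0.28                          # any Δ in [0, 1] for the a = d branch
d = math.sqrt((1 + delta) / 2)        # equation (9)
a = d
assert abs(2 * a * d**2 - delta * d - a) < 1e-12   # satisfies equation (8)

delta_b = -0.28                       # any Δ in [-1, 0] for the a = -d branch
d_b = math.sqrt((1 - delta_b) / 2)    # equation (10)
a_b = -d_b
assert abs(2 * a_b * d_b**2 - delta_b * d_b - a_b) < 1e-12
```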

Dividing Equation (8) by a, and putting x = d√2, Δ = a√2, equn. (8) becomes x² - x - 1 = 0......(11)

Solutions are

φ1 = (1+√5)/2 = 1.6180

φ2 = (1-√5)/2 = -0.6180

φ1φ2 =-1

φ1 + φ2=1

φ1 - φ2=√5

φ - 1/φ =1

The matrix M=  1   1

                          1   0   has eigen values  φ1 and φ2 . Corresponding eigen vectors are

μ =  φ1

        1

ν = φ2

       1

M matrix can be diagonalized by a Similar matrix

S=   φ1    φ2

        1       1

and diagonal matrix is

Λ =  φ1    0

         0     φ2

Fibonacci Numbers are represented by matrix equation

Fk+2    = M * Fk+1

Fk+1                 Fk
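A minimal sketch of this recurrence: repeated multiplication by M generates the Fibonacci numbers inside the matrix powers.

```python
# M = [[1, 1], [1, 0]];  M**n = [[F(n+1), F(n)], [F(n), F(n-1)]]
def mat_mul(A, B):
    return [
        [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
        [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
    ]

M = [[1, 1], [1, 0]]
P = M
for _ in range(9):          # after the loop, P = M**10
    P = mat_mul(P, M)
assert P == [[89, 55], [55, 34]]   # [[F11, F10], [F10, F9]]
```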

If a +b =0

φ1*a + φ2*b = 1

then a=-b=1/√5

* We take up the case of matrix A=

x  y

y  x

with condition that x2 + y2 =1 and the matrix is akin to rotational matrix.

If we write the matrix equation

A * x = 1

      y     1, then it represents a pair of conics, one a unit circle and the other a unit rectangular hyperbola (a rectangular hyperbola is a special case of the general hyperbola whose asymptotes are perpendicular to each other). The circle touches the vertex of the hyperbola. On expanding,

x  y  * x =1

y  x     y   1

or  x² + y² = 1 (equation of a circle with center at origin and unit radius)

and 2xy = 1 (equation of a rectangular hyperbola with unit semi-major axis) & foci at (1,1) & (-1,-1), eccentricity √2, and directrices x + y = ±1

The rectangular hyperbola is the locus of a point M such that the difference of its distances from the two foci is 1/√2 times the distance between the foci.

We can rewrite the above matrix equation

cosθ   sinθ  *  cosθ = 1

sinθ   cosθ      sinθ     1

If we take vector X = cosθ*i + sinθ*j with vector norm 1

           and vector Y = i + j with vector norm √2,

 this means the operator acting on the vector rotates it from inclination θ (w.r.t. the x-axis) to 45 degrees and stretches it to norm √2, irrespective of the initial angle and initial norm. If we apply a scaling factor K, the equation becomes

K2 * cosθ   sinθ  * cosθ = K2  * 1

        sinθ   cosθ     sinθ               1

The eigen value of the matrices are λ1 = cosθ + sinθ , λ2 = cosθ - sinθ

to find maximum value of λ1, dλ / dθ =0 or -sinθ +cosθ =0 or θ=45°

to find minimum value of λ2 , dλ / dθ =0, or sinθ +cosθ =0 or θ=135

so maximum value is √2 and minimum value is -√2.

(y/x) of eigen vectors are 1 .so vectors are

1  or 1

1     -1

Hence eigen equations are

cosθ  sinθ   *  1  = (cosθ +sinθ) * 1

sinθ  cosθ       1                             1  and

cosθ  sinθ   *  1  = (cosθ -sinθ) * 1

sinθ  cosθ      -1                          -1

with eigen value hovering between √2 and -√2.
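Both eigen equations can be verified numerically for an arbitrary θ (30° below):

```python
import math

t = math.radians(30)
a, b = math.cos(t), math.sin(t)     # matrix [[a, b], [b, a]]
lam1, lam2 = a + b, a - b           # eigenvalues cosθ ± sinθ
# eigenvector (1, 1) for λ1:
assert math.isclose(a * 1 + b * 1, lam1) and math.isclose(b * 1 + a * 1, lam1)
# eigenvector (1, -1) for λ2:
assert math.isclose(a * 1 + b * (-1), lam2)
assert math.isclose(b * 1 + a * (-1), -lam2)
```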

The matrix

-x  -y

-y  -x

is similar in behavior to the above matrix.

If we take the adjoint matrix i.e.

x  -y  * x =1

-y  x     y   0

or  x² - y² = 1 (equation of a hyperbola with center at origin and unit semi-major axis)

cosθ  -sinθ   *  1  = (cosθ -sinθ) * 1

-sinθ  cosθ       1                             1  and

cosθ  -sinθ   *  1  = (cosθ +sinθ) * 1

-sinθ  cosθ      -1                          -1

It shall be seen that eigen value of the adjoint one is λ2 if that of the original one was λ1 and vice versa.

 

 

Figure below showing 2 rectangular hyperbolas 2xy = a², 2 hyperbolas x² - y² = a², and the circle x² + y² = a².

 

* matrix A =

a      b       

c      d

and the conditions are

1. a² + b² = 1.....(1)

2. c² + d² = 1.....(2)

3. ac = bd .......(3)

then it follows that

(a) a=d

(b) b=c

( c ) A = f(either a or b or c or d) i.e. a function of a single variable

(d) a,b,c,d ∈ [-1,1]  subject to condition (1),(2)

(e) Δ ∈ [-1,1]

When ac = bd, it is non-orthogonal matrix

When ac = -bd, it is orthogonal matrix.

Comparison of Orthogonal & Non-Orthogonal Matrices

Orthogonal                                                                         Non- Orthogonal

(1) Δ = ±1 irrespective of a / θ value                                 (1) Δ = (a² - b²) = cos2θ & is dependent on the a / θ value, limiting values being +1 or -1

(2) signature is +++- or  ---+                                              (2) signature is ++++ , ---- , ++--

(3) vector norm is preserved                                               (3) vector norm is not  preserved

Norm before application of the operator                               Norm before application of the operator

√(x² + y²)                                                                               √(x² + y²)

Norm after application of the operator                                  Norm after application of the operator

√[(xcosθ-ysinθ)² + (xsinθ+ycosθ)²] = √(x² + y²)                 √[(xcosθ+ysinθ)² + (xsinθ+ycosθ)²] = √[(x² + y²) + 2xysin2θ]

                                                                                              minimum value = x-y at θ=135° ;

                                                                                              maximum value = x+y at θ=45° ; the phase difference is 90°

                                                                                              only when θ=0° or θ=90°, it is √(x² + y²)

(4) vector anti-norm is not preserved                                  (4) vector anti-norm is not preserved

Anti- Norm before application of the operator                    Anti- Norm before application of the operator

√(y² - x²)                                                                               √(y² - x²)

Anti-Norm after application of the operator                        Anti-Norm after application of the operator

√[(xsinθ+ycosθ)² - (xcosθ-ysinθ)²] =                                   √[(xsinθ+ycosθ)² - (xcosθ+ysinθ)²] =

√[cos2θ(y² - x²) + 2xysin2θ]                                                 √[cos2θ(y² - x²)]

(5) Rotational Matrix1*Rotational Matrix2=                      (5) akin to Rotational Matrix1*akin to Rotational Matrix2=

     Rotational Matrix3                                                              akin to Rotational Matrix3

a1   -b1  * a2   -b2 = a1a2-b1b2    -a1b2-a2b1                     a1   b1  * a2    b2 = a1a2+b1b2    a1b2+a2b1

b1    a1     b2    a2    a1b2+a2b1    a1a2 - b1b2                     b1   a1     b2   a2    a1b2+a2b1    a1a2 + b1b2

= X1    -Y1                                                                             = X2    Y1

   Y1     X1                                                                                 Y1    X2

(6) Reflection Matrix1*Reflection Matrix2=                      (6) akin to Reflection Matrix1*akin to Reflection Matrix2=

      Rotational Matrix                                                                akin to Rotational Matrix

a1   b1  * a2   b2 = a1a2+b1b2     a1b2-a2b1                          a1   b1  * a2   b2 = a1a2-b1b2     a1b2-a2b1

b1 -a1     b2  -a2    -a1b2+a2b1   a1a2+b1b2                          -b1 -a1   -b2  -a2    a1b2-a2b1     a1a2-b1b2

=X2    Y2                                                                                  =X1    Y2

-Y2     X2                                                                                    Y2     X1

(7) Rotation matrices commute .                                         (7) Akin to Rotation matrices commute .

(8) Reflection matrices do not commute/anti-commute      (8) Akin to Reflection matrices do not commute/anti-commute

a1 b1 * a2   -b2  = a1a2-b1b2   -(a1b2+a2b1)                         One can work out in similar manner

b1-a1  -b2  -a2      a1b2+a2b1     a1a2-b1b2 

a2   -b2 *a1   b1  =a1a2-b1b2     (a1b2+a2b1)

-b2 -a2    b1 -a1  -(a1b2+a2b1)     a1a2-b1b2

(9) Rotation * Reflection =Reflection                                  (9)akin to Rotation *akin to  Reflection =akin to Reflection

(10) Reflection  * Rotation =Reflection                               (10) akin to Reflection  *akin to  Rotation =akin to Reflection

(11) Rotation matrices do not commute with Reflection matrices           (11) akin to Rotation matrices do not commute with akin to Reflection matrices

a1 -b1 * a2 b2  = a1a2-b1b2      a1b2+a2b1                                                 a1 b1 * a2 b2  = a1a2-b1b2      a1b2-a2b1                                                  

b1 a1    b2 -a2     a1b2+a2b1     -a1a2+b1b2                                                b1 a1  -b2 -a2     -a1b2+a2b1     -a1a2+b1b2

a2   b2 *    a1 -b1  =  a1a2+b1b2     a1b2 -a2b1                                           a2   b2 *   a1 b1 = a1a2+b1b2     a1b2 +a2b1

b2 -a2       b1  a1       a1b2-a2b1    -(a1a2+b1b2)                                        -b2 -a2    b1  a1    -(a1b2+a2b1)    -(a1a2+b1b2)

(12) Commutation of above rotation & Reflection matrices is               (12) Commutation  of above akin to rotation &akin to  Reflection matrice is

-2b1b2   2a2b1                                                                                            -2b1b2   -2a2b1

2a2b1    2b1b2 which is reflection                                                              2a2b1    2b1b2 which is akin to reflection

(13) anti- commutation  of above rotation & reflection is                       (13) anti- commutation  of above rotation & reflection is

2a1a2   2a1b2                                                                                             2a1a2   2a1b2

2a1b2  -2a1a2 which is reflection                                                             -2a1b2  -2a1a2   which is akin to reflection

(14) AT = A-1                                                                                          (14)AT ≠ A-1;

 In addition, for reflection matrices A=AT                                                    However, for akin to rotation matrices,  A=AT

hence A-1   =A, so these matrices are involutary.                                        For akin to reflection matrices, A*A=cos2θ*I where I is identity matrix.Hence A*A is scalar  matrix

                                                                                                                                                 means repetition of reflection scales the unit  matrix by a factor of cos2θ

(15) E.value Rotation Matrix:           x ± iy                           (15) E.value Akin to Rotation Matrix:           x ± y

                     Reflection Matrix:        ±1                                                   Akin to Reflection Matrix:        ±√(x² - y²)

(16) (y/x) E.vector Rotation Matrix:  ±i                                (16) (y/x) E.vector Akin to Rotation Matrix:     ±1

                     Reflection Matrix:   -x/y ± √[x²/y² + 1]                                           Akin to Reflection Matrix:  -x/y ± √[x²/y² - 1] 

(17) Reflection matrices are traceless.                                   (17) Akin to Reflection matrices are traceless.

        Rotation matrices are anti-traceless.                                      Akin to Rotation matrices are anti-traceless. 

(18) All orthogonal matrices form a group with respect to     (18) All non-orthogonal matrices as defined above do not form any group under matrix multiplication ,

       matrix multiplication. n x n matrices form O(n) group .            neither the  akin to rotational or akin to reflection. Exa:-

       All rotational matrices form special Orthogonal group            A=   cosθ1    sinθ1    B= cosθ2      -sinθ2     C=A*B=   cos(θ1-θ2)      -sin(θ1+θ2)

       called SO(n). Group is abelian when n=even.                                  sinθ1    cosθ1          sinθ2       -cosθ2                     -sin(θ1+θ2)      cos(θ1-θ2)  

                                                                                                        Here in C matrix , following rule is violated because  a2 + b2   ≠ 1 and  c2 + d2   ≠ 1 . The

                                                                                                         situation becomes different if angle is replaced by hyperbolic angle. however, at θ -->0 

                                                                                                         the matrices can be approximated to be forming a group w.r.t. matrix multiplication. In

                                                                                                         above case,a2 + b2   =  c2 + d2 = [cos2(θ1-θ2) +sin2(θ1+θ2)] . If we replace

                                                                                                         a2 + b2   ≠ 1 and  c2 + d2   ≠ 1 with above, other 2 things remaining the same, the matrices

                                                                                                         form a pseudo-group with matrix multiplication. However, they are not abelian pseudo

                                                                                                         group because

                                                                                                         C1=B*A =  cos(θ1-θ2)           sin(θ1+θ2)

                                                                                                                             sin(θ1+θ2)           cos(θ1-θ2) and hence C1  ≠ C.

                                                                                                          This group can be termed as a Pseudo Group w.r.t. Orthogonal Group because only 1

                                                                                                           condition out of the 3 conditions is different in group formation i.e a2 + b2   ≠ 1 and

                                                                                                                                 c2 + d2   ≠ 1 for the product matrix.

                                                                                                          = [cos²(θ1-θ2) + sin²(θ1+θ2)] ∈ [1,2] or [-2,-1], i.e. between [-2,+2]. The value of

                                                                                                          Δ of A,B  as well as C,C1 hover between [-1,+1]

Definition of Pseudo Group (2x2 matrix): A set of elements along with a binary operation forms a pseudo group w.r.t. a main group if 1. the elements follow the closure law & the associativity law; 2. corresponding to each element there is an adjoint element which belongs to the group, and the binary operation of an element on its adjoint produces a scalar matrix, whose scalar element is the square of the matrix determinant, which also belongs to the group.

Further Analysis of Non-Preservation of Norm in case of Non-Orthogonal Matrices (item no. 3 above):

Vector Norm after application of the operator is ||V|| = √[(x² + y²) + 2xy·sin2θ] = √[(x² + y²) + 2xy·cos(π/2 - 2θ)]

Let φ = π/2 - 2θ, then

||V|| = √[(x² + y²) + 2xy·cosφ], which is nothing but the parallelogram law of addition of vectors. Further treatment of this resultant is given in parallelogram1.htm.

When φ = 180° or θ = -π/4 = -45°, ||V|| = (x - y) --- it contracts to its minimum value upon being multiplied by the operator. (Vector contraction)

When φ = 90° or θ = 0°, ||V|| = √(x² + y²) --- it remains unchanged upon being multiplied by the operator. (Identity transformation)

When φ = 0° or θ = π/4 = 45°, ||V|| = (x + y) --- it expands to its maximum value upon being multiplied by the operator. (Vector dilation)

the cycle is 180° for θ and 360° for φ.
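The three regimes (contraction, identity, dilation) follow from ||V||² = x² + y² + 2xy·cosφ and can be checked directly (x = 3, y = 4 are arbitrary positive choices):

```python
import math

def norm_after(x, y, theta_deg):
    # vector (x, y) acted on by the symmetric matrix [[cosθ, sinθ], [sinθ, cosθ]]
    t = math.radians(theta_deg)
    return math.hypot(x * math.cos(t) + y * math.sin(t),
                      x * math.sin(t) + y * math.cos(t))

x, y = 3.0, 4.0
assert math.isclose(norm_after(x, y, -45), abs(x - y))        # contraction, φ = 180°
assert math.isclose(norm_after(x, y, 0), math.hypot(x, y))    # unchanged,  φ = 90°
assert math.isclose(norm_after(x, y, 45), x + y)              # dilation,   φ = 0°
```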

Matrix Equation of the Vector V is given by

1     cosφ    *   x    =  x + ycosφ

0     sinφ         y           ysinφ

where the non-orthogonal matrix

cosθ     sinθ   gets mapped to   1     cosφ     where      φ =(π/2 - 2θ)

sinθ     cosθ                               0     sinφ

The matrix M= 1    cosφ

                         0     sinφ    which is an upper triangular matrix, has the property that its eigen values are the diagonal elements. It also continuously evolves from a singular matrix to the identity matrix I as φ changes from 0° to 90°. The matrix behaves in a similar manner if it is a lower triangular matrix, which it can be.
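A triangular matrix carries its eigen values on the diagonal; for M above they are 1 and sinφ, which the usual 2x2 eigen value formula confirms (φ arbitrary):

```python
import math

phi = math.radians(40)
m11, m12, m21, m22 = 1.0, math.cos(phi), 0.0, math.sin(phi)   # upper triangular M
tr = m11 + m22
det = m11 * m22 - m12 * m21
disc = math.sqrt((tr / 2) ** 2 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc
assert math.isclose(lam1, 1.0)              # first diagonal element
assert math.isclose(lam2, math.sin(phi))    # second diagonal element
```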

 
Algebraic & Geometric Multiplicity of Eigen Values:

* The number of times an eigen value occurs in a matrix is called its algebraic multiplicity. For an nxn matrix, the algebraic multiplicity cannot exceed n.

* Geometric multiplicity is the number of linearly independent eigen vectors associated with an eigen value. Geometric multiplicity cannot exceed algebraic multiplicity & its minimum value is 1.

* If, for a matrix, the geometric multiplicity equals the algebraic multiplicity for each eigen value, then the matrix is non-defective and hence can be diagonalized.

Kernel of a Matrix:

Solution of a set of simultaneous linear equations consists of specific solution + homogeneous solution .

Kernel of the co-efficient matrix of simultaneous linear equations tells us about the homogeneous solution part. This is a rough measure of how much of the domain vector space is shrunk to the zero vector, i.e. how much collapsing or condensation of information takes place. An extreme example is the zero matrix, which annihilates everything, obliterating any useful information the system of equations might reveal to us (it is all kernel and no range).

Example : A

1 1 0

1 0 2

2 1 2

and B=

x

y

z

such that AB=Z where Z=

2

3

5

First we put AB=0 (homogeneous equation )where A is a singular matrix

here x /Δ1 = -y / Δ2 = z / Δ3

Δ1 = 0  2    Δ2 =  1   2     Δ3 = 1   0

        1  2               2  2              2   1

hence  x / -2 = -y/-2 = z/1  or x=-2z, y=2z

Kernel is {t(-2,2,1); t ∊ R} . This is a sub-space of R3 of dimension 1 ( a line passing through the origin)

Put z=1,then  x=1, and y=1 (specific solution)

The Solution is - specific solution (1,1,1) +  t(-2,2,1); t ∊ R
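Both the kernel and the specific solution can be verified by direct substitution:

```python
# A·B = Z; general solution = specific + t·kernel
A = [[1, 1, 0], [1, 0, 2], [2, 1, 2]]
Z = [2, 3, 5]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

kernel = [-2, 2, 1]
specific = [1, 1, 1]
assert apply(A, kernel) == [0, 0, 0]        # homogeneous part
assert apply(A, specific) == Z              # specific part
for t in (-3, 0, 2):                        # any t in R works
    v = [specific[i] + t * kernel[i] for i in range(3)]
    assert apply(A, v) == Z
```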

* The matrix M which transforms A matrix into its transpose AT   :

Let A=  a   b   and M=  e  f     such that MA=AT

              c  d                  g  h

then e = (ad-c²)/(ad-bc) ;    f = -a(b-c)/(ad-bc) ; g = d(b-c)/(ad-bc) ; h = (ad-b²)/(ad-bc). If ad-bc = Δ = determinant, then

M = ( 1/Δ )   * ad-c²         -a(b-c)

                       d(b-c)          ad-b²
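Taking M = Aᵀ·A⁻¹ with the entries given above, M·A = Aᵀ checks out on a concrete matrix (entries arbitrary):

```python
a, b, c, d = 2.0, 3.0, 4.0, 5.0
det = a * d - b * c
# M = (1/Δ) · [[ad - c², -a(b - c)], [d(b - c), ad - b²]]
M = [[(a * d - c * c) / det, -a * (b - c) / det],
     [d * (b - c) / det, (a * d - b * b) / det]]
A = [[a, b], [c, d]]
MA = [[sum(M[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert MA == [[a, c], [b, d]]    # the transpose of A
```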

e/h = (ad-c²)/(ad-b²) = say n

f/g = -a/d = say n

if n=1, then M =  h   f

                         -f   h  and with real numbers, its eigen values are complex, i.e. h ± if, like a rotation matrix if the determinant is +1.

With M as above Matrix A can have either of 2 forms;

A1 =   d    b

          b   -d  with eigen values ±√(d²+b²), which is similar to a reflection matrix provided the determinant is -1.

A2 =  d    -b

         b    -d  with eigen values ±√(d²-b²), which can be real or imaginary depending on the values of d,b. This is not orthogonal even if the determinant is -1 and is akin to a non-orthogonal reflection matrix. A2 is involutary if the determinant is -1.

One can work with general value of n=n.

* Similarly matrix N which transforms matrix A  to its adjoint A' is given by

N=

( 1/Δ )   * bc+d²         -b(d+a)

                -c(a+d)          bc+a²

* Similarly matrix O which transforms matrix A  to its inverse A-1 is given by

O=

( 1/Δ² )   * bc+d²         -b(d+a)

                -c(a+d)          bc+a²

* Matrix T which transforms a typical orthogonal rotation matrix (where Or-1 = OrT )Or to non-orthogonal akin to rotation matrix(where NOr = NOrT )   NOr is

T = 1               0           where  Or =    cosθ    -sinθ     and   NOr =     cosθ    -sinθ

    -sin2θ      cos2θ                               sinθ     cosθ                             -sinθ     cosθ     and T * Or = NOr .

where the determinants of T and NOr are cos2θ. This implies both are periodic functions of θ with values in the range [-1,+1]. This contrasts with the determinant of Or, which is fixed at +1 or -1. In the instant case, T is a lower triangular matrix. It can also be an upper triangular matrix.

Roto -Translation Matrix:

Any 2x2 square matrix, when it operates on a 2-D vector, produces another 2-D vector whose origin remains the same but whose co-ordinates in 2-D space are different. This displacement can be broken into 2 types of movements, rotational and translational, following in succession. These 2 movements are not necessarily commutative, i.e. rotation-translation may or may not be equal to translation-rotation. The question is how to split the above matrix into rotation and translation.

a  b  0   * x1  = ax1+bx2 =z1

c  d  0      x2     cx1+dx2   z2

0  0  1      1         1             1

This is equivalent to

L *  cosθ    -sinθ   0  * x1  =L * x1cosθ -x2sinθ  

       sinθ      cosθ  0     x2           x1sinθ+x2cosθ

       0          0    1     1                    1

where L is translation part & the matrix M is rotational part (anti-clockwise) where M=

cosθ    -sinθ   0   

sinθ      cosθ  0    

   0          0    1   

 

Now z1=Lx1cosθ -Lx2sinθ=L(x1cosθ -x2sinθ) =Ly1

        z2=Lx1sinθ +Lx2cosθ=L(x1sinθ +x2cosθ)=Ly2

here there are 2 variables L, θ and 2 equations. Solving,

cosθ = (z1x1+z2x2) / [L(x1² + x2²)] = [x1/√(x1² + x2²)]*[z1/√(z1² + z2²)] + [x2/√(x1² + x2²)]*[z2/√(z1² + z2²)] = cosb*cosa + sinb*sina = cos(b-a)

hence, θ =b-a

Here, cos b = x1/√(x1² + x2²) ;    cos a = z1/√(z1² + z2²) = (ax1+bx2)/√[(ax1+bx2)² + (cx1+dx2)²] = y1/√(y1² + y2²)

          sin b = x2/√(x1² + x2²) ;    sin a = z2/√(z1² + z2²) = (cx1+dx2)/√[(ax1+bx2)² + (cx1+dx2)²] = y2/√(y1² + y2²)

So M=

cosθ    -sinθ   0 = cos(b-a)    -sin(b-a)   0   

sinθ      cosθ  0     sin(b-a)     cos(b-a)   0    

   0          0    1         0               0           1

M can be a product of 2 rotation matrices M=Ma * Mb where

 Ma=  cos a   sin a  0

         -sin a   cos a  0

            0        0       1

 Mb=  cos b   -sin b  0

           sin b   cos b  0

            0        0       1

[Ma , Mb] = 0 in 2 dimensions.

 

L = √[(z1² + z2²) / (x1² + x2²)] which is nothing but the Lorentz ratio.

L² = x1²(a²+c²)/(x1²+x2²) + x2²(b²+d²)/(x1²+x2²) + 2x1x2(ab+cd)/(x1²+x2²) ; ............

    = [( ||col1|| / ||vecX|| ) * x-component of vecX]² + [( ||col2|| / ||vecX|| ) * y-component of vecX]² + 2 * (x-component of vecX/||vecX||) * (y-component of vecX/||vecX||) * <col1,col2>

= L²part a + L²part b + L²part c ; ..........(1)

In real cases, the matrix may be the result of any number of discrete rotation and translation in any order or it may be a continuous process.

Now L can be L=1 ......(a) invariance of norm

                       L < 1..... (b) contraction of norm

                       L > 1......(c) dilation of norm

suppose  a    b  * x1 =  z1 = ax1+bx2

               c   d      x2     z2    cx1+dx2

Upon the action of the operator , x1 transforms to z1 and z1 is not  necessarily equal to x1 and so also transformation of x2 to z2. But what are the special

conditions under which ||x|| = ||Z|| or L=1

It is not difficult to observe that if a² + c² = 1, b² + d² = 1, ab + cd = 0 (i.e. the columns are orthonormal), then L = 1 irrespective of whether x1, x2 are both real or both imaginary or one real & one imaginary, if the matrix is a real matrix. Here L²part c vanishes and the other 2 parts sum up to 1. If out of x1 and x2 one is real and the other is imaginary, then the two parts subtract to 1: one part is dilation and the other is contraction, the magnitude of both being greater than 1 and both differing in magnitude by 1. On the other hand, if both are real / imaginary, each part contributes towards either dilation / contraction and both sum up to 1.
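A sketch of the L = 1 case: when the two columns (a, c) and (b, d) are orthonormal, the norm of any real input vector is preserved (θ and the test vectors below are arbitrary).

```python
import math

t = math.radians(33)
# columns (a, c) and (b, d) orthonormal: a²+c² = 1, b²+d² = 1, <col1,col2> = 0
a, b = math.cos(t), -math.sin(t)
c, d = math.sin(t), math.cos(t)
for x1, x2 in [(1.0, 2.0), (-3.0, 0.5)]:
    z1, z2 = a * x1 + b * x2, c * x1 + d * x2
    assert math.isclose(math.hypot(z1, z2), math.hypot(x1, x2))   # L = 1
```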

We know a  b  * x1 = z1

                c  d     x2     z2

If the reference frame also moves proportionally, then the new origin is given by

               a-1   b    * x1 =O1

                c    d-1     x2   O2  (this can be derived from Δx = z1-x1 and Δy = z2-x2)

Example: matrix is A= 2  3   vector X= 1     AX = 8   (click submit on the top)

                                    4   5                    2              14

now for shift of origin, click submit2B above the writing " clicking  here, A matrix becomes a-1,b,c,d-1." and is filled up against B matrix. Now put vector Y = 1

       2   i.e.  same as X and click submit at the top of the page . The BY vector is the new co-ordinate of the origin.

Generalizing, we write   a   b   *  x1+m  = AX

                                       c   d      x2+n              where (m,n) are the coordinates of the initial origin. Then the coordinates of the transformed origin are

                                     a-1   b    *  x1+m

                                      c   d-1      x2+n

= a-1   b     * x1    +    a-1   b  *  m  

    c    d-1      x2          c    d-1     n