Matrix-Vector Multiplication

Case Ia / Case Ib (interactive exercise): enter a 2-dimensional input ket |A> (components along |i> and |j>), normalize it, and apply the operator M (entered as a matrix) to obtain the output ket |B>, the normalized transform of |A>. The exercise reports the eigenvalue λ and the probabilities of observing |B> along |i> and along |j>.


Case IIa / Case IIb (interactive exercise): enter a 2-dimensional input ket |A> (components along |i> and |j>), normalize it, and apply the 2x2 operator matrix M to obtain the output ket |B>, the normalized transform of |A>. The exercise reports the eigenvalue λ, the rank of the matrix M, and the probabilities of observing |B> along |i> and along |j>.

Case IIIa / Case IIIb (interactive exercise): enter a 3-dimensional input ket |A> (components along |i>, |j> and |k>), normalize it, and apply the 3x3 operator matrix M to obtain the output ket |B>, the normalized transform of |A>. The exercise reports the eigenvalue λ, the rank of the matrix M, and the probabilities of observing |B> along |i>, |j> and |k>.
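The quantities these exercises ask for can be computed numerically. Below is a minimal Python/NumPy sketch for a Case IIa-style problem; the vector and matrix entries are assumed example values, not values from the exercise itself:

```python
import numpy as np

# Example 2-D input ket |A> with real components along |i> and |j> (assumed values)
A = np.array([3.0, 4.0])
A_norm = A / np.linalg.norm(A)       # normalized |A>, so <A|A> = 1

# Example 2x2 operator M (assumed values; chosen real and symmetric here)
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

B = M @ A_norm                       # output ket |B> = M|A>
B_norm = B / np.linalg.norm(B)       # normalized transform of |A>

eigvals, eigvecs = np.linalg.eig(M)  # eigenvalues λ and eigenvectors of M
rank = np.linalg.matrix_rank(M)      # rank of the matrix M

# Probabilities of observing |B> along the basis kets |i> and |j>
p_i = abs(B_norm[0])**2
p_j = abs(B_norm[1])**2
print(eigvals, rank, p_i, p_j)       # the two probabilities sum to 1
```

The same pattern extends directly to the 3x3 case by using length-3 vectors and a 3x3 matrix.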
* In the above exercise, put in real values.
* A state vector |A> is represented in general as |A> = Σ_(i=1 to n) α_i |i_i> ....(1), where n is the dimension of the Hilbert space / Euclidean space. In 3-D vector space, n = 3 and |A> = α1 i1 + α2 i2 + α3 i3 = α1 i + α2 j + α3 k, where i, j, k are orthogonal basis vectors (ortho-normal vectors) along the x, y, z co-ordinate axes (i1 = i, i2 = j, i3 = k), and the α_i are in general complex numbers called the components of the state vector. |A> is called a ket vector; its components (complex numbers) are represented as a column matrix whose number of rows equals the dimension of the vector space, with one column. Its counterpart <A| is called a bra vector, represented as a row matrix whose entries are the complex conjugates of the components. |A> is said to be normalized if <A|A> = 1. A state vector |A> and another state vector |B> are said to be orthogonal if <A|B> = <B|A> = 0.
* The inner product of an ortho-normal basis vector (bra vector) with a ket vector is the corresponding component of the ket vector: <i|A> = α_i; <j|A> = α_j; <k|A> = α_k; |A> = α_i i + α_j j + α_k k, where α_i, α_j, α_k are the components of the ket vector in 3-D Hilbert space (a vector space over the complex numbers). In general, <i_i|A> = α_i ....(2)
* If M is a linear operator acting on a state vector |λ> and M|λ> = λ|λ> ....(3), where λ is a number, then |λ> is called an eigenvector and λ is called an eigenvalue. Operators in Quantum Mechanics have certain characteristic features: they are the things we use to calculate eigenvalues and eigenvectors. They act on state vectors, which are abstract mathematical entities, not on actual systems. When an operator acts on a state vector, it produces a new state vector.
* If |λ> is an eigenvector and |λN> is its normalized counterpart, then |λN> = (1 / √[Σ_i |α_i|²]) |λ> ....(4). Normalizing an eigenvector does not change its eigenvalue: λN = λ ....(5), so M|λN> = λN |λN> = λ |λN> ....(6). The normalized components are α_iN = α_i / √[Σ_i |α_i|²] ....(7), so that |λN> = Σ_(i=1 to n) α_iN |i_i> ....(8)
* Σ_j <k|M|j> α_j = β_k, or Σ_j M_kj α_j = β_k, where k, j label basis vectors, M_kj = <k|M|j>, α_j is the jth component of |A>, and β_k is the kth component of |B> = M|A>.
Since |A> and |B> are column vectors of the same dimension, say n, the operator M has to be an n x n matrix; we write its elements as M_kj. If the direction of |B> is the same as the direction of |A>, then |A> is called an eigenvector of M, and in such cases |B> = λ|A>, where λ is called the eigenvalue.
* In Quantum Mechanics, an observable is represented by a linear Hermitian matrix M: M = M†, where M† = (M^T)* is the Hermitian conjugate of M, obtained by first taking the transpose M^T (changing rows to columns and columns to rows) and then replacing each entry with its complex conjugate; equivalently, m_ij = (m_ji)*, where * stands for complex conjugation. The diagonal elements of a Hermitian matrix are real. The real-number counterpart of a Hermitian matrix is a symmetric square (n x n) matrix. The product of two Hermitian matrices A, B is Hermitian if and only if AB = BA. Every Hermitian matrix is a normal matrix, i.e. MM† = M†M; for the real counterpart, MM^T = M^T M. Among real matrices, all orthogonal, symmetric and skew-symmetric matrices are normal; among complex matrices, all Hermitian, skew-Hermitian and unitary matrices are normal. The converses are not necessarily true.
* A complex square matrix M is said to be unitary if M† = M^-1, i.e. MM† = I, where I is the identity matrix. Unitary matrices are important in quantum mechanics because they preserve norms and thus probability amplitudes. If x, y are two complex vectors, multiplication by a unitary matrix U preserves their inner product: <Ux|Uy> = <x|y>; this also implies U is normal, and hence diagonalizable, with |det(U)| = 1. U can be written as U = e^(iH), where e denotes the matrix exponential and H is a Hermitian matrix. The real analogue of a unitary matrix is an orthogonal matrix.
* The eigenvectors of a Hermitian operator form an orthonormal basis and also form a complete set. But sometimes we encounter a set of linearly independent eigenvectors that do not form an orthonormal set.
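The Hermitian and unitary properties listed above are easy to verify numerically. This sketch uses an assumed example Hermitian matrix; the unitary matrix is obtained from its eigenvectors:

```python
import numpy as np

# An assumed example Hermitian matrix: equal to its conjugate transpose
M = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(M, M.conj().T)              # M = M†
assert np.allclose(np.diag(M).imag, 0.0)       # diagonal elements are real
assert np.allclose(M @ M.conj().T, M.conj().T @ M)   # Hermitian => normal

# Its eigenvalues are real and its eigenvectors form an orthonormal basis
w, V = np.linalg.eigh(M)
assert np.allclose(V.conj().T @ V, np.eye(2))

# The eigenvector matrix V is unitary: V† = V^-1, it preserves inner
# products, and |det(V)| = 1
U = V
x = np.array([1.0, 2.0j])
y = np.array([0.5, 1.0])
assert np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y))
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```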
This typically happens when a system has degenerate states: distinct states having the same eigenvalue. In those cases we construct an orthonormal set from the linearly independent vectors.
* If X is an n x n real or complex matrix, the matrix exponential e^X is given by the power series e^X = Σ_(k=0 to ∞) X^k / k!. This series always converges, so e^X is well defined. Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis; Matlab, GNU Octave, and SciPy all use the Padé approximant.[7][8][9]
* The general expression of a 2x2 unitary matrix is
U = [ a, b ; -e^(iφ) b*, e^(iφ) a* ], with |a|² + |b|² = 1,
which depends on 4 real parameters (the phase of a, the phase of b, the relative magnitude between a and b, and the angle φ). The determinant of such a matrix is det(U) = e^(iφ). The subgroup of such matrices with det(U) = 1 is called the special unitary group SU(2). An alternative form writes a = e^(iφ1) cos θ and b = e^(iφ2) sin θ. By introducing φ1 = ψ + Δ and φ2 = ψ − Δ, U takes the following factorization (up to the overall phase e^(iφ/2)):
U = [ e^(iψ), 0 ; 0, e^(-iψ) ] · [ cos θ, sin θ ; -sin θ, cos θ ] · [ e^(iΔ), 0 ; 0, e^(-iΔ) ].
This expression highlights the relation between 2 x 2 unitary matrices and 2 x 2 orthogonal matrices of angle θ. Many other factorizations of a unitary matrix into basic matrices are possible.
* For any nonnegative integer n, the set of all n x n unitary matrices with matrix multiplication forms a group, called the unitary group U(n). Any square matrix with unit Euclidean norm is the average of two unitary matrices.[1]
* The eigenvectors of a Hermitian operator form a complete set: any vector the operator can generate can be expanded as a linear combination of the eigenvectors.
* If two eigenvalues λ1 and λ2 of a Hermitian operator are different, the corresponding eigenvectors are orthogonal to each other.
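The power series above can be summed directly (production code should prefer a Padé-based routine such as SciPy's `expm`, but a naive series is enough to illustrate U = e^(iH)). The Hermitian matrix H below is an assumed example:

```python
import numpy as np

def expm_series(X, terms=40):
    """Matrix exponential e^X summed from its power series sum_k X^k / k!.
    A naive sketch; adequate only for matrices of modest norm."""
    result = np.eye(X.shape[0], dtype=complex)
    term = np.eye(X.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ X / k          # X^k / k! built incrementally
        result = result + term
    return result

# An assumed 2x2 Hermitian matrix H
H = np.array([[1.0, 0.5 - 0.5j],
              [0.5 + 0.5j, 2.0]])

U = expm_series(1j * H)              # U = e^(iH) should be unitary
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```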
* If the same Hermitian operator acting on two different state vectors yields the same eigenvalue, then any state vector that is a linear combination of those two state vectors yields that same eigenvalue when operated upon by the same Hermitian operator. Example: if M|λ1> = λ|λ1>, M|λ2> = λ|λ2>, and |A> = α|λ1> + β|λ2>, where α, β are complex numbers, then M|A> = λ|A>. Here |λ1> and |λ2> are two linearly independent state vectors, but not necessarily orthogonal to each other.
* If two eigenvalues of the same Hermitian operator are equal, the corresponding eigenvectors can be chosen to be orthogonal to each other. Example: from |λ1> and |λ2> we can create two ortho-normal state vectors (i, j), both having the same eigenvalue λ: i = |λ1> / ||λ1|| and j = |λ2⊥> / ||λ2⊥||, where |λ2⊥> = |λ2> − <i|λ2> |i>.
* The situation in which two different eigenvectors of the same operator share the same eigenvalue is called degeneracy.
* Unambiguously distinguishable states are represented by orthogonal vectors.
* Sometimes we encounter a set of linearly independent eigenvectors that do not form an ortho-normal set. This typically happens when the system has degenerate states: distinct states having the same eigenvalue. In that case we use the linearly independent vectors we have to create an orthonormal set that spans the same space; the method is called the Gram-Schmidt procedure.
* If |A> is a (normalized) state vector of a system and L is an observable to be measured, the probability to observe λ_i (where the λ_i are the eigenvalues of L and the |λ_i> the corresponding eigenvectors) is P(λ_i) = |<λ_i|A>|².
* If L is a Hermitian operator (observable) and M is another Hermitian operator, then [L, M] = LM − ML is called the commutator of L with M.
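The Gram-Schmidt procedure and the measurement-probability rule can be sketched together in Python. The two "degenerate eigenvectors" and the state |A> below are assumed example values:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent (possibly complex) vectors."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for b in basis:
            w = w - np.vdot(b, w) * b        # subtract the projection <b|v> |b>
        basis.append(w / np.linalg.norm(w))  # normalize the remainder
    return basis

# Two assumed linearly independent, non-orthogonal degenerate eigenvectors
lam1 = np.array([1.0, 0.0, 0.0])
lam2 = np.array([1.0, 1.0, 0.0])

i_vec, j_vec = gram_schmidt([lam1, lam2])
assert np.isclose(np.vdot(i_vec, j_vec), 0.0)   # orthogonal
assert np.isclose(np.linalg.norm(j_vec), 1.0)   # normalized

# Measurement probability P(lambda_i) = |<lambda_i|A>|^2 for a normalized |A>
A = np.array([0.6, 0.8, 0.0])
p = abs(np.vdot(i_vec, A))**2
```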