Algebraic subjects / theory of matrices : and applications



Bibliographic Details
Main Author: Afriat, S
Format: Thesis
Published: 1953
Description
Summary:<p>This thesis consists of six independent parts, each of which develops a topic in the theory of matrices.</p> <p>The first part, 'Compounds, adjugates and partitioned determinants', attempts, with the introduction of suitable notation and the proving of some general results, to contribute to and complete certain aspects of the general theory of determinants. It is specially concerned with the derived systems of matrices, the compounds and adjugates of a matrix, whose elements are formed from the minors and cofactors of that matrix, with the dual hybrid compounds, which form a generalization of these, and their determinants, and also with the expansions of bordered and partitioned determinants in terms of the compounds and adjugates of their component parts. The multiplication theorem of Binet and Cauchy for compounds and adjugates is proved, and its application is fundamental throughout. One of the results is that if matrices <em>a</em>, <em>b</em> are square of orders m, n and <em>x</em>, <em>y</em> are of orders m×n, n×m, then</p> <p align="center">|<em>a</em> <em>x</em>; <em>y</em> <em>b</em>| = |<em>a</em>||<em>b</em>| + ∑<sub>r=1</sub><sup>k−1</sup>(−1)<sup>r</sup> trace(<em>a</em><sup>[r]</sup><em>x</em><sup>[r]</sup><em>b</em><sup>[r]</sup><em>y</em><sup>[r]</sup>) + (−1)<sup>k</sup><em>t</em>,</p> <p>where the final factor <em>t</em> is <em>x</em><sup>[k]</sup><em>b</em><sup>[k]</sup><em>y</em><sup>[k]</sup>, |<em>x</em>||<em>y</em>| or <em>y</em><sup>(k)</sup><em>a</em><sup>[k]</sup><em>x</em><sup>(k)</sup> according as m&lt;n, m=n or m&gt;n, and k=min(m,n).
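The multiplication theorem of Binet and Cauchy invoked above admits a direct numerical check. The following sketch is a modern illustration, not part of the thesis: the function name compound and the use of NumPy are our own assumptions. It builds the r-th compound of a matrix from its r×r minors, indexed by lexicographically ordered row and column combinations, and verifies that the compound of a product is the product of the compounds.

```python
from itertools import combinations
import numpy as np

def compound(m, r):
    """r-th compound of m: the matrix of all r x r minors of m,
    with row and column index sets in lexicographic order."""
    rows = list(combinations(range(m.shape[0]), r))
    cols = list(combinations(range(m.shape[1]), r))
    return np.array([[np.linalg.det(m[np.ix_(p, q)]) for q in cols]
                     for p in rows])

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))

# Multiplication theorem of Binet and Cauchy: (ab)^[r] = a^[r] b^[r]
assert np.allclose(compound(a @ b, 2), compound(a, 2) @ compound(b, 2))
```

The same construction applies to rectangular matrices, in which case the compound of an m×n matrix with k = min(m, n) degenerates to a single row or column, which is what makes the three-way alternative in the partitioned-determinant expansion above dimensionally consistent.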
<p>The second part, 'Commutative matrices, latent vectors and characteristic values', discusses commutative sets of matrices, their common latent vectors, and the sets of characteristic values which correspond to them. It also makes a study of commutative matrix algebras with a unit over the complex field. A set of characteristic values all of which correspond to a single common latent vector of a set of commutative matrices defines a <em>characteristic set</em> of that matrix set. Two fundamental results proved are that every commutative matrix set has at least one characteristic set, and that the characteristic sets of any commutative matrix set can be extended to give the characteristic sets of any commutative extension of that set. It is also proved that the matrices of any commutative set can all be reduced to triangular form by the same unitary transformation, and that the sets of similarly situated diagonal elements in the triangular transforms are the characteristic sets, with certain repetitions. Through these repetitions the <em>multiplicity</em> of a characteristic set is defined, so as to generalize the concept of the multiplicity of a characteristic value of a single matrix as its multiplicity as a root of the characteristic equation. The <em>rank</em> of a characteristic set is defined as the rank of the space of corresponding simultaneous latent vectors of the matrices of the set, the corresponding <em>axial space</em>. It is shown that the rank of a characteristic set is at most its multiplicity. The <em>index</em> of a characteristic set is defined so as to generalize the concept of the index of a characteristic value of a single matrix as its multiplicity as a root of the minimum function, and it is shown that no characteristic set has zero index.
Proof is given to the theorem that the characteristic values of any rational function of commutative matrices are given by the values of that function when the argument ranges over the characteristic sets, and the multiplicity of any characteristic value is determined as the sum of the multiplicities of the characteristic sets which give that value. In any commutative matrix algebra K with a unit over the complex field there exist independent idempotents <em>e</em><sub>n</sub> which are irreducible in K and which form a reduction of the unit. They are called the <em>principal idempotents</em> in K, and there are sub-algebras K<sub>n</sub> in K, called the <em>principal sections</em> of K, in which the units are <em>e</em><sub>n</sub> respectively, and which are such that <em>K</em><sub>n</sub><em>K</em><sub>ρ</sub>=0 (ρ≠n) and K=⊗<sub>n</sub>K<sub>n</sub>. There are multiplicities m<sub>n</sub> attached, such that, for any matrix <em>a</em> in K, <em>ae</em><sub>n</sub> has just one characteristic value α<sub>n</sub>, and the multiplicity of any value α as a characteristic value of <em>a</em> is given by m<sub>α</sub>=∑<sub>α<sub>n</sub>=α</sub> m<sub>n</sub>. It follows, by the Cayley-Hamilton theorem, that ∏<sub>n</sub>(<em>a</em>−α<sub>n</sub><em>I</em>)<sup>m<sub>n</sub></sup>=<em>0</em> for every <em>a</em> in K.
If indices i<sub>n</sub> are the least exponents such that ∏<sub>n</sub>(<em>a</em>−α<sub>n</sub><em>I</em>)<sup>i<sub>n</sub></sup>=<em>0</em> for all <em>a</em> in K, then 0&lt;i<sub>n</sub>≤m<sub>n</sub> and (<em>a</em>−α<sub>n</sub><em>I</em>)<sup>r</sup>(<em>b</em>−β<sub>n</sub><em>I</em>)<sup>s</sup>⋯<em>e</em><sub>n</sub>=<em>0</em> for r+s+⋯≥i<sub>n</sub>, for all matrices <em>a</em>, <em>b</em>, ... in K. It is taken that m<sub>n</sub> defines the <em>multiplicity</em> and i<sub>n</sub> the <em>index</em> of any <em>characteristic plane</em> n of K. To each characteristic plane n of K there corresponds an axial space of K, which is denoted by 𝔛<sub>n</sub>, and is such that <em>aX</em>=<em>X</em>α<sub>n</sub> for each matrix <em>a</em> of K and each vector <em>X</em> of 𝔛<sub>n</sub>. Also K determines a splitting of the fundamental vector space into subspaces 𝔘<sub>n</sub>, which are called the <em>principal spaces</em> relative to K, and are the ranges of the principal idempotents <em>e</em><sub>n</sub>, of ranks m<sub>n</sub> respectively. Each principal space contains its corresponding axial space, that is to say 𝔛<sub>n</sub>⊂𝔘<sub>n</sub> for each n. The principal idempotents in the algebra generated by a single matrix <em>a</em> define the partial resolvents <em>e</em><sub>α</sub> of that matrix, indexed by its characteristic values α.
They are unique matrices such that <em>ae</em><sub>α</sub>=<em>e</em><sub>α</sub><em>a</em>, <em>e</em><sub>α</sub><sup>2</sup>=<em>e</em><sub>α</sub>≠<em>0</em>, <em>I</em>=∑<sub>α</sub><em>e</em><sub>α</sub>, and (<em>a</em>−α<em>I</em>)<em>e</em><sub>α</sub> is nilpotent. Any characteristic values of commutative matrices which belong to a common latent vector are said to be <em>latently associated</em>. If a commutative pair of matrices are such that there is a unique characteristic value of one which is latently associated with any characteristic value of the other, then we say that the one is <em>resolved</em> by the other. It is proved that if a commutative pair of matrices are such that one resolves the other, then the independent idempotent resolution of the unit matrix relative to the one refines that of the other; and conversely. It appears as a consequence that a commutative pair of matrices are mutually resolved if and only if they have the same resolvents. If commutative matrices <em>a</em>, <em>b</em>, ... have resolvents <em>f</em><sub>α</sub>, <em>g</em><sub>β</sub>, ..., then <em>f</em><sub>α</sub><em>g</em><sub>β</sub>⋯≠<em>0</em> if and only if α, β, ... are latently associated characteristic values of <em>a</em>, <em>b</em>, ... respectively.
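The basic phenomenon underlying this part — commuting matrices share common latent vectors, and the characteristic values taken on one common vector form a latently associated set — can be illustrated with a small numerical sketch. This example is ours, not the thesis's; it takes b to be a polynomial in a, which guarantees commutativity, and checks that each latent vector of a is latent for b with the associated value.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4))
b = a @ a + 3 * a + 2 * np.eye(4)   # a polynomial in a, hence ab = ba
assert np.allclose(a @ b, b @ a)

# Each latent (eigen)vector v of a is a common latent vector of the
# commutative pair {a, b}; the pair of values (alpha, beta) taken on v
# is a characteristic set of the pair.
alphas, vecs = np.linalg.eig(a)
for alpha, v in zip(alphas, vecs.T):
    beta = alpha**2 + 3 * alpha + 2          # value latently associated with alpha
    assert np.allclose(b @ v, beta * v)      # v is latent for b as well
```

In this special construction b is resolved by a: each characteristic value of a is latently associated with exactly one characteristic value of b.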
The resolvent quantities, the axial and principal spaces of a commutative matrix algebra are expressed in terms of the same for any generating subset of matrices.</p> <p>The third part, 'Axial and normal matrix sets', deals with matrix sets the algebras generated by which span the fundamental vector space by their latent vectors, or which include among their members the transposed conjugate of each member.</p> <p>The fourth part, 'Composite matrices', considers matrices whose elements are commutative matrices, the ground field being the complex numbers. A <em>composite matrix</em> of <em>order</em> n and <em>degree</em> m is an nth order matrix whose elements, called its <em>components</em>, are themselves matrices of order m. Let the i,jth component <em>A</em><sup>*</sup><sub>ij</sub> of a composite matrix <em>A</em><sup>*</sup> of order n and degree m have r,sth element A<sup>rs</sup><sub>ij</sub>. Then correlative to <em>A</em><sup>*</sup> there is defined the <em>dual</em> composite matrix <em>A</em><sub>*</sub>, of order m and degree n, whose r,sth component <em>A</em><sub>*</sub><sup>rs</sup> has i,jth element A<sup>rs</sup><sub>ij</sub>. Any composite matrix has a <em>ground matrix</em>, which is that simple matrix from which it may be derived by a partitioning. A composite matrix is called <em>commutative</em> if its components all commute, and <em>doubly-commutative</em> if its dual is also commutative. The determinant of a commutative composite matrix of order n and degree m is a determinant of order n whose value is a matrix of order m; and the determinant of this determinant defines the <em>double-determinant</em> of that composite matrix.
When the components of a commutative composite matrix are replaced by their respective characteristic values in a latently associated set, that is to say a set which forms a characteristic set for the commutative set of components, a <em>characteristic dual component</em> of the composite matrix is obtained, with a multiplicity attached to it equal to the multiplicity of the characteristic set formed by its elements. The characteristic dual components are dependent on the dual components. When the composite matrix is doubly commutative, a set of <em>characteristic components</em> are also defined. Three of the principal results are as follows. The characteristic values of the ground matrix of a commutative composite matrix are the characteristic values of its characteristic components, with multiplicities attached correspondingly. The double-determinant of a commutative composite matrix is equal to the determinant of its ground matrix. If m<sub>α<sup>*</sup></sub>, m<sub>α<sub>*</sub></sub> denote the multiplicities attached to the characteristic and the characteristic dual components <em>α</em><sup>*</sup>, <em>α</em><sub>*</sub> of a doubly commutative dual pair of composite matrices <em>A</em><sup>*</sup>,
<em>A</em><sub>*</sub>, then</p> <p align="center">∏<sub>α<sup>*</sup></sub>(<em>A</em><sup>*</sup>−<em>α</em><sup>*</sup><em>I</em><sup>**</sup>)<sup>m<sub>α<sup>*</sup></sub></sup>=<em>0</em><sup>**</sup>,&emsp;∏<sub>α<sub>*</sub></sub>(<em>A</em><sub>*</sub>−<em>α</em><sub>*</sub><em>I</em><sub>**</sub>)<sup>m<sub>α<sub>*</sub></sub></sup>=<em>0</em><sub>**</sub>,</p><p> and if</p><p align="center"><em>C</em><sup>*</sup>(λ)=|<em>A</em><sup>*</sup>−λ<em>I</em><sup>*</sup>|,&emsp;<em>C</em><sub>*</sub>(λ)=|<em>A</em><sub>*</sub>−λ<em>I</em><sub>*</sub>|,</p><p> then</p><p align="center"><em>C</em><sup>*</sup>(<em>α</em><sup>*</sup>)=<em>0</em><sup>*</sup>,&emsp;<em>C</em><sub>*</sub>(<em>α</em><sub>*</sub>)=<em>0</em><sub>*</sub>.</p> <p>The fifth part, 'Functions of a complex matrix variable', investigates functions with a complex matrix for argument, which extend an arbitrary analytic function of a complex variable. It establishes results which are generalizations of formulae familiar in the theory of functions of a complex variable, and besides these there are also results which have no analogue in that theory.</p> <p>The sixth part, 'The module of a complex matrix', investigates and sets out the properties of two non-negative real-valued functions defined over the complex matrices, called the upper and lower moduli. These functions supply the fitting analogue for complex matrices of the concept of the modulus of a complex number. The <em>upper modulus</em> |<em>a</em>|<sup>*</sup> and the <em>lower modulus</em> |<em>a</em>|<sub>*</sub> of any matrix <em>a</em> with complex elements are defined as the non-negative real numbers whose squares are the maximum and the minimum characteristic values respectively of the non-negative definite Hermitian matrix <em>ā</em>′<em>a</em>, and the <em>absolute trace</em> |<em>a</em>|<sub>(+)</sub> and the <em>absolute determinant</em> |<em>a</em>|<sub>(×)</sub> are defined as the non-negative real numbers whose squares are the trace and the determinant respectively of <em>ā</em>′<em>a</em>.</p>
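In modern terms the upper and lower moduli are the largest and smallest singular values of the matrix, and the absolute trace is the Frobenius norm; a short numerical sketch (ours, not the thesis's; the function name upper_lower_modulus is our assumption) makes the definitions concrete.

```python
import numpy as np

def upper_lower_modulus(a):
    """Upper and lower moduli of a: the non-negative square roots of the
    maximum and minimum characteristic values of the Hermitian matrix
    conj(a)' a (the conjugate-transpose product)."""
    gram = a.conj().T @ a                # non-negative definite Hermitian
    ev = np.linalg.eigvalsh(gram)        # real, in ascending order
    return np.sqrt(ev[-1]), np.sqrt(ev[0])

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

hi, lo = upper_lower_modulus(a)
s = np.linalg.svd(a, compute_uv=False)
assert np.allclose(hi, s.max()) and np.allclose(lo, s.min())

# Absolute trace: square root of trace(conj(a)' a) -- the Frobenius norm.
abs_trace = np.sqrt(np.trace(a.conj().T @ a).real)
assert np.allclose(abs_trace, np.linalg.norm(a))
```

For a 1×1 matrix all of these quantities reduce to the ordinary modulus of a complex number, which is the sense in which they supply its analogue.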