In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions. The individual items in a matrix are called its elements or entries. An example of a matrix with six elements is given below (rows are separated by semicolons in this bracket notation):

A = [1 9 13; 20 55 6]
Matrices of the same size can be added or subtracted element by element. The rule for matrix multiplication is more complicated: two matrices can be multiplied only when the number of columns of the first equals the number of rows of the second. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation. If R is a rotation matrix and v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after the rotation. The product of two matrices represents the composition of two linear transformations. Another application of matrices is in solving systems of linear equations. If the matrix is square, some of its properties can be deduced by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is nonzero. Insight into the geometry of a linear transformation is obtainable, along with other information, from the matrix's eigenvalues and eigenvectors.
Matrices find applications in most scientific fields. In physics, matrices are used in the study of electrical circuits, optics, and quantum mechanics. In computer graphics, matrices are used to project a three-dimensional image onto a two-dimensional screen and to create realistic-seeming motion. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and remains an active area of research today. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, speed up finite element computations and other calculations. Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator acting on the Taylor series of a function.
Definition
A matrix is a rectangular arrangement of mathematical expressions, which allows them to be handled as a single, simple object. For example,

A = [9 13 6; 1 11 7; 3 9 2; 6 0 7].
As an alternative notation, large parentheses can be used instead of box brackets:

A = (9 13 6; 1 11 7; 3 9 2; 6 0 7).
A matrix consists of horizontal rows and vertical columns. The numbers contained in a matrix are called its elements or entries. A matrix with m rows and n columns is called an m-by-n matrix or m × n matrix, which specifies its size; m and n are its dimensions. The matrix shown above is a 4-by-3 matrix. Matrices with a single row (1 × n) are called row vectors, and matrices with a single column (m × 1) are called column vectors. Any row or column of a matrix determines a row vector or column vector, obtained by removing all other rows or columns respectively from the matrix. For example, the row vector for the third row of the matrix A above is [3 9 2].
When a row or column of a matrix is interpreted as a value, this refers to the corresponding row or column vector. For instance one may say that two different rows of a matrix are equal, meaning that their row vectors are equal. In some cases the value of a row or column should be interpreted just as a sequence of values (an element of Rn if the entries are real numbers) rather than as a matrix, for example when saying that the rows of a matrix are equal to the corresponding columns of its transpose matrix. Most of this article focuses on real and complex matrices, i.e., matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below.
Notation
Matrix notation varies widely, but there are some prevailing trends. Matrices are usually denoted using upper-case letters, while the corresponding lower-case letters with two subscript indices represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style (e.g., A with a double underline).
The entry in the i-th row and the j-th column of a matrix is referred to as the (i,j) entry of the matrix. For example, the (2,3) entry of the matrix A above is 7. The (i,j) entry of a matrix A is most commonly written as ai,j. Alternative notations for that entry are A[i,j] or Ai,j.
Sometimes a matrix is referred to by giving a formula for its (i,j)th entry, often with double parenthesis around the formula for the entry, for example, if the (i,j)th entry of A were given by aij, A would be denoted ((aij)).
An asterisk is commonly used to refer to whole rows or columns in a matrix. For example, ai,∗ refers to the ith row of A, and a∗,j refers to the jth column of A. The set of all m-by-n matrices is denoted 𝕄(m, n).
A common shorthand is
- A = [ai,j]i = 1,...,m; j = 1,...,n or more briefly A = [ai,j]m×n
to define an m × n matrix A. Usually the entries ai,j are defined separately for all integers 1 ≤ i ≤ m and 1 ≤ j ≤ n. They can however sometimes be given by one formula; for example the 3-by-4 matrix

A = [0 −1 −2 −3; 1 0 −1 −2; 2 1 0 −1]

can alternatively be specified by A = [i − j]i = 1,2,3; j = 1,...,4, or simply A = ((i−j)), where the size of the matrix is understood.
Some programming languages start the numbering of rows and columns at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1. This article follows the more common convention in mathematical writing where enumeration starts from 1.
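As an illustration, here is a minimal sketch in Python with NumPy (our choice of tooling; the article itself is language-agnostic) that builds the 3-by-4 matrix A = [i − j] defined above, translating the 1-based mathematical convention into 0-based array indexing:

```python
import numpy as np

# Build the 3-by-4 matrix A = [i - j] using the article's 1-based
# convention; NumPy indexes from 0, so we shift both indices by one.
m, n = 3, 4
A = np.fromfunction(lambda i, j: (i + 1) - (j + 1), (m, n))
print(A)        # [[ 0. -1. -2. -3.]
                #  [ 1.  0. -1. -2.]
                #  [ 2.  1.  0. -1.]]
print(A[1, 2])  # the 1-based entry a_{2,3} -> -1.0
```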
Basic operations
There are a number of operations that can be applied to modify matrices, called matrix addition, scalar multiplication and transposition. These form the basic techniques to deal with matrices.
Operation | Definition | Example
---|---|---
Addition | The sum A + B of two m-by-n matrices A and B is calculated entrywise: (A + B)i,j = Ai,j + Bi,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n. | [1 3 1; 1 0 0] + [0 0 5; 7 5 0] = [1 3 6; 8 5 0]
Scalar multiplication | The scalar multiplication cA of a matrix A and a number c (also called a scalar in the parlance of abstract algebra) is given by multiplying every entry of A by c: (cA)i,j = c · Ai,j. | 2 · [1 8 −3; 4 −2 5] = [2 16 −6; 8 −4 10]
Transpose | The transpose of an m-by-n matrix A is the n-by-m matrix AT (also denoted Atr or tA) formed by turning rows into columns and vice versa: (AT)i,j = Aj,i. | [1 2 3; 0 −6 7]T = [1 0; 2 −6; 3 7]
Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, i.e., the matrix sum does not depend on the order of the summands: A + B = B + A. The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.
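These entrywise operations are easy to experiment with; the following Python/NumPy sketch reproduces the addition example from the table above and checks the compatibility of transposition with addition:

```python
import numpy as np

A = np.array([[1, 3, 1],
              [1, 0, 0]])
B = np.array([[0, 0, 5],
              [7, 5, 0]])

print(A + B)   # entrywise sum: [[1 3 6] [8 5 0]]
print(2 * A)   # scalar multiplication: every entry doubled
print(A.T)     # transpose: a 3-by-2 matrix
print(np.array_equal((A + B).T, A.T + B.T))  # (A + B)^T = A^T + B^T -> True
```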
Row operations are ways to change matrices. There are three types of row operations: row switching, that is interchanging two rows of a matrix; row multiplication, multiplying all entries of a row by a non-zero constant; and finally row addition, which means adding a multiple of a row to another row. These row operations are used in a number of ways including solving linear equations and finding inverses.
Matrix multiplication, linear equations and linear transformations
Multiplication of two matrices is defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

(AB)i,j = Ai,1B1,j + Ai,2B2,j + ... + Ai,nBn,j,

where 1 ≤ i ≤ m and 1 ≤ j ≤ p. For example, the underlined entry 1 in the product is calculated as (1 × 1) + (0 × 1) + (2 × 0) = 1:

[1 0 2; −1 3 1] [3 1; 2 1; 1 0] = [5 1; 4 2]
Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A+B)C = AC+BC as well as C(A+B) = CA+CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal, i.e., generally one has
- AB ≠ BA,
i.e., matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is

[1 2; 3 4] [0 1; 0 0] = [0 1; 0 3],

whereas

[0 1; 0 0] [1 2; 3 4] = [3 4; 0 0].
The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

I3 = [1 0 0; 0 1 0; 0 0 1]
It is called identity matrix because multiplication with it leaves a matrix unchanged: MIn = ImM = M for any m-by-n matrix M.
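A short Python/NumPy sketch of these points, reusing the non-commuting pair from the example above:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [0, 0]])

print(A @ B)   # [[0 1] [0 3]]
print(B @ A)   # [[3 4] [0 0]]: AB != BA in general
I = np.eye(2, dtype=int)   # the identity matrix I_2
print(np.array_equal(A @ I, A) and np.array_equal(I @ A, A))  # True
```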
Besides the ordinary matrix multiplication just described, there exist other less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.
Linear equations
A particular case of matrix multiplication is tightly linked to linear equations: if x designates a column vector (i.e., n×1-matrix) of n variables x1, x2, ..., xn, and A is an m-by-n matrix, then the matrix equation
- Ax = b,
where b is some m×1-column vector, is equivalent to the system of linear equations
- A1,1x1 + A1,2x2 + ... + A1,nxn = b1
- ...
- Am,1x1 + Am,2x2 + ... + Am,nxn = bm .
This way, matrices can be used to compactly write and deal with multiple linear equations, i.e., systems of linear equations.
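As a small illustration, the following Python/NumPy sketch solves a hypothetical 2-by-2 system Ax = b (the particular numbers are ours, chosen only for the example):

```python
import numpy as np

# Solve the system  2x + y = 5,  x - 3y = -1,  written as Ax = b.
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
b = np.array([5.0, -1.0])

x = np.linalg.solve(A, b)      # a direct (LU-based) solver
print(x)                       # [2. 1.]
print(np.allclose(A @ x, b))   # True: the solution satisfies Ax = b
```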
Linear transformations
Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation Rn → Rm mapping each vector x in Rn to the (matrix) product Ax, which is a vector in Rm. Conversely, each linear transformation f: Rn → Rm arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(ej), where ej = (0,...,0,1,0,...,0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f.
For example, the 2×2 matrix

A = [a c; b d]

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram is obtained by multiplying A with each of the column vectors [0 0]T, [1 0]T, [1 1]T and [0 1]T in turn. These vectors define the vertices of the unit square.
The following table shows a number of 2-by-2 matrices with the associated linear maps of R2. (In the original figures, the blue grid is mapped to the green grid and shapes, with the origin (0,0) marked by a black point.)

Horizontal shear with m = 1.25 | Horizontal flip | Squeeze mapping with r = 3/2 | Scaling by a factor of 3/2 | Rotation by π/6 = 30°
---|---|---|---|---
[1 1.25; 0 1] | [−1 0; 0 1] | [3/2 0; 0 2/3] | [3/2 0; 0 3/2] | [cos(π/6) −sin(π/6); sin(π/6) cos(π/6)]
Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps: if a k-by-m matrix B represents another linear map g : Rm → Rk, then the composition g ∘ f is represented by BA since
- (g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x.
The last equality follows from the above-mentioned associativity of matrix multiplication.
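The correspondence between composition and multiplication can be checked numerically. The sketch below (Python with NumPy; plane rotations are our illustrative choice) composes two rotations of R2:

```python
import numpy as np

def rotation(theta):
    """2-by-2 matrix of the rotation of R^2 by the angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

f = rotation(np.pi / 6)   # rotate by 30 degrees
g = rotation(np.pi / 3)   # rotate by 60 degrees
x = np.array([1.0, 0.0])

# Composition of maps corresponds to the matrix product: g(f(x)) = (BA)x.
print(np.allclose(g @ (f @ x), (g @ f) @ x))                # True
print(np.allclose(g @ f, rotation(np.pi / 6 + np.pi / 3)))  # 30 + 60 = 90 degrees
```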
The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors. Equivalently it is the dimension of the image of the linear map represented by A. The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.
Square matrices
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. A square matrix A is called invertible or non-singular if there exists a matrix B such that
- AB = In.
This is equivalent to BA = In. Moreover, if B exists, it is unique and is called the inverse matrix of A, denoted A−1.
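A minimal numerical check, assuming Python with NumPy and an invertible example matrix of our choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, -3.0]])

B = np.linalg.inv(A)                   # the inverse matrix A^{-1}
print(np.allclose(A @ B, np.eye(2)))   # AB = I_2 -> True
print(np.allclose(B @ A, np.eye(2)))   # BA = I_2 as well -> True
```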
The entries Ai,i form the main diagonal of a matrix. The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While, as mentioned above, matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors: tr(AB) = tr(BA).
Also, the trace of a matrix is equal to that of its transpose, i.e., tr(A) = tr(AT).
If all entries outside the main diagonal are zero, A is called a diagonal matrix. If only all entries above (below) the main diagonal are zero, A is called a lower triangular matrix (upper triangular matrix, respectively). For example, if n = 3, they look like

- [d11 0 0; 0 d22 0; 0 0 d33] (diagonal), [l11 0 0; l21 l22 0; l31 l32 l33] (lower) and [u11 u12 u13; 0 u22 u23; 0 0 u33] (upper triangular matrix).
Determinant
The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of 2-by-2 matrices is given by

det [a b; c d] = ad − bc.

When the determinant is equal to one, then the matrix represents an equi-areal mapping. The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.
The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) · det(B). Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.
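The multiplicativity of the determinant and the invertibility criterion can be illustrated as follows (Python with NumPy; the sample matrices are ours):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.det(A))    # ad - bc = 1*4 - 2*3 = -2 (up to rounding)
# Multiplicativity: det(AB) = det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
# A nonzero determinant means the matrix is invertible.
print(np.linalg.det(A) != 0)   # True
```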
Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying
- Av = λv
are called an eigenvalue and an eigenvector of A, respectively. The number λ is an eigenvalue of an n×n-matrix A if and only if A−λIn is not invertible, which is equivalent to

det(A − λIn) = 0.
The function pA(t) = det(A−tI) is called the characteristic polynomial of A; its degree is n. Therefore pA(t) has at most n different roots, i.e., eigenvalues of the matrix. They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is to say, the characteristic polynomial applied to the matrix itself yields the zero matrix.
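A short Python/NumPy sketch (the example matrix is ours) that computes the eigenvalues and verifies Av = λv for each eigenpair:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [2. 3.] for this triangular example
# Each column v of `eigenvectors` satisfies Av = lambda * v:
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True, True
```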
Symmetry
A square matrix A that is equal to its transpose, i.e., A = AT, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., A = −AT, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below.
Definiteness
Matrix A | definiteness | associated quadratic form QA(x, y) | set of vectors (x, y) such that QA(x, y) = 1
---|---|---|---
[1/4 0; 0 1/4] | positive definite | 1/4 x^2 + 1/4 y^2 | an ellipse
[1/4 0; 0 −1/4] | indefinite | 1/4 x^2 − 1/4 y^2 | a hyperbola
A symmetric n×n-matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ Rn the associated quadratic form given by
- Q(x) = xTAx
takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table above shows two possibilities for 2-by-2 matrices.
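The eigenvalue criterion can be checked on the two diagonal matrices reconstructed in the table above (Python with NumPy):

```python
import numpy as np

A_pos = np.array([[0.25, 0.0],
                  [0.0, 0.25]])    # quadratic form x^2/4 + y^2/4
A_ind = np.array([[0.25, 0.0],
                  [0.0, -0.25]])   # quadratic form x^2/4 - y^2/4

for A in (A_pos, A_ind):
    eigs = np.linalg.eigvalsh(A)   # eigenvalues of a symmetric matrix
    label = "positive definite" if np.all(eigs > 0) else "indefinite"
    print(eigs, label)   # [0.25 0.25] vs. [-0.25 0.25]
```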
Allowing as input two different vectors instead yields the bilinear form associated to A:
- BA (x, y) = xTAy.
Numerical aspects
In addition to theoretical knowledge of properties of matrices and their relation to other fields, it is important for practical purposes to perform matrix calculations effectively and precisely. The domain studying these matters is called numerical linear algebra. As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Many problems can be solved by both direct algorithms and iterative approaches. For example, finding eigenvectors can be done by finding a sequence of vectors xn converging to an eigenvector when n tends to infinity.
Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, e.g., multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n^3 multiplications, since for any of the n^2 entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n^2.807 multiplications. A refined approach also incorporates specific features of the computing devices.
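For concreteness, here is the "naive" O(n^3) product written out in Python with NumPy and compared against the library routine (a sketch, not an optimized implementation):

```python
import numpy as np

def matmul_naive(A, B):
    """Textbook matrix product straight from the definition above."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must match rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for r in range(n):   # n multiplications per entry
                C[i, j] += A[i, r] * B[r, j]
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
print(np.allclose(matmul_naive(A, B), A @ B))   # True
```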
In many practical situations additional information about the matrices involved is known. An important case are sparse matrices, i.e., matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.
An algorithm is, roughly speaking, numerically stable if small deviations (such as rounding errors) do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula (Adj(A) denotes the adjugate matrix of A)
- A−1 = Adj(A) / det(A)
may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.
Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.
Matrix decomposition methods
There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix transformation or matrix decomposition techniques. The interest of all these decomposition techniques is that they preserve certain properties of the matrices in question, such as determinant, rank or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.
The LU decomposition factors matrices as a product of lower triangular (L) and upper triangular matrices (U). Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. The Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form. Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix.
The eigendecomposition or diagonalization expresses A as a product VDV−1, where D is a diagonal matrix and V is a suitable invertible matrix. If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ1 to λn of A, placed on the main diagonal and possibly entries equal to one directly above the main diagonal. Given the eigendecomposition, the nth power of A (i.e., n-fold iterated matrix multiplication) can be calculated via
- An = (VDV−1)n = VDV−1VDV−1...VDV−1 = VDnV−1
and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential e^A, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices. To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.
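A sketch of the A^n = VD^nV^−1 computation in Python with NumPy, on a small diagonalizable matrix of our choosing:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Eigendecomposition A = V D V^{-1} (A is diagonalizable here,
# since its eigenvalues 2 and 3 are distinct).
eigenvalues, V = np.linalg.eig(A)

n = 5
# Powering the diagonal factor just powers its diagonal entries.
An = V @ np.diag(eigenvalues ** n) @ np.linalg.inv(V)
print(np.allclose(An, np.linalg.matrix_power(A, n)))   # True
```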
Abstract algebraic aspects and generalizations
Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension are tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers. Matrices, subject to certain requirements, tend to form groups known as matrix groups.
Matrices with more general entries
This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, i.e., a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the coefficients of the matrix; for instance they may be complex in case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (e.g., to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.
More generally, abstract algebra makes great use of matrices with entries in a ring R. Rings are a more general notion than fields in that no division operation exists. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module Rn. If the ring R is commutative, i.e., its multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible. Matrices over superrings are called supermatrices.
Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.
Relationship to linear maps
Linear maps Rn → Rm are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (aij), after choosing bases v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that

f(vj) = a1,j w1 + ... + am,j wm  for j = 1, ..., n.
In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices. Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix AT describes the transpose of the linear map given by A, with respect to the dual bases.
More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rm and Rn for an arbitrary ring R with unity. When n = m composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rn.
Matrix groups
A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements. A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group. Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.
Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group. Orthogonal matrices, determined by the condition
- MTM = I,
form the orthogonal group. They are called orthogonal since the associated linear transformations of Rn preserve angles in the sense that the scalar product of two vectors is unchanged after applying M to them:
- (Mv) · (Mw) = v · w.
Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group. General groups can be studied using matrix groups, which are comparatively well-understood, by means of representation theory.
Infinite matrices
It is also possible to consider matrices with infinitely many rows and/or columns even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.
If R is any ring with unity, then the ring of endomorphisms of M = ⊕i∈I R as a right R-module is isomorphic to the ring of column finite matrices whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row finite matrices whose rows each only have finitely many nonzero entries.
If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V→W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that written as a (column) vector v of coefficients, only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A however: in the product A·v there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that products of two matrices of the given type are well defined (provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.
If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously of course, the matrices whose row sums are absolutely convergent series also form a ring.
In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter, and the abstract and more powerful tools of functional analysis can be used instead.
Empty matrices
An empty matrix is a matrix in which the number of rows or columns (or both) is zero. Empty matrices help dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1 as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
Applications
There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose. Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.
Complex numbers can be represented by particular real 2-by-2 matrices via

a + ib ↔ [a −b; b a],
under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions.
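This embedding can be verified numerically; the following Python/NumPy sketch uses the representation a + ib ↔ [a −b; b a] given above:

```python
import numpy as np

def as_matrix(z):
    """The real 2-by-2 matrix [[a, -b], [b, a]] representing z = a + ib."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j
# Addition and multiplication of complex numbers match those of matrices:
print(np.allclose(as_matrix(z + w), as_matrix(z) + as_matrix(w)))   # True
print(np.allclose(as_matrix(z * w), as_matrix(z) @ as_matrix(w)))   # True
```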
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break. Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation. Matrices over a polynomial ring are important in the study of control theory.
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
Graph theory
The adjacency matrix of a finite graph is a basic notion of graph theory. It records which vertices of the graph are connected by an edge. Matrices containing just two different values (0 and 1 meaning for example "yes" and "no") are called logical matrices. The distance matrix contains information about distances of the edges. These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the road network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
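A small illustration in Python with NumPy (the three-vertex path graph is our example); entry (i, j) of the k-th power of the adjacency matrix counts walks of length k from vertex i to vertex j:

```python
import numpy as np

# Adjacency matrix of the undirected graph on vertices 1, 2, 3
# with edges 1-2 and 2-3.
G = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

print(np.linalg.matrix_power(G, 2))
# [[1 0 1]
#  [0 2 0]
#  [1 0 1]]  e.g. exactly one walk of length 2 from vertex 1 to vertex 3
```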
Analysis and geometry
The Hessian matrix of a differentiable function ƒ: Rn → R consists of the second derivatives of ƒ with respect to the several coordinate directions, i.e.

H(ƒ) = [∂²ƒ/∂xi ∂xj]1 ≤ i, j ≤ n.
It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), i.e., a point where the first partial derivatives of ƒ vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rn → Rm. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as

J(f) = [∂fi/∂xj]1 ≤ i ≤ m, 1 ≤ j ≤ n.
If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.
The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.
Probability theory and statistics
Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries sum up to one. Stochastic matrices are used to define Markov chains with finitely many states. A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.
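A hedged Python/NumPy sketch with a made-up 2-state transition matrix, iterating the chain toward its steady state:

```python
import numpy as np

# A 2-state stochastic matrix: each row sums to one.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Iterating from any starting distribution approaches the steady state,
# an eigenvector of P^T with eigenvalue 1.
pi = np.array([1.0, 0.0])
for _ in range(100):
    pi = pi @ P
print(pi)                       # approximately [0.8333 0.1667]
print(np.allclose(pi @ P, pi))  # stationary: pi P = pi -> True
```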
Statistics also makes use of matrices in many different forms. Descriptive statistics is concerned with describing data sets, which can often be represented in matrix form, by reducing the amount of data. The covariance matrix encodes the mutual variance of several random variables. Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN), by a linear function
- yi ≈ axi + b, i = 1, ..., N
which can be formulated in terms of matrices, related to the singular value decomposition of matrices.
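As an illustration of the least-squares formulation, this Python/NumPy sketch fits a line y ≈ ax + b to four made-up points:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 7.1])

# Design matrix with columns [x_i, 1]; minimize ||Xc - y|| over c = (a, b).
X = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b)   # roughly a = 2.03, b = 1.03: the data lie near y = 2x + 1
```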
Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.
Symmetries and transformations in physics
Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors. For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to the basic quark states that define particles with specific and distinct masses.
Linear combinations of quantum states
The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states. This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.
Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.
Normal modes
A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms. They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.
Geometrical optics
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.
Electronics
Traditional mesh analysis in electronics leads to a system of linear equations that can be described with a matrix.
The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v1 and input current i1 as its elements, and let B be a 2-dimensional vector with the component's output voltage v2 and output current i2 as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 x 2 matrix containing one impedance element (h12), one admittance element (h21) and two dimensionless elements (h11 and h22). Calculating a circuit now reduces to multiplying matrices.
History
Matrices have a long history of application in solving linear equations. The Nine Chapters on the Mathematical Art (Jiu Zhang Suan Shu), from between 300 BC and AD 200, is the first example of the use of matrix methods to solve simultaneous equations, including the concept of determinants, over 1000 years before its publication by the Japanese mathematician Seki in 1683 and the German mathematician Leibniz in 1693. Cramer presented his rule in 1750.
Early matrix theory emphasized determinants more strongly than matrices, and an independent matrix concept akin to the modern notion emerged only in 1858, with Cayley's Memoir on the theory of matrices. The term "matrix" (Latin for "womb", derived from mater—mother) was coined by James Joseph Sylvester, who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:
- I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.
The study of determinants sprang from several sources. Number-theoretical problems led Gauss to relate coefficients of quadratic forms, i.e., expressions such as x^2 + xy − 2y^2, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [ai,j] the following: replace the powers aj^k by aj,k in the polynomial

a1 a2 ⋯ an ∏i<j (aj − ai),

where ∏ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real. Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—which can be used to describe geometric transformations at a local (or infinitesimal) level, see above; Kronecker's Vorlesungen über die Theorie der Determinanten and Weierstrass' Zur Determinantentheorie, both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.
Many theorems were first established for small matrices only, for example the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the early 20th century, matrices attained a central role in linear algebra, partially due to their use in classification of the hypercomplex number systems of the previous century.
The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns. Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.
Other historical uses of the word "matrix" in mathematics
The word has been used in unusual ways by at least two authors of historical importance.
Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension:
- “Let us give the name of matrix to any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalization, i.e., by considering the proposition which asserts that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined”.
For example a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, e.g., y, by “considering” the function for all possible values of “individuals” ai substituted in place of variable x. And then the resulting collection of functions of the single variable y, i.e., ∀ai: Φ(ai, y), can be reduced to a “matrix” of values by “considering” the function for all possible values of “individuals” bi substituted in place of variable y:
- ∀bj∀ai: Φ(ai, bj).
Alfred Tarski in his 1946 Introduction to Logic used the word “matrix” synonymously with the notion of truth table as used in mathematical logic.
See also
Notes
- . Alternative references for this book include and
- This is immediate from the definition of matrix multiplication.
- For example, , see
- See any standard reference in group theory.
- See any reference in representation theory or .
- See the item "Matrix" in
- "Empty Matrix: A matrix is empty if either its row or column dimension is zero", Glossary 2009-04-29 at the Wayback Machine, O-Matrix v6 User Guide
- "A matrix having at least one dimension equal to zero is called an empty matrix", MATLAB Data Structures 2009-12-28 at the Wayback Machine
- . For a more advanced, and more general statement see
- . See also .
- (1986), Matrices for Statistics, Oxford University Press,
- see
- cited by
- Merriam–Webster dictionary, , http://www.merriam-webster.com/dictionary/matrix, retrieved April 20, 2009
- Per the OED the first usage of the word "matrix" with respect to mathematics appears in J. J. Sylvester in London, Edinb. & Dublin Philos. Mag. 37 (1850), p. 369: "We ‥commence‥ with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants by fixing upon a number p, and selecting at will p lines and p columns, the squares corresponding to which may be termed determinants of the pth order.
- The Collected Mathematical Papers of James Joseph Sylvester: 1837–1853, Paper 37, p. 247
- Alfred North Whitehead and Bertrand Russell (1913) Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962) cf page 162ff.
- Tarski, Alfred 1946 Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York NY, .
- Eigen means "own" in German and in Dutch.
- Additionally, the group is required to be in the general linear group.
- "Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps."
References
- ; (1992), Ordinary differential equations, Berlin, New York: ,
- (1991), Algebra, ,
- Association for Computing Machinery (1979), Computer Graphics, Tata McGraw–Hill,
- Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, New York: Springer-Verlag,
- Bau III, David; (1997), Numerical linear algebra, Philadelphia: Society for Industrial and Applied Mathematics,
- Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall
- Bronson, Richard (1989), Schaum's outline of theory and problems of matrix operations, New York: ,
- Brown, William A. (1991), Matrices and vector spaces, New York: M. Dekker,
- Coburn, Nathaniel (1955), Vector and tensor analysis, New York: Macmillan,
- Conrey, J. B. (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press,
- Fudenberg, D.; (1983), Game Theory,
- Gilbarg, David; (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin, New York: Springer-Verlag,
- ; (2004), Algebraic Graph Theory, Graduate Texts in Mathematics, 207, Berlin, New York: Springer-Verlag,
- ; (1996), Matrix Computations (3rd ed.), Johns Hopkins,
- Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, New York: Springer-Verlag,
- Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics, 19 (2nd ed.), Berlin, New York: Springer-Verlag,
- Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press,
- Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York:
- Krzanowski, W. J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series, 3, The Clarendon Press Oxford University Press,
- Itõ, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I--IV (2nd ed.), MIT Press,
- (1969), Analysis II,
- Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, New York: Springer-Verlag,
- Lang, Serge (1987b), Linear algebra, Berlin, New York: Springer-Verlag,
- Latouche, G.; Ramaswami, V. (1999), Introduction to matrix analytic methods in stochastic modeling (1st ed.), Philadelphia: Society for Industrial and Applied Mathematics,
- Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT Press,
- Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York: McGraw–Hill,
- Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, , http://books.google.com/?id=ULMmheb26ZcC&pg=PA1&dq=linear+algebra+determinant
- Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, New York: Springer-Verlag, p. 449,
- Oualline, Steve (2003), Practical C++ programming, ,
- Press, William H.; Flannery, Brian P.; ; Vetterling, William T. (1992), "LU Decomposition and Its Applications", Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.), Cambridge University Press, pp. 34–42, http://www.mpi-hd.mpg.de/astrophysik/HEA/internal/Numerical_Recipes/f2-3.pdf
- Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston: Kluwer Academic Publishers,
- Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations, Berlin, New York: Springer-Verlag,
- Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, R.I.: ,
- Šolin, Pavel (2005), Partial Differential Equations and the Finite Element Method, ,
- Stinson, Douglas R. (2005), Cryptography, Discrete Mathematics and Its Applications, Chapman & Hall/CRC,
- Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag,
- Ward, J. P. (1997), Quaternions and Cayley numbers, Mathematics and its Applications, 403, Dordrecht: Kluwer Academic Publishers Group,
- (2003), The Mathematica Book (5th ed.), Champaign, Ill: Wolfram Media,
Physics references
- Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer,
- Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press,
- Guenther, Robert D. (1990), Modern Optics, John Wiley,
- Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory, McGraw–Hill,
- Riley, K. F.; Hobson, M. P.; Bence, S. J. (1997), Mathematical methods for physics and engineering, Cambridge University Press,
- Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw–Hill
- Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations, Cambridge University Press,
- Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, Prentice–Hall International,
- Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, New York: ,
Historical references
- (2004), Introduction to higher algebra, New York: , , reprint of the 1907 original edition
- (1889), The collected mathematical papers of Arthur Cayley, I (1841–1853), Cambridge University Press, pp. 123–126, http://www.hti.umich.edu/cgi/t/text/pageviewer-idx?c=umhistmath;cc=umhistmath;rgn=full%20text;idno=ABS3153.0001.001;didno=ABS3153.0001.001;view=image;seq=00000140
- , ed. (1978), Abrégé d'histoire des mathématiques 1700-1900, Paris: Hermann
- Hawkins, Thomas (1975), "Cauchy and the spectral theory of matrices", 2: 1–29, ,
- Knobloch, Eberhard (1994), "From Gauss to Weierstrass: determinant theory and its historical evaluations", The intersection of history and mathematics, Sci. Networks Hist. Stud., 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66
- (1897), , ed., Leopold Kronecker's Werke, Teubner, http://name.umdl.umich.edu/AAS8260.0002.001
- Mehra, J.; Rechenberg, Helmut (1987), The Historical Development of Quantum Theory (1st ed.), Berlin, New York: ,
- Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical Art, Companion and Commentary (2nd ed.), Oxford University Press,
- (1915), Collected works, 3, http://name.umdl.umich.edu/AAN8481.0003.001
External links
- History
- MacTutor: Matrices and determinants
- Matrices and Linear Algebra on the Earliest Uses Pages
- Earliest Uses of Symbols for Matrices and Vectors
- Online books
- Kaw, Autar K., Introduction to Matrix Algebra, , http://autarkaw.com/books/matrixalgebra/index.html
- The Matrix Cookbook, http://matrixcookbook.com, retrieved 12/10/2008
- Brookes, M. (2005), The Matrix Reference Manual, London: , http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html, retrieved 12/10/2008
- Online matrix calculators
- SuperiorMath (Matrix Calculator), http://www.webalice.it/simoalessia/SuperiorMath/matrix.html, retrieved 2011-11-29
- Matrix Calculator (DotNumerics ), http://www.dotnumerics.com/MatrixCalculator/
- Xiao, Gang, Matrix calculator, http://wims.unice.fr/wims/wims.cgi?module=tool/linear/matrix.en, retrieved 12/10/2008
- Online matrix calculator, http://www.bluebit.gr/matrix-calculator/, retrieved 2011-11-29
- Online matrix calculator (ZK framework), http://matrixcalc.info/MatrixZK/, retrieved 2011-11-29
- Oehlert, Gary W.; Bingham, Christopher, MacAnova, , School of Statistics, http://www.stat.umn.edu/macanova/macanova.home.html, retrieved 12/10/2008, a freeware package for matrix algebra and statistics
- Online matrix calculator, http://www.idomaths.com/matrix.php, retrieved 12/14/2009
- Operation with matrices in R (determinant, track, inverse, adjoint, transpose) 2011-07-18 at the Wayback Machine
විකිපීඩියාව, විකි, සිංහල, පොත, පොත්, පුස්තකාලය, ලිපිය, කියවන්න, බාගන්න, නොමිලේ, නොමිලේ බාගන්න, mp3, වීඩියෝ, mp4, 3gp, jpg, jpeg, gif, png, පින්තූරය, සංගීතය, ගීතය, චිත්රපටය, පොත, ක්රීඩාව, ක්රීඩා., ජංගම දුරකථන, android, ios, apple, ජංගම දුරකථන, samsung, iphone, xiomi, xiaomi, redmi, honor, oppo, nokia, sonya, mi, පීසී, වෙබ්, පරිගණකය
ගණ තය හ න ය සයක යන ස ඛ ය සම හයක ස ක තයන හ හ ප රක ශනයන හ ඍජ ක න ස ර ක රව ප ළ ය ළකරන ලද වග වක න ය සයක අඩ ග තන තන අය තම එහ ම ල වයව ල ස හ ඇත ලත ක ර ම ල ස හඳ න වන ලබය ම ල වයව හයක න සමන ව ත න ය සයක සඳහ න දස නක පහත දක ව ඇත න ය සයක ක ස යම ව ශ ෂ ත න ව ශ තයන ව ත ය ම ව දක වන න යටක ර අන ස රය න 19 132055 6 displaystyle begin bmatrix 1 amp 9 amp 13 20 amp 55 amp 6 end bmatrix dd සම න ප රම ණය න ය ත න ය සයන හ අවයව ව න ව න ව හ අඩ ක ර මට හ ක ය ව ඇත න ත ය වඩ ත ස ක ර ණ ව පළම න ය සය හ ස රස ත ර ගණනට ද ව න න ය සය හ ත රස ත ර ගණන සම නම පමණක එම න ය ස ද ක ග ණ ක ර මට හ ක යව ඇත න ර පනය ක ර ම න ය සයන හ ප රධ න භ ව තය වන අතර f x 4x ආද ස ධ රණ කරණය ක ර මටද ය ද ගන ය න දස නක ල ස ත ර ම න අවක ශය ත ළ ද ශ ක ර ඛ ය පර ණ මණය ව R යන v යන අවක ශය ලක ෂ යයක ව ස තර කරන එක ස රස ත ර වක පමණක ඇත න ය සයක නම Rv ග ණ තය යන එම ලක ෂ යය පර වර තනය න පස ව ස තර කරන ත ර ද ශ කයය න ය ස ද කක ග ණ තය ර ඛ ය පර ණ මණ ද කක න ර පනයකරන ලබය ව සඳ ම ලබ ග න ම න ය සයන හ ව නත භ ව තයක න ය සය නම එහ ගණනය ක ර ම මඟ න එහ සමහර ග ණ අප හනය ක ර මට හ ක ය ව ඇත න දස නක ල ස සමචත රස ර ක ර න ය සයක න ර ණ යකය ශ න ය න වන න නම පමණක එයට ක ඇත ර ඛ ය පර ණ මණය හ ජ ය ම ත යට අන තර ද ශ ට ය සලසන ලබය න ය ස බ හ ව ද ය ත මක ක ෂ ත රයන හ භ ව ත ස ය ගන ලබය භ ත ක ව ද ය ව හ ව ද ය ත පර පථ ද ශ ට ව ද ය ව සහ අධ යයනය සඳහ න ය ස භ ව ත කරය ද ව ම න තලය හ ත ර ම න ප රත බ ම බ ප රක ෂ පනය ක ර මට සහ ත ත ව ක චල තය ප න න අය ර න ර ම ණය ක ර මට න ය ස භ ව ත කරය බහ ම න සඳහ සහ ය ද ගන නව ස ම ප ර ණ ක මත ප ළ බඳ අවබ ධයක ලබ ග න මට ය ද ගන ය අව ර ද ස ය ගණනක ත ස ස වස ත ව ෂය ව ම න වර තම නය වන ව ට පර ය ෂණ මට ටම දක ව ම ව හ ද ග ය න ය ස ගණනය සඳහ ක ර යක ෂම ඇල ග ර තම ස වර ධනය ක ර ම ප රධ න ක ටසක ව පවත ස ද ධ න ත කවත ප ර ය ග කවත න ය ස ගණනය සරල කර ද ය එක එක න ය ස ආක ත උද සඳහ ඈඳන ලද ඇල ග ර තම හ ව නත න ය ස ගණනයන ඉක මන කරවය ග රහ ත රක ස ද ධ න ත වලද සහ පරම ණ ක ස ද ධ න ත වලද අපර ම ත න ය ස භ ව ත ව ය ශ ර තයක ට ලර ශ ර ණ ය මත බලප න ක රකය න ර පණය කරන න ය සය ඒ සඳහ ඇත හ ඳම න දස නය අර ථද ක ව මන ය ස යන ගණ තමය ප රක ශනයන හ ප ළ ය ළ ක ර මක වන අතර එමඟ න සරල බව ඇත කරව ය හ ක ය න දස නක ල ස A 91361117392607 displaystyle mathbf A begin bmatrix 9 amp 13 amp 6 1 amp 11 amp 7 3 amp 9 amp 2 6 amp 0 amp 7 end bmatrix ව කල ප අ කනයක ල ස ව න වට ව ශ ල ය ද ගත හ ක A 91361117392607 displaystyle mathbf A begin pmatrix 9 amp 13 amp 6 1 amp 11 amp 7 3 amp 9 amp 2 6 amp 0 amp 7 end pmatrix න ය සයක ත රස ප ළ සහ ස රස ත ර අඩ ග ව න ය සයක අඩ ග ස ඛ ය එහ ම ල වයව ල ස හ ඇත ලත ක ර ම ල ස හඳ න වන ලබය ත රස ප ළ m ගණනක න හ ස රස ත ර n ගණනක න ය ත න ය සයක m by n න ය සයක ල ස හ m n න ය සයක ල ස එහ ප රම ණය ව ශ ෂය න සඳහන කරන අතර m සහ n එහ ම නව ඉහත සඳහන කර ඇත ත 4 by 3 න ය සයක ත රස ප ළ එකක 1 n පමණක ඇත න ය ස ල සද ස රස ත ර එකක m 1 පමණක ඇත න ය ස ල සද හඳ න වන ලබය න ය සයක ඕන ම ත රස ප ළ යක හ ස රස ත ර වක ත රස ප ළ ද ශ කය හ ස රස ත ර ද ශ කය න ර ණය කරන අතර න ය සය හ ව නත ත රස ප ළ හ ස රස ත ර අන ප ළ ව ළ න ඉවත ක ර ම මඟ න ලබ ගත හ ක ය න සදස නක ල ස ඉහත A න ය සය හ ත න වන ත රස ප ළ ය ත රස ප ළ ද ශ කය 392 displaystyle begin bmatrix 3 amp 9 amp 2 end bmatrix න ය සයක ත රස ප ළ ය හ ස රස ත ර ව අගයකට අර ථපහද ද න ව ට එය අන ර ප ත රස ප ළ ද ශ කයට හ ස රස ත ර ද ශ කයට ය ම කරන ල බ උද හරණයක ල ස න ය සයක ව නස ත රස ප ළ ද කක සම න බව ක ව හ ක ය එහ අර ථය ඒව ය හ ත රස ප ළ ද ශ කය සම න බවය සමහර අවස ථ වලද ත රස ප ළ යක හ ස රස ත ර වක අගය හර යටම අගයන අන ප ළ ව ළ න Rn හ ම ල වයවයකඇත ලත ක ර ම තත ව ක ස ඛ ය නම න ය සයට වඩ ව ඩ ය න අර ථපහද ද ය ය ත ය උද රණයක ල ස න ය සයක ත රස ප ළ අන ර ප ස රස ත ර වලට සම න වන ව ට එය එහ න ය සයව ය බ හ ස ය න ම ම ල ප ය ත ත ව ක සහ ස ක ර ණ න ය ස ක න ද ර කරග න ඇත තව ද රටත න ය සයක ම ල වයව ක ස දයත අන ප ළ ව ළ න ත ත ව ක හ ස ක ර ණ ව ය හ ක ය බ හ ස ම න ය ආක රය ඇත ලත ක ර ම ස කච ඡ කරන ලබය අ කනය යම ද සකට ප ත ර පවත න බ හ ප ළ ල ව න ය ස ව ශ ෂ කරණ අ කනයක ඇත ස ම න යය 
න භ ව තය න න ය සයක දක වන ලබන අතර අන ර ප සමඟ යට ක ර දර ශක ද කක ය ද ම න ඇත ලත ක ර මක ඉද ර පත කරන ලබය ඊට අමතරව ඉ ග ර ස ල ක අක ර භ ව තය න න ය සයක ස ක තවත කරන ලබය බ හ ල ඛකයන ව ශ ෂ භ ව ත කරය ව නත ගණ තමය ද වල මඟ න න ය ස වල ඇත ව නස හඳ න ග න මට ස ලබව ස ජ තද අක ර ඇල න ත ආධ රකරන ලබය ව කල ප අ කනයක ල ස ස ජ අක ර සහ තව හ රහ තව ව චල ය න මයට යට න ඉර ද කක ග ස ම ස ද කරන ලබය e g A displaystyle underline underline A න ය සයක i ව න ත රස ප ළ ය සහ j ව න ස රස ත ර ව ඇත ලත ක ර ම i j වන ඇත ළත ක ර ම ල ස සලකය න දස නක ල ස ඉහත A න ය සය 2 3 ඇත ළත ක ර ම 7 ව A නම න ය සයක i j ඇත ළත ක ර ම බහ ල වශය න භ ව ත කරන ය ai j ල ස න A i j හ Ai j යන ම සඳහ භ ව ත කරන ලබන ව නත ස ක ත ව Sometimes a matrix is referred to by giving a formula for its i j th entry often with double parenthesis around the formula for the entry for example if the i j th entry of A were given by aij A would be denoted aij An asterisk is commonly used to refer to whole rows or columns in a matrix For example ai refers to the ith row of A and a j refers to the jth column of A The set of all m by n matrices is denoted M displaystyle mathbb M m n A common shorthand is A ai j i 1 m j 1 n or more briefly A ai j m n to define an m n matrix A Usually the entries ai j are defined separately for all integers 1 i m and 1 j n They can however sometimes be given by one formula for example the 3 by 4 matrix A 0 1 2 310 1 2210 1 displaystyle mathbf A begin bmatrix 0 amp 1 amp 2 amp 3 1 amp 0 amp 1 amp 2 2 amp 1 amp 0 amp 1 end bmatrix can alternatively be specified by A i j i 1 2 3 j 1 4 or simply A i j where the size of the matrix is understood Some programming languages start the numbering of rows and columns at zero in which case the entries of an m by n matrix are indexed by 0 i m 1 and 0 j n 1 This article follows the more common convention in mathematical writing where enumeration starts from 1 ම ල ක ක ර ය ක රකම ප රධ න ල ප යන සහ There are a number of operations that can be applied to modify matrices called matrix addition scalar multiplication and transposition These form the basic techniques to deal with matrices Operation Definition ExampleAddition The sum A B of two m by n matrices A and B is calculated entrywise A B i j Ai j Bi j where 1 i m and 1 j n 131100 005750 1 03 01 51 70 50 0 136850 displaystyle begin bmatrix 1 amp 3 amp 1 1 amp 0 amp 0 end bmatrix begin bmatrix 0 amp 0 amp 5 7 amp 5 amp 0 end bmatrix begin bmatrix 1 0 amp 3 0 amp 1 5 1 7 amp 0 5 amp 0 0 end bmatrix begin bmatrix 1 amp 3 amp 6 8 amp 5 amp 0 end bmatrix Scalar multiplication The scalar multiplication cA of a matrix A and a number c also called a in the parlance of is given by multiplying every entry of A by c cA i j c Ai j 2 18 34 25 2 12 82 32 42 22 5 216 68 410 displaystyle 2 cdot begin bmatrix 1 amp 8 amp 3 4 amp 2 amp 5 end bmatrix begin bmatrix 2 cdot 1 amp 2 cdot 8 amp 2 cdot 3 2 cdot 4 amp 2 cdot 2 amp 2 cdot 5 end bmatrix begin bmatrix 2 amp 16 amp 6 8 amp 4 amp 10 end bmatrix Transpose The transpose of an m by n matrix A is the n by m matrix AT also denoted Atr or tA formed by turning rows into columns and vice versa AT i j Aj i 1230 67 T 102 637 displaystyle begin bmatrix 1 amp 2 amp 3 0 amp 6 amp 7 end bmatrix T begin bmatrix 1 amp 0 2 amp 6 3 amp 7 end bmatrix Familiar properties of numbers extend to these operations of matrices for example addition is i e the matrix sum does not depend on the order of the summands A B B A The transpose is compatible with addition and scalar multiplication as expressed by cA T c AT and A B T AT BT Finally AT T A are ways to change matrices There are 
Row operations are ways to change matrices. There are three types of row operations: row switching, that is, interchanging two rows of a matrix; row multiplication, multiplying all entries of a row by a non-zero constant; and finally row addition, which means adding a multiple of a row to another row. These row operations are used in a number of ways, including solving linear equations and finding inverses.

Matrix multiplication, linear equations and linear transformations
[Figure: Schematic depiction of the matrix product AB of two matrices A and B.]
Multiplication of two matrices is defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

$$[\mathbf{AB}]_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + \cdots + A_{i,n}B_{n,j} = \sum_{r=1}^{n} A_{i,r}B_{r,j},$$

where 1 ≤ i ≤ m and 1 ≤ j ≤ p. For example, the underlined entry 1 in the product is calculated as (1 × 1) + (0 × 1) + (2 × 0) = 1:

$$\begin{bmatrix} \underline{1} & \underline{0} & \underline{2} \\ -1 & 3 & 1 \end{bmatrix} \begin{bmatrix} 3 & \underline{1} \\ 2 & \underline{1} \\ 1 & \underline{0} \end{bmatrix} = \begin{bmatrix} 5 & \underline{1} \\ 4 & 2 \end{bmatrix}.$$

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity) and (A + B)C = AC + BC, as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal: in general AB ≠ BA, i.e., matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},$$

whereas

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.$$

The identity matrix $\mathbf{I}_n$ of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

$$\mathbf{I}_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

It is called the identity matrix because multiplication with it leaves a matrix unchanged: $\mathbf{MI}_n = \mathbf{I}_m\mathbf{M} = \mathbf{M}$ for any m-by-n matrix M.

Besides the ordinary matrix multiplication just described, there exist other, less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.

Linear equations
A particular case of matrix multiplication is tightly linked to linear equations: if x designates a column vector (i.e., an n×1 matrix) of n variables $x_1, x_2, \ldots, x_n$, and A is an m-by-n matrix, then the matrix equation

$$\mathbf{Ax} = \mathbf{b},$$

where b is some m×1 column vector, is equivalent to the system of linear equations

$$A_{1,1}x_1 + A_{1,2}x_2 + \cdots + A_{1,n}x_n = b_1, \quad \ldots, \quad A_{m,1}x_1 + A_{m,2}x_2 + \cdots + A_{m,n}x_n = b_m.$$

This way, matrices can be used to compactly write and deal with multiple linear equations, i.e., systems of linear equations.
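The sum formula above is exactly a triple loop over i, j and r. The sketch below is illustrative only (the helper name matmul is mine, not an established API); it implements the definition directly, checks it against the worked 2×3 by 3×2 example, and then uses a library routine to solve a small system Ax = b.

```python
import numpy as np

def matmul(A, B):
    """Naive matrix product implementing [AB][i, j] = sum_r A[i, r] * B[r, j]."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must match rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for r in range(n):
                C[i, j] += A[i, r] * B[r, j]
    return C

A = np.array([[1, 0, 2],
              [-1, 3, 1]])
B = np.array([[3, 1],
              [2, 1],
              [1, 0]])
assert (matmul(A, B) == np.array([[5, 1],
                                  [4, 2]])).all()

# A square system Ax = b is solved without forming A^{-1} explicitly.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(M, b)
assert np.allclose(M @ x, b)
```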
Linear transformations
[Figure: The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.]
Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation $\mathbb{R}^n \to \mathbb{R}^m$ mapping each vector x in $\mathbb{R}^n$ to the (matrix) product Ax, which is a vector in $\mathbb{R}^m$. Conversely, each linear transformation $f: \mathbb{R}^n \to \mathbb{R}^m$ arises from a unique m-by-n matrix A: explicitly, the (i, j) entry of A is the i-th coordinate of $f(\mathbf{e}_j)$, where $\mathbf{e}_j = (0, \ldots, 0, 1, 0, \ldots, 0)$ is the unit vector with 1 in the j-th position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f.

For example, the 2×2 matrix

$$\mathbf{A} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram is obtained by multiplying A with each of the column vectors $\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ in turn; these vectors define the vertices of the unit square.

The following 2-by-2 matrices realise some standard linear maps of $\mathbb{R}^2$ (in the original article, each was illustrated by a picture in which a blue grid and shapes are mapped to green ones, with the origin (0, 0) marked by a black point):

Horizontal shear with m = 1.25: $\begin{bmatrix} 1 & 1.25 \\ 0 & 1 \end{bmatrix}$
Horizontal flip: $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$
Squeeze mapping with r = 3/2: $\begin{bmatrix} 3/2 & 0 \\ 0 & 2/3 \end{bmatrix}$
Scaling by a factor of 3/2: $\begin{bmatrix} 3/2 & 0 \\ 0 & 3/2 \end{bmatrix}$
Rotation by π/6 = 30°: $\begin{bmatrix} \cos(\pi/6) & -\sin(\pi/6) \\ \sin(\pi/6) & \cos(\pi/6) \end{bmatrix}$

Under the one-to-one correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps: if a k-by-m matrix B represents another linear map $g: \mathbb{R}^m \to \mathbb{R}^k$, then the composition g ∘ f is represented by BA, since (g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x. The last equality follows from the above-mentioned associativity of matrix multiplication.

The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors. Equivalently, it is the dimension of the image of the linear map represented by A. The rank-nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.
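As a quick check of the unit-square picture, the sketch below (illustrative only, not from the article) applies the shear and rotation matrices of the list above to the corners of the unit square and verifies that composing the maps matches multiplying the matrices.

```python
import numpy as np

# Corners of the unit square as the columns of a 2x4 matrix.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])

shear = np.array([[1.0, 1.25],
                  [0.0, 1.0]])
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

# A 2x2 matrix [[a, c], [b, d]] maps the square to the parallelogram
# with vertices (0,0), (a,b), (a+c, b+d), (c,d); here for the shear:
print(shear @ square)

# Composition of linear maps corresponds to the matrix product:
# rotating after shearing equals applying (rot @ shear) once.
assert np.allclose(rot @ (shear @ square), (rot @ shear) @ square)
```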
Square matrices
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. A square matrix A is called invertible or non-singular if there exists a matrix B such that $\mathbf{AB} = \mathbf{I}_n$; this is equivalent to $\mathbf{BA} = \mathbf{I}_n$. Moreover, if B exists, it is unique and is called the inverse matrix of A, denoted $\mathbf{A}^{-1}$.

The entries $A_{i,i}$ form the main diagonal of a matrix. The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While, as mentioned above, matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors: tr(AB) = tr(BA). Also, the trace of a matrix is equal to that of its transpose: tr(A) = tr(A^T).

If all entries outside the main diagonal are zero, A is called a diagonal matrix. If only all entries above (respectively, below) the main diagonal are zero, A is called a lower (respectively, upper) triangular matrix. For example, for n = 3 these look like

$$\begin{bmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{bmatrix} \text{(diagonal)}, \quad \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \text{(lower)}, \quad \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \text{(upper triangular)}.$$

Determinant
[Figure: A linear transformation on R² given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.]
The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R²) or volume (in R³) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 2-by-2 matrices is given by

$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$

When the determinant is equal to one, the matrix represents an area-preserving (equi-areal) mapping. The determinant of 3-by-3 matrices involves 6 terms (the rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.

The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) · det(B). Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as the starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), and it can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.

Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying

$$\mathbf{Av} = \lambda \mathbf{v}$$

are called an eigenvalue and an eigenvector of A, respectively. The number λ is an eigenvalue of an n×n matrix A if and only if $\mathbf{A} - \lambda \mathbf{I}_n$ is not invertible, which is equivalent to

$$\det(\mathsf{A} - \lambda \mathsf{I}) = 0.$$

The function $p_{\mathsf{A}}(t) = \det(\mathsf{A} - t\mathsf{I})$ is called the characteristic polynomial of A; its degree is n. Therefore $p_{\mathsf{A}}(t)$ has at most n different roots, i.e., eigenvalues of the matrix. They may be complex even if the entries of A are real. According to the Cayley-Hamilton theorem, $p_{\mathsf{A}}(\mathsf{A}) = 0$; that is to say, the characteristic polynomial applied to the matrix itself yields the zero matrix.
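The defining equation Av = λv is easy to verify numerically. The following sketch is illustrative only; it computes the eigenpairs of a small symmetric matrix, checks the defining relation, and confirms the Cayley-Hamilton identity for this 2×2 example, where the characteristic polynomial is t² − tr(A)t + det(A).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors: the columns of V are eigenvectors.
lams, V = np.linalg.eig(A)
for k in range(len(lams)):
    v = V[:, k]
    assert np.allclose(A @ v, lams[k] * v)   # A v = lambda v

# Cayley-Hamilton for a 2x2 matrix: A^2 - tr(A) A + det(A) I = 0.
tr, det = np.trace(A), np.linalg.det(A)
I = np.eye(2)
assert np.allclose(A @ A - tr * A + det * I, np.zeros((2, 2)))
```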
Symmetry
A square matrix A that is equal to its transpose, i.e., $\mathbf{A} = \mathbf{A}^{\mathrm{T}}$, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., $\mathbf{A} = -\mathbf{A}^{\mathrm{T}}$, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy $\mathbf{A}^* = \mathbf{A}$, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.

By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns (see below).

Definiteness
The two basic possibilities for a symmetric 2-by-2 matrix are illustrated by the following examples (the level sets $Q_{\mathbf{A}}(x, y) = 1$ were pictured in the original article):

Matrix $\begin{bmatrix} 1/4 & 0 \\ 0 & 1/4 \end{bmatrix}$: positive definite; associated quadratic form $Q_{\mathbf{A}}(x, y) = \tfrac{1}{4}x^2 + \tfrac{1}{4}y^2$; the set of vectors (x, y) with $Q_{\mathbf{A}}(x, y) = 1$ is an ellipse.
Matrix $\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}$: indefinite; associated quadratic form $Q_{\mathbf{A}}(x, y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2$; the set of vectors (x, y) with $Q_{\mathbf{A}}(x, y) = 1$ is a hyperbola.

A symmetric n×n matrix is called positive-definite (respectively negative-definite; indefinite) if for all nonzero vectors $x \in \mathbb{R}^n$ the associated quadratic form given by $Q(x) = x^{\mathrm{T}} \mathbf{A} x$ takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite. A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. Allowing as input two different vectors instead yields the bilinear form associated to A: $B_{\mathbf{A}}(x, y) = x^{\mathrm{T}} \mathbf{A} y$.

Numerical aspects
In addition to theoretical knowledge of the properties of matrices and their relation to other fields, it is important for practical purposes to perform matrix calculations effectively and precisely. The domain studying these matters is called numerical linear algebra. As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Many problems can be solved by both direct algorithms and iterative approaches: for example, the eigenvectors of a matrix can be obtained by finding a sequence of vectors $x_n$ converging to an eigenvector as n tends to infinity.

Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations, such as additions and multiplications of scalars, are necessary to perform some algorithm, e.g., multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n³ multiplications, since for each of the n² entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n^2.807 multiplications. A refined approach also incorporates specific features of the computing devices.

In many practical situations, additional information about the matrices involved is known. An important case is that of sparse matrices, i.e., matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.

An algorithm is, roughly speaking, numerically stable if small deviations (such as rounding errors) do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula (where Adj(A) denotes the adjugate matrix of A),

$$\mathbf{A}^{-1} = \operatorname{Adj}(\mathbf{A}) / \det(\mathbf{A}),$$

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.

Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s some engineering desktop computers, such as the HP 9830, had ROM cartridges to add BASIC commands for matrices. Some computer languages, such as APL, were designed to manipulate matrices, and various mathematical packages can be used to aid computing with matrices.
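For the sparse case mentioned above, iterative solvers only ever touch the nonzero entries. The small sketch below is illustrative, assuming SciPy's sparse toolbox is available; it solves a tridiagonal, symmetric positive-definite system with the conjugate gradient method.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# Sparse SPD tridiagonal matrix (diagonally dominant, so CG converges).
A = diags([-1.0, 3.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                    # info == 0 signals convergence
assert info == 0
assert np.allclose(A @ x, b, atol=1e-3)   # residual within CG's default tolerance
```

Only the roughly 3n stored entries of A are used per iteration, which is the point of the "specifically adapted" algorithms the text refers to.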
Matrix decomposition methods
There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix transformation or matrix decomposition techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as the determinant, rank or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.

The LU decomposition factors matrices as a product of a lower (L) and an upper (U) triangular matrix. Once this decomposition is calculated, linear systems can be solved more efficiently by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form. Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV*, where U and V are unitary matrices and D is a diagonal matrix.

[Figure: A matrix in Jordan normal form. The grey blocks are called Jordan blocks.]
The eigendecomposition or diagonalization expresses A as a product $\mathbf{VDV}^{-1}$, where D is a diagonal matrix and V is a suitable invertible matrix. If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say, matrices whose only nonzero entries are the eigenvalues $\lambda_1$ to $\lambda_n$ of A, placed on the main diagonal, and possibly entries equal to one directly above the main diagonal, as shown at the right. Given the eigendecomposition, the n-th power of A (i.e., n-fold iterated matrix multiplication) can be calculated via

$$\mathbf{A}^n = (\mathbf{VDV}^{-1})^n = \mathbf{VDV}^{-1}\mathbf{VDV}^{-1}\cdots\mathbf{VDV}^{-1} = \mathbf{VD}^n\mathbf{V}^{-1},$$

and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential $e^{\mathbf{A}}$, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices. To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.
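The power formula $\mathbf{A}^n = \mathbf{VD}^n\mathbf{V}^{-1}$ is cheap to check numerically. The sketch below is illustrative only; it diagonalizes a small diagonalizable matrix and compares the decomposition-based fifth power with repeated multiplication.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2, so diagonalizable

lams, V = np.linalg.eig(A)          # A = V diag(lams) V^{-1}
D = np.diag(lams)
assert np.allclose(A, V @ D @ np.linalg.inv(V))

# A^5 via the decomposition: only the diagonal entries are powered.
A5 = V @ np.diag(lams ** 5) @ np.linalg.inv(V)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```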
Abstract algebraic aspects and generalizations
Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies the properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular, two-dimensional arrays of numbers. Matrices, subject to certain requirements, tend to form groups known as matrix groups.

Matrices with more general entries
This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries. As a first step of generalization, any field, i.e., a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example the rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial, they may exist only in a larger field than that of the coefficients of the matrix; for instance, they may be complex in the case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (e.g., to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively, one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.

More generally, abstract algebra makes great use of matrices with entries in a ring R. Rings are a more general notion than fields in that no division operation need exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called the matrix ring, isomorphic to the endomorphism ring of the left R-module $R^n$. If the ring R is commutative, i.e., its multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible. Matrices over superrings are called supermatrices.

Matrices do not always have all their entries in the same ring, or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring, but their sizes must fulfil certain compatibility conditions.

Relationship to linear maps
Linear maps $\mathbb{R}^n \to \mathbb{R}^m$ are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix $\mathbf{A} = (a_{ij})$, after choosing bases $v_1, \ldots, v_n$ of V and $w_1, \ldots, w_m$ of W (so n is the dimension of V and m is the dimension of W), which is such that

$$f(\mathbf{v}_j) = \sum_{i=1}^{m} a_{i,j} \mathbf{w}_i \qquad \text{for } j = 1, \ldots, n.$$

In other words, column j of A expresses the image of $v_j$ in terms of the basis vectors $w_i$ of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent, matrices. Many of the above concrete notions can be reinterpreted in this light; for example, the transpose matrix $\mathbf{A}^{\mathrm{T}}$ describes the transpose of the linear map given by A, with respect to the dual bases. More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules $R^m$ and $R^n$ for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of $R^n$.

Matrix groups
A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements. A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group. Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group. Orthogonal matrices, determined by the condition $\mathbf{M}^{\mathrm{T}}\mathbf{M} = \mathbf{I}$, form the orthogonal group. They are called orthogonal since the associated linear transformations of $\mathbb{R}^n$ preserve angles, in the sense that the scalar product of two vectors is unchanged after applying M to them: $(\mathbf{Mv}) \cdot (\mathbf{Mw}) = \mathbf{v} \cdot \mathbf{w}$. Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group. General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.
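Membership in the orthogonal group is a concrete computation. This sketch is illustrative only; it checks $\mathbf{M}^{\mathrm{T}}\mathbf{M} = \mathbf{I}$ for a rotation matrix and verifies that scalar products, and hence angles, are preserved.

```python
import numpy as np

theta = 0.7
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: M^T M = I, so M lies in the orthogonal group O(2);
# det(M) = 1 places it in the special orthogonal subgroup SO(2).
assert np.allclose(M.T @ M, np.eye(2))
assert np.isclose(np.linalg.det(M), 1.0)

# The associated linear map preserves scalar products: (Mv).(Mw) = v.w
v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])
assert np.isclose((M @ v) @ (M @ w), v @ w)
```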
Infinite matrices
It is also possible to consider matrices with infinitely many rows and/or columns, even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.

If R is any ring with unity, then the ring of endomorphisms of $M = \bigoplus_{i \in I} R$ as a right R-module is isomorphic to the ring of column-finite matrices $\mathbb{CFM}_I(R)$, whose entries are indexed by $I \times I$ and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module yield an analogous object, the row-finite matrices $\mathbb{RFM}_I(R)$, whose rows each have only finitely many nonzero entries.

If infinite matrices are used to describe linear maps, then only those matrices all of whose columns have but a finite number of nonzero entries can be used, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries $v_i$ are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A, however: in the product A·v there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. Products of two matrices of the given type are also well defined (provided, as usual, that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.

If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring. In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter, and the abstract and more powerful tools of functional analysis can be used instead.

Empty matrices
An empty matrix is a matrix in which the number of rows or columns (or both) is zero. Empty matrices help with dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix, corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as part of the characterization of determinants.
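The empty-matrix conventions described above are implemented in common numerical systems. The sketch below is illustrative, assuming the behaviour of recent NumPy releases; it reproduces the 3-by-0 times 0-by-3 example and the empty-product convention for the determinant.

```python
import numpy as np

A = np.zeros((3, 0))   # a 3-by-0 empty matrix
B = np.zeros((0, 3))   # a 0-by-3 empty matrix

# The product AB is the 3-by-3 zero matrix (the null map) ...
assert (A @ B == np.zeros((3, 3))).all()

# ... while BA is a 0-by-0 matrix.
assert (B @ A).shape == (0, 0)

# The determinant of the 0-by-0 matrix is the empty product, 1
# (NumPy follows this convention).
assert np.linalg.det(np.zeros((0, 0))) == 1.0
```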
Applications
There are numerous applications of matrices, both in mathematics and in other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose. Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track the frequencies of certain words in several documents.

Complex numbers can be represented by particular real 2-by-2 matrices via

$$a + ib \leftrightarrow \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$$

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent multiplication by some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions.

Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break. Computer graphics uses matrices both to represent objects and to calculate transformations of objects, using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation. Matrices over a polynomial ring are important in the study of control theory.

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree-Fock method.

Graph theory
[Figure: An undirected graph with adjacency matrix $\begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$.]
The adjacency matrix of a finite graph is a basic notion of graph theory. It records which vertices of the graph are connected by an edge. Matrices containing just two different values (0 and 1, meaning for example "yes" and "no") are called logical matrices. The distance matrix contains information about the distances along the edges. These concepts can be applied to websites connected by hyperlinks, or cities connected by roads, etc., in which case (unless the road network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.

Analysis and geometry
The Hessian matrix of a differentiable function f: $\mathbb{R}^n \to \mathbb{R}$ consists of the second derivatives of f with respect to the several coordinate directions, i.e.,

$$H(f) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right].$$

[Figure: At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x² − y², the Hessian matrix $\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$ is indefinite.]
It encodes information about the local growth behaviour of the function: given a critical point x = (x₁, ..., xₙ), i.e., a point where the first partial derivatives $\partial f / \partial x_i$ of f vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).

Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: $\mathbb{R}^n \to \mathbb{R}^m$. If $f_1, \ldots, f_m$ denote its components, then the Jacobi matrix is defined as

$$J(f) = \left[ \frac{\partial f_i}{\partial x_j} \right]_{1 \leq i \leq m,\, 1 \leq j \leq n}.$$

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.
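The saddle-point example above is easy to reproduce numerically. The following sketch is illustrative only (the helper name hessian is mine); it approximates the Hessian of f(x, y) = x² − y² at the origin with central finite differences and recovers the indefinite matrix diag(2, −2).

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 - y**2

def hessian(f, p, h=1e-4):
    """Central finite-difference approximation of the Hessian at p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h**2)
    return H

H = hessian(f, np.array([0.0, 0.0]))
assert np.allclose(H, np.array([[2.0, 0.0],
                                [0.0, -2.0]]), atol=1e-5)
# One positive and one negative eigenvalue: the Hessian is indefinite,
# so the origin is a saddle point rather than a local extremum.
```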
Partial differential equations can be classified by considering the matrix of the coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question. The finite element method is an important numerical method for solving partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution of some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.

Probability theory and statistics
[Figure: Two different Markov chains; the chart depicts the number of particles (of a total of 1000) in state "2". The two limiting values can be determined from the transition matrices, which are given by $\begin{bmatrix} 0.7 & 0 \\ 0.3 & 1 \end{bmatrix}$ (red) and $\begin{bmatrix} 0.7 & 0.2 \\ 0.3 & 0.8 \end{bmatrix}$ (black).]
Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states. A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.

Statistics also makes use of matrices in many different forms. Descriptive statistics is concerned with describing data sets, which can often be represented in matrix form, by reducing the amount of data. The covariance matrix encodes the mutual variance of several random variables. Another technique using matrices is linear least squares, a method that approximates a finite set of pairs $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ by a linear function $y_i \approx ax_i + b$, i = 1, ..., N, which can be formulated in terms of matrices, related to the singular value decomposition of matrices.

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.
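The limiting behaviour mentioned in the caption can be computed directly. The sketch below is illustrative only; it uses the second chain with transition probabilities 0.3 (state 1 to 2) and 0.2 (state 2 to 1), written in the row-stochastic convention of the text, and compares iteration with the stationary distribution obtained from an eigenvector.

```python
import numpy as np

# Row-stochastic transition matrix: entry P[i, j] is the probability
# of moving from state i to state j (each row sums to one).
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

# Iterating the chain from any starting distribution converges.
pi = np.array([1.0, 0.0])
for _ in range(200):
    pi = pi @ P

# The stationary distribution is a left eigenvector of P for the
# eigenvalue 1, normalized so that its entries sum to one.
lams, V = np.linalg.eig(P.T)
stat = np.real(V[:, np.argmax(np.real(lams))])
stat = stat / stat.sum()

assert np.allclose(pi, stat)
assert np.allclose(pi, [0.4, 0.6])   # the limiting 40/60 split
```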
Symmetries and transformations in physics
Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors. For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo-Kobayashi-Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.

Linear combinations of quantum states
The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states; this is also referred to as matrix mechanics. One particular example is the density matrix, which characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.

Normal modes
A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms. They are also needed for describing mechanical vibrations and oscillations in electrical circuits.

Geometrical optics
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector by a two-by-two matrix called the ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. There are two kinds of such matrices, namely a refraction matrix, describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system consisting of a combination of lenses and/or reflective elements is simply described by the matrix resulting from the product of the components' matrices.

Electronics
Traditional mesh analysis in electronics leads to a system of linear equations that can be described with a matrix. The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v₁ and input current i₁ as its elements, and let B be a 2-dimensional vector with the component's output voltage v₂ and output current i₂ as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2×2 matrix containing one impedance element, one admittance element, and two dimensionless elements. Calculating a circuit now reduces to multiplying matrices.
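In the ray-transfer formalism just described, cascading optical elements is literally a matrix product. The sketch below is illustrative only, assuming the common textbook convention of a ray vector (distance from axis, slope), with the standard thin-lens and free-space translation matrices; the function names are mine, not from the article.

```python
import numpy as np

def free_space(d):
    """Translation over a distance d along the optical axis."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f (paraxial approximation)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

f = 0.05                                # focal length in metres
system = free_space(f) @ thin_lens(f)   # lens first, then travel to the focal plane

# A ray parallel to the axis: height 2 mm, slope 0.
ray_in = np.array([2e-3, 0.0])
ray_out = system @ ray_in

# Every parallel ray crosses the axis at the focal plane: height ~ 0.
assert np.isclose(ray_out[0], 0.0)
```

Note the order of the product: the matrix of the element hit first appears rightmost, since it acts on the ray vector first.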
History
Matrices have a long history of application in solving linear equations. The Chinese text Jiu Zhang Suan Shu (Nine Chapters on the Mathematical Art), written between 300 BC and AD 200, is the first example of the use of matrix methods to solve simultaneous equations, including the concept of determinants, over 1000 years before its publication by the Japanese mathematician Seki in 1683 and the German mathematician Leibniz in 1693. Cramer presented his rule in 1750.

Early matrix theory emphasized determinants more strongly than matrices, and an independent matrix concept akin to the modern notion emerged only in 1858, with Cayley's Memoir on the theory of matrices. The term "matrix" (Latin for "womb", derived from mater, mother) was coined by Sylvester, who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains: "I have in previous papers defined a 'Matrix' as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent."

The study of determinants sprang from several sources. Number-theoretical problems led Gauss to relate the coefficients of quadratic forms, i.e., expressions such as x² + xy − 2y², and of linear maps in three dimensions, to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as the definition of the determinant of a matrix A = [a_{i,j}] the following: replace the powers $a_j^k$ by $a_{jk}$ in the polynomial

$$a_1 a_2 \cdots a_n \prod_{i < j} (a_j - a_i),$$

where Π denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real. Jacobi studied "functional determinants", later called Jacobi determinants by Sylvester, which can be used to describe geometric transformations at a local (or infinitesimal) level (see above). Kronecker's Vorlesungen über die Theorie der Determinanten and Weierstrass' Zur Determinantentheorie, both published in 1903, first treated determinants axiomatically, as opposed to previous, more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.

Many theorems were first established for small matrices only; for example, the Cayley-Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the Gauss-Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the early 20th century, matrices attained a central role in linear algebra, partially due to their use in the classification of the hypercomplex number systems of the previous century.

The inception of matrix mechanics by Heisenberg, Born and Jordan led to the study of matrices with infinitely many rows and columns. Later, von Neumann carried out the mathematical formulation of quantum mechanics by further developing functional-analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.

Other historical uses of the word "matrix" in mathematics
The word has been used in unusual ways by at least two authors of historical importance. Bertrand Russell and Alfred North Whitehead, in their Principia Mathematica (1910-1913), use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension: "Let us give the name of matrix to any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalization, i.e., by considering the proposition which asserts that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined."

For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, e.g., y, by "considering" the function for all possible values of "individuals" aᵢ substituted in place of the variable x. And then the resulting collection of functions of the single variable y, i.e., ∀aᵢ: Φ(aᵢ, y), can be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" bᵢ substituted in place of the variable y: ∀bⱼ∀aᵢ: Φ(aᵢ, bⱼ).

Alfred Tarski, in his 1946 Introduction to Logic, used the word "matrix" synonymously with the notion of a truth table as used in mathematical logic.

See also
Mathematics portal

Notes
"Eigen" means "own" in German and in Dutch.
The trace identity is immediate from the definition of matrix multiplication: $\operatorname{tr}(\mathsf{AB}) = \sum_{i=1}^m \sum_{j=1}^n A_{ij} B_{ji} = \operatorname{tr}(\mathsf{BA})$.
Empty matrices: "A matrix is empty if either its row or column dimension is zero" (O-Matrix v6 User Guide, glossary); "A matrix having at least one dimension equal to zero is called an empty matrix" (MATLAB documentation on data structures).
Additionally, the group is required to be closed in the general linear group.
"Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps."
Per the OED, the first usage of the word "matrix" with respect to mathematics appears in J. J. Sylvester, London, Edinb. & Dublin Philos. Mag. 37 (1850), p. 369: "We commence with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants by fixing upon a number p, and selecting at will p lines and p columns, the squares corresponding to which may be termed determinants of the pth order." (The Collected Mathematical Papers of James Joseph Sylvester: 1837-1853, Paper 37, p. 247.)
Whitehead, Alfred North and Russell, Bertrand (1913). Principia Mathematica to *56. Cambridge at the University Press, Cambridge, UK (republished 1962); cf. page 162ff.
Tarski, Alfred (1946). Introduction to Logic and the Methodology of Deductive Sciences. Dover Publications, New York, NY. ISBN 0-486-28462-X.