Also known as: matrix theory

matrix, a set of numbers arranged in rows and columns so as to form a rectangular array. The numbers are called the elements, or entries, of the matrix. Matrices have wide applications in engineering, physics, economics, and statistics as well as in various branches of mathematics. Matrices also have important applications in computer graphics, where they have been used to represent rotations and other transformations of images.

Historically, it was not the matrix but a certain number associated with a square array of numbers called the determinant that was first recognized. Only gradually did the idea of the matrix as an algebraic entity emerge. The term matrix was introduced by the 19th-century English mathematician James Sylvester, but it was his friend the mathematician Arthur Cayley who developed the algebraic aspect of matrices in two papers in the 1850s. Cayley first applied them to the study of systems of linear equations, where they are still very useful. They are also important because, as Cayley recognized, certain sets of matrices form algebraic systems in which many of the ordinary laws of arithmetic (e.g., the associative and distributive laws) are valid but in which other laws (e.g., the commutative law) are not valid.

If there are m rows and n columns, the matrix is said to be an “m by n” matrix, written “m × n.” For example,

$$A = \begin{bmatrix} 1 & 3 & 8 \\ 2 & -4 & 5 \end{bmatrix}$$

is a 2 × 3 matrix. A matrix with n rows and n columns is called a square matrix of order n. An ordinary number can be regarded as a 1 × 1 matrix; thus, 3 can be thought of as the matrix [3]. A matrix with only one row and n columns is called a row vector, and a matrix with only one column and n rows is called a column vector.

In a common notation, a capital letter denotes a matrix, and the corresponding small letter with a double subscript describes an element of the matrix. Thus, aij is the element in the ith row and jth column of the matrix A. If A is the 2 × 3 matrix shown above, then a11 = 1, a12 = 3, a13 = 8, a21 = 2, a22 = −4, and a23 = 5. Under certain conditions, matrices can be added and multiplied as individual entities, giving rise to important mathematical systems known as matrix algebras.
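This indexing convention carries over directly to array programming. The following is a minimal sketch in Python with NumPy (the choice of library is an illustration, not part of the article); note that NumPy counts rows and columns from 0 rather than 1, so aij corresponds to A[i-1, j-1].

```python
import numpy as np

# The 2 x 3 matrix from the example above
A = np.array([[1, 3, 8],
              [2, -4, 5]])

print(A.shape)   # (2, 3) -- 2 rows, 3 columns
print(A[0, 0])   # 1, i.e., a11 (NumPy indexes from 0)
print(A[1, 2])   # 5, i.e., a23
```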

Matrices occur naturally in systems of simultaneous equations. In a system such as

$$\begin{aligned} 3x + 4y &= 11 \\ 2x - y &= 0 \end{aligned}$$

for the unknowns x and y, the array of numbers

$$\begin{bmatrix} 3 & 4 \\ 2 & -1 \end{bmatrix}$$

is a matrix whose elements are the coefficients of the unknowns. The solution of the equations depends entirely on these numbers and on their particular arrangement. If the 3 and the 4 were interchanged, the solution would not be the same.
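As a numerical check of this sensitivity, one can solve the illustrative system above and its variant with the 3 and the 4 interchanged; a minimal sketch using NumPy's linear solver:

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [2.0, -1.0]])    # coefficients of the illustrative system
b = np.array([11.0, 0.0])      # right-hand sides

print(np.linalg.solve(A, b))   # [1. 2.] -> x = 1, y = 2

# Interchanging the 3 and the 4 yields a different solution
A_swapped = np.array([[4.0, 3.0],
                      [2.0, -1.0]])
print(np.linalg.solve(A_swapped, b))  # [1.1 2.2]
```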

Two matrices A and B are equal to one another if they possess the same number of rows and the same number of columns and if aij = bij for each i and each j. If A and B are two m × n matrices, their sum S = A + B is the m × n matrix whose elements are sij = aij + bij. That is, each element of S is equal to the sum of the elements in the corresponding positions of A and B.

A matrix A can be multiplied by an ordinary number c, which is called a scalar. The product is denoted by cA or Ac and is the matrix whose elements are caij.
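Both operations act element by element, which a short NumPy sketch makes concrete (the particular matrices are invented for illustration):

```python
import numpy as np

A = np.array([[1, 3], [2, -4]])
B = np.array([[5, 0], [1, 7]])

print(A + B)   # [[ 6  3]
               #  [ 3  3]]  -- s_ij = a_ij + b_ij

print(3 * A)   # [[  3   9]
               #  [  6 -12]] -- each element is 3 * a_ij
```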

The multiplication of a matrix A by a matrix B to yield a matrix C is defined only when the number of columns of the first matrix A equals the number of rows of the second matrix B. To determine the element cij, which is in the ith row and jth column of the product, the first element in the ith row of A is multiplied by the first element in the jth column of B, the second element in the row by the second element in the column, and so on until the last element in the row is multiplied by the last element of the column; the sum of all these products gives the element cij. In symbols, for the case where A has m columns and B has m rows,

$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{im}b_{mj} = \sum_{k=1}^{m} a_{ik}b_{kj}.$$

The matrix C has as many rows as A and as many columns as B.
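Written as a procedure, the rule amounts to three nested loops. The sketch below is a plain-Python illustration of the definition (not an efficient implementation), checked against NumPy's built-in @ operator; the matrix B is invented for illustration:

```python
import numpy as np

def matmul(A, B):
    """Multiply A (m x p) by B (p x n) using the sum-of-products rule."""
    m, p = len(A), len(A[0])
    assert len(B) == p, "columns of A must equal rows of B"
    n = len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            # c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_ip*b_pj
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(p))
    return C

A = [[1, 3, 8], [2, -4, 5]]       # 2 x 3
B = [[1, 0], [2, 1], [0, 3]]      # 3 x 2
print(matmul(A, B))               # [[7, 27], [-6, 11]] -- a 2 x 2 result
print(np.array(A) @ np.array(B))  # same result
```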

Unlike the multiplication of ordinary numbers a and b, in which ab always equals ba, the multiplication of matrices A and B is not, in general, commutative. It is, however, associative and distributive over addition. That is, when the operations are possible, the following equations always hold true: A(BC) = (AB)C, A(B + C) = AB + AC, and (B + C)A = BA + CA. If the 2 × 2 matrix A whose rows are (2, 3) and (4, 5) is multiplied by itself, then the product, usually written A2, has rows (16, 21) and (28, 37).
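These facts are easy to confirm numerically. The sketch below checks the A2 computation from the text and exhibits a pair of matrices with AB ≠ BA (the matrix B is invented for illustration):

```python
import numpy as np

A = np.array([[2, 3], [4, 5]])
B = np.array([[0, 1], [1, 0]])

print(A @ A)   # [[16 21]
               #  [28 37]] -- the A^2 from the text

print(A @ B)   # [[3 2]
               #  [5 4]]
print(B @ A)   # [[4 5]
               #  [2 3]] -- AB != BA: multiplication is not commutative
```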

A matrix O with all its elements 0 is called a zero, or null, matrix. A square matrix A with 1s on the main diagonal (upper left to lower right) and 0s everywhere else is called an identity, or unit, matrix. It is denoted by I or In to show that its order is n. If B is any square matrix and I and O are the unit and zero matrices of the same order, it is always true that B + O = O + B = B and BI = IB = B. Hence O and I behave like the 0 and 1 of ordinary arithmetic. (In fact, ordinary arithmetic is the special case of matrix arithmetic in which all matrices are 1 × 1.)
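NumPy provides both matrices directly (np.zeros and np.eye); a minimal check that they behave like 0 and 1:

```python
import numpy as np

B = np.array([[2, 3], [4, 5]])
I = np.eye(2, dtype=int)        # 2 x 2 identity matrix
O = np.zeros((2, 2), dtype=int) # 2 x 2 zero matrix

print(np.array_equal(B + O, B))  # True -- O acts like 0
print(np.array_equal(B @ I, B))  # True -- I acts like 1
print(np.array_equal(I @ B, B))  # True
```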

A square matrix A in which the elements aij are nonzero only when i = j is called a diagonal matrix. Diagonal matrices have the special property that their multiplication is commutative; that is, for two diagonal matrices A and B of the same order, AB = BA. The trace of a square matrix is the sum of the elements on the main diagonal.
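A quick check of both claims (the diagonal entries are invented for illustration):

```python
import numpy as np

A = np.diag([2, 3, 5])   # diagonal matrix with 2, 3, 5 on the main diagonal
B = np.diag([7, 1, 4])

print(np.array_equal(A @ B, B @ A))  # True -- diagonal matrices commute
print(np.trace(A))                   # 10, the sum 2 + 3 + 5
```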

Associated with each square matrix A is a number that is known as the determinant of A, denoted det A. For example, for the 2 × 2 matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix},$$

det A = ad − bc. A square matrix B is called nonsingular if det B ≠ 0. If B is nonsingular, there is a matrix called the inverse of B, denoted B−1, such that BB−1 = B−1B = I. The equation AX = B, in which A and B are known matrices and X is an unknown matrix, can be solved uniquely if A is a nonsingular matrix, for then A−1 exists and both sides of the equation can be multiplied on the left by it: A−1(AX) = A−1B. Now A−1(AX) = (A−1A)X = IX = X; hence the solution is X = A−1B. A system of m linear equations in n unknowns can always be expressed as a matrix equation AX = B in which A is the m × n matrix of the coefficients of the unknowns, X is the n × 1 matrix of the unknowns, and B is the m × 1 matrix containing the numbers on the right-hand side of the equation.
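The following sketch illustrates the determinant, the inverse, and the solution X = A−1B on a small invented example. (In numerical practice one calls np.linalg.solve rather than forming the inverse explicitly, as the final line shows.)

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])

print(np.linalg.det(A))      # ad - bc = 2*5 - 3*4 = -2 (nonsingular, up to rounding)

A_inv = np.linalg.inv(A)
print(A_inv @ A)             # the 2 x 2 identity matrix, up to rounding

B = np.array([[7.0],
              [13.0]])       # right-hand side of AX = B
print(A_inv @ B)             # X = A^{-1} B = [[2.] [1.]]
print(np.linalg.solve(A, B)) # same solution, computed more stably
```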

A problem of great significance in many branches of science is the following: given a square matrix A of order n, find a nonzero n × 1 matrix X, called an n-dimensional vector, such that AX = cX. Here c is a number called an eigenvalue, and X is called an eigenvector. The existence of an eigenvector X with eigenvalue c means that a certain transformation of space associated with the matrix A stretches space in the direction of the vector X by the factor c.
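NumPy computes eigenvalues and eigenvectors together with np.linalg.eig; a minimal sketch on an invented 2 × 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [3. 1.] -- the eigenvalues of A

# Each column of `eigenvectors` is an eigenvector X satisfying A X = c X
for c, X in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ X, c * X))   # True for every pair
```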
