polynomial equation has exactly (and always) n roots, not necessarily distinct though – as originally realized by the Italian mathematicians Niccolò F. Tartaglia and Gerolamo Cardano in the sixteenth century; many concepts relevant for engineering purposes, originally conceived in terms of real numbers (as the only ones adhering to physical evidence), may be easily generalized via complex numbers.
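As a quick numerical illustration of the above (a sketch using Python with NumPy, which is not part of the original text), the cubic x³ − 1 = 0 indeed exhibits three roots – one real and a pair of complex conjugates – even though only one of them is visible over the reals:

```python
import numpy as np

# Coefficients of x^3 - 1, in decreasing order of degree
coeffs = [1.0, 0.0, 0.0, -1.0]

# A degree-3 polynomial has exactly 3 roots over the complex field
roots = np.roots(coeffs)
assert len(roots) == 3

# Only one root (x = 1) is real; the other two are the complex
# conjugate pair (-1 ± i*sqrt(3))/2
real_roots = [r for r in roots if abs(r.imag) < 1e-9]
assert len(real_roots) == 1

# Every root satisfies the original equation
for r in roots:
    assert abs(r**3 - 1.0) < 1e-9
print("all 3 roots verified")
```

The same check works for any polynomial: a degree-n coefficient list always yields n (possibly repeated, possibly complex) roots.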
The next stage of informational content is vectors – each defined by a triplet (a,b,c), where a, b, and c denote real numbers; each one is represented by a point in a volume domain and is often denoted via a bold, lowercase letter (e.g. v). Their usual graphical representation is a straight, arrowed segment linking the origin of a Cartesian system of coordinates to said point – where length, equal to $\sqrt{a^2+b^2+c^2}$ as per Pythagoras' theorem, coupled with orientations, as per $\tan^{-1}(b/a)$ and $\tan^{-1}\!\left(c/\sqrt{a^2+b^2}\right)$, fully defines the said triplet. An alternative representation is as $[a \;\; b \;\; c]$ or $\begin{bmatrix} a \\ b \\ c \end{bmatrix}$ – also termed row vector or column vector, respectively; when three column vectors are assembled together, say, $\begin{bmatrix} a_1 \\ b_1 \\ c_1 \end{bmatrix}$, $\begin{bmatrix} a_2 \\ b_2 \\ c_2 \end{bmatrix}$, and $\begin{bmatrix} a_3 \\ b_3 \\ c_3 \end{bmatrix}$, a matrix results, viz.

$$\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix},$$

termed tensor – which may also be obtained by joining three row vectors, say, [a1 a2 a3], [b1 b2 b3], and [c1 c2 c3]. The concept of matrix may be generalized so as to encompass other possibilities of combination of numbers besides a (3 × 3) layout; in fact, a rectangular (p × q) matrix of the form

$$\begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,q} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,q} \\ \vdots & \vdots & \ddots & \vdots \\ a_{p,1} & a_{p,2} & \cdots & a_{p,q} \end{bmatrix},$$

or [ai,j ; i = 1, 2, …, p; j = 1, 2, …, q] for short, may easily be devised. Matrices are particularly useful in that they permit algebraic operations (and the like) to be performed on a whole set of numbers simultaneously – thus dramatically contributing to bookkeeping, besides their help in structuring mathematical reasoning. In specific situations, it is useful to devise higher order number structures, such as arrays of (or block) matrices; for instance,
$$\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}$$

may also be represented as

$$\begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix},$$

provided that, say, $A_{1,1} \equiv [a_1]$, $A_{1,2} \equiv [a_2 \;\; a_3]$, $A_{2,1} \equiv \begin{bmatrix} b_1 \\ c_1 \end{bmatrix}$, and $A_{2,2} \equiv \begin{bmatrix} b_2 & b_3 \\ c_2 & c_3 \end{bmatrix}$ represent, in turn, smaller matrices. An issue of compatibility arises in terms of the sizes of said blocks, though; for a starting (p × q) matrix A, only (p1 × q1) A1,1, (p1 × q2) A1,2, (p2 × q1) A2,1, and (p2 × q2) A2,2 matrices are allowed – obviously with p1 + p2 = p and q1 + q2 = q. One of the most powerful applications of matrices is in solving sets of linear algebraic equations, say,
$$a_{1,1}\,x_1 + a_{1,2}\,x_2 = b_1 \qquad (1.1)$$

and

$$a_{2,1}\,x_1 + a_{2,2}\,x_2 = b_2 \qquad (1.2)$$
in its simplest version – where a1,1, a1,2, a2,1, a2,2, b1, and b2 denote real numbers, and x1 and x2 denote variables; if a1,1 ≠ 0 and a1,1 a2,2 − a1,2 a2,1 ≠ 0, then one may start by isolating x1 in Eq. (1.1) as

$$x_1 = \frac{b_1 - a_{1,2}\,x_2}{a_{1,1}} \qquad (1.3)$$
and then replace it in Eq. (1.2) to obtain

$$a_{2,1}\,\frac{b_1 - a_{1,2}\,x_2}{a_{1,1}} + a_{2,2}\,x_2 = b_2 \qquad (1.4)$$
After factoring x2 out, Eq. (1.4) becomes

$$\left(a_{2,2} - \frac{a_{1,2}\,a_{2,1}}{a_{1,1}}\right) x_2 = b_2 - \frac{a_{2,1}\,b_1}{a_{1,1}} \qquad (1.5)$$
so isolation of x2 eventually gives

$$x_2 = \frac{a_{1,1}\,b_2 - a_{2,1}\,b_1}{a_{1,1}\,a_{2,2} - a_{1,2}\,a_{2,1}} \qquad (1.6)$$
– which yields a solution only when a1,1 a2,2 − a1,2 a2,1 ≠ 0; insertion of Eq. (1.6) back in Eq. (1.3) yields

$$x_1 = \frac{a_{2,2}\,b_1 - a_{1,2}\,b_2}{a_{1,1}\,a_{2,2} - a_{1,2}\,a_{2,1}} \qquad (1.7)$$
thus justifying why a solution for x1 requires a1,1 ≠ 0, besides a1,1 a2,2 − a1,2 a2,1 ≠ 0 (as enforced from the very beginning). Equation (1.6) may be rewritten as

$$x_2 = \frac{\begin{vmatrix} a_{1,1} & b_1 \\ a_{2,1} & b_2 \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix}} \qquad (1.8)$$
– provided that one defines

$$\begin{vmatrix} a_{1,1} & b_1 \\ a_{2,1} & b_2 \end{vmatrix} \equiv a_{1,1}\,b_2 - a_{2,1}\,b_1 \qquad (1.9)$$
complemented with

$$\begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix} \equiv a_{1,1}\,a_{2,2} - a_{1,2}\,a_{2,1} \qquad (1.10)$$
– the left‐hand sides of Eqs. (1.9) and (1.10) are termed (second‐order) determinants. If both sides of Eq. (1.2) were multiplied by −a1,2/a2,2, one would get