Orthonormal basis

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other.[1][2][3] For example, the standard basis for a Euclidean space R^n is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for R^n arises in this fashion.

For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates on V. Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of R^n under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using the Gram–Schmidt process.
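
A minimal numerical sketch of this process (in Python with NumPy; the starting vectors are an arbitrary illustrative basis of R^3, nothing canonical): each vector has its projections onto the previously accepted vectors subtracted off and is then normalized.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        # Remove the components of v along the vectors already accepted.
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))  # normalize to unit length
    return basis

# An arbitrary (non-orthogonal) basis of R^3 ...
vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]

# ... turned into an orthonormal basis.
onb = gram_schmidt(vectors)
print(np.round([[np.dot(u, b) for b in onb] for u in onb], 10))  # identity matrix
```

In floating-point arithmetic the modified Gram–Schmidt variant (or a QR factorization) is usually preferred for numerical stability; the classical form above simply mirrors the textbook description.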

In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional) inner product spaces.[4] Given a pre-Hilbert space H, an orthonormal basis for H is an orthonormal set of vectors with the property that every vector in H can be written as an infinite linear combination of the vectors in the basis. In this case, the orthonormal basis is sometimes called a Hilbert basis for H. Note that an orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required. Specifically, the linear span of the basis must be dense in H, but it may not be the entire space.

If we go on to Hilbert spaces, a non-orthonormal set of vectors having the same linear span as an orthonormal basis may not be a basis at all. For instance, any square-integrable function on the interval [−1, 1] can be expressed (almost everywhere) as an infinite sum of Legendre polynomials (which, after normalization, form an orthonormal basis), but not necessarily as an infinite sum of the monomials x^n.
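
To make the Legendre example concrete, the sketch below (Python with NumPy; the target f(x) = |x| and the truncation N = 20 are arbitrary illustrative choices) computes Fourier–Legendre coefficients by Gauss–Legendre quadrature. The factor (2n + 1)/2 appears because the Legendre polynomials P_n are orthogonal but not unit-norm: ||P_n||^2 = 2/(2n + 1), and dividing by it normalizes the expansion.

```python
import numpy as np
from numpy.polynomial import legendre as L

f = np.abs                   # a square-integrable function on [-1, 1]
N = 20                       # number of Legendre terms to keep
x, w = L.leggauss(64)        # Gauss-Legendre nodes and weights on [-1, 1]

# Fourier-Legendre coefficients: c_n = (2n + 1)/2 * integral of f * P_n over [-1, 1]
coeffs = np.array([
    (2 * n + 1) / 2 * np.sum(w * f(x) * L.legval(x, [0] * n + [1]))
    for n in range(N)
])

approx = L.legval(x, coeffs)              # partial sum of the expansion at the nodes
print(np.max(np.abs(approx - f(x))))      # the error shrinks as N grows
```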

Examples

  • The set of vectors {e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1)} (the standard basis) forms an orthonormal basis of R^3.
    Proof: A straightforward computation shows that the inner products of these vectors equal zero, ⟨e_1, e_2⟩ = ⟨e_1, e_3⟩ = ⟨e_2, e_3⟩ = 0, and that each of their magnitudes equals one, ||e_1|| = ||e_2|| = ||e_3|| = 1. This means that {e_1, e_2, e_3} is an orthonormal set. Every vector (x, y, z) in R^3 can be expressed as a sum of the scaled basis vectors, (x, y, z) = x e_1 + y e_2 + z e_3, so {e_1, e_2, e_3} spans R^3 and hence must be a basis. It may also be shown that the standard basis rotated about an axis through the origin, or reflected in a plane through the origin, still forms an orthonormal basis of R^3.
  • Notice that an orthogonal transformation of the standard inner-product space (R^n, ⟨·,·⟩) can be used to construct other orthonormal bases of R^n.
  • The set {f_n : n ∈ Z} with f_n(x) = exp(2πinx) forms an orthonormal basis of the space L^2([0,1]) of square-integrable functions on [0,1], with respect to the 2-norm. This is fundamental to the study of Fourier series; a numerical check of the orthonormality appears after this list.
  • The set {e_b : b ∈ B} with e_b(c) = 1 if b = c and 0 otherwise forms an orthonormal basis of ℓ^2(B).
  • The eigenfunctions of a regular Sturm–Liouville eigenproblem, suitably normalized, form an orthonormal basis of the associated weighted L^2 space.
  • An orthogonal matrix is a matrix whose column vectors form an orthonormal set.
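
As a quick numerical check of the Fourier-basis example above (a Python/NumPy sketch; the midpoint rule and the sample count of 1000 are arbitrary choices), approximating the L^2([0,1]) inner product by quadrature shows that distinct exponentials are orthogonal and each has unit norm:

```python
import numpy as np

x = (np.arange(1000) + 0.5) / 1000   # midpoints of 1000 subintervals of [0, 1]

def f(n):
    """The basis function f_n(x) = exp(2*pi*i*n*x), sampled at the midpoints."""
    return np.exp(2j * np.pi * n * x)

def inner(u, v):
    """Midpoint-rule approximation of the L^2([0,1]) inner product."""
    return np.mean(u * np.conj(v))

print(abs(inner(f(3), f(3))))   # ~1: each f_n has unit norm
print(abs(inner(f(3), f(5))))   # ~0: distinct f_n are orthogonal
```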

Basic formula

If B is an orthogonal basis of H, then every element x of H may be written as

$$x = \sum_{b \in B} \frac{\langle x, b \rangle}{\lVert b \rVert^2}\, b.$$

When B is orthonormal, this simplifies to

$$x = \sum_{b \in B} \langle x, b \rangle\, b,$$

and the square of the norm of x can be given by

$$\lVert x \rVert^2 = \sum_{b \in B} |\langle x, b \rangle|^2.$$

Even if B is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. This sum is also called the Fourier expansion of x, and the formula is usually known as Parseval's identity.
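
In the finite-dimensional case these formulas are easy to verify directly. A minimal sketch (Python with NumPy; the QR factorization is used here only as a convenient way to manufacture a random orthonormal basis of R^5):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # columns form an orthonormal basis

x = rng.standard_normal(n)
coeffs = Q.T @ x            # Fourier coefficients <x, b> for each basis vector b

print(np.allclose(Q @ coeffs, x))              # x = sum of <x, b> b  (Fourier expansion)
print(np.allclose(np.sum(coeffs**2), x @ x))   # ||x||^2 = sum |<x, b>|^2  (Parseval)
```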

If B is an orthonormal basis of H, then H is isomorphic to ℓ^2(B) in the following sense: there exists a bijective linear map Φ : H → ℓ^2(B) such that

$$\langle \Phi(x), \Phi(y) \rangle = \langle x, y \rangle$$

for all x and y in H.

Incomplete orthogonal sets

Given a Hilbert space H and a set S of mutually orthogonal vectors in H, we can take the smallest closed linear subspace V of H containing S. Then S will be an orthogonal basis of V, which may of course be smaller than H itself (making S an incomplete orthogonal set) or may be all of H (making S a complete orthogonal set).
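
A concrete finite-dimensional instance (a Python/NumPy sketch; the particular vectors are arbitrary): when S is an incomplete orthonormal set, the map x ↦ Σ_b ⟨x, b⟩ b recovers not x itself but its orthogonal projection onto the closed subspace V spanned by S.

```python
import numpy as np

# An incomplete orthonormal set in R^4: two of the four standard basis vectors,
# stored as the columns of S; they span a 2-dimensional subspace V.
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])

x = np.array([3.0, -1.0, 2.0, 5.0])
proj = S @ (S.T @ x)        # sum of <x, b> b over the set: the projection onto V

print(proj)                 # [ 3. -1.  0.  0.]  -- the best approximation to x in V
print(S.T @ (x - proj))     # [0. 0.]  -- the residual is orthogonal to V
```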

Existence

Using Zorn's lemma and the Gram–Schmidt process (or more simply, well-ordering and transfinite recursion), one can show that every Hilbert space admits an orthonormal basis;[5] furthermore, any two orthonormal bases of the same space have the same cardinality (this can be proven in a manner akin to the proof of the usual dimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space is separable if and only if it admits a countable orthonormal basis. (This last statement can be proven without using the axiom of choice.)

As a homogeneous space

The set of orthonormal bases for R^n is a principal homogeneous space for the orthogonal group O(n), and is called the Stiefel manifold V_n(R^n) of orthonormal n-frames.

In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once a basis is given, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthonormal basis to any other orthonormal basis.

The other Stiefel manifolds V_k(R^n) for k < n, consisting of incomplete orthonormal bases (orthonormal k-frames), are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to any other k-frame by an orthogonal map, but this map is not uniquely determined.
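
The principal-homogeneous-space structure in the complete case is easy to exhibit numerically (a Python/NumPy sketch; QR is used only to manufacture two random orthonormal bases): the unique orthogonal map carrying the basis B1 to the basis B2 is Q = B2 B1^T.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B1, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal basis, as columns
B2, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a second orthonormal basis

Q = B2 @ B1.T               # the unique linear map sending each column of B1 to B2

print(np.allclose(Q @ B1, B2))             # Q carries basis B1 onto basis B2
print(np.allclose(Q.T @ Q, np.eye(n)))     # and Q is itself orthogonal
```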

References

  1. ^ Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
  2. ^ Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
  3. ^ Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
  4. ^ Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.
  5. ^ Rynne, Bryan; Youngson, M. A. Linear Functional Analysis. Springer. p. 79.