In this subsection, we will summarize all the available error bounds. Later subsections will provide further details. The reader may also refer to [12,95].
Bounds for individual eigenvalues and eigenvectors are provided by driver xGEEVX (subsection 2.3.4) or computational routine xTRSNA (subsection 2.4.5). Bounds for clusters of eigenvalues and their associated invariant subspace are provided by driver xGEESX (subsection 2.3.4) or computational routine xTRSEN (subsection 2.4.5).
We let $\hat{\lambda}_i$ be the $i$th computed eigenvalue and $\lambda_i$ an $i$th true eigenvalue.
Let $\hat{v}_i$ be the corresponding computed right eigenvector, and $v_i$ a true right
eigenvector (so $A v_i = \lambda_i v_i$).
If $\mathcal{I}$ is a subset of the integers from 1 to $n$, we let $\lambda_{\mathcal{I}}$ denote the average of
the selected eigenvalues:
$\lambda_{\mathcal{I}} = \bigl( \sum_{i \in \mathcal{I}} \lambda_i \bigr) / \bigl( \sum_{i \in \mathcal{I}} 1 \bigr)$,
and similarly for $\hat{\lambda}_{\mathcal{I}}$.
We also let $\mathcal{S}_{\mathcal{I}}$ denote the subspace spanned by $\{ v_i : i \in \mathcal{I} \}$; it is
called a right invariant subspace because if $v$ is any vector in $\mathcal{S}_{\mathcal{I}}$
then $Av$ is also in $\mathcal{S}_{\mathcal{I}}$.
$\hat{\mathcal{S}}_{\mathcal{I}}$ is the corresponding computed subspace.
The algorithms for the nonsymmetric eigenproblem are normwise backward stable:
they compute the exact eigenvalues, eigenvectors and invariant subspaces
of slightly perturbed matrices $A+E$, where $\|E\| \leq p(n)\,\epsilon\,\|A\|$.
Some of the bounds are stated in terms of $\|E\|_2$ and others in
terms of $\|E\|_F$; one may use $p(n)\,\epsilon\,\|A\|$ to approximate
either quantity.
The code fragment in the previous subsection approximates
$\|E\|$ by $\epsilon \cdot \|A\|_1$,
where $\|A\|_1$ is returned by xGEEVX.
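This backward-error estimate is easy to form directly. The following Python sketch (using NumPy in place of a direct LAPACK call; the matrix `A` is an arbitrary illustrative example, not one from this guide) computes $\epsilon \cdot \|A\|_1$ as an approximation to $\|E\|$:

```python
import numpy as np

# Backward-error model: the computed eigendecomposition of A is exact for
# some perturbed matrix A + E with ||E|| on the order of eps * ||A||.
# Here we approximate ||E|| by eps * ||A||_1, as the xGEEVX-based code
# fragment in the previous subsection does.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eps = np.finfo(A.dtype).eps       # machine epsilon for this precision
anorm1 = np.linalg.norm(A, 1)     # ||A||_1 = maximum absolute column sum
e_approx = eps * anorm1           # approximate size of the backward error E

print(e_approx)
```

The resulting number is then divided by the relevant reciprocal condition number to produce the error bounds discussed below.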
xGEEVX (or xTRSNA) returns two quantities for each
$\hat{\lambda}_i$, $\hat{v}_i$ pair: $s_i$ and $\mathrm{sep}_i$.
xGEESX (or xTRSEN) returns two quantities for a selected subset
$\mathcal{I}$ of eigenvalues: $s_{\mathcal{I}}$ and $\mathrm{sep}_{\mathcal{I}}$.
$s_i$ (or $s_{\mathcal{I}}$) is a reciprocal condition number for the
computed eigenvalue $\hat{\lambda}_i$ (or the average $\hat{\lambda}_{\mathcal{I}}$),
and is referred to as RCONDE by xGEEVX (or xGEESX).
$\mathrm{sep}_i$ (or $\mathrm{sep}_{\mathcal{I}}$) is a reciprocal condition number for
the right eigenvector $\hat{v}_i$ (or the invariant subspace $\hat{\mathcal{S}}_{\mathcal{I}}$), and
is referred to as RCONDV by xGEEVX (or xGEESX).
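For a simple eigenvalue, the reciprocal condition number reported as RCONDE can be recomputed from first principles as $s_i = |y_i^H x_i| / (\|y_i\|_2 \|x_i\|_2)$, where $x_i$ and $y_i$ are right and left eigenvectors. A hedged Python sketch (SciPy's `eig` rather than a direct xGEEVX call; the matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import eig

# For a simple eigenvalue lambda_i with right eigenvector x_i and left
# eigenvector y_i, the reciprocal condition number is
#     s_i = |y_i^H x_i| / (||y_i||_2 ||x_i||_2),
# the quantity xGEEVX reports as RCONDE(i) (here recomputed by hand).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

lam, Y, X = eig(A, left=True, right=True)  # Y: left, X: right eigenvectors

# scipy returns eigenvector columns with unit Euclidean norm, so the
# denominators ||y_i|| ||x_i|| are 1 and |y_i^H x_i| is already s_i.
s = np.abs(np.sum(Y.conj() * X, axis=0))

print(lam)
print(s)
```

By the Cauchy-Schwarz inequality each $s_i$ lies in $(0, 1]$; small values signal an ill-conditioned eigenvalue.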
The approximate error bounds for eigenvalues, averages of eigenvalues,
eigenvectors, and invariant subspaces
provided in Table 4.5 are
true for sufficiently small $\|E\|$, which is why they are called asymptotic.
If the problem is ill-conditioned, the asymptotic bounds may hold only
for extremely small $\|E\|$. Therefore, in Table 4.6
we also provide global bounds,
which are guaranteed to hold for all
$\|E\|_F < s \cdot \mathrm{sep}/4$.
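For reference, the asymptotic bounds have the familiar first-order perturbation form sketched below (a summary in the notation above, not a substitute for the precise statements in Table 4.5; $\theta(\cdot,\cdot)$ denotes the acute angle between vectors or subspaces):

```latex
% First-order (asymptotic) bounds: eigenvalue, eigenvalue average,
% eigenvector, and invariant subspace, in that order.
\[
  |\hat{\lambda}_i - \lambda_i| \lesssim \frac{\|E\|_2}{s_i}, \qquad
  |\hat{\lambda}_{\mathcal{I}} - \lambda_{\mathcal{I}}|
    \lesssim \frac{\|E\|_2}{s_{\mathcal{I}}},
\]
\[
  \theta(\hat{v}_i, v_i) \lesssim \frac{\|E\|_F}{\mathrm{sep}_i}, \qquad
  \theta(\hat{\mathcal{S}}_{\mathcal{I}}, \mathcal{S}_{\mathcal{I}})
    \lesssim \frac{\|E\|_F}{\mathrm{sep}_{\mathcal{I}}}.
\]
```

Note that the eigenvalue bounds scale with $\|E\|_2$ and a reciprocal of $s$, while the eigenvector and subspace bounds scale with $\|E\|_F$ and a reciprocal of $\mathrm{sep}$.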
We also have the following bound, which is true for all $E$:
all the $\hat{\lambda}_i$ lie in the union of $n$ disks,
where the $i$th disk is centered at $\lambda_i$ and has
radius $n \|E\|_2 / s_i$. If $k$ of these disks overlap,
so that any two points inside the $k$ disks can be connected
by a continuous curve lying entirely inside the $k$ disks,
and if no larger set of $k+1$ disks has this property,
then exactly $k$ of the $\hat{\lambda}_i$ lie inside the
union of these $k$ disks. Figure 4.1 illustrates
this for a 10-by-10 matrix, with 4 such overlapping unions
of disks, two containing 1 eigenvalue each, one containing 2
eigenvalues, and one containing 6 eigenvalues.
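The disk bound can be checked numerically. The sketch below (NumPy/SciPy again; the matrix and the small perturbation $E$ are arbitrary illustrative choices) verifies that every eigenvalue of $A+E$ falls inside the union of disks centered at the eigenvalues of $A$ with radii $n\|E\|_2/s_i$:

```python
import numpy as np
from scipy.linalg import eig

# Check the bound valid for all E: each eigenvalue of A + E lies in the
# union of n disks, the i-th centered at lambda_i with radius
# n * ||E||_2 / s_i.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]

lam, Y, X = eig(A, left=True, right=True)
s = np.abs(np.sum(Y.conj() * X, axis=0))   # reciprocal condition numbers s_i

E = 1e-8 * np.array([[ 1.0, -2.0,  0.5],
                     [ 0.3,  1.0, -1.0],
                     [-0.7,  0.2,  1.0]])
lam_hat = np.linalg.eigvals(A + E)

radius = n * np.linalg.norm(E, 2) / s      # one disk radius per eigenvalue
# Every perturbed eigenvalue must land in at least one disk.
in_union = [bool(np.any(np.abs(lh - lam) <= radius)) for lh in lam_hat]
print(all(in_union))
```

For this well-separated spectrum each perturbed eigenvalue sits in its own disk; overlapping disks, as in the figure, arise when eigenvalues are close or ill-conditioned.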
Finally, the quantities $s$ and $\mathrm{sep}$ tell us how we can best
(block) diagonalize a matrix $A$ by a similarity,
$V^{-1} A V = \mathrm{diag}(A_{11}, \ldots, A_{bb})$,
where each diagonal block
$A_{ii}$ has a selected subset of the eigenvalues of $A$. Such a decomposition
may be useful in computing functions of matrices, for example.
The goal is to choose a $V$ with a nearly minimum condition number $\kappa(V)$
which performs this decomposition, since this generally minimizes the error
in the decomposition.
This may be done as follows. Let $A_{ii}$ be
$n_i$-by-$n_i$. Then columns $1 + \sum_{j=1}^{i-1} n_j$
through $\sum_{j=1}^{i} n_j$
of $V$ span the invariant
subspace of $A$ corresponding
to the eigenvalues of $A_{ii}$; these columns should be chosen to be any
orthonormal basis of this space (as computed by xGEESX, for example).
Let $s_{\mathcal{I}_i}$ be the value of $s$ corresponding to the
cluster of
eigenvalues of $A_{ii}$, as computed by xGEESX or xTRSEN. Then
$\kappa(V) \leq b / \min_i s_{\mathcal{I}_i}$,
and no other choice of $V$ can make
its condition number smaller than $1 / \min_i s_{\mathcal{I}_i}$
[26].
Thus choosing orthonormal
subblocks of $V$ gets $\kappa(V)$
to within a factor $b$ of its minimum
value.
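An orthonormal basis of the kind described above can be obtained from a Schur factorization. The Python sketch below uses SciPy's `schur` (which wraps xGEES; this is a simplified illustration, not the xGEESX clustering interface) on an arbitrary test matrix built to have real eigenvalues, and checks that the leading Schur vectors span a right invariant subspace:

```python
import numpy as np
from scipy.linalg import schur

# Build a nonsymmetric A with known real eigenvalues {5, 3, 1}, so the real
# Schur form T is genuinely triangular (no 2-by-2 blocks straddling k).
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = V @ np.diag([5.0, 3.0, 1.0]) @ np.linalg.inv(V)

# A = Q T Q^T with Q orthogonal: the leading k Schur vectors Q1 = Q[:, :k]
# are an orthonormal basis of a right invariant subspace, i.e.
# A @ Q1 = Q1 @ T11 up to roundoff, where T11 = T[:k, :k].
T, Q = schur(A)
k = 2
Q1 = Q[:, :k]
T11 = T[:k, :k]

residual = np.linalg.norm(A @ Q1 - Q1 @ T11)
print(residual < 1e-10)
```

Stacking such orthonormal bases, one per eigenvalue cluster, yields the matrix $V$ whose condition number the bound above controls.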
In the case of a real symmetric (or complex Hermitian) matrix,
$s = 1$ and $\mathrm{sep}$ is the absolute gap, as defined in subsection 4.7.
The bounds in Table 4.5 then reduce to the
bounds in subsection 4.7.