General Solution of Systems of Differential Equations with Repeated Eigenvalues
Systems of Differential Equations
Martha L. Abell, James P. Braselton, in Introductory Differential Equations (Fifth Edition), 2018
Repeated Eigenvalues
We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.
1. If the eigenvalue λ = λ1,2 has two corresponding linearly independent eigenvectors v1 and v2, a general solution is X(t) = c1 v1 e^{λt} + c2 v2 e^{λt} = (c1 v1 + c2 v2) e^{λt}. Note: The name "star" was selected due to the shape of the solutions in this case.
2. If the eigenvalue λ = λ1,2 has only one corresponding (linearly independent) eigenvector v1, a general solution is X(t) = c1 v1 e^{λt} + c2 (v1 t + w2) e^{λt}, where w2 satisfies (A − λI)w2 = v1. (A numerical check of both cases is sketched below.)
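For readers who want to check the two cases numerically, here is a minimal sketch (ours, not the authors'; the test matrices are illustrative only) that counts the linearly independent eigenvectors of a 2 × 2 matrix with a repeated eigenvalue by looking at the rank of A − λI.

```python
# A minimal sketch (not from the text): for a 2x2 matrix with a repeated
# eigenvalue, count the linearly independent eigenvectors by checking the
# rank of A - lambda*I. The matrices below are illustrative examples only.
import numpy as np

def classify_repeated(A, tol=1e-10):
    lam = np.linalg.eigvals(A)
    if abs(lam[0] - lam[1]) > tol:
        return "eigenvalues are not repeated"
    lam = lam[0].real
    # rank(A - lam*I) = 0  -> two independent eigenvectors (star node)
    # rank(A - lam*I) = 1  -> one independent eigenvector (degenerate node)
    rank = np.linalg.matrix_rank(A - lam * np.eye(2), tol=tol)
    kind = "star node" if rank == 0 else "degenerate node"
    stability = "stable" if lam < 0 else "unstable" if lam > 0 else "neither"
    return f"lambda = {lam:g}, {kind}, {stability}"

print(classify_repeated(np.array([[2.0, 0.0], [0.0, 2.0]])))    # star (two eigenvectors)
print(classify_repeated(np.array([[-2.0, 1.0], [0.0, -2.0]])))  # degenerate (one eigenvector)
```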
Example 6.37
Classify the equilibrium point (0, 0) in the systems: (a) and (b). Solution: (a) The eigenvalues are found by solving the characteristic equation λ² + 4λ + 4 = (λ + 2)² = 0.
Hence, λ1,2 = −2. In this case, an eigenvector (x, y)ᵀ satisfies (A − λI)v = 0, which reduces to the single condition y = −x/3, so there is only one corresponding (linearly independent) eigenvector. Because λ = −2 < 0, (0, 0) is a degenerate stable node. In this case, the eigenline is y = −x/3. We graph this line in Fig. 6.15A and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as t → ∞ and associate with each arrows directed toward the origin.
(b) Solving the characteristic equation
we have a repeated positive eigenvalue λ1,2. However, because an eigenvector satisfies the system (A − λI)v = 0, and here A − λI is the zero matrix, any nonzero choice of v is an eigenvector. If we select two linearly independent vectors v1 and v2, we obtain two linearly independent eigenvectors corresponding to λ1,2. (Note: The choice of these two vectors does not change the value of the solution, because of the form of the general solution in this case.) Because λ > 0, we classify (0, 0) as a degenerate unstable star node. A general solution of the system is X(t) = (c1 v1 + c2 v2) e^{λt}, so when we eliminate the parameter we find that y is a constant multiple of x. Therefore, the trajectories of this system are lines passing through the origin. In Fig. 6.15B, we graph several trajectories. Because of the positive eigenvalue, we associate with each an arrow directed away from the origin. □
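The qualitative behavior in Example 6.37 can also be seen numerically. The sketch below uses hypothetical matrices of our own choosing (the example's matrices are not reproduced in this excerpt): for a degenerate stable node the trajectory direction tends to the eigenvector direction as t → ∞, while a star node keeps every initial direction.

```python
# Hedged illustration (the book's matrices are not reproduced here): for a
# hypothetical degenerate stable node with repeated eigenvalue -2 and a single
# eigenvector (1, 0)^T, trajectories become tangent to the eigenline as t -> oo;
# for 2*I (a star node) every trajectory is a ray through the origin.
import numpy as np
from scipy.linalg import expm

A_degenerate = np.array([[-2.0, 1.0], [0.0, -2.0]])  # one eigenvector: (1, 0)^T
x0 = np.array([1.0, 1.0])
for t in (0.0, 1.0, 3.0, 6.0):
    x = expm(A_degenerate * t) @ x0
    # direction of the trajectory; it approaches the eigenvector direction
    print(f"t={t:>3}: direction ~ {x / np.linalg.norm(x)}")

A_star = 2.0 * np.eye(2)  # every nonzero vector is an eigenvector
x = expm(A_star * 1.5) @ np.array([1.0, -2.0])
print("star node keeps the initial direction:", x / np.linalg.norm(x))
```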
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128149485000069
Systems of Differential Equations
Martha L. Abell, James P. Braselton, in Introductory Differential Equations (Fourth Edition), 2014
Repeated Eigenvalues
We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.
1. If the eigenvalue λ = λ1,2 has two corresponding linearly independent eigenvectors v1 and v2, a general solution is X(t) = c1 v1 e^{λt} + c2 v2 e^{λt} = (c1 v1 + c2 v2) e^{λt}. Note: The name "star" was selected due to the shape of the solutions.
2. If the eigenvalue λ = λ1,2 has only one corresponding (linearly independent) eigenvector v = v1, a general solution is X(t) = c1 v1 e^{λt} + c2 (v1 t + w2) e^{λt}, where w2 satisfies (A − λI)w2 = v1.
Example 6.6.3
Classify the equilibrium point (0, 0) in the systems: (a) and (b).
Solution
(a) The eigenvalues are found by solving the characteristic equation λ² + 4λ + 4 = (λ + 2)² = 0. Hence, λ1,2 = −2. In this case, an eigenvector (x, y)ᵀ satisfies (A − λI)v = 0, which reduces to the single condition y = −x/3, so there is only one corresponding (linearly independent) eigenvector. Because λ = −2 < 0, (0, 0) is a degenerate stable node. In this case, the eigenline is y = −x/3. We graph this line in Figure 6.15(a) and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as t → ∞ and associate with each arrows directed toward the origin.
(b) Solving the characteristic equation
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124172197000065
Multiple Random Variables
Scott L. Miller, Donald Childers, in Probability and Random Processes, 2004
6.4.2 Quadratic Transformations of Gaussian Random Vectors
In this section, we show how to calculate the PDFs of various quadratic forms of Gaussian random vectors. In particular, given a vector of N zero-mean Gaussian random variables, X, with an arbitrary covariance matrix, C_XX, we form a scalar quadratic function of the vector X of the general form
Z = X^T B X,   (6.40)
where B is an arbitrary N × N matrix. We would then like to find the PDF of the random variable, Z. These types of problems occur frequently in the study of noncoherent communication systems.
One approach to this problem would be first to form the CDF, F_Z(z) = Pr(X^T B X ≤ z). This could be accomplished by computing
F_Z(z) = ∫⋯∫_{A(z)} f_X(x) dx,   (6.41)
where A(z) is the region defined by x^T B x ≤ z. While conceptually straightforward, defining the regions and performing the required integration can get quite involved. Instead, we elect to calculate the PDF of Z by first finding its characteristic function. Once the characteristic function is found, the PDF can be found through an inverse transformation.
For the case of Gaussian random vectors, finding the characteristic function of a quadratic form turns out to be surprisingly straightforward:
Φ_Z(ω) = E[e^{jω X^T B X}] = ∫ exp(jω x^T B x) (2π)^{−N/2} det(C_XX)^{−1/2} exp(−(1/2) x^T C_XX^{−1} x) dx.   (6.42)
This integral is understood to be over the entire N-dimensional x-plane. To evaluate this integral, we simply manipulate the integrand into the standard form of an N-dimensional Gaussian distribution and then use the normalization integral for Gaussian PDFs. Toward that end, define the matrix F according to F^{−1} = C_XX^{−1} − 2jωB. Then
Φ_Z(ω) = √(det F / det C_XX) ∫ (2π)^{−N/2} det(F)^{−1/2} exp(−(1/2) x^T F^{−1} x) dx = √(det F / det C_XX),   (6.43)
where the integral is unity because the integrand is an N-dimensional Gaussian PDF.
Using the fact that det(F^{−1}) = (det F)^{−1}, this can be rewritten in the more convenient form
Φ_Z(ω) = [det(I − 2jω B C_XX)]^{−1/2}.   (6.44)
To get a feel for the functional form of the characteristic function, note that the determinant of a matrix can be written as the product of its eigenvalues. Furthermore, for a matrix of the form A = I + cD, for a constant c, the eigenvalues of A, {λ_A}, can be written in terms of the eigenvalues of the matrix D, {λ_D}, according to λ_A = 1 + cλ_D. Hence,
Φ_Z(ω) = ∏_{n=1}^{N} (1 − 2jω λ_n)^{−1/2},   (6.45)
where the λ_n are the eigenvalues of the matrix BC_XX. The particular functional form of the resulting PDF depends on the specific eigenvalues. Two special cases are considered as examples next.
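As a rough sanity check of (6.45), the following Monte Carlo sketch (our own construction; the matrices B and C_XX below are arbitrary test choices, not from the text) compares the empirical characteristic function of Z = x^T B x with the product over the eigenvalues of B C_XX.

```python
# Rough Monte Carlo check (the test matrices are ours, not the book's) of
# Eq. (6.45): the characteristic function of Z = X^T B X is the product of
# (1 - 2j*w*lambda_n)^(-1/2) over the eigenvalues lambda_n of B C_XX.
import numpy as np

rng = np.random.default_rng(0)
N = 3
B = np.array([[1.0, 0.3, 0.0], [0.3, 2.0, -0.2], [0.0, -0.2, 0.5]])  # arbitrary symmetric B
L = rng.standard_normal((N, N))
Cxx = L @ L.T + N * np.eye(N)           # symmetric positive definite covariance

lams = np.linalg.eigvals(B @ Cxx)       # real for symmetric B and SPD Cxx
X = rng.multivariate_normal(np.zeros(N), Cxx, size=200_000)
Z = np.einsum("ij,jk,ik->i", X, B, X)   # Z = x^T B x for each sample

for w in (0.05, 0.1, 0.2):
    empirical = np.mean(np.exp(1j * w * Z))
    theory = np.prod(1.0 / np.sqrt(1.0 - 2j * w * lams))
    print(f"w={w}: empirical {complex(empirical):.4f}  vs  product formula {complex(theory):.4f}")
```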
6.4.2.1 Special Case #1: B = I
In this case, B = I, so that Z = Σ_{n=1}^{N} X_n². Let's assume further that the X_n are uncorrelated and of equal variance, so that C_XX = σ²I. Then the matrix BC_XX has N repeated eigenvalues, all equal to σ². The resulting characteristic function is
Φ_Z(ω) = (1 − 2jωσ²)^{−N/2}.   (6.46)
This is the characteristic function of a chi-square random variable with N degrees of freedom. The corresponding PDF is
f_Z(z) = z^{N/2 − 1} e^{−z/(2σ²)} / ((2σ²)^{N/2} Γ(N/2)),  z ≥ 0.   (6.47)
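A quick simulation (ours; the values of N and σ are arbitrary) can confirm the chi-square law of (6.46)–(6.47): the sum of squares of N uncorrelated zero-mean Gaussians with variance σ² should pass a Kolmogorov–Smirnov test against a chi-square distribution with N degrees of freedom scaled by σ².

```python
# A small simulation check (our own, with assumed parameter values): when B = I
# and C_XX = sigma^2 * I, Z = sum(X_n^2) should follow a chi-square law with
# N degrees of freedom (scaled by sigma^2), as Eq. (6.47) states.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, sigma = 4, 1.5
X = rng.normal(0.0, sigma, size=(100_000, N))
Z = np.sum(X**2, axis=1)

# Z / sigma^2 is chi-square with N degrees of freedom
ks = stats.kstest(Z, stats.chi2(df=N, scale=sigma**2).cdf)
print("KS statistic:", round(ks.statistic, 4))   # small value -> good agreement
print("sample mean:", Z.mean(), " theory:", N * sigma**2)
```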
6.4.2.2 Special Case #2: Z = X1X2 + X3X4
Again, we take the Xi to be uncorrelated with equal variance, so that C_XX = σ²I. In this case, the product matrix BC_XX has two pairs of repeated eigenvalues, ±σ²/2. The resulting characteristic function is
Φ_Z(ω) = [(1 − jωσ²)(1 + jωσ²)]^{−1} = 1/(1 + ω²σ⁴).   (6.48)
This is the characteristic function of a two-sided exponential (Laplace) random variable,
f_Z(z) = (1/(2σ²)) e^{−|z|/σ²}.   (6.49)
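Similarly, a short simulation (ours; parameter values are arbitrary) can check the Laplace law of (6.48)–(6.49) for Z = X1X2 + X3X4 with C_XX = σ²I.

```python
# Simulation sketch (illustrative, not from the book): with C_XX = sigma^2 * I,
# Z = X1*X2 + X3*X4 should follow the two-sided exponential (Laplace) law of
# Eq. (6.49) with parameter sigma^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma = 1.2
X = rng.normal(0.0, sigma, size=(200_000, 4))
Z = X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3]

ks = stats.kstest(Z, stats.laplace(loc=0.0, scale=sigma**2).cdf)
print("KS statistic:", round(ks.statistic, 4))   # small value -> Laplace fits
```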
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780121726515500063
Multiple Random Variables
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
6.4.2 Quadratic Transformations of Gaussian Random Vectors
In this section, we show how to calculate the PDFs of various quadratic forms of Gaussian random vectors. In particular, given a vector of N zero-mean Gaussian random variables, X, with an arbitrary covariance matrix, C_XX, we form a scalar quadratic function of the vector X of the general form Z = X^T B X,   (6.40)
where B is an arbitrary N × N matrix. We would then like to find the PDF of the random variable, Z. These types of problems occur frequently in the study of noncoherent communication systems.
One approach to this problem would be to first form the CDF, F_Z(z) = Pr(X^T B X ≤ z). This could be accomplished by computing
F_Z(z) = ∫⋯∫_{A(z)} f_X(x) dx,   (6.41)
where A(z) is the region defined by x^T B x ≤ z. While conceptually straightforward, defining the regions and performing the required integration can get quite involved. Instead, we elect to calculate the PDF of Z by first finding its characteristic function. Once the characteristic function is found, the PDF can be found through an inverse transformation.
For the case of Gaussian random vectors, finding the characteristic function of a quadratic form turns out to be surprisingly manageable.
Φ_Z(ω) = E[e^{jω X^T B X}] = ∫ exp(jω x^T B x) (2π)^{−N/2} det(C_XX)^{−1/2} exp(−(1/2) x^T C_XX^{−1} x) dx.   (6.42)
This integral is understood to be over the entire N-dimensional x-plane. To evaluate this integral, we simply manipulate the integrand into the standard form of an N-dimensional Gaussian distribution and then use the normalization integral for Gaussian PDFs. Toward that end, define the matrix F according to F^{−1} = C_XX^{−1} − 2jωB. Then
Φ_Z(ω) = √(det F / det C_XX) ∫ (2π)^{−N/2} det(F)^{−1/2} exp(−(1/2) x^T F^{−1} x) dx = √(det F / det C_XX).   (6.43)
The last step is accomplished using the fact that the integral of a multidimensional Gaussian PDF is unity. In addition, using the matrix property that det(F^{−1}) = (det F)^{−1}, this can be rewritten in the more convenient form
Φ_Z(ω) = [det(I − 2jω B C_XX)]^{−1/2}.   (6.44)
To get a feel for the functional form of the characteristic function, note that the determinant of a matrix can be written as the product of its eigenvalues. Furthermore, for a matrix of the form A = I + cD, for a constant c, the eigenvalues of A, {λ_A}, can be written in terms of the eigenvalues of the matrix D, {λ_D}, according to λ_A = 1 + cλ_D. Therefore,
Φ_Z(ω) = ∏_{n=1}^{N} (1 − 2jω λ_n)^{−1/2},   (6.45)
where the λ_n are the eigenvalues of the matrix BC_XX. The particular functional form of the resulting PDF depends on the specific eigenvalues. Two special cases are considered as examples next.
Example 6.5
In this example, we consider the case where the matrix B is an identity, so that Z is the sum of the squares of Gaussian random variables, Z = Σ_{n=1}^{N} X_n². Further, let us assume that the X_n are uncorrelated and of equal variance, so that C_XX = σ²I. Then, the matrix BC_XX has N repeated eigenvalues, all equal to σ². The resulting characteristic function is Φ_Z(ω) = (1 − 2jωσ²)^{−N/2}. This is the characteristic function of a chi-square random variable with N degrees of freedom. The corresponding PDF is f_Z(z) = z^{N/2 − 1} e^{−z/(2σ²)} / ((2σ²)^{N/2} Γ(N/2)), z ≥ 0.
Example 6.6
For this example, suppose we need to find the PDF of Z = X1X2 + X3X4. In this case, the quantity Z can be expressed in the general quadratic form of Equation (6.40) for a suitable choice of the 4 × 4 matrix B (one standard symmetric choice is shown in the sketch following this example).
Again, we take the Xi to be uncorrelated with equal variance, so that C_XX = σ²I. In this case, the product matrix BC_XX has two pairs of repeated eigenvalues, ±σ²/2. The resulting characteristic function is Φ_Z(ω) = 1/(1 + ω²σ⁴). This is the characteristic function of a two-sided exponential (Laplace) random variable, f_Z(z) = (1/(2σ²)) e^{−|z|/σ²}.
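The specific matrix B of Example 6.6 is not reproduced in this excerpt; the sketch below uses one standard symmetric choice that realizes x^T B x = x1x2 + x3x4 and verifies that BC_XX then has the two repeated eigenvalue pairs ±σ²/2.

```python
# The matrix B of the example is not reproduced above; a standard symmetric
# choice giving x^T B x = x1*x2 + x3*x4 is sketched below, together with a
# check that B C_XX has the repeated eigenvalue pair +/- sigma^2 / 2.
import numpy as np

sigma = 2.0
B = 0.5 * np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
Cxx = sigma**2 * np.eye(4)

x = np.array([1.0, 2.0, 3.0, 4.0])
print("x^T B x =", x @ B @ x, " (should equal x1*x2 + x3*x4 =", x[0]*x[1] + x[2]*x[3], ")")
print("eigenvalues of B C_XX:", np.sort(np.linalg.eigvals(B @ Cxx).real))
```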
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123869814500096
Uncertain Input Data Problems and the Worst Scenario Method
Ivan Hlaváček Dr., ... Ivo Babuška Dr., in North-Holland Series in Applied Mathematics and Mechanics, 2004
25.1 Matrix-Based State Problems
Let us consider a vector a = (a_1, a_2, …, a_k)^T ∈ ℝ^k of input data and a state equation expressed by a linear system
K(a)u = f(a),   (25.1)
where K(a) is an n × n nonsingular matrix and u, f(a) are n-dimensional column vectors. A unique solution u(a) ≡ u exists.
Next, let Φ : ℝ^k × ℝ^n → ℝ, Φ ≡ Φ(a, u), be a criterion-functional.
It is supposed that Φ and the elements of K(a) and f(a) are m-times differentiable with respect to a_1, …, a_k. The implicit function theorem says that u(a) is also m-times differentiable with respect to the input data. As a consequence, Ψ(a) = Φ(a, u(a)) is m-times differentiable, too.
By differentiating Φ and (25.1) with respect to a_j, j ∈ {1, …, k}, we can easily infer (see (Haug et al., 1986, Section 1.2))
dΨ(a)/da_j = ∂Φ/∂a_j + (∂Φ/∂u) K(a)^{−1} (∂f(a)/∂a_j − (∂K(a)/∂a_j) û),   (25.2)
where ∂Φ/∂u stands for (∂Φ/∂u_1, …, ∂Φ/∂u_n) and K(a)û = f(a) holds for û, which is held constant for the process of differentiation.
Let us assume that the matrix K(a) is symmetric and let us set up the adjoint equation K(a)μ = (∂Φ/∂u)^T. Then
dΨ(a)/da_j = ∂Φ/∂a_j + μ^T (∂f(a)/∂a_j − (∂K(a)/∂a_j) û).   (25.3)
If k > 1 and, for example, only one criterion-functional is to be differentiated, then (25.3) is more efficient than (25.2) because (25.3) requires solving only two linear systems to obtain u(a) and μ. If, however, the gradients of a number of criterion-functionals are to be calculated, then this advantage vanishes. To determine whether (25.2) or (25.3) is to be employed, the number of right-hand sides f considered in (25.1) is important too. Take, for instance, structural design, where families of loads are often used. We refer to (Haug et al., 1986, Section 1.2) for a detailed discussion and also for methods delivering the second order derivatives of Ψ.
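To make the comparison between (25.2) and (25.3) concrete, here is a small sketch on a toy problem of our own (the matrices, right-hand sides, and the choice Φ(u) = uᵀu are assumptions, not from the text): the direct method needs one extra solve per parameter a_j, while the adjoint method reuses a single extra solve.

```python
# A small sketch (our own toy problem, not the book's) comparing the direct
# sensitivity formula (25.2) with the adjoint formula (25.3) for
# Psi(a) = Phi(a, u(a)) with K(a) u = f(a), K symmetric.
import numpy as np

K0 = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
K1 = np.diag([1.0, 0.0, 0.0])           # dK/da1
K2 = np.diag([0.0, 1.0, 1.0])           # dK/da2
f0, f1 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0])  # df/da1 = f1, df/da2 = 0

def K(a):  return K0 + a[0] * K1 + a[1] * K2
def f(a):  return f0 + a[0] * f1

a = np.array([0.7, 0.3])
u = np.linalg.solve(K(a), f(a))
dPhi_du = 2.0 * u                        # Phi(u) = u^T u has no explicit a-dependence

dK = [K1, K2]
df = [f1, np.zeros(3)]

# direct method (25.2): one extra solve per parameter a_j
direct = [dPhi_du @ np.linalg.solve(K(a), df[j] - dK[j] @ u) for j in range(2)]

# adjoint method (25.3): a single extra solve, reused for every a_j
mu = np.linalg.solve(K(a), dPhi_du)      # K is symmetric, so K^T = K
adjoint = [mu @ (df[j] - dK[j] @ u) for j in range(2)]

print("direct :", np.round(direct, 6))
print("adjoint:", np.round(adjoint, 6))  # should match the direct result
```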
Let us focus on the differentiation of eigenvalues. We consider the generalized eigenproblem
K(a)y = λ(a) M(a) y,   (25.4)
where y ≡ y(a) ∈ ℝ^n and K(a), M(a) are n × n symmetric positive definite and differentiable matrices.
It is easy to differentiate the eigenvalue λ(a) ≡ λ if its multiplicity equals one. Under the normalization condition y^T M(a) y = 1, we derive (see (Haug et al., 1986, Section 1.3))
∂λ(a)/∂a_j = y^T (∂K(a)/∂a_j) y − λ(a) y^T (∂M(a)/∂a_j) y.   (25.5)
Formula (25.5) is a special case of an algorithm used to differentiate multiple eigenvalues.
Theorem 25.1
Let the eigenvalue λ(a) have multiplicity s ≥ 1 at a, and let â = (â_1, â_2, …, â_k)^T be a nonzero vector. Then the differentials Dλ_i(a, â), i = 1, …, s, of the repeated eigenvalue λ(a) in the direction â exist and are equal to the eigenvalues of an s × s matrix with elements y_i^T [ Σ_{j=1}^{k} â_j (∂K(a)/∂a_j − λ(a) ∂M(a)/∂a_j) ] y_l,  i, l = 1, …, s,
where {yi : i = 1, … , s} is any M(a)-orthonormal basis of the eigenspace associated with λ(a).
Proof. We refer to (Haug et al., 1986, Section 1.3.6) for a proof. □
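The theorem can be checked numerically on a toy generalized eigenproblem (our own construction, with M held constant so that the DM term vanishes): the eigenvalues of the s × s matrix built from an M-orthonormal eigenbasis should match finite-difference approximations of the directional derivatives.

```python
# An illustrative check of Theorem 25.1 (toy matrices of our own choosing,
# M held constant so DM = 0): the directional derivatives of a repeated
# eigenvalue are the eigenvalues of the s x s matrix with entries
# y_i^T (DK(a, a_hat) - lambda * DM(a, a_hat)) y_l.
import numpy as np
from scipy.linalg import eigh

def K(a):
    return np.array([[a[0], a[1], 0.0],
                     [a[1], a[0], 0.0],
                     [0.0,  0.0,  5.0]])

M = np.eye(3)                       # constant "mass" matrix
a = np.array([2.0, 0.0])            # at a2 = 0 the eigenvalue 2 has multiplicity s = 2
a_hat = np.array([0.5, 1.0])        # direction of differentiation

lam, Y = eigh(K(a), M)              # eigenvectors come out M-orthonormal
idx = np.where(np.isclose(lam, 2.0))[0]
Ys = Y[:, idx]                      # basis of the eigenspace of the repeated eigenvalue

dK = [np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 0]]),   # dK/da1
      np.array([[0, 1.0, 0], [1.0, 0, 0], [0, 0, 0]])]   # dK/da2
DK = a_hat[0] * dK[0] + a_hat[1] * dK[1]

small = Ys.T @ DK @ Ys              # s x s matrix of Theorem 25.1 (DM = 0 here)
print("directional derivatives:", np.sort(np.linalg.eigvalsh(small)))

h = 1e-6                            # finite-difference comparison
lam_h = eigh(K(a + h * a_hat), M, eigvals_only=True)
print("finite differences    :", np.sort((lam_h[:2] - 2.0) / h))
```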
Eigenvectors can also be differentiated. However, even for simple eigenvalues, the directional derivative of the corresponding eigenvectors is not given by an explicit formula. To obtain the derivative, it is necessary to solve a linear system reduced to a subspace; more details can be found in (Haug et al., 1986).
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/S0167593104800159
Eigenvalues in Riemannian Geometry
In Pure and Applied Mathematics, 1984
Proof
Consider the functions f of the form f = Σ_{j=1}^{k} α_j φ_j, where φ_1, …, φ_k are orthonormal, with each φ_j an eigenfunction of λ_j, j = 1, …, k, and where f is orthogonal to v_1, …, v_{k−1} in L²(M), that is,
Σ_{j=1}^{k} α_j (φ_j, v_l) = 0,  l = 1, …, k − 1.   (83)
If we think of α_1, …, α_k as unknowns and (φ_j, v_l) as given coefficients, then system (83) has more unknowns than equations, and a nontrivial solution of (83) must exist. But then the Rayleigh quotient of f is at most λ_k, which implies the claim.
Domain Monotonicity of Eigenvalues
(vanishing Dirichlet data): Let Ω_1, …, Ω_m be pairwise disjoint normal domains in M, whose boundaries, when intersecting ∂M, do so transversally. Given an eigenvalue problem on M, consider, for each r = 1, …, m, the eigenvalue problem on Ω_r obtained by requiring vanishing Dirichlet data on ∂Ω_r ∩ M and by leaving the original data on ∂Ω_r ∩ ∂M unchanged. Arrange all the eigenvalues of Ω_1, …, Ω_m in an increasing sequence ν_1 ≤ ν_2 ≤ ⋯, with each eigenvalue repeated according to its multiplicity, and let the eigenvalues of M be given as in (79). Then we have, for all k = 1, 2, …,
λ_k ≤ ν_k.   (84)
Proof
We use the max–min method. For functions in L²(M), pick φ_1, …, φ_{k−1}. For j = 1, …, k let ψ_j : M → ℝ be an eigenfunction of ν_j when restricted to the appropriate subdomain, and identically zero otherwise. Then ψ_j ∈ H(M), and ψ_1, …, ψ_k may be chosen orthonormal in L²(M). As before, there exist α_1, …, α_k, not all equal to zero, satisfying Σ_{j=1}^{k} α_j (ψ_j, φ_l) = 0 for l = 1, …, k − 1. Therefore the function f = Σ_{j=1}^{k} α_j ψ_j is orthogonal to φ_1, …, φ_{k−1} in L²(M), which implies that the Rayleigh quotient of f is at most ν_k; since φ_1, …, φ_{k−1} were arbitrary, λ_k ≤ ν_k, which is the claim.
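Although the statement is for general Riemannian manifolds, a one-dimensional special case already illustrates (84). The sketch below (ours; the interval (0, 1) plays the role of M) uses the exact Dirichlet eigenvalues (kπ/L)² of −u″ on an interval of length L.

```python
# A concrete 1-D illustration (our own, with the interval (0,1) standing in
# for M): Dirichlet eigenvalues of -u'' on (0, L) are (k*pi/L)^2.  Splitting
# (0,1) into (0, 0.4) and (0.4, 1) and merging the subdomain eigenvalues into
# one increasing sequence gives nu_k with lambda_k <= nu_k, as in (84).
import numpy as np

def dirichlet_eigs(length, count):
    return np.array([(k * np.pi / length) ** 2 for k in range(1, count + 1)])

k_max = 6
lam = dirichlet_eigs(1.0, k_max)                       # eigenvalues of the whole interval
nu = np.sort(np.concatenate([dirichlet_eigs(0.4, k_max),
                             dirichlet_eigs(0.6, k_max)]))[:k_max]

for k in range(k_max):
    print(f"k={k+1}: lambda_k = {lam[k]:8.2f} <= nu_k = {nu[k]:8.2f}: {lam[k] <= nu[k]}")
```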
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/S0079816908608090
Systems of linear differential equations
Henry J. Ricardo, in A Modern Introduction to Differential Equations (Third Edition), 2021
6.8.3 Both eigenvalues zero
Finally, let's assume that λ1 = λ2 = 0. If there are two linearly independent eigenvectors V1 and V2, then the general solution is X(t) = c1V1 + c2V2, a single vector of constants. If there is only one linearly independent eigenvector V corresponding to the eigenvalue 0, then we can find a generalized eigenvector W and use formula (6.8.3): X(t) = c1 e^{λt} V + c2 e^{λt} (tV + W).
For λ = 0, we get X(t) = c1V + c2(tV + W) = (c1 + c2 t)V + c2W. In Exercise 15 you will investigate a system that has both eigenvalues zero.
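A brief numerical illustration (with a matrix of our own choosing, not the one in Exercise 15): when both eigenvalues are zero and there is a single independent eigenvector V, we have A² = 0, so e^{At} = I + tA and every trajectory is a straight line parallel to V.

```python
# A minimal sketch (with a matrix of our own choosing, not Exercise 15's):
# when both eigenvalues are 0 and there is a single eigenvector V, A^2 = 0,
# so e^{At} = I + tA and every trajectory is a straight line parallel to V.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double eigenvalue 0, eigenvector V = (1, 0)^T
x0 = np.array([2.0, 1.0])

for t in (0.0, 1.0, 2.0, 3.0):
    x = (np.eye(2) + t * A) @ x0          # exact solution, since A @ A = 0
    print(f"t={t}: x(t) = {x}")           # second component stays fixed: motion parallel to V
```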
Exercises 6.8
A
For each of the Systems 1–8, (a) find the eigenvalues and their corresponding linearly independent eigenvectors and (b) sketch/plot a few trajectories and show the position(s) of the eigenvector(s) if they do not have complex entries. Do part (a) manually, but if the eigenvalues are irrational numbers, you may use technology to find the corresponding eigenvectors.
1.
2.
3.
4.
5.
6.
7.
8.
B
9. Given a characteristic polynomial , what condition on α and β guarantees that there is a repeated eigenvalue?
10. Let . Show that A has only one eigenvalue if and only if .
11. Write a system of first-order linear equations for which is a sink with eigenvalues and .
12. Write a system of first-order linear equations for which is a source with eigenvalues and .
13. Show that if V is an eigenvector of a matrix A corresponding to eigenvalue λ and vector W is a solution of (A − λI)W = V, then V and W are linearly independent. [See Eqs. (6.8.2)–(6.8.3).] [Hint: Suppose that W = cV for some scalar c. Then show that V must be the zero vector.]
14. Suppose that a system has only one eigenvalue λ, and that every eigenvector is a scalar multiple of one fixed eigenvector, V. Then Eq. (6.8.3) tells us that any trajectory has the form .
a. If , show that the slope of approaches the slope of the line determined by V as . [Hint: , as a scalar multiple of , is parallel to .]
b. If , show that the slope of approaches the slope of the line determined by V as .
15. Consider the system , .
a. Show that the only eigenvalue of the system is 0.
b. Find the single independent eigenvector V corresponding to λ = 0.
c. Show that every trajectory of this system is a straight line parallel to V, with trajectories on opposite sides of V moving in opposite directions. [Hint: First, for any trajectory not on the line determined by V, look at its slope, .]
16. If is a system with a double eigenvalue and , show that the general solution of the system is
C
17. Prove that is the general solution of , where .
18. Suppose the matrix A has repeated real eigenvalues λ and there is a pair of linearly independent eigenvectors associated with A. Prove that A = λI.
19. A special case of the Cayley–Hamilton Theorem states that if is the characteristic equation of a matrix A, then is the zero matrix. (We say that a matrix always satisfies its own characteristic equation.) Using this result, show that if a matrix A has a repeated eigenvalue λ and V ≠ 0 (the zero vector), then either V is an eigenvector of A or else (A − λI)V is an eigenvector of A. [See Appendix B.3 if you are not familiar with matrix-matrix multiplication.]
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128182178000130
General Solution of Systems of Differential Equations with Repeated Eigenvalues
Source: https://www.sciencedirect.com/topics/mathematics/repeated-eigenvalue