Systems of Differential Equations

Martha L. Abell , James P. Braselton , in Introductory Differential Equations (Fifth Edition), 2018

Repeated Eigenvalues

We recall from our previous experience with repeated eigenvalues of a 2 × 2 system that the eigenvalue can have two linearly independent eigenvectors associated with it or only one (linearly independent) eigenvector associated with it. We investigate the behavior of solutions in the case of repeated eigenvalues by considering both of these possibilities.

1.

If the eigenvalue $\lambda = \lambda_{1,2}$ has two corresponding linearly independent eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$, a general solution is

$$\mathbf{X}(t) = c_1\mathbf{v}_1 e^{\lambda t} + c_2\mathbf{v}_2 e^{\lambda t} = (c_1\mathbf{v}_1 + c_2\mathbf{v}_2)e^{\lambda t}.$$

If λ > 0 , then X ( t ) becomes unbounded along the lines through ( 0 , 0 ) determined by the vectors c 1 v 1 + c 2 v 2 , where c 1 and c 2 are arbitrary constants. In this case, we call the equilibrium point an unstable star node. However, if λ < 0 , then X ( t ) approaches ( 0 , 0 ) along these lines, and we call ( 0 , 0 ) a stable star node.
2.

If the eigenvalue λ = λ 1 , 2 has only one corresponding (linearly independent) eigenvector v = v 1 , a general solution is

$$\mathbf{X}(t) = c_1\mathbf{v}e^{\lambda t} + c_2(\mathbf{v}t + \mathbf{w})e^{\lambda t} = (c_1\mathbf{v} + c_2\mathbf{w})e^{\lambda t} + c_2\mathbf{v}te^{\lambda t},$$

where $\mathbf{w}$ satisfies $(\mathbf{A} - \lambda\mathbf{I})\mathbf{w} = \mathbf{v}$. If we write this solution as

$$\mathbf{X}(t) = te^{\lambda t}\left[(c_1\mathbf{v} + c_2\mathbf{w})\frac{1}{t} + c_2\mathbf{v}\right],$$

we can more easily investigate the behavior of this solution. If $\lambda < 0$, then $\lim_{t\to\infty} te^{\lambda t} = 0$ and $\lim_{t\to\infty}\left[(c_1\mathbf{v} + c_2\mathbf{w})\frac{1}{t} + c_2\mathbf{v}\right] = c_2\mathbf{v}$. The solutions approach $(0,0)$ along the line through $(0,0)$ determined by $\mathbf{v}$, and we call $(0,0)$ a stable deficient node. If $\lambda > 0$, the solutions become unbounded along this line, and we say that $(0,0)$ is an unstable deficient node.

Note: The name "star" was selected due to the shape of the solutions.
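
To make the two cases concrete, here is a small Python sketch (an illustration added to this summary, not taken from the text; the function name classify_repeated, the tolerance, and the sample matrices are our own choices). It computes the repeated eigenvalue of a 2 × 2 matrix from the trace and determinant and then uses the rank of A − λI to decide between a star node (two independent eigenvectors) and a deficient node (only one).

```python
import numpy as np

def classify_repeated(A, tol=1e-8):
    """Classify (0, 0) for X' = AX when the 2x2 matrix A has a repeated real eigenvalue."""
    A = np.asarray(A, dtype=float)
    tr, det = np.trace(A), np.linalg.det(A)
    if abs(tr**2 - 4*det) > tol:                 # discriminant of the characteristic polynomial
        return "eigenvalues are not repeated"
    lam = tr / 2                                 # the repeated eigenvalue lambda_{1,2}
    # rank(A - lambda*I) = 0 -> two independent eigenvectors (star node);
    # rank(A - lambda*I) = 1 -> only one independent eigenvector (deficient node).
    rank = np.linalg.matrix_rank(A - lam * np.eye(2))
    shape = "star node" if rank == 0 else "deficient node"
    if lam < 0:
        stability = "stable"
    elif lam > 0:
        stability = "unstable"
    else:
        stability = "degenerate (lambda = 0)"
    return f"lambda = {lam:g}: {stability} {shape}"

# Two arbitrary test matrices (not from the text):
print(classify_repeated([[-1, 4], [-1, 3]]))   # lambda = 1: unstable deficient node
print(classify_repeated([[-3, 0], [0, -3]]))   # lambda = -3: stable star node
```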

Example 6.37

Classify the equilibrium point $(0,0)$ in the systems: (a) $\{x' = x + 9y,\ y' = -x - 5y\}$ and (b) $\{x' = 2x,\ y' = 2y\}$.

Solution: (a) The eigenvalues are found by solving

$$\begin{vmatrix} 1-\lambda & 9 \\ -1 & -5-\lambda \end{vmatrix} = \lambda^2 + 4\lambda + 4 = (\lambda + 2)^2 = 0.$$

Hence, $\lambda_{1,2} = -2$. In this case, an eigenvector $\mathbf{v}_1 = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$ satisfies $\begin{pmatrix} 3 & 9 \\ -1 & -3 \end{pmatrix}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, which is equivalent to $\begin{pmatrix} 1 & 3 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, so there is only one corresponding (linearly independent) eigenvector $\mathbf{v}_1 = \begin{pmatrix} -3y_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} -3 \\ 1 \end{pmatrix}y_1$. Because $\lambda = -2 < 0$, $(0,0)$ is a degenerate stable node. In this case, the eigenline is $y = -x/3$. We graph this line in Fig. 6.15A and direct the arrows toward the origin because of the negative eigenvalue. Next, we sketch trajectories that become tangent to the eigenline as $t \to \infty$ and associate with each of them arrows directed toward the origin.

Figure 6.15. (A) Phase portrait for Example 6.37, solution (a). (B) Phase portrait for Example 6.37, solution (b).

(b) Solving the characteristic equation

$$\begin{vmatrix} 2-\lambda & 0 \\ 0 & 2-\lambda \end{vmatrix} = (2-\lambda)^2 = 0,$$

we have $\lambda = \lambda_{1,2} = 2$. However, because an eigenvector $\mathbf{v}_1 = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$ satisfies the system $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, any nonzero choice of $\mathbf{v}_1$ is an eigenvector. If we select two linearly independent vectors such as $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, we obtain two linearly independent eigenvectors corresponding to $\lambda_{1,2} = 2$. (Note: The choice of these two vectors does not change the value of the solution, because of the form of the general solution in this case.) Because $\lambda = 2 > 0$, we classify $(0,0)$ as a degenerate unstable star node. A general solution of the system is $\mathbf{X}(t) = c_1\begin{pmatrix} 1 \\ 0 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{2t}$, so when we eliminate the parameter, we obtain $y = c_2 x/c_1$. Therefore, the trajectories of this system are lines passing through the origin. In Fig. 6.15B, we graph several trajectories. Because of the positive eigenvalue, we associate with each an arrow directed away from the origin.  □
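
As a quick numerical cross-check of part (a) (an added illustration, not part of the original example; the variable names are ours), the following NumPy snippet confirms that A − λI has rank one, that $(-3, 1)^T$ is an eigenvector, and that a generalized eigenvector $\mathbf{w}$ with $(\mathbf{A} - \lambda\mathbf{I})\mathbf{w} = \mathbf{v}$ exists:

```python
import numpy as np

A = np.array([[1.0, 9.0], [-1.0, -5.0]])   # system (a)
lam = -2.0                                  # repeated eigenvalue from (lambda + 2)^2 = 0

M = A - lam * np.eye(2)                     # [[3, 9], [-1, -3]]
print(np.linalg.matrix_rank(M))             # 1 -> only one independent eigenvector

v = np.array([-3.0, 1.0])                   # eigenvector: M @ v = 0
print(M @ v)                                # [0. 0.]

# Generalized eigenvector: solve (A - lambda*I) w = v (least squares, since M is singular).
w, *_ = np.linalg.lstsq(M, v, rcond=None)
print(np.allclose(M @ w, v))                # True
```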

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128149485000069


Multiple Random Variables

Scott L. Miller , Donald Childers , in Probability and Random Processes, 2004

6.4.2 Quadratic Transformations of Gaussian Random Vectors

In this section, we show how to calculate the PDFs of various quadratic forms of Gaussian random vectors. In particular, given a vector of N zero-mean Gaussian random variables, X, with an arbitrary covariance matrix, C xx , we form a scalar quadratic function of the vector X of the general form

(6.40) $Z = \mathbf{X}^{T}\mathbf{B}\mathbf{X},$

where B is an arbitrary N × N matrix. We would then like to find the PDF of the random variable, Z. These types of problems occur frequently in the study of noncoherent communication systems.

One approach to this problem would be first to form the CDF, $F_Z(z) = \Pr\left(\mathbf{X}^T\mathbf{B}\mathbf{X} \le z\right)$. This could be accomplished by computing

(6.41) $F_Z(z) = \int_{A(z)} f_{\mathbf{X}}(\mathbf{x})\,d\mathbf{x},$

where $A(z)$ is the region defined by $\mathbf{x}^T\mathbf{B}\mathbf{x} \le z$. While conceptually straightforward, defining the regions and performing the required integration can get quite involved. Instead, we elect to calculate the PDF of Z by first finding its characteristic function. Once the characteristic function is found, the PDF can be found through an inverse transformation.

For the case of Gaussian random vectors, finding the characteristic function of a quadratic form turns out to be surprisingly straightforward:

(6.42) $\Phi_Z(\omega) = E\left[e^{j\omega\mathbf{X}^T\mathbf{B}\mathbf{X}}\right] = \int \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{C}_{XX})}}\exp\left(-\frac{1}{2}\mathbf{x}^T\left[\mathbf{C}_{XX}^{-1} - 2j\omega\mathbf{B}\right]\mathbf{x}\right)d\mathbf{x}.$

This integral is understood to be over the entire N-dimensional $\mathbf{x}$-space. To evaluate this integral, we simply manipulate the integrand into the standard form of an N-dimensional Gaussian distribution and then use the normalization integral for Gaussian PDFs. Toward that end, define the matrix $\mathbf{F}$ according to $\mathbf{F}^{-1} = \mathbf{C}_{XX}^{-1} - 2j\omega\mathbf{B}$. Then

(6.43) $\Phi_Z(\omega) = \int \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{C}_{XX})}}\exp\left(-\frac{1}{2}\mathbf{x}^T\mathbf{F}^{-1}\mathbf{x}\right)d\mathbf{x} = \sqrt{\frac{\det(\mathbf{F})}{\det(\mathbf{C}_{XX})}}\int \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{F})}}\exp\left(-\frac{1}{2}\mathbf{x}^T\mathbf{F}^{-1}\mathbf{x}\right)d\mathbf{x} = \sqrt{\frac{\det(\mathbf{F})}{\det(\mathbf{C}_{XX})}},$

where the remaining integral equals unity because its integrand is a properly normalized N-dimensional Gaussian PDF.

Using the fact that $\det(\mathbf{F}^{-1}) = (\det(\mathbf{F}))^{-1}$, this can be rewritten in the more convenient form

(6.44) $\Phi_Z(\omega) = \sqrt{\frac{\det(\mathbf{F})}{\det(\mathbf{C}_{XX})}} = \frac{1}{\sqrt{\det(\mathbf{F}^{-1})\det(\mathbf{C}_{XX})}} = \frac{1}{\sqrt{\det(\mathbf{F}^{-1}\mathbf{C}_{XX})}} = \frac{1}{\sqrt{\det(\mathbf{I} - 2j\omega\mathbf{B}\mathbf{C}_{XX})}}.$

To get a feel for the functional form of the characteristic function, note that the determinant of a matrix can be written as the product of its eigenvalues. Furthermore, for a matrix of the form $\mathbf{A} = \mathbf{I} + c\mathbf{D}$, for a constant c, the eigenvalues of $\mathbf{A}$, $\{\lambda_A\}$, can be written in terms of the eigenvalues of the matrix $\mathbf{D}$, $\{\lambda_D\}$, according to $\lambda_A = 1 + c\lambda_D$. Hence,

(6.45) $\Phi_Z(\omega) = \prod_{n=1}^{N}\frac{1}{\sqrt{1 - 2j\omega\lambda_n}},$

where the $\lambda_n$ are the eigenvalues of the matrix $\mathbf{B}\mathbf{C}_{XX}$. The particular functional form of the resulting PDF depends on the specific eigenvalues. Two special cases are considered as examples next.
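
The determinant form (6.44) is easy to evaluate numerically. The sketch below (an added illustration; the function names, sample count, and the particular values of ω, σ², and N are our own choices) evaluates (6.44) directly, compares it with the closed form $(1 - 2j\omega\sigma^2)^{-N/2}$ for the first special case considered next, and checks both against a Monte Carlo estimate of $E[e^{j\omega Z}]$.

```python
import numpy as np

rng = np.random.default_rng(0)

def char_fn_quadratic(omega, B, Cxx):
    """Phi_Z(omega) for Z = X^T B X, X ~ N(0, Cxx), via Eq. (6.44)."""
    N = B.shape[0]
    return 1.0 / np.sqrt(np.linalg.det(np.eye(N) - 2j * omega * B @ Cxx))

def char_fn_monte_carlo(omega, B, Cxx, n_samples=200_000):
    """Monte Carlo estimate of E[exp(j*omega*X^T B X)]."""
    X = rng.multivariate_normal(np.zeros(B.shape[0]), Cxx, size=n_samples)
    Z = np.einsum('ni,ij,nj->n', X, B, X)
    return np.mean(np.exp(1j * omega * Z))

# Special case B = I, Cxx = sigma^2 I, for which Phi_Z = (1 - 2j*omega*sigma^2)^(-N/2).
N, sigma2, omega = 4, 1.5, 0.3
B, Cxx = np.eye(N), sigma2 * np.eye(N)
print(char_fn_quadratic(omega, B, Cxx))       # from Eq. (6.44)
print((1 - 2j * omega * sigma2) ** (-N / 2))  # closed form
print(char_fn_monte_carlo(omega, B, Cxx))     # agrees up to sampling noise
```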

6.4.2.1 Special Case #1: $\mathbf{B} = \mathbf{I}$, $Z = \sum_{n=1}^{N} X_n^2$

In this case, let's assume further that the $X_n$ are uncorrelated with equal variance so that $\mathbf{C}_{XX} = \sigma^2\mathbf{I}$. Then the matrix $\mathbf{B}\mathbf{C}_{XX}$ has N repeated eigenvalues, all equal to $\sigma^2$. The resulting characteristic function is

(6.46) $\Phi_Z(\omega) = (1 - 2j\omega\sigma^2)^{-N/2}.$

This is the characteristic function of a chi-square random variable with N degrees of freedom. The corresponding PDF is

(6.47) $f_Z(z) = \frac{z^{N/2 - 1}}{(2\sigma^2)^{N/2}\,\Gamma(N/2)}\exp\left(-\frac{z}{2\sigma^2}\right)u(z).$

6.4.2.2 Special Case #2: $N = 4$, $\mathbf{B} = \frac{1}{2}\begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$, $Z = X_1 X_2 + X_3 X_4$

Again, we take the $X_i$ to be uncorrelated with equal variance so that $\mathbf{C}_{XX} = \sigma^2\mathbf{I}$. In this case, the product matrix $\mathbf{B}\mathbf{C}_{XX}$ has two pairs of repeated eigenvalues with values $\pm\sigma^2/2$. The resulting characteristic function is

(6.48) $\Phi_Z(\omega) = \frac{1}{1 + j\omega\sigma^2}\cdot\frac{1}{1 - j\omega\sigma^2} = \frac{1}{1 + (\omega\sigma^2)^2}.$

This is the characteristic function of a two-sided exponential (Laplace) random variable,

(6.49) $f_Z(z) = \frac{1}{2\sigma^2}\exp\left(-\frac{|z|}{\sigma^2}\right).$
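
A quick simulation (our own sketch; the variance, sample size, and bin choices are arbitrary) can be used to confirm that $Z = X_1X_2 + X_3X_4$ indeed follows the Laplace density (6.49):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, n = 2.0, 500_000

# Independent N(0, sigma^2) samples; form Z = X1*X2 + X3*X4 for each draw.
X = rng.normal(scale=np.sqrt(sigma2), size=(n, 4))
Z = X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3]

# Compare an empirical histogram with the Laplace density of Eq. (6.49).
edges = np.linspace(-10, 10, 81)
centers = 0.5 * (edges[:-1] + edges[1:])
hist, _ = np.histogram(Z, bins=edges, density=True)
laplace = np.exp(-np.abs(centers) / sigma2) / (2 * sigma2)
print(np.max(np.abs(hist - laplace)))   # small (binning plus Monte Carlo error)
```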

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780121726515500063


Uncertain Input Data Problems and the Worst Scenario Method

Ivan Hlaváček Dr. , ... Ivo Babuška Dr. , in North-Holland Series in Applied Mathematics and Mechanics, 2004

25.1 Matrix-Based State Problems

Let us consider a vector a = (a 1, a 2,…, ak )T ∈ ℝ k of input data and a state equation expressed by a linear system

(25.1) K ( a ) u = f ( a )

where K(a) is an n × n nonsingular matrix and u, f(a) are n-dimensional column vectors. A unique solution u(a) ≡ u exists.

Next, let Φ : ℝ k × ℝ n → ℝ, Φ ≡ Φ(a, u), be a criterion-functional.

It is supposed that Φ and the elements of K(a) and f(a) are m-times differentiable with respect to $a_1, \ldots, a_k$. The implicit function theorem says that u(a) is also m-times differentiable with respect to the input data. As a consequence, Ψ(a) = Φ(a, u(a)) is m-times differentiable too.

By differentiating Φ and (25.1) with respect to aj, j ∈ {1,…,k}, we can easily infer (see (Haug et al., 1986, Section 1.2))

(25.2) $\dfrac{\partial \Psi(a)}{\partial a_j} = \dfrac{\partial \Phi(a,\hat{u})}{\partial a_j} + \dfrac{\partial \Phi(a,\hat{u})}{\partial u}\,K^{-1}(a)\left[\dfrac{\partial f(a)}{\partial a_j} - \dfrac{\partial}{\partial a_j}\bigl(K(a)\hat{u}\bigr)\right],$

where $\partial\Phi/\partial u$ stands for $(\partial\Phi/\partial u_1, \ldots, \partial\Phi/\partial u_n)$ and $K(a)\hat{u} = f(a)$ holds for $\hat{u}$, which is held constant during the differentiation.

Let us assume that the matrix K(a) is symmetric and let us set up the adjoint equation

$$K(a)\mu = \left(\dfrac{\partial \Phi(a,\hat{u})}{\partial u}\right)^{T}.$$

Then

(25.3) $\dfrac{\partial \Psi(a)}{\partial a_j} = \dfrac{\partial \Phi(a,\hat{u})}{\partial a_j} + \mu^{T}\dfrac{\partial}{\partial a_j}\bigl(f(a) - K(a)\hat{u}\bigr), \qquad j = 1, \ldots, k.$

If k > 1 and, for example, only one criterion-functional is to be differentiated, then (25.3) is more efficient than (25.2) because (25.3) requires solving only two linear systems to obtain u(a) and μ. If, however, the gradients of a number of criterion-functionals are to be calculated, then this advantage vanishes. To determine whether (25.2) or (25.3) is to be employed, the number of right-hand sides f considered in (25.1) is important too. Take, for instance, structural design, where families of loads are often used. We refer to (Haug et al., 1986, Section 1.2) for a detailed discussion and also for methods delivering the second order derivatives of Ψ.
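
A small numerical sketch may make the comparison concrete (this example is ours, not from the text; the 3 × 3 system, the parameterization of K(a) and f(a), and the criterion-functional are arbitrary choices). It evaluates the gradient of Ψ with the adjoint formula (25.3) and checks it against a finite-difference approximation:

```python
import numpy as np

# Symmetric K(a) = K0 + a1*K1 + a2*K2 and load f(a) = f0 + a1*f1 (arbitrary test data).
K0 = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])
K1 = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
K2 = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
f0, f1 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 0.0])

K = lambda a: K0 + a[0] * K1 + a[1] * K2
f = lambda a: f0 + a[0] * f1
dK = [K1, K2]                      # dK/da_j
df = [f1, np.zeros(3)]             # df/da_j

def Psi(a):
    """Criterion-functional Phi(a, u) = u.u + a1 evaluated at the state u(a)."""
    u = np.linalg.solve(K(a), f(a))
    return u @ u + a[0]

a = np.array([0.3, 0.2])
u_hat = np.linalg.solve(K(a), f(a))
dPhi_da, dPhi_du = np.array([1.0, 0.0]), 2.0 * u_hat

# Adjoint equation (K symmetric): K(a) mu = (dPhi/du)^T, then formula (25.3).
mu = np.linalg.solve(K(a), dPhi_du)
grad_adj = np.array([dPhi_da[j] + mu @ (df[j] - dK[j] @ u_hat) for j in range(2)])

# Finite-difference check of the same gradient.
eps = 1e-6
grad_fd = np.array([(Psi(a + eps * e) - Psi(a - eps * e)) / (2 * eps) for e in np.eye(2)])
print(np.allclose(grad_adj, grad_fd, atol=1e-5))   # True
```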

Let us focus on the differentiation of eigenvalues. We consider the generalized eigenproblem

(25.4) K ( a ) y = λ M ( a ) y ,

where $y \equiv y(a) \in \mathbb{R}^n$ and K(a), M(a) are n × n symmetric positive definite and differentiable matrices.

It is easy to differentiate the eigenvalue λ(a) ≡ λ if its multiplicity equals one. Under the normalization condition y T M(a)y = 1, we derive (see (Haug et al., 1986, Section 1.3))

(25.5) $\dfrac{\partial \lambda(a)}{\partial a_j} = y^{T}\dfrac{\partial K(a)}{\partial a_j}\,y - \lambda(a)\,y^{T}\dfrac{\partial M(a)}{\partial a_j}\,y.$

Formula (25.5) is a special case of an algorithm used to differentiate multiple eigenvalues.

Theorem 25.1

Let the eigenvalue λ(a) have multiplicity s ≥ 1 at a, and let â = (â 1, â 2,..., âk )T be a nonzero vector. Then the differentials Dλ i (a,â), i = 1,…, s, of the repeated eigenvalue λ(a) in the direction â exist and are equal to the eigenvalues of an s × s matrix M with elements

$$M_{ij} = \sum_{\ell=1}^{k}\left(y_i^{T}\,\dfrac{\partial K(a)}{\partial a_\ell}\,y_j\right)\hat{a}_\ell \;-\; \lambda(a)\sum_{\ell=1}^{k}\left(y_i^{T}\,\dfrac{\partial M(a)}{\partial a_\ell}\,y_j\right)\hat{a}_\ell, \qquad i, j = 1, \ldots, s,$$

where {yi : i = 1, … , s} is any M(a)-orthonormal basis of the eigenspace associated with λ(a).

Proof. We refer to (Haug et al., 1986, Section 1.3.6) for a proof.  

Eigenvectors can also be differentiated. However, even for simple eigenvalues, the directional derivative of the corresponding eigenvectors is not given by an explicit formula. To obtain the derivative, it is necessary to solve a linear system reduced to a subspace; see (Haug et al., 1986) for more details.
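
To illustrate formula (25.5), the following sketch (ours, not from the chapter; the matrices, parameter value, and step size are arbitrary) differentiates a simple eigenvalue of a small generalized eigenproblem $K(a)y = \lambda M(a)y$ with respect to a scalar parameter and compares the result with a finite-difference approximation. It uses SciPy's generalized symmetric eigensolver, whose eigenvectors satisfy the normalization $y^T M(a) y = 1$.

```python
import numpy as np
from scipy.linalg import eigh

# K(a), M(a): symmetric positive definite matrices depending smoothly on one parameter a.
K = lambda a: np.array([[5.0 + a, 1.0], [1.0, 3.0]])
M = lambda a: np.array([[2.0, 0.5 * a], [0.5 * a, 1.0]])
dK = np.array([[1.0, 0.0], [0.0, 0.0]])   # dK/da
dM = np.array([[0.0, 0.5], [0.5, 0.0]])   # dM/da

a0 = 0.4
lams, Y = eigh(K(a0), M(a0))      # generalized eigenpairs; columns of Y satisfy y^T M y = 1
lam, y = lams[0], Y[:, 0]         # a simple eigenvalue and its eigenvector

# Formula (25.5): dlambda/da = y^T (dK/da) y - lambda * y^T (dM/da) y.
dlam = y @ dK @ y - lam * (y @ dM @ y)

# Finite-difference check.
eps = 1e-6
lam_p = eigh(K(a0 + eps), M(a0 + eps), eigvals_only=True)[0]
lam_m = eigh(K(a0 - eps), M(a0 - eps), eigvals_only=True)[0]
print(dlam, (lam_p - lam_m) / (2 * eps))   # the two values agree closely
```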

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0167593104800159

Eigenvalues in Riemannian Geometry

In Pure and Applied Mathematics, 1984

Proof

Consider the functions f of the form $f = \sum_{j=1}^{k}\alpha_j\phi_j$, where $\phi_1, \ldots, \phi_k$ are orthonormal, with each $\phi_j$ an eigenfunction of $\lambda_j$, $j = 1, \ldots, k$, and where f is orthogonal to $v_1, \ldots, v_{k-1}$ in $L^2(M)$, that is,

(83) $0 = \sum_{j=1}^{k}\alpha_j(\phi_j, v_l), \qquad l = 1, \ldots, k-1.$

If we think of $\alpha_1, \ldots, \alpha_k$ as unknowns and $(\phi_j, v_l)$ as given coefficients, then system (83) has more unknowns than equations, and a nontrivial solution of (83) must exist. But then $\mu\|f\|^2 \le D[f,f] = \sum_{j=1}^{k}\lambda_j\alpha_j^2 \le \lambda_k\|f\|^2$, which implies the claim.

Domain Monotonicity Of Eigenvalues

(vanishing Dirichlet data): Let $\Omega_1, \ldots, \Omega_m$ be pairwise disjoint normal domains in M, whose boundaries, when intersecting ∂M, do so transversally. Given an eigenvalue problem on M, consider, for each r = 1, …, m, the eigenvalue problem on $\Omega_r$ obtained by requiring vanishing Dirichlet data on $\partial\Omega_r \cap M$ and by leaving the original data on $\partial\Omega_r \cap \partial M$ unchanged. Arrange all the eigenvalues of $\Omega_1, \ldots, \Omega_m$ in an increasing sequence $0 \le \nu_1 \le \nu_2 \le \cdots$, with each eigenvalue repeated according to its multiplicity, and let the eigenvalues of M be given as in (79). Then we have, for all k = 1, 2, …,

(84) $\lambda_k \le \nu_k.$

Proof

We use the max–min method. For the functions in $L^2$ pick $\phi_1, \ldots, \phi_{k-1}$. For j = 1, …, k let $\psi_j : \overline{M} \to \mathbb{R}$ be an eigenfunction of $\nu_j$ when restricted to the appropriate subdomain, and identically zero otherwise. Then $\psi_j \in H(M)$, and $\psi_1, \ldots, \psi_k$ may be chosen orthonormal in $L^2(M)$. As before, there exist $\alpha_1, \ldots, \alpha_k$, not all equal to zero, satisfying $\sum_{j=1}^{k}\alpha_j(\psi_j, \phi_l) = 0$, $l = 1, \ldots, k-1$. Therefore the function $f = \sum_{j=1}^{k}\alpha_j\psi_j$ is orthogonal to $\phi_1, \ldots, \phi_{k-1}$ in $L^2(M)$, which implies $\lambda_k\|f\|^2 \le D[f,f] = \sum_{j=1}^{k}\nu_j\alpha_j^2 \le \nu_k\|f\|^2$, which is the claim.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0079816908608090

Systems of linear differential equations

Henry J. Ricardo , in A Modern Introduction to Differential Equations (Third Edition), 2021

6.8.3 Both eigenvalues zero

Finally, let's assume that λ 1 = λ 2 = 0 . If there are two linearly independent eigenvectors V 1 and V 2 , then the general solution is X ( t ) = c 1 e 0 t V 1 + c 2 e 0 t V 2 = c 1 V 1 + c 2 V 2 , a single vector of constants. If there is only one linearly independent eigenvector V corresponding to the eigenvalue 0, then we can find a generalized eigenvector and use formula (6.8.3):

X ( t ) = c 1 e λ t V + c 2 [ t e λ t V + e λ t W ] .

For λ = 0 , we get X ( t ) = c 1 V + c 2 [ t V + W ] = ( c 1 + c 2 t ) V + c 2 W . In Exercise 15 you will investigate a system that has both eigenvalues zero.
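
As a concrete illustration of the one-eigenvector case with λ = 0 (a sketch added here; the matrix and constants are arbitrary and are not taken from the text or its exercises), the following code finds an eigenvector V and a generalized eigenvector W for a 2 × 2 matrix whose only eigenvalue is 0 and verifies that $X(t) = (c_1 + c_2 t)V + c_2 W$ satisfies $\dot{X} = AX$:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-4.0, -2.0]])   # trace = 0 and det = 0, so lambda = 0 (repeated)

print(np.linalg.eigvals(A))                 # both eigenvalues are (numerically) zero
print(np.linalg.matrix_rank(A))             # 1 -> only one independent eigenvector

V = np.array([1.0, -2.0])                   # eigenvector: A @ V = 0
W = np.array([0.0, 1.0])                    # generalized eigenvector: A @ W = V
print(A @ V, A @ W)

# With lambda = 0, X(t) = (c1 + c2*t)V + c2*W, so X'(t) = c2*V while A X(t) = c2*(A @ W) = c2*V.
c1, c2, t = 1.5, -0.7, 2.0
X = (c1 + c2 * t) * V + c2 * W
print(np.allclose(A @ X, c2 * V))           # True: X'(t) = A X(t)
```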

Exercises 6.8

A

For each of the Systems 1–8, (a) find the eigenvalues and their corresponding linearly independent eigenvectors and (b) sketch/plot a few trajectories and show the position(s) of the eigenvector(s) if they do not have complex entries. Do part (a) manually, but if the eigenvalues are irrational numbers, you may use technology to find the corresponding eigenvectors.

1.

$\dot{x} = 3x$, $\dot{y} = 3y$

2.

$\dot{x} = -4x$, $\dot{y} = x - 4y$

3.

$\dot{x} = 2x + y$, $\dot{y} = 4y - x$

4.

$\dot{x} = 3x - y$, $\dot{y} = 4x - y$

5.

$\dot{x} = 2y - 3x$, $\dot{y} = y - 2x$

6.

$\dot{x} = 5x + 3y$, $\dot{y} = -3x - y$

7.

$\dot{x} = -3x - y$, $\dot{y} = x - y$

8.

$\dot{x} = 2x + 5y$, $\dot{y} = 2y$

B

9.

Given a characteristic polynomial λ 2 + α λ + β , what condition on α and β guarantees that there is a repeated eigenvalue?

10.

Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Show that A has only one eigenvalue if and only if $[\operatorname{trace}(A)]^2 - 4\det(A) = 0$.

11.

Write a system of first-order linear equations for which $(0,0)$ is a sink with eigenvalues $\lambda_1 = -2$ and $\lambda_2 = -2$.

12.

Write a system of first-order linear equations for which $(0,0)$ is a source with eigenvalues $\lambda_1 = 3$ and $\lambda_2 = 3$.

13.

Show that if V is an eigenvector of a 2 × 2 matrix A corresponding to eigenvalue λ and the vector W is a solution of $(A - \lambda I)W = V$, then V and W are linearly independent. [See Eqs. (6.8.2) and (6.8.3).] [Hint: Suppose that $W = cV$ for some scalar c. Then show that V must be the zero vector.]

14.

Suppose that a system $\dot{X} = AX$ has only one eigenvalue λ, and that every eigenvector is a scalar multiple of one fixed eigenvector, V. Then Eq. (6.8.3) tells us that any trajectory has the form $X(t) = c_1 e^{\lambda t}V + c_2\left[te^{\lambda t}V + e^{\lambda t}W\right] = te^{\lambda t}\left[\frac{1}{t}(c_1 V + c_2 W) + c_2 V\right]$.

a.

If λ < 0, show that the slope of X(t) approaches the slope of the line determined by V as $t \to \infty$. [Hint: $\frac{e^{-\lambda t}}{t}X(t)$, as a scalar multiple of X(t), is parallel to X(t).]

b.

If λ > 0, show that the slope of X(t) approaches the slope of the line determined by V as $t \to \infty$.

15.

Consider the system $\dot{x} = 6x + 4y$, $\dot{y} = -9x - 6y$.

a.

Show that the only eigenvalue of the system is 0.

b.

Find the single independent eigenvector V corresponding to λ = 0 .

c.

Show that every trajectory of this system is a straight line parallel to V, with trajectories on opposite sides of V moving in opposite directions. [Hint: First, for any trajectory not on the line determined by V, look at its slope, d y / d x .]

16.

If $\{\dot{x} = ax + by,\ \dot{y} = cx + dy\}$ is a system with a double eigenvalue and $a \ne d$, show that the general solution of the system is

$$c_1 e^{\lambda t}\begin{bmatrix} 2b \\ d-a \end{bmatrix} + c_2 e^{\lambda t}\left(t\begin{bmatrix} 2b \\ d-a \end{bmatrix} + \begin{bmatrix} 0 \\ 2 \end{bmatrix}\right),$$

where λ = ( a + d ) / 2 .

C

17.

Prove that $c_1 e^{\lambda t}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 e^{\lambda t}\begin{bmatrix} t \\ 1 \end{bmatrix}$ is the general solution of $\dot{X} = AX$, where $A = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}$.

18.

Suppose the matrix A has a repeated real eigenvalue λ and there is a pair of linearly independent eigenvectors associated with A. Prove that $A = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}$.

19.

A special case of the Cayley–Hamilton Theorem states that if $\lambda^2 + \alpha\lambda + \beta = 0$ is the characteristic equation of a matrix A, then $A^2 + \alpha A + \beta I$ is the zero matrix. (We say that a 2 × 2 matrix always satisfies its own characteristic equation.) Using this result, show that if a 2 × 2 matrix A has a repeated eigenvalue λ and $V = \begin{bmatrix} x \\ y \end{bmatrix} \ne \mathbf{0}$ (the zero vector), then either V is an eigenvector of A or else $(A - \lambda I)V$ is an eigenvector of A. [See Appendix B.3 if you are not familiar with matrix-matrix multiplication.]

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128182178000130