
INTRODUCTION TO LINEAR ALGEBRA, Fourth Edition
MANUAL FOR INSTRUCTORS

Gilbert Strang
Massachusetts Institute of Technology
math.mit.edu/linearalgebra
web.mit.edu/18.06
video lectures: ocw.mit.edu
www.wellesleycambridge.com
email: gs@math.mit.edu
Wellesley-Cambridge Press, Box 812060, Wellesley, Massachusetts 02482
math.mit.edu/gs


Solutions to Exercises

Problem Set 1.1, page 8

1 The combinations give (a) a line in R^3 (b) a plane in R^3 (c) all of R^3.

2 v + w = (2, 3) and v - w = (6, -1) will be the diagonals of the parallelogram with v and w as two sides going out from (0, 0).

3 This problem gives the diagonals v + w and v - w of the parallelogram and asks for the sides: the opposite of Problem 2. In this example v = (3, 3) and w = (2, -2).

4 3v + w = (7, 5) and cv + dw = (2c + d, c + 2d).

5 u + v = (-2, 3, 1) and u + v + w = (0, 0, 0) and 2u + 2v + w = (add first answers) = (-2, 3, 1). The vectors u, v, w are in the same plane because a combination gives (0, 0, 0). Stated another way: u = -v - w is in the plane of v and w.

6 The components of every cv + dw add to zero. c = 3 and d = 9 give (3, 3, -6).

7 The nine combinations c(2, 1) + d(0, 1) with c = 0, 1, 2 and d = 0, 1, 2 will lie on a lattice. If we took all whole numbers c and d, the lattice would lie over the whole plane.

8 The other diagonal is v - w (or else w - v). Adding diagonals gives 2v (or 2w).

9 The fourth corner can be (4, 4) or (4, 0) or (-2, 2). Three possible parallelograms!

10 i - j = (1, -1, 0) is in the base (x-y plane). i + j + k = (1, 1, 1) is the opposite corner from (0, 0, 0). Points in the cube have 0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1.

11 Four more corners (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1). The center point is (1/2, 1/2, 1/2). Centers of faces are (1/2, 1/2, 0), (1/2, 1/2, 1) and (0, 1/2, 1/2), (1, 1/2, 1/2) and (1/2, 0, 1/2), (1/2, 1, 1/2).

12 A four-dimensional cube has 2^4 = 16 corners and 2·4 = 8 three-dimensional faces and 24 two-dimensional faces and 32 edges, as in Worked Example 2.4 A.

13 Sum = zero vector. Sum = -(2:00 vector) = 8:00 vector. 2:00 is 30 degrees from horizontal = (cos(pi/6), sin(pi/6)) = (sqrt(3)/2, 1/2).

14 Moving the origin to 6:00 adds j = (0, 1) to every vector. So the sum of twelve vectors changes from 0 to 12j = (0, 12).

15 The point (3/4)v + (1/4)w is three-fourths of the way to v starting from w. The vector (1/4)v + (1/4)w is halfway to u = (1/2)v + (1/2)w. The vector v + w is 2u (the far corner of the parallelogram).

16 All combinations with c + d = 1 are on the line that passes through v and w. The point V = -v + 2w is on that line but it is beyond w.

17 All vectors cv + cw are on the line passing through (0, 0) and u = (1/2)v + (1/2)w. That line continues out beyond v + w and back beyond (0, 0). With c >= 0, half of this line is removed, leaving a ray that starts at (0, 0).

18 The combinations cv + dw with 0 <= c <= 1 and 0 <= d <= 1 fill the parallelogram with sides v and w. For example, if v = (1, 0) and w = (0, 1) then cv + dw fills the unit square.

19 With c >= 0 and d >= 0 we get the infinite "cone" or "wedge" between v and w. For example, if v = (1, 0) and w = (0, 1), then the cone is the whole quadrant x >= 0, y >= 0. Question: What if w = -v? The cone opens to a half-space.

20 (a) (1/3)u + (1/3)v + (1/3)w is the center of the triangle between u, v and w; (1/2)u + (1/2)w lies between u and w. (b) To fill the triangle keep c >= 0, d >= 0, e >= 0, and c + d + e = 1.

21 The sum is (v - u) + (w - v) + (u - w) = zero vector. Those three sides of a triangle are in the same plane!

22 The vector (1/2)(u + v + w) is outside the pyramid because c + d + e = 1/2 + 1/2 + 1/2 > 1.

23 All vectors are combinations of u, v, w as drawn (not in the same plane). Start by seeing that cu + dv fills a plane, then adding ew fills all of R^3.

24 The combinations of u and v fill one plane. The combinations of v and w fill another plane. Those planes meet in a line: only the vectors cv are in both planes.

25 (a) For a line, choose u = v = w = any nonzero vector. (b) For a plane, choose u and v in different directions. A combination like w = u + v is in the same plane.

26 Two equations come from the two components: c + 3d = 14 and 2c + d = 8. The solution is c = 2 and d = 4. Then 2(1, 2) + 4(3, 1) = (14, 8).

27 The combinations of i = (1, 0, 0) and i + j = (1, 1, 0) fill the xy plane in xyz space.

28 There are 6 unknown numbers v1, v2, v3, w1, w2, w3. The six equations come from the components of v + w = (4, 5, 6) and v - w = (2, 5, 8). Add to find 2v = (6, 10, 14), so v = (3, 5, 7) and w = (1, 0, -1).

29 Two combinations out of infinitely many that produce b = (0, 1) are 2u + v and (1/2)w - (1/2)v. No, three vectors u, v, w in the x-y plane could fail to produce b if all three lie on a line that does not contain b. Yes, if one combination produces b then two (and infinitely many) combinations will produce b. This is true even if u = 0; the combinations can have different cu.

30 The combinations of v and w fill the plane unless v and w lie on the same line through (0, 0). Four vectors whose combinations fill 4-dimensional space: one example is the "standard basis" (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1).

31 The equations cu + dv + ew = b are
2c - d = 1
-c + 2d - e = 0
-d + 2e = 0
So d = 2e, then c = 3e, then 4e = 1: c = 3/4, d = 2/4, e = 1/4.
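As a quick numerical check of Problem 31, the 3 by 3 system for (c, d, e) can be handed to NumPy (a Python stand-in here for the MATLAB used elsewhere in this manual):

```python
import numpy as np

# System from Problem 31: c*u + d*v + e*w = b written as A @ (c, d, e) = b
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 0.0])

cde = np.linalg.solve(A, b)
print(cde)   # c = 3/4, d = 2/4, e = 1/4
```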

Problem Set 1.2, page 19

1 u·v = -1.8 + 3.2 = 1.4, u·w = -4.8 + 4.8 = 0, v·w = 24 + 24 = 48 = w·v.

2 ||u|| = 1 and ||v|| = 5 and ||w|| = 10. Then 1.4 < (1)(5) and 48 < (5)(10), confirming the Schwarz inequality.

3 Unit vectors v/||v|| = (3/5, 4/5) = (.6, .8) and w/||w|| = (4/5, 3/5) = (.8, .6). The cosine of theta is (v/||v||)·(w/||w||) = 24/25. The vectors w, u, -w make 0, 90, 180 degree angles with w.

4 (a) v·(-v) = -1 (b) (v + w)·(v - w) = v·v + w·v - v·w - w·w = 1 + w·v - v·w - 1 = 0, so theta = 90 degrees (notice v·w = w·v) (c) (v - 2w)·(v + 2w) = v·v - 4w·w = 1 - 4 = -3.
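The numbers in Problems 1 and 2 can be reproduced directly. The vectors below are inferred from the dot products printed above (u a unit vector, v of length 5, w of length 10), so treat them as an illustration rather than the text's official data:

```python
import numpy as np

u = np.array([-0.6, 0.8])   # assumed unit vector
v = np.array([ 3.0, 4.0])   # assumed, length 5
w = np.array([ 8.0, 6.0])   # assumed, length 10

# Dot products from Problem 1, and the Schwarz bound |v.w| <= ||v|| ||w||
print(u @ v, u @ w, v @ w)   # approximately 1.4, 0.0, 48.0
print(abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w))   # True
```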


5 u1 = v/||v|| = (3, 1)/sqrt(10) and u2 = w/||w|| = (2, 1, 2)/3. U1 = (1, -3)/sqrt(10) is perpendicular to u1 (and so is (-1, 3)/sqrt(10)). U2 could be (1, -2, 0)/sqrt(5): there is a whole plane of vectors perpendicular to u2, and a whole circle of unit vectors in that plane.

6 All vectors w = (c, 2c) are perpendicular to v. All vectors (x, y, z) with x + y + z = 0 lie on a plane. All vectors perpendicular to (1, 1, 1) and (1, 2, 3) lie on a line.

7 (a) cos(theta) = v·w/||v|| ||w|| = 1/(2)(1), so theta = 60 degrees or pi/3 radians (b) cos(theta) = 0, so theta = 90 degrees or pi/2 radians (c) cos(theta) = 2/(2)(2) = 1/2, so theta = 60 degrees or pi/3 (d) cos(theta) = -1/sqrt(2), so theta = 135 degrees or 3pi/4.

8 (a) False: v and w are any vectors in the plane perpendicular to u (b) True: u·(v + 2w) = u·v + 2u·w = 0 (c) True: ||u - v||^2 = (u - v)·(u - v) splits into u·u + v·v = 2 when u·v = v·u = 0.

9 If v2w2/v1w1 = -1 then v2w2 = -v1w1, or v1w1 + v2w2 = v·w = 0: perpendicular!

10 Slopes 2/1 and -1/2 multiply to give -1: then v·w = 0 and the vectors (the directions) are perpendicular.

11 v·w < 0 means angle > 90 degrees; these w's fill half of 3-dimensional space.

12 (1, 1) is perpendicular to (1, 5) - c(1, 1) if 6 - 2c = 0 or c = 3; v·(w - cv) = 0 if c = v·w/v·v. Subtracting cv is the key to perpendicular vectors.

13 The plane perpendicular to (1, 0, 1) contains all vectors (c, d, -c). In that plane, v = (1, 0, -1) and w = (0, 1, 0) are perpendicular.

14 One possibility among many: u = (1, -1, 0, 0), v = (0, 0, 1, -1), w = (1, 1, -1, -1) and (1, 1, 1, 1) are perpendicular to each other. "We can rotate those u, v, w in their 3D hyperplane."

15 (1/2)(x + y) = (2 + 8)/2 = 5; cos(theta) = 2·sqrt(16)/(sqrt(10)·sqrt(10)) = 8/10.

16 ||v||^2 = 1 + 1 + ... + 1 = 9, so ||v|| = 3; u = v/3 = (1/3, ..., 1/3) is a unit vector in 9D; w = (1, -1, 0, ..., 0)/sqrt(2) is a unit vector in the 8D hyperplane perpendicular to v.

17 cos(alpha) = 1/sqrt(2), cos(beta) = 0, cos(gamma) = -1/sqrt(2). For any vector v, cos^2(alpha) + cos^2(beta) + cos^2(gamma) = (v1^2 + v2^2 + v3^2)/||v||^2 = 1.

18 ||v||^2 = 4^2 + 2^2 = 20 and ||w||^2 = (-1)^2 + 2^2 = 5. Pythagoras is ||(3, 4)||^2 = 25 = 20 + 5.

19 Start from the rules (1), (2), (3) for v·w = w·v and u·(v + w) and (cv)·w. Use rule (2) for (v + w)·(v + w) = (v + w)·v + (v + w)·w. By rule (1) this is v·(v + w) + w·(v + w). Rule (2) again gives v·v + v·w + w·v + w·w = v·v + 2v·w + w·w. Notice v·w = w·v! The main point is to be free to open up parentheses.

20 We know that (v - w)·(v - w) = v·v - 2v·w + w·w. The Law of Cosines writes ||v|| ||w|| cos(theta) for v·w. When theta < 90 degrees this v·w is positive, so in this case v·v + w·w is larger than ||v - w||^2.

21 2v·w <= 2||v|| ||w|| leads to ||v + w||^2 = v·v + 2v·w + w·w <= ||v||^2 + 2||v|| ||w|| + ||w||^2. This is (||v|| + ||w||)^2. Taking square roots gives ||v + w|| <= ||v|| + ||w||.

22 v1^2 w1^2 + 2 v1 w1 v2 w2 + v2^2 w2^2 <= v1^2 w1^2 + v1^2 w2^2 + v2^2 w1^2 + v2^2 w2^2 is true (cancel 4 terms) because the difference is v1^2 w2^2 + v2^2 w1^2 - 2 v1 w1 v2 w2, which is (v1 w2 - v2 w1)^2 >= 0.


23 cos(beta) = w1/||w|| and sin(beta) = w2/||w||. Then cos(beta - alpha) = cos(beta)cos(alpha) + sin(beta)sin(alpha) = v1w1/||v|| ||w|| + v2w2/||v|| ||w|| = v·w/||v|| ||w||. This is cos(theta) because beta - alpha = theta.

24 Example 6 gives |u1||U1| <= (1/2)(u1^2 + U1^2) and |u2||U2| <= (1/2)(u2^2 + U2^2). The whole line becomes .96 <= (.6)(.8) + (.8)(.6) <= (1/2)(.6^2 + .8^2) + (1/2)(.8^2 + .6^2) = 1. True: .96 < 1.

25 The cosine of theta is x/sqrt(x^2 + y^2), near side over hypotenuse. Then |cos(theta)|^2 is not greater than 1: x^2/(x^2 + y^2) <= 1.

26 The vectors w = (x, y) with (1, 2)·w = x + 2y = 5 lie on a line in the xy plane. The shortest w on that line is (1, 2). (The Schwarz inequality ||w|| >= v·w/||v|| = sqrt(5) is an equality when theta = 0 and w = (1, 2) and ||w|| = sqrt(5).)

27 The length ||v - w|| is between 2 and 8 (triangle inequality when ||v|| = 5 and ||w|| = 3). The dot product v·w is between -15 and 15 by the Schwarz inequality.

28 Three vectors in the plane could make angles greater than 90 degrees with each other: for example (1, 0), (-1, 4), (-1, -4). Four vectors could not do this (360 degrees total angle). How many can do this in R^3 or R^n? Ben Harris and Greg Marks showed me that the answer is n + 1: the vectors from the center of a regular simplex in R^n to its n + 1 vertices all have negative dot products. If n + 2 vectors in R^n had negative dot products, project them onto the plane orthogonal to the last one. Now you have n + 1 vectors in R^(n-1) with negative dot products. Keep going to 4 vectors in R^2: no way!

29 For a specific example, pick v = (1, 2, -3) and then w = (-3, 1, 2). In this example cos(theta) = v·w/||v|| ||w|| = -7/(sqrt(14)·sqrt(14)) = -1/2 and theta = 120 degrees. This always happens when x + y + z = 0:
v·w = xz + xy + yz = (1/2)(x + y + z)^2 - (1/2)(x^2 + y^2 + z^2).
This is the same as v·w = 0 - (1/2)||v|| ||w||. Then cos(theta) = -1/2.

30 Wikipedia gives this proof that the geometric mean G = (xyz)^(1/3) <= arithmetic mean A = (x + y + z)/3. First there is equality in case x = y = z. Otherwise A is somewhere between the three positive numbers, say for example z < A < y. Use the known inequality g <= a for the two positive numbers x and y + z - A. Their mean a = (1/2)(x + y + z - A) is (1/2)(3A - A) = the same as A! So a >= g says that A^2 >= g^2 = x(y + z - A). Multiply by A to find A^3 >= x(y + z - A)A. But (y + z - A)A = (y - A)(A - z) + yz > yz. Substitute to find A^3 > xyz = G^3, as we wanted to prove. Not easy! There are many proofs of G = (x1 x2 ... xn)^(1/n) <= A = (x1 + x2 + ... + xn)/n. In calculus you are maximizing G on the plane x1 + x2 + ... + xn = n. The maximum occurs when all x's are equal.

31 The columns of the 4 by 4 "Hadamard matrix" (times 1/2) are perpendicular unit vectors:
(1/2)H = (1/2)[1 1 1 1; 1 -1 1 -1; 1 1 -1 -1; 1 -1 -1 1].

32 The commands V = randn(3, 30); D = sqrt(diag(V'*V)); U = V\D; will give 30 random unit vectors in the columns of U. Then u'*U is a row matrix of 30 dot products whose average absolute value may be close to 2/pi.
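The orthonormality claimed in Problem 31 is easy to verify numerically; here is a short NumPy sketch (NumPy standing in for MATLAB):

```python
import numpy as np

# Problem 31: the scaled Hadamard matrix H/2 has perpendicular unit columns
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)
Q = H / 2

print(Q.T @ Q)   # the identity matrix: columns are orthonormal
```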


Problem Set 1.3, page 29

1 2s1 + 3s2 + 4s3 = (2, 5, 9). The same vector b comes from S times x = (2, 3, 4):
[1 0 0; 1 1 0; 1 1 1](2, 3, 4) = ((row 1)·x, (row 2)·x, (row 3)·x) = (2, 5, 9).

2 The solutions are y1 = 1, y2 = 0, y3 = 0 (right side = column 1) and y1 = 1, y2 = 3, y3 = 5. That second example illustrates that the first n odd numbers add to n^2.

3 y1 = B1, y1 + y2 = B2, y1 + y2 + y3 = B3 gives y1 = B1, y2 = -B1 + B2, y3 = -B2 + B3, or
(y1, y2, y3) = [1 0 0; -1 1 0; 0 -1 1](B1, B2, B3).
The inverse of S = [1 0 0; 1 1 0; 1 1 1] is A = [1 0 0; -1 1 0; 0 -1 1]: independent columns in A and S!

4 The combination 0w1 + 0w2 + 0w3 always gives the zero vector, but this problem looks for other zero combinations (then the vectors are dependent, they lie in a plane): w2 = (w1 + w3)/2, so one combination that gives zero is (1/2)w1 - w2 + (1/2)w3.

5 The rows of the 3 by 3 matrix in Problem 4 must also be dependent: r2 = (1/2)(r1 + r3). The column and row combinations that produce 0 are the same: this is unusual.

6 c = 3: [1 3 5; 1 2 4; 1 1 3] has column 3 = 2(column 1) + column 2.
c = -1: [1 0 -1; 1 1 0; 0 1 1] has column 3 = -(column 1) + column 2.
c = 0: [0 0 0; 2 1 5; 3 3 6] has column 3 = 3(column 1) - column 2.

7 All three rows are perpendicular to the solution x (the three equations r1·x = 0 and r2·x = 0 and r3·x = 0 tell us this). Then the whole plane of the rows is perpendicular to x (the plane is also perpendicular to all multiples cx).

8 x1 = b1, x2 - x1 = b2, x3 - x2 = b3, x4 - x3 = b4 gives x1 = b1, x2 = b1 + b2, x3 = b1 + b2 + b3, x4 = b1 + b2 + b3 + b4. In matrix form x = A^(-1)b with
A^(-1) = [1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1].

9 The cyclic difference matrix C has a line of solutions (in 4 dimensions) to Cx = 0:
[1 0 0 -1; -1 1 0 0; 0 -1 1 0; 0 0 -1 1](x1, x2, x3, x4) = (0, 0, 0, 0) when x = (c, c, c, c) = any constant vector.

10 z2 - z1 = b1, z3 - z2 = b2, -z3 = b3 give z3 = -b3, z2 = -b2 - b3, z1 = -b1 - b2 - b3:
(z1, z2, z3) = -[1 1 1; 0 1 1; 0 0 1](b1, b2, b3).

11 The forward differences of the squares are (t + 1)^2 - t^2 = t^2 + 2t + 1 - t^2 = 2t + 1. Differences of the nth power are (t + 1)^n - t^n = t^n + n t^(n-1) + ... - t^n. The leading term is the derivative n t^(n-1). The binomial theorem gives all the terms of (t + 1)^n.

12 Centered difference matrices of even size seem to be invertible. Look at equations 1 and 4:
[0 1 0 0; -1 0 1 0; 0 -1 0 1; 0 0 -1 0](x1, x2, x3, x4) = (b1, b2, b3, b4).
First solve x2 = b1 and -x3 = b4; then x1 = -b2 - b4 and x4 = b1 + b3.

13 Odd size: the five centered difference equations are x2 = b1, x3 - x1 = b2, x4 - x2 = b3, x5 - x3 = b4, -x4 = b5. Add equations 1, 3, 5: the left side of the sum is zero but the right side is b1 + b3 + b5. There cannot be a solution unless b1 + b3 + b5 = 0.

14 An example is (a, b) = (3, 6) and (c, d) = (1, 2). The ratios a/c and b/d are equal. Then ad = bc. Then (when you divide by bd) the ratios a/b and c/d are equal!
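Problem 9's cyclic difference matrix can be checked numerically: every constant vector is a solution of Cx = 0, so C is singular with a one-dimensional line of solutions. A NumPy sketch:

```python
import numpy as np

# Problem 9: cyclic difference matrix (each row is x_i - x_{i-1}, wrapping around)
C = np.array([[ 1,  0,  0, -1],
              [-1,  1,  0,  0],
              [ 0, -1,  1,  0],
              [ 0,  0, -1,  1]], dtype=float)
x = np.ones(4)                      # the constant vector (1, 1, 1, 1)

print(C @ x)                        # zero vector: Cx = 0
print(np.linalg.matrix_rank(C))     # 3, so a line of solutions in R^4
```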

Problem Set 2.1, page 40

1 The columns are i = (1, 0, 0) and j = (0, 1, 0) and k = (0, 0, 1) and b = (2, 3, 4) = 2i + 3j + 4k.

2 The planes are the same: 2x = 4 is x = 2, 3y = 9 is y = 3, and 4z = 16 is z = 4. The solution is the same point X = x. The columns are changed; but same combination.

3 The solution is not changed! The second plane and row 2 of the matrix and all columns of the matrix (vectors in the column picture) are changed.

4 If z = 2 then x + y = 0 and x - y = z give the point (1, -1, 2). If z = 0 then x + y = 6 and x - y = 4 produce (5, 1, 0). Halfway between those is (3, 0, 1).

5 If x, y, z satisfy the first two equations they also satisfy the third equation. The line L of solutions contains v = (1, 1, 0) and w = (1/2, 1, 1/2) and u = (1/2)v + (1/2)w and all combinations cv + dw with c + d = 1.

6 Equation 1 + equation 2 - equation 3 is now 0 = -4. Line misses plane; no solution.

7 Column 3 = Column 1 makes the matrix singular. Solutions (x, y, z) = (1, 1, 0) or (0, 1, 1), and you can add any multiple of (-1, 0, 1); b = (4, 6, c) needs c = 10 for solvability (then b lies in the plane of the columns).

8 Four planes in 4-dimensional space normally meet at a point. The solution to Ax = (3, 3, 3, 2) is x = (0, 0, 1, 2) if A has columns (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1). The equations are x + y + z + t = 3, y + z + t = 3, z + t = 3, t = 2.

9 (a) Ax = (18, 5, 0) and (b) Ax = (3, 4, 5, 5).
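The triangular system in Problem 8 is easy to confirm with NumPy (used here as a stand-in for MATLAB):

```python
import numpy as np

# Problem 8: columns (1,0,0,0), (1,1,0,0), (1,1,1,0), (1,1,1,1)
A = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)
b = np.array([3.0, 3.0, 3.0, 2.0])

x = np.linalg.solve(A, b)
print(x)   # (0, 0, 1, 2), as found by back substitution
```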


10 Multiplying as linear combinations of the columns gives the same Ax. By rows or by columns: 9 separate multiplications for 3 by 3.

11 Ax equals (14, 22) and (0, 0) and (9, 7).

12 Ax equals (z, y, x) and (0, 0, 0) and (3, 3, 6).

13 (a) x has n components and Ax has m components (b) Planes from each equation in Ax = b are in n-dimensional space, but the columns are in m-dimensional space.

14 2x + 3y + z + 5t = 8 is Ax = b with the 1 by 4 matrix A = [2 3 1 5]. The solutions x fill a 3D "plane" in 4 dimensions. It could be called a hyperplane.

15 (a) I = [1 0; 0 1] (b) P = [0 1; 1 0].

16 90 degree rotation from R = [0 1; -1 0], 180 degree rotation from R^2 = [-1 0; 0 -1] = -I.

17 P = [0 1 0; 0 0 1; 1 0 0] produces (y, z, x) and Q = [0 0 1; 1 0 0; 0 1 0] recovers (x, y, z). Q is the inverse of P.

18 E = [1 0; -1 1] and E = [1 0 0; -1 1 0; 0 0 1] subtract the first component from the second.

19 E = [1 0 0; 0 1 0; 1 0 1] and E^(-1) = [1 0 0; 0 1 0; -1 0 1]; Ev = (3, 4, 8) and E^(-1)(Ev) recovers (3, 4, 5).

20 P1 = [1 0; 0 0] projects onto the x-axis and P2 = [0 0; 0 1] projects onto the y-axis. v = (5, 7) has P1 v = (5, 0) and P2 P1 v = (0, 0).

21 R = (sqrt(2)/2)[1 -1; 1 1] rotates all vectors by 45 degrees. The columns of R are the results from rotating (1, 0) and (0, 1)!

22 The dot product Ax = [1 4 5](x, y, z) = (1 by 3)(3 by 1) is zero for points (x, y, z) on a plane in three dimensions. The columns of A are one-dimensional vectors.

23 A = [1 2; 3 4] and x = [5 -2]' and b = [1 7]'. r = b - A*x prints as zero.

24 A*v = [3 4 5]' and v'*v = 50. But v*A gives an error message from 3 by 1 times 3 by 3.

25 ones(4, 4)*ones(4, 1) = [4 4 4 4]'; B*w = [10 10 10 10]'.

26 The row picture has two lines meeting at the solution (4, 2). The column picture will have 4(1, 1) + 2(-2, 1) = 4(column 1) + 2(column 2) = right side (0, 6).

27 The row picture shows 2 planes in 3-dimensional space. The column picture is in 2-dimensional space. The solutions normally lie on a line.
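Problem 17's cyclic permutation and its inverse can be checked directly; note that the inverse Q is just the transpose of P:

```python
import numpy as np

# Problem 17: P cycles (x, y, z) to (y, z, x); Q = P^T undoes it
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
Q = P.T
v = np.array([1, 2, 3])   # (x, y, z)

print(P @ v)              # (2, 3, 1) = (y, z, x)
print(Q @ (P @ v))        # (1, 2, 3) recovered
```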


28 The row picture shows four lines in the 2D plane. The column picture is in four-dimensional space. No solution unless the right side is a combination of the two columns.

29 u2 = (.7, .3) and u3 = (.65, .35). The components add to 1. They are always positive. u7, v7, w7 are all close to (.6, .4). Their components still add to 1.

30 [.8 .3; .2 .7](.6, .4) = (.6, .4) = steady state s. No change when multiplied by [.8 .3; .2 .7].

31 M = [8 3 4; 1 5 9; 6 7 2] = [5+u 5-u+v 5-v; 5-u-v 5 5+u+v; 5+v 5+u-v 5-u] with u = 3, v = 1. M3(1, 1, 1) = (15, 15, 15); M4(1, 1, 1, 1) = (34, 34, 34, 34) because 1 + 2 + ... + 16 = 136, which is 4(34).

32 A is singular when its third column w is a combination cu + dv of the first columns. A typical column picture has b outside the plane of u, v, w. A typical row picture has the intersection line of two planes parallel to the third plane. Then no solution.

33 w = (5, 7) is 5u + 7v. Then Aw equals 5 times Au plus 7 times Av.

34 [2 -1 0 0; -1 2 -1 0; 0 -1 2 -1; 0 0 -1 2](x1, x2, x3, x4) = (1, 2, 3, 4) has the solution x = (4, 7, 8, 6).

35 x = (1, ..., 1) gives Sx = sum of each row = 1 + ... + 9 = 45 for Sudoku matrices. The 6 row orders (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1) are in Section 2.7. The same 6 permutations of blocks of rows produce Sudoku matrices, so 6^4 = 1296 orders of the 9 rows all stay Sudoku. (And also 1296 permutations of the 9 columns.)

Problem Set 2.2, page 51

1 Multiply equation 1 by l21 = 10/2 = 5 and subtract to find 2x + 3y = 14 and -6y = 6. The pivots to circle are 2 and -6.

2 -6y = 6 gives y = -1. Then 2x + 3y = 1 gives x = 2. Multiplying the right side (1, 11) by 4 will multiply the solution by 4 to give the new solution (x, y) = (8, -4).

3 Subtract -1/2 (or add 1/2) times equation 1. The new second equation is 3y = 3. Then y = 1 and x = 5. If the right side changes sign, so does the solution: (x, y) = (-5, -1).

4 Subtract l = c/a times equation 1. The new second pivot multiplying y is d - (cb/a), or (ad - bc)/a. Then y = (ag - cf)/(ad - bc).

5 6x + 4y is 2 times 3x + 2y. There is no solution unless the right side is 2·10 = 20. Then all the points on the line 3x + 2y = 10 are solutions, including (0, 5) and (4, -1). (The two lines in the row picture are the same line, containing all solutions.)

6 Singular system if b = 4, because 4x + 8y is 2 times 2x + 4y. Then g = 32 makes the lines become the same: infinitely many solutions like (8, 0) and (0, 4).

7 If a = 2 elimination must fail (two parallel lines in the row picture). The equations have no solution. With a = 0, elimination will stop for a row exchange. Then 3y = -3 gives y = -1 and 4x + 6y = 6 gives x = 3.

8 If k = 3 elimination must fail: no solution. If k = -3, elimination gives 0 = 0 in equation 2: infinitely many solutions. If k = 0 a row exchange is needed: one solution.

9 On the left side, 6x - 4y is 2 times (3x - 2y). Therefore we need b2 = 2b1 on the right side. Then there will be infinitely many solutions (two parallel lines become one single line).

10 The equation y = 1 comes from elimination (subtract x + y = 5 from x + 2y = 6). Then x = 4 and 5x - 4y = c = 16.

11 (a) Another solution is (1/2)(x + X, y + Y, z + Z). (b) If 25 planes meet at two points, they meet along the whole line through those two points.

12 Elimination leads to an upper triangular system; then comes back substitution:
2x + 3y + z = 8
y + 3z = 4
8z = 8
gives x = 2, y = 1, z = 1. If a zero is at the start of row 2 or 3, that avoids a row operation.

13 The system 2x - 3y = 3, 4x - 5y + z = 7, 2x - y - 3z = 5 gives 2x - 3y = 3, y + z = 1, 2y - 3z = 2 and then 2x - 3y = 3, y + z = 1, -5z = 0. Back substitution gives x = 3, y = 1, z = 0. The operations: subtract 2 times row 1 from row 2, subtract 1 times row 1 from row 3, subtract 2 times row 2 from row 3.

14 Subtract 2 times row 1 from row 2 to reach (d - 10)y - z = 2. Equation (3) is y - z = 3. If d = 10 exchange rows 2 and 3. If d = 11 the system becomes singular.

15 The second pivot position will contain -2 - b. If b = -2 we exchange with row 3. If b = -1 (singular case) the second equation is -y - z = 0. A solution is (1, 1, -1).

16 Example of two row exchanges (exchange 1 and 2, then 2 and 3):
0x + 0y + 2z = 4
x + 2y + 2z = 5
0x + 3y + 4z = 6
Example of breakdown (rows 1 and 3 are not consistent):
0x + 3y + 4z = 4
x + 2y + 2z = 5
0x + 3y + 4z = 6

17 If row 1 = row 2, then row 2 is zero after the first step; exchange the zero row with row 3 and there is no third pivot. If column 2 = column 1, then column 2 has no pivot.

18 Example: x + 2y + 3z = 0, 4x + 8y + 12z = 0, 5x + 10y + 15z = 0 has 9 different coefficients but rows 2 and 3 become 0 = 0: infinitely many solutions.

19 Row 2 becomes 3y - 4z = 5, then row 3 becomes (q + 4)z = t - 5. If q = -4 the system is singular: no third pivot. Then if t = 5 the third equation is 0 = 0. Choosing z = 1, the equation 3y - 4z = 5 gives y = 3 and equation 1 gives x = -9.

20 Singular if row 3 is a combination of rows 1 and 2. From the end view, the three planes form a triangle. This happens if row 1 + row 2 = row 3 on the left side but not the right side: for example x + y + z = 0, x - 2y - z = 1, 2x - y = 4. No parallel planes but still no solution.

21 (a) Pivots 2, 3/2, 4/3, 5/4 in the equations 2x + y = 0, (3/2)y + z = 0, (4/3)z + t = 0, (5/4)t = 5 after elimination. Back substitution gives t = 4, z = -3, y = 2, x = -1. (b) If the off-diagonal entries change from +1 to -1, the pivots are the same. The solution is (1, 2, 3, 4) instead of (-1, 2, -3, 4).

22 The fifth pivot is 6/5 for both matrices (1's or -1's off the diagonal). The nth pivot is (n + 1)/n.
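The pivots and the solution in Problem 21(a) can be reproduced with a few lines of NumPy. The loop below is a bare-bones sketch of elimination without row exchanges, recording each pivot as it appears:

```python
import numpy as np

# Problem 21(a): 2x + y = 0, x + 2y + z = 0, y + 2z + t = 0, z + 2t = 5
A = np.array([[2., 1., 0., 0.],
              [1., 2., 1., 0.],
              [0., 1., 2., 1.],
              [0., 0., 1., 2.]])
b = np.array([0., 0., 0., 5.])

U = A.copy()
pivots = []
for k in range(4):                    # forward elimination, no exchanges
    pivots.append(float(U[k, k]))
    for i in range(k + 1, 4):
        U[i] -= (U[i, k] / U[k, k]) * U[k]

print(pivots)                  # [2.0, 1.5, 1.333..., 1.25]
print(np.linalg.solve(A, b))   # (-1, 2, -3, 4)
```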


23 If ordinary elimination leads to x + y = 1 and 2y = 3, the original second equation could be 2y + l(x + y) = 3 + l for any l. Then l will be the multiplier to reach 2y = 3.

24 Elimination fails on [a 2; a a] if a = 2 or a = 0.

25 a = 2 (equal columns), a = 4 (equal rows), a = 0 (zero column).

26 Solvable for s = 10 (add the two pairs of equations to get a + b + c + d on the left sides, 12 and 2 + s on the right sides). The four equations for a, b, c, d are singular! Two solutions are [1 3; 1 7] and [0 4; 2 6]; A = [1 1 0 0; 1 0 1 0; 0 1 0 1; 0 0 1 1] and U = [1 1 0 0; 0 -1 1 0; 0 0 1 1; 0 0 0 0].

27 Elimination leaves the diagonal matrix diag(3, 2, 1) in 3x = 3, 2y = 2, z = 4. Then x = 1, y = 1, z = 4.

28 A(2, :) = A(2, :) - 3*A(1, :) subtracts 3 times row 1 from row 2.

29 The average pivots for rand(3) without row exchanges were 1/2, 5, 10 in one experiment, but pivots 2 and 3 can be arbitrarily large. Their averages are actually infinite! With row exchanges in MATLAB's lu code, the averages .75 and .50 and .365 are much more stable (and should be predictable, also for randn with normal instead of uniform probability distribution).

30 If A(5, 5) is 7 not 11, then the last pivot will be 0 not 4.

31 Row j of U is a combination of rows 1, ..., j of A. If Ax = 0 then Ux = 0 (not true if b replaces 0). U is the diagonal of A when A is lower triangular.

32 The question deals with 100 equations Ax = 0 when A is singular. (a) Some linear combination of the 100 rows is the row of 100 zeros. (b) Some linear combination of the 100 columns is the column of zeros. (c) A very singular matrix has all ones: A = ones(100). A better example has 99 random rows; the 100th row could be the sum of the first 99 rows (or any other combination of those rows with no zeros). (d) The row picture has 100 planes meeting along a common line through 0. The column picture has 100 vectors all in the same 99-dimensional hyperplane.
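Both singular constructions from Problem 32(c) are easy to exhibit in NumPy; the rank computations confirm singularity:

```python
import numpy as np

# Problem 32(c): two very singular 100 by 100 matrices
A = np.ones((100, 100))
print(np.linalg.matrix_rank(A))    # 1: every row is the same

B99 = np.random.randn(99, 100)
B = np.vstack([B99, B99.sum(axis=0)])   # row 100 = sum of rows 1..99
print(np.linalg.matrix_rank(B))         # at most 99: B is singular
```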

Problem Set 2.3, page 63

1 E21 = [1 0 0; -5 1 0; 0 0 1], E32 = [1 0 0; 0 1 0; 0 7 1], P = [1 0 0; 0 0 1; 0 1 0][0 1 0; 1 0 0; 0 0 1] = [0 1 0; 0 0 1; 1 0 0].

2 E32 E21 b = (1, -5, -35) but E21 E32 b = (1, -5, 0). When E32 comes first, row 3 feels no effect from row 1.

3 E21 = [1 0 0; -4 1 0; 0 0 1], E31 = [1 0 0; 0 1 0; 2 0 1], E32 = [1 0 0; 0 1 0; 0 -2 1]; M = E32 E31 E21 = [1 0 0; -4 1 0; 10 -2 1].

4 Elimination on column 4: b = (1, 0, 0) becomes (1, -4, 0), then (1, -4, 2), then (1, -4, 10). The original Ax = b has become Ux = c = (1, -4, 10). Then back substitution gives z = -5, y = 1/2, x = 1/2. This solves Ax = (1, 0, 0).

5 Changing a33 from 7 to 11 will change the third pivot from 5 to 9. Changing a33 from 7 to 2 will change the pivot from 5 to no pivot.

6 Example: [2 3 7; 2 3 7; 2 3 7](1, 3, -1) = (4, 4, 4). If all columns are multiples of column 1, there is no second pivot.

7 To reverse E31, add 7 times row 1 to row 3. The inverse of the elimination matrix E = [1 0 0; 0 1 0; -7 0 1] is E^(-1) = [1 0 0; 0 1 0; 7 0 1].

8 M = [a b; c d] and M* = [a b; c - la d - lb]. det M* = a(d - lb) - b(c - la) reduces to ad - bc!

9 M = [1 0 0; 0 0 1; -1 1 0]. After the exchange, we need E31 (not E21) to act on the new row 3.

10 E13 = [1 0 1; 0 1 0; 0 0 1] and E31 = [1 0 0; 0 1 0; 1 0 1]; E31 E13 = [1 0 1; 0 1 0; 1 0 2]. Test on the identity matrix!

11 An example with two negative pivots is A = [1 2 2; 1 1 2; 1 2 1]. The diagonal entries can change sign during elimination.

12 The first product reverses the rows; the second product reverses the rows and also the columns: [9 8 7; 6 5 4; 3 2 1].

13 (a) E times the third column of B is the third column of EB. A column that starts at zero will stay at zero. (b) E could add row 2 to row 3 to change a zero row to a nonzero row.

14 E21 has l21 = -1/2, E32 has l32 = -2/3, E43 has l43 = -3/4. Otherwise the E's match I.

15 aij = 2i - 3j: A = [-1 -4 -7; 1 -2 -5; 3 0 -3] becomes [-1 -4 -7; 0 -6 -12; 0 -12 -24]. The zero became -12, an example of fill-in. To remove that -12, choose E32 = [1 0 0; 0 1 0; 0 -2 1].
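Problem 15's fill-in can be watched happening in a few lines of NumPy: clearing column 1 turns the zero in position (3, 2) into -12:

```python
import numpy as np

# Problem 15: a_ij = 2i - 3j; one elimination stage creates fill-in at (3, 2)
A = np.array([[2*i - 3*j for j in (1, 2, 3)] for i in (1, 2, 3)], dtype=float)

for i in (1, 2):                          # clear column 1 below the pivot
    A[i] -= (A[i, 0] / A[0, 0]) * A[0]

print(A)          # rows become (-1,-4,-7), (0,-6,-12), (0,-12,-24)
print(A[2, 1])    # -12.0: the original zero entry has filled in
```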

16 (a) The ages of X and Y are x and y: x - 2y = 0 and x + y = 33; x = 22 and y = 11. (b) The line y = mx + c contains x = 2, y = 5 and x = 3, y = 7 when 2m + c = 5 and 3m + c = 7. Then m = 2 is the slope.

17 The parabola y = a + bx + cx^2 goes through the 3 given points when
a + b + c = 4
a + 2b + 4c = 8
a + 3b + 9c = 14.
Then a = 2, b = 1, and c = 1. This matrix with columns (1, 1, 1), (1, 2, 3), (1, 4, 9) is a "Vandermonde matrix."

18 EF = [1 0 0; a 1 0; b c 1], FE = [1 0 0; a 1 0; b+ac c 1], E^2 = [1 0 0; 2a 1 0; 2b 0 1], F^3 = [1 0 0; 0 1 0; 0 3c 1].

19 PQ = [0 1 0; 0 0 1; 1 0 0]. In the opposite order, two row exchanges give QP = [0 0 1; 1 0 0; 0 1 0]. If M exchanges rows 2 and 3 then M^2 = I (also (-M)^2 = I). There are many square roots of I: any matrix M = [a b; c -a] has M^2 = I if a^2 + bc = 1.

20 (a) Each column of EB is E times a column of B. (b) [1 0; 1 1][1 2 4; 1 2 4] = [1 2 4; 2 4 8]. All rows of EB are multiples of [1 2 4].

21 No. E = [1 0; 1 1] and F = [1 1; 0 1] give EF = [1 1; 1 2] but FE = [2 1; 1 1].

22 (a) sum of a3j xj (b) a21 - a11 (c) a21 - 2a11 (d) (EAx)_1 = (Ax)_1 = sum of a1j xj.

23 E(EA) subtracts 4 times row 1 from row 2 (EEA does the row operation twice). AE subtracts 2 times column 2 of A from column 1 (multiplication by E on the right side acts on columns instead of rows).

24 [A b] = [2 3 1; 4 1 17] becomes [2 3 1; 0 -5 15]. The triangular system is 2x1 + 3x2 = 1 and -5x2 = 15. Back substitution gives x1 = 5 and x2 = -3.

25 The last equation becomes 0 = 3. If the original 6 is 3, then row 1 + row 2 = row 3.

26 (a) Add two columns b and b*: [1 4 1 0; 2 7 0 1] becomes [1 4 1 0; 0 -1 -2 1], so x = (-7, 2) and x* = (4, -1).

27 (a) No solution if d = 0 and c is not 0 (b) Many solutions if d = 0 = c. No effect from a, b.

28 A = AI = A(BC) = (AB)C = IC = C. That middle equation is crucial.

29 E = [1 0 0 0; -1 1 0 0; 0 -1 1 0; 0 0 -1 1] subtracts each row from the next row. The result [1 0 0 0; 0 1 0 0; 0 1 1 0; 0 1 2 1] still has multipliers = 1 in a 3 by 3 Pascal matrix. The product M of all elimination matrices is [1 0 0 0; -1 1 0 0; 1 -2 1 0; -1 3 -3 1]. This "alternating sign Pascal matrix" is on page 88.

30 Given positive integers with ad - bc = 1. Certainly c < a and b < d would be impossible. Also c > a and b > d would be impossible with integers. This leaves row 1 < row 2 OR row 2 < row 1. An example is M = [3 4; 2 3]. Multiply by [1 -1; 0 1] to get [1 1; 2 3], then multiply twice by [1 0; -1 1] to get [1 1; 0 1]. This shows that M = [1 1; 0 1][1 0; 1 1][1 0; 1 1][1 1; 0 1].

31 E21 = [1 0 0 0; 1/2 1 0 0; 0 0 1 0; 0 0 0 1], E32 = [1 0 0 0; 0 1 0 0; 0 2/3 1 0; 0 0 0 1], E43 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 3/4 1], and
E43 E32 E21 = [1 0 0 0; 1/2 1 0 0; 1/3 2/3 1 0; 1/4 2/4 3/4 1].
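The four-factor product claimed at the end of Problem 30 multiplies out exactly to M, which NumPy confirms:

```python
import numpy as np

# Problem 30: M = [3 4; 2 3] as a product of triangular matrices
# with 1's on the diagonal (undoing the two elimination steps)
U1 = np.array([[1, 1], [0, 1]])
L1 = np.array([[1, 0], [1, 1]])

M = U1 @ L1 @ L1 @ U1
print(M)   # [[3 4], [2 3]]
```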

Problem Set 2.4, page 75

1 If all entries of A, B, C, D are 1, then BA = 3 ones(5) is 5 by 5; AB = 5 ones(3) is 3 by 3; ABD = 15 ones(3, 1) is 3 by 1. DBA and A(B + C) are not defined.

2 (a) A (column 3 of B) (b) (Row 1 of A) B (c) (Row 3 of A)(column 4 of B) (d) (Row 1 of C) D (column 1 of E).

3 AB + AC is the same as A(B + C) = [3 8; 6 9]. (Distributive law.)

4 A(BC) = (AB)C by the associative law. In this example both answers are [0 0; 0 0], from column 1 of AB and row 2 of C (multiply columns times rows).

5 (a) A^2 = [1 2b; 0 1] and A^n = [1 nb; 0 1]. (b) A^2 = [4 4; 0 0] and A^n = [2^n 2^n; 0 0].

6 (A + B)^2 = [10 4; 6 6] = A^2 + AB + BA + B^2. But A^2 + 2AB + B^2 = [16 2; 3 0].

7 (a) True (b) False (c) True (d) False: usually (AB)^2 is not A^2 B^2.
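Problem 6's warning is worth seeing numerically. The matrices below reproduce the two answers printed above (they were chosen to do so; the text's own A and B may differ):

```python
import numpy as np

# Problem 6: with AB != BA, (A+B)^2 = A^2 + AB + BA + B^2, NOT A^2 + 2AB + B^2
A = np.array([[1, 2], [0, 0]])
B = np.array([[1, 0], [3, 0]])

lhs = (A + B) @ (A + B)
print(lhs)                               # [[10 4], [6 6]]
print(A @ A + A @ B + B @ A + B @ B)     # the same
print(A @ A + 2 * (A @ B) + B @ B)       # [[16 2], [3 0]]: different
```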


8 The rows of DA are 3 (row 1 of A) and 5 (row 2 of A). Both rows of EA are row 2 of A.

9

10

11

12

13 14 15 16 17

18 19

20

The columns of AD are 3 (column 1 of A) and 5 (column 2 of A). The ?rst column of AE is zero, the second is column 1 of A C column 2 of A. " # a aCb AF D and E.AF / equals .EA/F because matrix multiplication is c cCd associative " . # " # aCc bCd aCc bCd FA D and then E.FA/ D . E.FA/ is not c d a C 2c b C 2d the same as F .EA/ because multiplication is not commutative. " # 0 0 1 (a) B D 4I (b) B D 0 (c) B D 0 1 0 (d) Every row of B is 1; 0; 0. 1 0 0 " # " # a 0 a b AB D D BA D gives b D c D 0. Then AC D CA gives c 0 0 0 a D d . The only matrices that commute with B and C (and all other matrices) are multiples of I : A D aI . .A B/2 D .B A/2 D A.A B/ B.A B/ D A2 AB BA C B 2 . In a typical case (when AB ¤ BA) the matrix A2 2AB C B 2 is different from .A B/2 . (a) True (A2 is only de?ned when A is square) (b) False (if A is m by n and B is n by m, then AB is m by m and BA is n by n). (c) True (d) False (take B D 0). (a) mn (use every entry of A) (b) mnp D ppart (a) (c) n3 (n2 dot products). (a) Use only column 2 of B (b) Use only row 2 of A (c)–(d) Use row 2 of ?rst A. 3 3 2 2 1 1 1 1 1 1 7 7 6 6 1 1 5 has aij D . 1/i Cj D A D 4 1 2 2 5 has aij D min.i; j /. A D 4 1 1 2 3 13 1 1 2 1=1 1=2 1=3 6 7 “alternating sign matrix”. A D 4 2=1 2=2 2=3 5 has aij D i=j (this will be an 3=1 3=2 3=3 example of a rank one matrix). Diagonal matrix, lower triangular, symmetric, all rows equal. Zero matrix ?ts all four. a31 21 (a) a11 (b) `31 D a31 =a11 (c) a32 . a /a12 (d) a22 . a /a12 . a11 11 2 3 2 3 0 0 4 0 0 0 0 8 6 0 0 0 4 7 6 0 0 0 0 7 7 6 7 6 A2 D 6 7 ; A4 D zero matrix for strictly triangular A. 7 ; A3 D 6 4 0 0 0 0 5 4 0 0 0 0 5 0 0 0 0 2 x 6 y 6 Then Av D A 6 4 z t 3 7 6 2z 7 6 4t 7 6 0 7 7 6 7 6 7 6 7 7D6 7 ; A2 v D 6 7 ; A3 v D 6 7 ; A4 v D 0 . 5 4 2t 5 4 0 5 4 0 5 0 0 0 2 2y 0 0 0 0 3 2 4z 3 2 8t 3


21 A D A2 D A3 D D


" :5 :5 :5 :5 # but AB D and .AB/2 D zero matrix! :5 1 1 1 0 0 D ; 1 1 1 0 0 :5 " :5 :5 #

0 22 A D 1 0 DE D 1 " 0 23 A D 0

24

25

26

27

28

29

30 31

1 1 2 has A D I ; BC D 0 1 1 0 1 1 0 D D ED . You can ?nd more examples. 0 1 0 0 1 # 1 has A2 D 0. Note: Any matrix A D column times row D uvT will 0 2 3 2 3 0 1 0 0 0 1 6 7 6 7 have A2 D uvT uvT D 0 if vT u D 0. A D 4 0 0 1 5 has A2 D 4 0 0 0 5 0 0 0 0 0 0 3 but A D 0; strictly triangular as in Problem 20. n n 2 2n 1 a an 1 b n n n 1 1 1 n .A1 / D , .A2 / D 2 , .A3 / D . 0 1 1 1 0 0 2 32 3 2 3 2 3 2 3 a b c 1 0 0 a d c 1 0 0 0 1 0 0 0 1 4 d e f 54 0 1 0 5D4 d 5 C4 e 5 C4 f 5 . g h i 0 0 1 g h i # " # " # " " # 0 3 3 0 0 0 0 1 Columns of A 2 3 3 0 C 4 1 2 1 D 6 6 0 C 4 8 4 D times rows of B 1 2 1 2 1 6 6 0 # " 3 3 0 10 14 4 D AB . 7 8 1 (a) (row and (row 3 of A) (column are both zero. " 2 of B ) # " # " 1 of B )# " #3 of A) (column 0 0 x x 0 x x x (b) x 0 x x D 0 x x and x 0 0 x D 0 0 x : both upper. 0 0 x x 0 0 0 0 ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ A times B ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ A ˇ ˇ ˇ , B, ˇ ˇ ˇ , ˇ ˇ with cuts # " # " 1 0 0 1 0 0 0 1 0 produce zeros in the 2; 1 and 3; 1 entries. E21 D 1 1 0 and E31 D 0 0 1 4 0 1 # " # " 1 0 0 2 1 0 1 1 0 . Then EA D 0 1 1 is the Multiply E ’s to get E D E31 E21 D 0 1 3 4 0 1 result of both E ’s since .E31 E21 /A D E31 .E21 A/. 1 1 2 0 1 in the lower corner of EA. , DD , D cb=a D In 29, c D 5 3 1 3 8 A B x Ax B y real part Complex matrix times complex vector D B A y B x C Ay imaginary part. needs 4 real times real multiplications.


32 A times X = [x1 x2 x3] will be the identity matrix I = [Ax1 Ax2 Ax3].


33 b = (3, 5, 8) gives x = 3x1 + 5x2 + 8x3 = (3, 8, 16). A = [1 0 0; −1 1 0; 0 −1 1] will have those x1 = (1, 1, 1), x2 = (0, 1, 1), x3 = (0, 0, 1) as columns of its "inverse" A⁻¹.

34 A·ones = [a+b a+b; c+d c+d] agrees with ones·A = [a+c b+d; a+c b+d] when b = c and a = d. Then A = [a b; b a].

35 A = [0 1 0 1; 1 0 1 0; 0 1 0 1; 1 0 1 0] has A² = [2 0 2 0; 0 2 0 2; 2 0 2 0; 0 2 0 2]. These count the 2-step paths in the graph: aba, ada, cba, cda; bab, bcb, dab, dcb; abc, adc, cbc, cdc; bad, bcd, dad, dcd.

36 Multiplying AB = (m by n)(n by p) needs mnp multiplications. Then (AB)C needs mpq more. Multiply BC = (n by p)(p by q) needs npq and then A(BC) needs mnq.
(a) If m, n, p, q are 2, 4, 7, 10 we compare (2)(4)(7) + (2)(7)(10) = 196 with the larger number (2)(4)(10) + (4)(7)(10) = 360. So AB first is better, so that we multiply that 7 by 10 matrix by as few rows as possible.
(b) If u, v, w are N by 1, then (uᵀv)wᵀ needs 2N multiplications but uᵀ(vwᵀ) needs N² to find vwᵀ and N² more to multiply by the row vector uᵀ. Apologies to use the transpose symbol so early.
(c) We are comparing mnp + mpq with mnq + npq. Divide all terms by mnpq: Now we are comparing q⁻¹ + n⁻¹ with p⁻¹ + m⁻¹. This yields a simple important rule. If matrices A and B are multiplying v for ABv, don't multiply the matrices first.
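Solution 36's operation counts can be written as two small cost functions; the numbers in part (a) come out directly:

```python
# Cost of the two multiplication orders for A (m x n), B (n x p), C (p x q).
# (AB)C costs mnp + mpq; A(BC) costs npq + mnq.

def cost_AB_first(m, n, p, q):
    return m * n * p + m * p * q

def cost_BC_first(m, n, p, q):
    return n * p * q + m * n * q

print(cost_AB_first(2, 4, 7, 10))   # 196
print(cost_BC_first(2, 4, 7, 10))   # 360
```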

37 The proof of (AB)c = A(Bc) used the column rule for matrix multiplication—this rule is clearly linear, column by column.

Even for nonlinear transformations, A(B(c)) would be the "composition" of A with B (applying B then A). This composition A ∘ B is just AB for matrices.

One of many uses for the associative law: The left-inverse B equals the right-inverse C, from B = B(AC) = (BA)C = C.
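The associativity step behind "left-inverse = right-inverse" can be seen numerically. A small sketch with a 2 by 2 example of ours (B happens to be a two-sided inverse of A, so it serves as both the left inverse and the right inverse):

```python
# B(AC) = (BA)C for compatible matrices; when BA = I and AC = I this
# forces B = C.

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[2, 1], [5, 3]]        # det A = 1
B = [[3, -1], [-5, 2]]      # left inverse: BA = I
C = [[3, -1], [-5, 2]]      # right inverse: AC = I

print(matmul(B, matmul(A, C)) == matmul(matmul(B, A), C))  # True
print(matmul(B, A))                                        # [[1, 0], [0, 1]]
```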

Problem Set 2.5, page 89

1 A⁻¹ = [0 1/4; 1/3 0] and B⁻¹ = [1/2 0; −1 1/2] and C⁻¹ = [7 −4; −5 3].

2 A simple row exchange has P² = I so P⁻¹ = P. Here P⁻¹ = [0 0 1; 1 0 0; 0 1 0]. Always P⁻¹ = "transpose" of P, coming in Section 2.7.

3 [x; y] = [.5; −.2] and [t; z] = [−.2; .1] so A⁻¹ = (1/10)[5 −2; −2 1]. This question solved AA⁻¹ = I column by column, the main idea of Gauss-Jordan elimination.

4 The equations are x + 2y = 1 and 3x + 6y = 0. No solution because 3 times equation 1 gives 3x + 6y = 3.

5 An upper triangular U with U² = I is U = [1 a; 0 −1] for any a. And also U = [−1 a; 0 1].
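Solution 3's idea—finding A⁻¹ one column at a time by solving Ax = (column of I)—can be sketched with exact fractions. The 2 by 2 elimination below assumes a nonzero first pivot; the matrix is the one this solution works with:

```python
from fractions import Fraction as F

# Solve A x = b for a 2x2 system by elimination, then build A^{-1}
# column by column: A * (column j of A^{-1}) = (column j of I).

def solve2x2(A, b):
    (a11, a12), (a21, a22) = A
    mult = a21 / a11                 # multiplier l21
    pivot2 = a22 - mult * a12        # second pivot
    y = (b[1] - mult * b[0]) / pivot2
    x = (b[0] - a12 * y) / a11
    return [x, y]

A = [[F(10), F(20)], [F(20), F(50)]]
col1 = solve2x2(A, [F(1), F(0)])
col2 = solve2x2(A, [F(0), F(1)])
print(col1)   # [Fraction(1, 2), Fraction(-1, 5)]
print(col2)   # [Fraction(-1, 5), Fraction(1, 10)]
```

The two columns reproduce A⁻¹ = (1/10)[5 −2; −2 1].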

6 (a) Multiply AB = AC by A⁻¹ to find B = C (since A is invertible). (b) As long as B − C has the form [x y; −x −y], we have AB = AC for A = [1 1; 1 1].

7 (a) In Ax = (1, 0, 0), equation 1 + equation 2 − equation 3 is 0 = 1. (b) Right sides must satisfy b1 + b2 = b3. (c) Row 3 becomes a row of zeros—no third pivot.

8 (a) The vector x = (1, 1, −1) solves Ax = 0. (b) After elimination, columns 1 and 2 end in zeros. Then so does column 3 = column 1 + column 2: no third pivot.

9 If you exchange rows 1 and 2 of A to reach B, you exchange columns 1 and 2 of A⁻¹ to reach B⁻¹. In matrix notation, B = PA has B⁻¹ = A⁻¹P⁻¹ = A⁻¹P for this P.

10 A⁻¹ = [0 0 0 1/5; 0 0 1/4 0; 0 1/3 0 0; 1/2 0 0 0] and B⁻¹ = [3 −2 0 0; −4 3 0 0; 0 0 6 −5; 0 0 −7 6] (invert each block of B).

11 (a) If B = −A then certainly A + B = zero matrix is not invertible. (b) A = [1 0; 0 0] and B = [0 0; 0 1] are both singular but A + B = I is invertible.

12 Multiply C = AB on the left by A⁻¹ and on the right by C⁻¹: Then A⁻¹ = BC⁻¹.

13 M⁻¹ = C⁻¹B⁻¹A⁻¹, so multiply on the left by C and on the right by A: B⁻¹ = CM⁻¹A.

14 B⁻¹ = A⁻¹[1 0; 1 1]⁻¹ = A⁻¹[1 0; −1 1]: subtract column 2 of A⁻¹ from column 1.

15 If A has a column of zeros, so does BA. Then BA = I is impossible. There is no A⁻¹.

16 [a b; c d][d −b; −c a] = [ad−bc 0; 0 ad−bc]. The inverse of each matrix is the other divided by ad − bc.

17 E32E31E21 = [1 0 0; −1 1 0; 0 −1 1] = E. Reverse the order and change −1 to +1 to get the inverses E21⁻¹E31⁻¹E32⁻¹ = [1 0 0; 1 1 0; 1 1 1] = L = E⁻¹. Notice the 1's unchanged by multiplying in this order.

18 A²B = I can also be written as A(AB) = I. Therefore A⁻¹ is AB.
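Solution 16's formula can be packaged as a small function. A sketch with exact fractions; the test matrix [1 3; 2 7] is the one Gauss-Jordan inverts in Solution 22:

```python
from fractions import Fraction as F

# [a b; c d]^{-1} = [d -b; -c a] / (ad - bc), valid when ad - bc != 0.

def inv2x2(M):
    (a, b), (c, d) = M
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 3], [2, 7]]      # det = 1, so the inverse has integer entries
Ainv = inv2x2(A)          # [[7, -3], [-2, 1]]
```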


19 The .1; 1/ entry requires 4a

19

1 matrices are invertible, including all four with three 1’s. 1 3 1 0 1 3 1 0 1 0 7 3 22 ! ! D I A 1 ; 2 7 0 1 0 1 2 1 0 1 2 1 1 4 1 0 1 4 1 0 1 0 3 4=3 ! ! D I A 1 . 3 9 0 1 0 3 3 1 0 1 1 1=3 " # " # 2 1 0 1 0 0 1 0 0 2 1 0 1=2 1 0 ! 23 ?A I ? D 1 2 1 0 1 0 ! 0 3=2 1 0 1 2 0 0 1 0 1 2 0 0 1 " # " # 2 1 0 1 0 0 2 1 0 1 0 0 0 3=2 1 1=2 1 0 ! 0 3=2 0 3=4 3=2 3=4 ! 0 0 4=3 0 0 4=3 1=3 2=3 1 1=3 2=3 1 " # " # 2 0 0 3=2 1 1=2 1 0 0 3=4 1=2 1=4 0 3=2 0 3=4 3=2 3=4 ! 0 1 0 1=2 1 1=2 D 0 0 4=3 0 0 1 1=3 2=3 1 1=4 1=2 3=4 1 ?I A ?. " # " # " # 1 a b 1 0 0 1 a 0 1 0 b 1 0 0 1 a ac b c ! 0 1 0 0 1 c . 24 0 1 c 0 1 0 ! 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 " # " #" # " # # 1 2 1 1 3 1 1 2 1 1 1 0 1 1 3 1 I 1 2 1 1 D 0 so B 1 does 25 1 2 1 D 4 1 1 2 1 1 3 1 1 2 1 0 not exist. 1 0 1 2 1 2 1 1 1 0 1 0 26 E21 A D D . E12 E21 A D AD . 2 1 2 6 0 2 0 1 2 1 0 2 1 0 Multiply by D D to reach DE12 E21 A D I . Then A 1 D DE12 E21 D 0 1=2 6 2 1 . 2 2 1 " # " # 1 0 0 2 1 0 2 1 3 (notice the pattern); A 1 D 1 2 1 . 27 A 1 D 0 0 1 0 1 1 0 2 1 0 2 2 0 1 2 0 1 1 1 0 1=2 1=2 28 ! ! ! . 2 2 0 1 0 2 1 0 0 2 1 0 0 1 1=2 0 This is I A 1 : row exchanges are certainly allowed in Gauss-Jordan. " (b) False (the matrix of all ones is singular even with diagonal 1’s: ones (3) has 3 equal rows) (c) True (the inverse of A 1 is A and the inverse of A2 is .A 1 /2 /.

20 A ones.4; 1/ is the zero vector so A cannot be invertible. 21 Six of the sixteen 0

3b D 1; the .1; 2/ entry requires 2b a D 0. Then 2 and a D . For the 5 by 5 case 5a 4b D 1 and 2b D a give b D 1 and b D 1 5 5 6 2 a D 6.

29 (a) True (If A has a row of zeros, then every AB has too, and AB = I is impossible)



30 This A is not invertible for c = 7 (equal columns), c = 2 (equal rows), c = 0 (zero column).

31 Elimination produces the pivots a and a b and a b . A

1

D

1 a.a b/

35 A can be invertible with diagonal zeros. B is singular because each row adds to zero. 36 The equation LDLD D I says that LD D pascal .4; 1/ is its own inverse.

1 1 0 0 60 1 1 07 32 A D4 . When the triangular A alternates 1 and 1 on its diagonal, 0 0 1 15 0 0 0 1 A 1 is bidiagonal with 1’s on the diagonal and ?rst superdiagonal. 33 x D .1; 1; : : : ; 1/ has P x D Qx so .P Q/x D 0. I 0 A 1 0 D I 34 and and . C I I 0 D 1 CA 1 D 1

1

2

3

"

# a 0 b a a 0 . 0 a a

37 hilb(6) is not the exact Hilbert matrix because fractions are rounded off. So inv(hilb(6)) 38 39

40

41

42

43 4 by 4 still with T11 D 1 has pivots 1; 1; 1; 1; reversing to T D UL makes T44 D 1.

is not the exact either. The three Pascal matrices have P D LU D LLT and then inv.P / D inv.LT /inv.L/. Ax D b has many solutions when A D ones .4; 4/ D singular matrix and b D ones .4; 1/. Anb in MATLAB will pick the shortest solution x D .1; 1; 1; 1/=4. This is the only solution that is combination of the rows of A (later it comes from the “pseudoinverse” AC D pinv(A) which replaces A 1 when A is singular). Any vector that solves Ax D 0 could be added to this particular solution x . 3 3 2 2 1 a 0 0 1 a ab abc 1 b 07 bc 7 60 60 1 b The inverse of A D 4 is A 1 D 4 . (This 0 0 1 c5 0 0 1 c 5 0 0 0 1 0 0 0 1 would be a good example for the cofactor formula A 1 D C T = det A in Section 5.3) 32 3 2 3 32 2 1 1 1 1 1 76 7 6a 1 7 7 60 1 6a 1 The product 4 54 5 D 4b d 1 5 5 40 d 1 1 b 0 1 0 e 0 1 f 1 c e f 1 c 0 0 1 that in this order the multipliers shows a; b; c; d; e; f are unchanged in the product (important for A D LU in Section 2.6). MM 1 D .In U V / .In C U.Im V U / 1 V / .this is testing formula 3/ D In U V C U.Im V U / 1 V U V U.Im V U / 1 V .keep simplifying/ D In U V C U.Im V U /.Im V U / 1 V D In .formulas 1; 2; 4 are similar/

44 Add the equations C x D b to ?nd 0 D b1 C b2 C b3 C b4 . Same for F x D b. 45 The block pivots are A and S D D

CA 1 B (and d cb=a is the correct second pivot of an ordinary 2 by 2 matrix). The example problem has 1 1 0 4 5 6 SD . 3 3 D 4 0 1 6 5 2


46 Inverting the identity A(I + BA) = (I + AB)A gives (I + BA)⁻¹A⁻¹ = A⁻¹(I + AB)⁻¹. So I + BA and I + AB are both invertible or both singular when A is invertible. (This remains true also when A is singular: Problem 6.6.19 will show that AB and BA have the same nonzero eigenvalues, and we are looking here at λ = −1.)
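Solution 46's conclusion can be spot-checked on an example: for one arbitrary pair A, B of ours, det(I + AB) and det(I + BA) come out equal, so the two matrices are invertible or singular together:

```python
# One numeric instance of "I + AB and I + BA are invertible together".

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def plus_identity(M):
    n = len(M)
    return [[M[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
print(det2(plus_identity(matmul(A, B))))   # 12
print(det2(plus_identity(matmul(B, A))))   # 12
```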

Problem Set 2.6, page 102

1 0 1 0 x 5 1 `21 D 1 multiplied row 1; L D times D D c is Ax D b : 1 1 1 1 y 2 1 1 x 5 D . 1 2 y 7 1 0 c1 5 5 2 Lc D b is D , solved by c D as elimination goes forward. 1 1 c2 7 2 1 1 x 5 3 U x D c is D , solved by x D in back substitution. 0 1 y 2 2 1 times .x Cy Cz D 5/C2 times .y C2z D 2/C1 times .z D 2/ gives x C3y C6z D 11. " #" # " # " #" # " # " # 1 5 5 1 1 1 5 5 2 D 7 ; Ux D 1 2 x D 2 ; xD 2 . Lc D 1 1 1 2 1 2 11 1 2 2 " #" # " # 1 2 1 0 2 1 0 0 1 0 4 2 D 0 4 2 D U . With E 1 as L, A D LU D EA D 3 0 1 6 3 5 0 0 5 " # 1 0 1 U. 3 0 1 #" # " # " # " 1 1 1 1 1 0 0 1 0 1 2 1 A D 0 2 3 D U . Then A D 2 1 0 U is 0 2 1 0 0 1 0 0 6 0 2 1 the same as E211 E321 U D LU . The multipliers `21 ; `32 D 2 fall into place in L. # #" #" #" " 1 0 0 1 1 1 2 2 2 . This is 2 1 1 1 E32 E31 E21 A D 3 4 5 2 1 3 1 1 # " # " 1 0 0 1 0 1 0 2 0 D U . Put those multipliers 2; 3; 2 into L. Then A D 2 1 0 U D LU . 0 0 2 3 2 1 # #" # " " #" 1 1 1 1 a 1 a 1 1 1 . D E D E32 E31 E21 D ac b c 1 b 1 1 c 1 The multipliers are just a; b; c and the upper triangular U is I . In this case A D L and its inverse is that matrix E D L 1 . # #" " # " 1 d e g d D 1; e D 1, then l D 1 1 1 0 f h f D 0 is not allowed 2 by 2: d D 0 not allowed; 1 1 2 D l 1 i no pivot in row 2 m n 1 1 2 1

3 `31 D 1 and `32 D 2 (and `33 D 1): reverse steps to get Au D b from U x D c : 4
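The forward-then-back pattern in Solutions 1–3 (solve Lc = b going down, then Ux = c going up) can be sketched generically; the L, U, b below are the 2 by 2 example from Solution 1:

```python
from fractions import Fraction as F

# Forward substitution for Lc = b (L has unit diagonal), then
# back substitution for Ux = c.

def forward(L, b):
    n = len(b)
    c = []
    for i in range(n):
        c.append(b[i] - sum(L[i][j] * c[j] for j in range(i)))
    return c

def backward(U, c):
    n = len(c)
    x = [F(0)] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

L = [[F(1), F(0)], [F(1), F(1)]]
U = [[F(1), F(1)], [F(0), F(1)]]
b = [F(5), F(7)]
c = forward(L, b)    # [5, 2] -- elimination goes forward
x = backward(U, c)   # [3, 2] -- back substitution
```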

5

6

7

8

9



10 c D 2 leads to zero in the second pivot position: exchange rows and not singular. 11

12

13

14

15

16

c D 1 leads to zero in the third pivot position. In this case the matrix is singular. " # " # 2 4 8 2 3 A D 0 3 9 has L D I (A is already upper triangular) and D D I 0 0 7 7 " # 1 2 4 A D LU has U D A; A D LDU has U D D 1 A D 0 1 3 with 1’s on the 0 0 1 diagonal. 2 4 1 0 2 4 1 0 2 0 1 2 AD D D D LDU ; U is LT 4 11 2 1 0 3 2 1 0 3 0 1 " #" # " #" #" # 1 1 4 0 1 1 1 4 0 4 1 0 4 4 D 4 1 4 0 1 1 D LDLT . 0 1 1 0 0 4 0 1 1 4 0 0 1 2 3 2 32 3 a a a a 1 a a a a a ¤ 0 All of the b a b a b a7 b ¤ a multipliers 6a b b b 7 61 1 76 . Need 4 a b c c 5 D 4 1 1 1 54 c b c b5 c ¤ b are `ij D 1 a b c d 1 1 1 1 d c d ¤ c for this A 2 3 2 32 3 a r r r 1 a r r r a¤0 b r s r s r7 b¤r 6a b s s 7 61 1 76 . Need 4a b c t 5 D 41 1 1 54 c s t s5 c¤s a b c d 1 1 1 1 d t d ¤t 5 2 2 4 2 2 1 0 . gives x D xD . Then gives c D cD 3 3 0 1 3 11 4 1 2 2 4 2 2 4 D c. xD . Forward to xD Ax D b is LU x D 3 0 1 11 8 17 " # # " # " " # # " # " 3 4 1 1 1 4 4 1 0 0 1 1 0 c D 5 gives c D 1 . Then 0 1 1 x D 1 gives x D 0 . 1 1 0 0 1 1 6 1 1 1 Those are the forward elimination and back substitution steps for # " # #" " 4 1 1 1 1 1 1 xD 5 . Ax D 1 1 6 1 1 1 1

(b) I goes to L 1 (c) LU goes to U . Elimination multiply by L 1 ! 18 (a) Multiply LDU D L1 D1 U1 by inverses to get L1 1 LD D D1 U1 U 1 . The left side is lower triangular, the right side is upper triangular ) both sides are diagonal. (b) L; U; L1 ; U1 have diagonal 1’s so D D D1 . Then L1 1 L and U1 U 1 are both I . # " # # " " #" a 1 1 1 0 a a 0 b b 1 1 D LI U I a a C b D (same L) 19 1 1 0 b bCc c 1 0 1 1 (same U ). A tridiagonal matrix A has bidiagonal factors L and U . 20 A tridiagonal T has 2 nonzeros in the pivot row and only one nonzero below the pivot (one operation to ?nd ` and then one for the new pivot!). T D bidiagonal L times bidiagonal U .

17 (a) L goes to I



21 For the first matrix A, L keeps the 3 lower zeros at the start of rows. But U may not have the upper zero where A24 = 0. For the second matrix B, L keeps the bottom left zero at the start of row 4. U keeps the upper right zero at the start of column 4. One zero in A and two zeros in B are filled in.

22 Eliminating upwards, [5 3 1; 3 3 1; 1 1 1] → [4 2 0; 2 2 0; 1 1 1] → [2 0 0; 2 2 0; 1 1 1] = L. We reach a lower triangular L, and the multipliers are in an upper triangular U. A = UL with U = [1 1 1; 0 1 1; 0 0 1].

23 The 2 by 2 upper submatrix A2 has the first two pivots 5, 9. Reason: Elimination on A starts in the upper left corner with elimination on A2.

24 The upper left blocks all factor at the same time as A: Ak is Lk Uk.

25 The i, j entry of L⁻¹ is j/i for i ≥ j. And (L⁻¹) at i, i−1 is (1 − i)/i below the diagonal.

26 (K⁻¹)ij = j(n − i + 1)/(n + 1) for i ≥ j (and symmetric): (n + 1)K⁻¹ looks good.
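Solution 20's point—a tridiagonal T factors into bidiagonal L and U—shows up directly when elimination is written out. A minimal Doolittle sketch (nonzero pivots assumed, no row exchanges); T is the −1, 2, −1 matrix used throughout the chapter:

```python
from fractions import Fraction as F

# LU by elimination: L collects the multipliers, U is what elimination leaves.

def lu(A):
    n = len(A)
    L = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # multiplier l_ik
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]   # subtract l_ik * pivot row
    return L, U

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[F(2), F(-1), F(0)],
     [F(-1), F(2), F(-1)],
     [F(0), F(-1), F(2)]]
L, U = lu(T)
# Pivots on U's diagonal: 2, 3/2, 4/3.  L is bidiagonal (diagonal plus one
# subdiagonal) and U is bidiagonal (diagonal plus one superdiagonal).
```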

Problem Set 2.7, page 115

1 1 AD 9 1 AD c 0 1 9 1 0 T 1 has A D ;A D ; .A 1 /T D .AT / 3 0 3 3 1=3 1 0 c c has AT D A and A 1 D 2 D .A 1 /T . 0 1 c c

1 T 1

1 3 D ; 0 1=3

2 (AB)ᵀ is not AᵀBᵀ except when AB = BA. Transpose that to find: BᵀAᵀ = AᵀBᵀ.
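The rule (AB)ᵀ = BᵀAᵀ is easy to check on a noncommuting pair; the matrices below are arbitrary examples of ours:

```python
# (AB)^T = B^T A^T always; it equals A^T B^T only in special cases.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1, 2], [0, 1]]
B = [[1, 0], [3, 1]]
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
print(transpose(matmul(A, B)) == matmul(transpose(A), transpose(B)))  # False
```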

4

5

6 7

8

/ D .B 1 A 1 /T D .A 1 /T .B 1 /T . This is also .AT / 1 .B T / 1 . (b) If U is upper triangular, so is U 1 : then .U 1 /T is lower triangular. 0 1 AD has A2 D 0. The diagonal of AT A has dot products of columns of A with 0 0 themselves. If AT A D 0, zero dot products ) zero columns ) A D zero matrix. "0# 1 2 3 2 T T 0 1 1 4 5 6 (a) x Ay D D 5 (b) x A D (c) Ay D . 4 5 6 5 0 T A CT T M D ; M T D M needs AT D A and B T D C and D T D D . B T DT 0 A (a) False: is symmetric only if A D AT . (b) False: The transpose of AB A 0 0 A 0 AT T T is B A D BA when A and B are symmetric transposes to . A 0 AT 0 So .AB/T D AB needs BA D AB . (c) True: Invertible symmetric matrices have symmetric in verses! Easiest proof is to transpose AA 1 D I . (d) True: .ABC /T is C T B T AT .D CBA for symmetric matrices A; B; and C ). The 1 in row 1 has n choices; then the 1 in row 2 has n 1 choices . . . (n! overall).



10 (3, 1, 2, 4) and (2, 3, 1, 4) keep 4 in place; 6 more even P's keep 1 or 2 or 3 in place;

#" # " # " # 0 1 0 1 0 0 0 0 1 0 1 0 0 0 1 0 0 1 0 1 0 but P2 P1 D 1 0 0 . 9 P1 P2 D D 1 0 0 0 1 0 1 0 0 0 0 1 If P3 and P4 exchange different pairs of rows, P3 P4 D P4 P3 does both exchanges.

(2, 1, 4, 3) and (3, 4, 1, 2) exchange 2 pairs. (1, 2, 3, 4), (4, 3, 2, 1) make 12 even P's.

12 (Px)ᵀ(Py) = xᵀPᵀPy = xᵀy since PᵀP = I. In general Px · y = x · Pᵀy ≠ x · Py:
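Solution 12's invariance of dot products can be demonstrated by building a permutation matrix explicitly; the permutation and vectors are arbitrary choices of ours:

```python
# P^T P = I for a permutation matrix, so (Px).(Py) = x.y.

def perm_matrix(p):
    """Row i of P has a 1 in column p[i]: (Pv)_i = v_{p[i]}."""
    n = len(p)
    return [[1 if p[i] == j else 0 for j in range(n)] for i in range(n)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

P = perm_matrix([2, 0, 1])      # a cyclic permutation
x, y = [1, 2, 3], [4, 5, 6]
print(dot(apply(P, x), apply(P, y)) == dot(x, y))   # True
```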

Non-equality where P ¤ P T : "

"

0 0 1

1 0 0 1 0 0

#" # " # " # " #" # 1 1 1 0 1 0 1 2 1 ¤ 2 0 0 1 1 . 3 2 3 1 0 0 2

14 The “reverse identity” P takes .1; : : : ; n/ into .n; : : : ; 1/. When rows and also columns

# 0 1 0 13 A cyclic P D 0 0 1 or its transpose will have P 3 D I W .1; 2; 3/ ! .2; 3; 1/ ! 1 0 0 b4 D P b ¤ I: b D 1 0 for the same P has P .3; 1; 2/ ! .1; 2; 3/. P 0 P are reversed, .PAP /ij is .A/n

i C1;n j C1 .

In particular .PAP /11 is Ann . (b) P D

15 (a) If P sends row 1 to row 4, then P T sends row 4 to row 1

P T with E D

16 A2

0 1

1 moves all rows: 1 and 2 are exchanged, 3 and 4 are exchanged. 0

E 0

0 E

D

18 (a) 5 C 4 C 3 C 2 C 1 D 15 independent entries if A D AT (b) L has 10 and D has 5; 19 (a) The transpose of R AR is R A R

B 2 (but not .A C B/.A B/, this is different) and also ABA are symmetric if A and B are symmetric. 0 1 1 1 T needs row exchange 17 (a) A D D A is not invertible (b) A D 1 1 1 1 1 0 1 1 . has D D (c) A D 1 0 0 1 total 15 in LDLT (c) Zero diagonal if AT D

T T T TT T

by n matrix R) (b) .R R/jj of column j ) 0.

D RT AR D n by n when AT D A (any m D (column j of R) (column j of R) D (length squared

A, leaving 4 C 3 C 2 C 1 D 10 choices.



0 1 3 1 0 1 0 1 3 1 b 1 0 1 1 b 20 D ; D 3 2 3 1 0 7 0 1 b c b 1 0 c b2 0 1 2 3 32 32 1 " # 1 2 0 1 2 1 0 2 3 6 1 7 76 76 T 2 1 1 2 1 D4 2 5 D LDL . 54 54 1 2 3 2 4 0 1 2 0 1 1 3 3 "

# " # 2 4 8 1 b c 5 7 d b 2 e bc The examples 4 3 9 and b d e lead to and . 7 32 e bc f c 2 8 9 0 c e f " # " #" # " # " #" # 1 1 1 0 1 1 1 1 2 0 1 1 ; 1 AD 1 1 1 1 22 1 AD 0 1 1 2 3 1 1 1 2 0 1 1 2 3 0 0 0 1 This cyclic P exchanges rows 1-2 then 61 0 0 07 23 A D 4 D P and L D U D I . 0 1 0 05 rows 2-3 then rows 3-4. 0 0 1 0 " #" # " #" # 1 0 1 2 1 2 1 1 1 0 3 8 D 0 1 3 8 . If we wait 24 PA D LU is 1 2 1 1 0 1=3 1 2=3 " #" #" # 1 1 2 1 1 1 0 1 2 . to exchange and a12 is the pivot, A D L1 P1 U1 D 3 1 1 1 0 0 2

25 The splu code will not end when abs.A.k; k// < tol line 4 of the slu code on page 100.

21 Elimination on a symmetric 3 by 3 matrix leaves a symmetric lower right 2 by 2 matrix.

26 One way to decide even vs. odd is to count all pairs that P has in the wrong order. Then

Instead splu looks for a nonzero entry below the diagonal in the current column k , and executes a row exchange. The 4 lines to exchange row k with row r are at the end of Section 2.7 (page 113). To ?nd that nonzero entry A.r; k/, follow abs.A.k; k// < tol by locating the ?rst nonzero (or the largest A.r; k/ out of r D k C 1; : : : ; n).

P is even or odd when that count is even or odd. Hard step: Show that an exchange always switches that count! Then 3 or 5 exchanges will leave that count odd. # # " " 1 0 0 1 T 3 1 puts 0 in the 2; 1 entry of E21 A. Then E21 AE21 D 0 2 4 27 (a) E21 D 0 4 9 1 # " 1 1 is still symmetric, with zero also in its 1, 2 entry. (b) Now use E32 D 4 1 T T to make the 3, 2 entry zero and E32 E21 AE21 E32 D D also has zero in its 2, 3 entry. Key point: Elimination from both sides gives the symmetric LDLT directly. 2 3 0 1 2 3 61 2 3 07 28 A D 4 D AT has 0; 1; 2; 3 in every row. (I don’t know any rules for a 2 3 0 15 3 0 1 2 symmetric construction like this)


29 Reordering the rows and/or the columns of


a b

c d will move the entry a. So the result cannot be the transpose (which doesn’t move a). " #" # " # 1 0 1 yBC yBC C yBS T 1 1 0 yCS yBC C yCS . 30 (a) Total currents are A y D D 0 1 1 yBS yCS yBS (b) Either way .Ax /T y D x T .AT y / D xB yBC C xB yBS xC yBC C xC yCS xS yCS xS yBS . " # " 700 # 1 50 x1 1 40 2 6820 1 truck T 3 31 40 1000 D Ax ; A y D D x2 50 1000 50 188000 1 plane 2 50 3000

32 Ax y is the cost of inputs while x AT y is the value of outputs. 33 P 3 D I so three rotations for 360? ; P rotates around .1; 1; 1/ by 120? . 34

1 4

2 1 0 1 D 9 2 1 2

1

2 D EH D (elementory matrix) times (symmetric matrix). 5

35 L.U T /

T

is lower triangular times lower triangular, so lower triangular. The transpose of U DU is U T D T U T T D U T DU again, so U T DU is symmetric. The factorization multiplies lower triangular by symmetric to get LDU which is A.
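Solutions 27 and 35 both rest on the symmetric factorization A = LDLᵀ. A minimal elimination sketch (nonzero pivots assumed, no row exchanges); the 2 by 2 symmetric example is ours:

```python
from fractions import Fraction as F

# Symmetric elimination from both sides: A = L D L^T, with the pivots in D.

def ldlt(A):
    n = len(A)
    L = [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]
    D = [F(0)] * n
    S = [row[:] for row in A]          # working copy
    for k in range(n):
        D[k] = S[k][k]                 # pivot k
        for i in range(k + 1, n):
            L[i][k] = S[i][k] / D[k]   # multiplier
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                S[i][j] -= L[i][k] * D[k] * L[j][k]
    return L, D

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[F(1), F(3)], [F(3), F(2)]]
L, D = ldlt(A)       # L = [[1,0],[3,1]], pivots D = [1, -7]
```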

1

36 These are groups: Lower triangular with diagonal 1’s, diagonal invertible D , permuta37 Certainly B T is northwest. B 2 is a full matrix! B

1 1 1 is southeast: 1 D 0 1 0 1 1 . The rows of B are in reverse order from a lower triangular L, so B D PL. Then B 1 D L 1 P 1 has the columns in reverse order from L 1 . So B 1 is southeast. Northwest B D PL times southeast P U is .PLP /U D upper triangular.

1

tions P , orthogonal matrices with QT D Q

.

38 There are n? permutation matrices of order n. Eventually two powers of P must be

the same: If P r D P s then P r s D I . Certainly r s n! " 0 1 P2 0 1 and P3 D 0 0 P D is 5 by 5 with P2 D 1 0 P3 1 0 A T /.

# 0 1 and P 6 D I . 0

1 .A CAT / 2

39 To split A into (symmetric B ) C (anti-symmetric C ), the only choice is B D

and C D 1 .A 2
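Solution 39's split is easy to compute for any square A; the example matrix is an arbitrary choice of ours:

```python
from fractions import Fraction as F

# The unique split A = B + C with B symmetric and C anti-symmetric:
# B = (A + A^T)/2 and C = (A - A^T)/2.

def transpose(M):
    return [list(r) for r in zip(*M)]

def half_combo(X, Y, s):
    n = len(X)
    return [[F(X[i][j] + s * Y[i][j], 2) for j in range(n)] for i in range(n)]

A = [[1, 7], [3, 4]]
B = half_combo(A, transpose(A), 1)     # [[1, 5], [5, 4]] -- symmetric part
C = half_combo(A, transpose(A), -1)    # [[0, 2], [-2, 0]] -- anti-symmetric part
```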

40 Start from Q Q D I , as in

T

"

# qT 1 qT 2

q1

1 q2 D 0

0 1

T (b) The off-diagonal entry is q T 1 q 2 D 0 (and in general q i q j D 0) cos sin (c) The leading example for Q is the rotation matrix . sin cos

T (a) The diagonal entries give q T 1 q 1 D 1 and q 2 q 2 D 1: unit vectors



Problem Set 3.1, page 127

1 x + y ≠ y + x and x + (y + z) ≠ (x + y) + z and (c1 + c2)x ≠ c1x + c2x. 3 (a) cx may not be in our set: not closed under multiplication. Also no 0 and no −x. 2 When c(x1, x2) = (cx1, 0), the only broken rule is 1 times x equals x. Rules (1)-(4)

4 5 6 7 8

9

10 11

12 For the plane x C y 13 14

x (b) c.x C y / is the usual .xy/c , while c x C c y is the usual .x c /.y c /. Those are equal. With c D 3, x D 2, y D 1 this is 3.2 C 1/ D 8. The zero vector is the number 1. 0 0 1 1 1 2 2 The zero vector in matrix space M is I AD and A D . 1 1 2 2 0 0 2 The smallest subspace of M containing the matrix A consists of all matrices cA. (a) One possibility: The matrices cA form a subspace not containing B (b) Yes: the subspace must contain A B D I (c) Matrices whose main diagonal is all zero. When f .x/ D x 2 and g .x/ D 5x , the combination 3f 4g in function space is h.x/ D 3f .x/ 4g .x/ D 3x 2 20x . Rule 8 is broken: If c f .x/ is de?ned to be the usual f .cx/ then .c1 C c2 /f D f ..c1 C c2 /x/ is not generally the same as c1 f C c2 f D f .c1 x/ C f .c2 x/. If .f C g /.x/ is the usual f .g .x// then .g C f /x is g .f .x// which is different. In Rule 2 both sides are f .g .h.x///. Rule 4 is broken there might be no inverse function f 1 .x/ such that f .f 1 .x// D x . If the inverse function exists it will be the vector f. (a) The vectors with integer components allow addition, but not multiplication by 1 2 (b) Remove the x axis from the xy plane (but leave the origin). Multiplication by any c is allowed but not all vector additions. The only subspaces are (a) the plane with b1 D b2 (d) the linear combinations of v and w (e) the plane with b1 C b2 C b3 D 0. a b a a (a) All matrices (b) All matrices (c) All diagonal matrices. 0 0 0 0

for addition x C y still hold since addition is not changed.

15

16 17

2z D 4, the sum of .4; 0; 0/ and .0; 4; 0/ is not on the plane. (The key is that this plane does not go through .0; 0; 0/.) The parallel plane P0 has the equation x C y 2z D 0. Pick two points, for example .2; 0; 1/ and .0; 2; 1/, and their sum .2; 2; 2/ is in P0 . (a) The subspaces of R2 are R2 itself, lines through .0; 0/, and .0; 0/ by itself (b) The subspaces of R4 are R4 itself, three-dimensional planes n v D 0, two-dimensional subspaces .n1 v D 0 and n2 v D 0/, one-dimensional lines through .0; 0; 0; 0/, and .0; 0; 0; 0/ by itself. (a) Two planes through .0; 0; 0/ probably intersect in a line through .0; 0; 0/ (b) The plane and line probably intersect in the point .0; 0; 0/ (c) If x and y are in both S and T , x C y and c x are in both subspaces. The smallest subspace containing a plane P and a line L is either P (when the line L is in the plane P) or R3 (when L is not in P). (a) The invertible matrices do not so they are not a subspace include the zero matrix, 1 0 0 0 is not singular: not a subspace. (b) The sum of singular matrices C 0 0 0 1


18 (a) True: The symmetric matrices do form a subspace


(b) True: The matrices with AT D A do form a subspace (c) False: The sum of two unsymmetric matrices could be symmetric. The column space of A is the x -axis D all vectors .x; 0; 0/. The column space of B is the xy plane D all vectors .x; y; 0/. The column space of C is the line of vectors .x; 2x; 0/. (a) Elimination leads to 0 D b2 2b1 and 0 D b1 C b3 in equations 2 and 3: Solution only if b2 D 2b1 and b3 D b1 (b) Elimination leads to 0 D b1 C 2b3 in equation 3: Solution only if b3 D b1 . A combination of . Then of the columns C is also a combination of the columns of A 1 3 1 2 1 2 C D and A D have the same column space. B D has a 2 6 2 4 3 6 different column space. (a) Solution for every b (b) Solvable only if b3 D 0 (c) Solvable only if b3 D b2 . The extra column b enlarges the column space unless b is already in the column space. 1 0 1 (larger column space) 1 0 1 (b is in column space) ?A b? D 0 0 1 (no solution to Ax D b) 0 1 1 (Ax D b has a solution) The example B D 0 and A ¤ 0 is a case when AB D 0 has a smaller column space than A. The solution to Az D b C b is z D x C y . If b and b are in C .A/ so is b C b . The column space of any invertible 5 by 5 matrix is R5 . The equation Ax D b is always solvable (by x D A 1 b/ so every b is in the column space of that invertible matrix. (a) False: Vectors that are not in a column space don’t form a subspace. (b) True: Only the zero matrix has C .A/ D f0g . (c) True: C .A/ D C .2A/. 1 0 (or other examples). (d) False: C .A I / ¤ C .A/ when A D I or A D 0 0 # # " # " " 1 2 0 1 1 2 1 1 0 A D 1 0 0 and 1 0 1 do not have .1; 1; 1/ in C .A/. A D 2 4 0 3 6 0 0 1 1 0 1 0 has C .A/ D line. When Ax D b is solvable for all b, every b is in the column space of A. So that space is R9 . (a) If u and v are both in S C T , then u D s1 C t 1 and v D s2 C t 2 . So u C v D .s1 C s2 / C .t 1 C t 2 / is also in S C T . And so is c u D c s1 C c t 1 : a subspace. 
(b) If S and T are different lines, then S ∪ T is just the two lines (not a subspace) but S + T is the whole plane that they span. If S = C(A) and T = C(B) then S + T is the column space of M = [A B]. The columns of AB are combinations of the columns of A. So all columns of [A AB] are already in C(A). But A = [0 1; 0 0] has a larger column space than A² = [0 0; 0 0]. For square matrices, the column space is Rⁿ when A is invertible.

19

20

21

22 23

24 The column space of AB is contained in (possibly equal to) the column space of A.

25 26

27

28

29 30

31 32



Problem Set 3.2, page 140

1 2 2 1 (a) U D 0 0 1 0 0 0 " 4 6 2 3 0 0 # " # 2 4 2 Free variables x2 ; x4 ; x5 Free x3 (b) U D 0 4 4 Pivot variables x1 ; x3 Pivot x1 ; x2 0 0 0

2 (a) Free variables x2 ; x4 ; x5 and solutions . 2; 1; 0; 0; 0/, .0; 0; 2; 1; 0/, .0; 0; 3; 0; 1/

(b) .3; 1; 0/. Total of pivot and free is n. 7 (a) The nullspace of A in Problem 5 is the plane x C 3y C 5z D 0; it contains all the vectors .3y C 5z; y; z/ D y.3; 1; 0/ C z.5; 0; 1/ D combination of special solutions. (b) The line through .3; 1; 0/ has equations x C 3y C 5z D 0 and 2x C 6y C 7z D 0. The special solution for the free variable x2 is .3; 1; 0/. 1 3 0 1 0 1 3 5 with I D ? 1 ?; R D with I D . 8 RD 0 0 0 0 0 1 0 1

9 (a) False: Any singular square matrix would have free variables

(b) Free variable x3 : solution .1; 1; 1/. Special solution for each free variable. 3 The complete solution to Ax D 0 is ( 2x2 ; x2 ; 2x4 3x5 ; x4 ; x5 ) with x2 ; x4 ; x5 free. The complete solution to B x D 0 is (2x3 ; x3 ; x3 ). The nullspace contains only x D 0 when there are no free variables. " # " # 1 2 0 0 0 1 0 1 1 1 , R has the same nullspace as U and A. 4 RD 0 0 1 2 3 , RD 0 0 0 0 0 0 0 0 0 1 3 5 1 0 1 3 5 1 3 5 1 0 5 A D D I B D D 2 6 10 2 1 0 0 0 2 6 7 2 1 1 3 5 D LU . 0 0 3

6 (a) Special solutions .3; 1; 0/ and .5; 0; 1/

10

11

12

13 14 15

(b) True: An invertible square matrix has no free variables. (c) True (only n columns to hold pivots) (d) True (only m rows to hold pivots) (a) Impossible row 1 (b) A D invertible (c) A D all ones (d) A D 2I; R D I . 3 32 32 2 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 60 0 0 1 1 1 17 60 0 1 1 1 1 17 60 0 0 0 0 1 17 40 0 0 0 1 1 15 40 0 0 0 0 1 15 40 0 0 0 0 0 05 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 3 2 3 2 1 1 0 1 1 1 0 0 0 1 1 0 0 1 1 1 60 0 1 1 1 1 0 07 60 0 0 1 0 1 1 17 4 0 0 0 0 0 0 1 0 5, 4 0 0 0 0 1 1 1 1 5. Notice the identity 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 matrix in the pivot columns of these reduced row echelon forms R. If column 4 of a 3 by 5 matrix is all zero then x4 is a free variable. Its special solution is x D .0; 0; 0; 1; 0/, because 1 will multiply that zero column to give Ax D 0. If column 1 D column 5 then x5 is a free variable. Its special solution is . 1; 0; 0; 0; 1/. If a matrix has n columns and r pivots, there are n r special solutions. The nullspace contains only x D 0 when r D n. The column space is all of Rm when r D m. All important!

30


16 The nullspace contains only x D 0 when A has 5 pivots. Also the column space is R5 , 17 A D ? 1

because we can solve Ax D b and every b is in the column space. 3 1 ? gives the plane x 3y z D 0; y and z are free variables. The special solutions are .3; 1; 0/ and .1; 0; 1/. " # x 18 Fill in 12 then 4 then 1 to get the complete solution to x 3y z D 12: y D z " # " # " # 12 4 1 0 C y 1 C z 0 D x particular C x nullspace . 0 0 1 to ?nd U x D 0. Then U and LU have the same nullspace. Column 5 is sure to have no pivot since it is a combination of earlier columns. With 4 pivots in the other columns, the special solution is s D .1; 0; 1; 0; 1/. The nullspace contains all multiples of this vector s (a line in R5 ). For special solutions .2; 2; 1; 0/ and .3; 1; 0; 1/ with free variables x3 ; x4 : R D 1 0 2 3 and A can be any invertible 2 by 2 matrix times this R. 0 1 2 1 " # 1 0 0 4 3 is the line through .4; 3; 2; 1/. The nullspace of A D 0 1 0 0 0 1 2 # " 1 0 1=2 2 has .1; 1; 5/ and .0; 3; 1/ in C .A/ and .1; 1; 2/ in N .A/. Which AD 1 3 5 1 3 other A’s? This construction is impossible: 2 pivot columns and 2 free variables, only 3 columns. # " 1 1 0 0 0 1 0 has .1; 1; 1/ in C .A/ and only the line .c; c; c; c/ in N .A/. AD 1 1 0 0 1 10 01 . has N .A/ D C .A/ and also (a)(b)(c) are all false. Notice rref.AT / D AD 00 00

1

19 If LU x D 0, multiply by L 20

21

22

23

24 25

26 30

r D r . If n D 3 then 3 D 2r is impossible. 28 If A times every column of B the column space of B is contained in the nullspace is zero, 1 1 1 1 . Here C .B/ equals N .A/. and B D of A. An example is A D 1 1 1 1 (For B D 0; C .B/ is smaller.) 29 For A D random 3 by 3 matrix, R is almost sure to be I . For 4 by 3, R is most likely to be I with fourth row of zeros. What about a random 3 by 4 matrix? 31 If N .A/ D line through x D .2; 1; 0; 1/; A has three pivots (4 columns and 1 special # " 1 0 0 2 1 (add any zero rows). solution). Its reduced echelon form can be R D 0 1 0 0 0 1 0

27 If nullspace D column space (with r pivots) then n

Solutions to Exercises

32 Any zero rows come after these rows: R D ? 1 33 (a)

31 1 0 3 ?, R D 0 1 0 , R D I. 0

34 One reason that R is the same for A and

A: They have the same nullspace. They also have the same column space, but that is not required for two matrices to share the same R. (R tells us the nullspace and row space.) y 35 The nullspace of B D ? A A ? contains all vectors x D for y in R4 . y

36 If C x D 0 then Ax D 0 and B x D 0. So N .C / D N .A/ \ N .B/ D intersection. 37 Currents: y1

1 0

2 0 1 0 1 1 0 1 0 0 ; , , , 1 0 0 0 0 0 0 0 0

(b) All 8 matrices are R’s !

y3 C y4 D y1 C y2 C Cy5 D y2 C y4 C y6 D y4 y5 y6 D 0. These equations add to 0 D 0. Free variables y3 ; y5 ; y6 : watch for ?ows around loops.
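The pattern x_complete = x_particular + x_nullspace used above is easy to verify mechanically. A minimal plain-Python sketch for Problem 18's equation x - 3y - z = 12 (the sample coefficient range is arbitrary):

```python
# Check of Problems 17-18: the plane x - 3y - z = 0 and the complete
# solution of x - 3y - z = 12 (particular solution plus nullspace part).
def f(v):
    x, y, z = v
    return x - 3*y - z

s1, s2 = (3, 1, 0), (1, 0, 1)   # special solutions (free y, free z)
xp = (12, 0, 0)                 # particular solution

assert f(s1) == 0 and f(s2) == 0
for c1 in range(-2, 3):
    for c2 in range(-2, 3):
        v = tuple(p + c1*a + c2*b for p, a, b in zip(xp, s1, s2))
        assert f(v) == 12
print("complete solution verified")
```

Because f is linear, f(xp + c1 s1 + c2 s2) = 12 + 0 + 0 for every c1, c2.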

Problem Set 3.3, page 151

1 (a) and (c) are correct; (b) is completely false; (d) is false because R might have 1's in nonpivot columns.

2 A = [4 4 4; 4 4 4; 4 4 4] has R = [1 1 1; 0 0 0; 0 0 0]: the rank is r = 1.
A = [1 2 3; 2 3 4; 3 4 5] has R = [1 0 -1; 0 1 2; 0 0 0]: the rank is r = 2.
A = [1 1 1; 1 1 1; 1 1 1] has R = [1 1 1; 0 0 0; 0 0 0]: the rank is r = 1.

3 R_B = [R_A R_A]; for R_C the zero rows go to the bottom: R_C = [R_A; 0].

4 If all pivot variables come last then R = [0 I; 0 0]. The nullspace matrix is N = [I; 0].

5 I think R1 = A1, R2 = A2 is true. But R1 - R2 may have -1's in some pivots.

6 A and A^T have the same rank r = number of pivots. But pivcol (the column number) is 2 for this matrix A and 1 for A^T: A = [0 1; 0 0].

7 Special solutions in N = (2, 5, 0, 1) for the first matrix, and (1, 0, 0), (0, 2, 1) for the second.

8 The new entries keep rank 1: A = [1 2 4; 2 4 8; 4 8 16]; B = [2 6 -3; 1 3 -3/2]; M = [a b; c bc/a].


9 If A has rank 1, the column space is a line in R^m. The nullspace is a plane in R^n (given by one equation). The nullspace matrix N is n by n-1 (with n-1 special solutions in its columns). The column space of A^T is a line in R^n.

10 [3 6 6; 1 2 2; 4 8 8] = [3; 1; 4][1 2 2] and [2 6 4; -1 -3 -2] = [2; -1][1 3 2].

11 A rank one matrix has one pivot. (That pivot is in row 1 after a possible row exchange; it could come in any column.) The second row of U is zero.

12 Invertible r by r submatrices: S = [1 3; 1 4] and S = [1] and S = [1 0; 0 1]. Use pivot rows and columns.

13 P has rank r (the same as A) because elimination produces the same pivot columns.

14 The rank of R^T is also r. The example matrix A has rank 2 with invertible S:
P = [1 3; 2 6; 2 7], P^T = [1 2 2; 3 6 7], S^T = [1 2; 3 7], S = [1 3; 2 7].

15 The product of rank one matrices has rank one or zero. These particular matrices have rank(AB) = 1; rank(AM) = 1 except AM = 0 if c = -1/2.

16 (uv^T)(wz^T) = u(v^T w)z^T has rank one unless the inner product is v^T w = 0.

17 (a) By matrix multiplication, each column of AB is A times the corresponding column of B. So if column j of B is a combination of earlier columns, then column j of AB is the same combination of earlier columns of AB. No new pivot columns! Then rank(AB) <= rank(B). (b) The rank of B is r = 1. Multiplying by A cannot increase this rank. The rank of AB stays the same for A1 = I and B = [1 1; 1 1]. It drops to zero for A2 = [1 -1; -1 1].

18 If we know that rank(B^T A^T) <= rank(A^T), then since rank stays the same for transposes (apologies that this fact is not yet proved), we have rank(AB) <= rank(A).

19 We are given AB = I, which has rank n. Then rank(AB) <= rank(A) forces rank(A) = n. This means that A is invertible. The right-inverse B is also a left-inverse: BA = I and B = A^-1.

20 Certainly A and B have at most rank 2. Then their product AB has at most rank 2. Since BA is 3 by 3, it cannot be I even if AB = I.

21 (a) A and B will both have the same nullspace and row space as the R they share. (b) A equals an invertible matrix times B, when they share the same R. A key fact!

22 A = (pivot columns)(nonzero rows of R) = [1 0; 1 4; 1 8][1 1 0; 0 0 1] = [1 1 0; 1 1 0; 1 1 0] + [0 0 0; 0 0 4; 0 0 8].
B = (columns times rows) = [2; 2][1 0] + [2; 3][0 1] = [2 0; 2 0] + [0 2; 0 3] = [2 2; 2 3].
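The identity A = (pivot columns of A)(nonzero rows of R) used in Solution 22 holds for every matrix. A short SymPy check on a small rank-2 example (this matrix is an arbitrary stand-in chosen for illustration, not necessarily the one in the exercise):

```python
import sympy as sp

# A equals its pivot columns times the nonzero rows of rref(A).
A = sp.Matrix([[1, 1, 0], [1, 1, 4], [1, 1, 8]])
R, pivots = A.rref()
C = A[:, list(pivots)]        # pivot columns of A
Rn = R[:len(pivots), :]       # nonzero rows of R
assert C * Rn == A
print(pivots)   # (0, 2): columns 1 and 3 are the pivot columns
```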

23 For the first matrix: if c = 1, R = [1 1 2 2; 0 0 0 0; 0 0 0 0] has x2, x3, x4 free, with special solutions in N = [-1 -2 -2; 1 0 0; 0 1 0; 0 0 1] (for c = 1); if c != 1, R = [1 0 2 2; 0 1 0 0; 0 0 0 0] has x3, x4 free, with N = [-2 -2; 0 0; 1 0; 0 1]. For the second matrix: if c = 1, R = [0 1; 0 0] and x1 is free; if c = 2, R = [1 2; 0 0] and x2 is free; R = I if c != 1, 2. Special solutions in N = (1, 0) (c = 1) or N = (-2, 1) (c = 2) or N = the 2 by 0 empty matrix.

24 A = [I I] has N = [I; -I]; B = [I I; 0 0] has the same N; C = [I I I] has N = [-I -I; I 0; 0 I].

25 A = [1 1 2 4; 1 2 2 5; 1 3 2 6] = [1 1; 1 2; 1 3][1 0 2 3; 0 1 0 1] = (pivot columns) times R.

26 The m by n matrix Z has r ones to start its main diagonal. Otherwise Z is all zeros.

27 R = [I F; 0 0] with blocks I (r by r), F (r by n-r), and zero blocks (m-r by r and m-r by n-r); rref(R^T) = [I 0; 0 0], where I is r by r; rref(R^T R) = same R.

28 The row-column reduced echelon form is always [I 0; 0 0]; I is r by r.

Problem Set 3.4, page 163

1 [2 4 6 4 | b1; 2 5 7 6 | b2; 2 3 5 2 | b3] -> [2 4 6 4 | b1; 0 1 1 2 | b2-b1; 0 -1 -1 -2 | b3-b1] -> [2 4 6 4 | b1; 0 1 1 2 | b2-b1; 0 0 0 0 | b3+b2-2b1].
Ax = b has a solution when b3 + b2 - 2b1 = 0; the column space contains all combinations of (2, 2, 2) and (4, 5, 3). This is the plane b3 + b2 - 2b1 = 0 (!). The nullspace contains all combinations of s1 = (-1, -1, 1, 0) and s2 = (2, -2, 0, 1); x_complete = x_p + c1 s1 + c2 s2.
[R d] = [1 0 1 -2 | 4; 0 1 1 2 | -1; 0 0 0 0 | 0] gives the particular solution x_p = (4, -1, 0, 0).
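Solution 1's answer can be reproduced with SymPy; here b = (4, 3, 5) is one right side satisfying b3 + b2 - 2b1 = 0 (chosen only for illustration):

```python
import sympy as sp

# Particular solution plus nullspace for the system of Problem 1.
A = sp.Matrix([[2, 4, 6, 4], [2, 5, 7, 6], [2, 3, 5, 2]])
b = sp.Matrix([4, 3, 5])          # satisfies b3 + b2 - 2*b1 = 0
xp = sp.Matrix([4, -1, 0, 0])     # particular solution read from [R d]
assert A * xp == b
for s in A.nullspace():           # special solutions
    assert A * s == sp.zeros(3, 1)
print([list(s) for s in A.nullspace()])   # [[-1, -1, 1, 0], [2, -2, 0, 1]]
```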


2 [2 1 3 | b1; 6 3 9 | b2; 4 2 6 | b3] -> [2 1 3 | b1; 0 0 0 | b2-3b1; 0 0 0 | b3-2b1]. Then [R d] = [1 1/2 3/2 | 5; 0 0 0 | 0; 0 0 0 | 0].
Ax = b has a solution when b2 - 3b1 = 0 and b3 - 2b1 = 0; C(A) = the line through (2, 6, 4), which is the intersection of the planes b2 - 3b1 = 0 and b3 - 2b1 = 0. The nullspace contains all combinations of s1 = (-1/2, 1, 0) and s2 = (-3/2, 0, 1); particular solution x_p = d = (5, 0, 0) and complete solution x_p + c1 s1 + c2 s2.

3 x_complete = (-2, 0, 1) + x2(-3, 1, 0). The matrix is singular but the equations are still solvable; b is in the column space. Our particular solution has free variable y = 0.

4 x_complete = x_p + x_n = (1/2, 0, 1/2, 0) + x2(-3, 1, 0, 0) + x4(0, 0, -2, 1).

5 [1 2 -2 | b1; 2 5 -4 | b2; 4 9 -8 | b3] -> [1 2 -2 | b1; 0 1 0 | b2-2b1; 0 0 0 | b3-2b1-b2]: solvable if b3 - 2b1 - b2 = 0. Back-substitution gives the particular solution to Ax = b and the special solution to Ax = 0: x = [5b1-2b2; b2-2b1; 0] + x3[2; 0; 1].

6 (a) Solvable if b2 = 2b1 and 3b1 - 3b3 + b4 = 0. Then x = [5b1-2b3; b3-2b1] = x_p.
(b) Solvable if b2 = 2b1 and 3b1 - 3b3 + b4 = 0. Then x = [5b1-2b3; b3-2b1; 0] + x3[-1; -1; 1].

7 [1 3 1 | b1; 3 8 2 | b2; 2 4 0 | b3] -> [1 3 1 | b1; 0 -1 -1 | b2-3b1; 0 -2 -2 | b3-2b1]. One more step gives [0 0 0 | 0] = row 3 - 2(row 2) + 4(row 1), provided b3 - 2b2 + 4b1 = 0.

8 (a) Every b is in C(A): independent rows, only the zero combination gives 0. (b) We need b3 = 2b2, because (row 3) - 2(row 2) = 0.

9 L[U c] = [1 0 0; 2 1 0; 3 -1 1][1 2 3 5 | b1; 0 0 2 2 | b2-2b1; 0 0 0 0 | b3+b2-5b1] = [1 2 3 5 | b1; 2 4 8 12 | b2; 3 6 7 13 | b3] = [A b]; the particular solution x_p = (-9, 0, 3, 0) means -9(1, 2, 3) + 3(3, 8, 7) = (0, 6, -6). This is Ax_p = b.

10 [1 0 -1; 0 1 -1]x = [2; 4] has x_p = (2, 4, 0) and x_null = (c, c, c).

11 A 1 by 3 system has at least two free variables. But x_null in Problem 10 only has one.

12 (a) x1 - x2 and 0 solve Ax = 0. (b) A(2x1 - 2x2) = 0 and A(2x1 - x2) = b.

13 (a) The particular solution x_p is always multiplied by 1. (b) Any solution can be x_p. (c) [3 3; 3 3][x; y] = [6; 6]. Then [1; 1] (length sqrt(2)) is shorter than the particular solution [2; 0] (length 2). (d) The only "homogeneous" solution in the nullspace is x_n = 0 when A is invertible.


14 If column 5 has no pivot, x5 is a free variable. The zero vector is not the only solution to Ax = 0. If this system Ax = b has a solution, it has infinitely many solutions.

15 If row 3 of U has no pivot, that is a zero row. Ux = c is only solvable provided c3 = 0. Ax = b might not be solvable, because U may have other zero rows needing more ci = 0.

16 The largest rank is 3. Then there is a pivot in every row. The solution always exists. The column space is R^3. An example is A = [I F] for any 3 by 2 matrix F.

17 The largest rank of a 6 by 4 matrix is 4. Then there is a pivot in every column. The solution is unique. The nullspace contains only the zero vector. An example is A = [I; F] for any 2 by 4 matrix F.

18 Rank = 2; rank = 3 unless q = 2 (then rank = 2). The transpose has the same rank!

19 Both matrices A have rank 2. Always A^T A and A A^T have the same rank as A.

20 A = LU = [1 0; 2 1][1 0 1 0; 0 2 2 3] and A = LU = [1 0 0; 2 1 0; 0 3 1][1 0 1 0; 0 2 2 3; 0 0 11 5].

21 (a) [x; y; z] = [4; 0; 0] + y[-1; 1; 0] + z[-1; 0; 1]. (b) [x; y; z] = [4; 0; 0] + z[-1; 0; 1]. The second equation in part (b) removed one special solution.

22 If Ax1 = b and also Ax2 = b, then we can add x1 - x2 to any solution of Ax = b: the solution x is not unique. But there will be no solution to Ax = b if b is not in the column space.

23 For A, q = 3 gives rank 1; every other q gives rank 2. For B, q = 6 gives rank 1; every other q gives rank 2. These matrices cannot have rank 3.

24 (a) [1; 1]x = [b1; b2] has 0 or 1 solutions, depending on b. (b) [1 1][x1; x2] = [b] has infinitely many solutions for every b. (c) There are 0 or infinitely many solutions when A has rank r < m and r < n: the simplest example is a zero matrix. (d) One solution for all b when A is square and invertible (like A = I).

25 (a) r < m, always r <= n (b) r = m, r < n (c) r < m, r = n (d) r = m = n.

26 [2 4 4; 0 3 6; 0 0 5] -> R = I and [2 4 4; 0 3 6; 0 0 0] -> R = [1 0 -2; 0 1 2; 0 0 0].

27 If U has n pivots, then R has n pivots equal to 1. Zeros above and below those pivots make R = I.

28 [1 2 3 | 5; 0 0 4 | 8] -> [1 2 3 | 5; 0 0 1 | 2] -> [R d] = [1 2 0 | -1; 0 0 1 | 2]; x_n = x2(-2, 1, 0). Free x2 = 0 gives x_p = (-1, 0, 2), because the pivot columns contain I.

29 [R d] = [1 0 0 | 0; 0 0 1 | 0; 0 0 0 | 0] leads to x_n = (0, 1, 0); [R d] = [1 0 0 | 1; 0 0 1 | 2; 0 0 0 | 5] has no solution because of the 3rd equation (0 = 5).

30 [1 0 2 3 | 2; 1 3 2 0 | 5; 2 0 4 9 | 10] -> [1 0 2 3 | 2; 0 3 0 -3 | 3; 0 0 0 3 | 6] -> [R d] = [1 0 2 0 | -4; 0 1 0 0 | 3; 0 0 0 1 | 2]; x_p = (-4, 3, 0, 2) and x_n = x3(-2, 0, 1, 0).

31 For A = [1 1; 1 2; 1 3], the only solution to Ax = (1, 2, 3) is x = (0, 1). B cannot exist, since 2 equations in 3 unknowns cannot have a unique solution.

32 A = [1 3 1; 1 2 3; 2 4 6; 1 1 5] factors into LU = [1 0 0 0; 1 1 0 0; 2 2 1 0; 1 2 0 1][1 3 1; 0 -1 2; 0 0 0; 0 0 0] and the rank is r = 2. The special solution to Ax = 0 and Ux = 0 is s = (-7, 2, 1). Since b = (1, 3, 6, 5) is also the last column of A, a particular solution to Ax = b is (0, 0, 1) and the complete solution is x = (0, 0, 1) + cs. (Or use the particular solution x_p = (7, -2, 0) with free variable x3 = 0.) For b = (1, 0, 0, 0), elimination leads to Ux = (1, -1, 0, 1) and the fourth equation is 0 = 1: no solution for this b.

33 If the complete solution to Ax = [1; 3] is x = [1; 0] + [0; c], then A = [1 0; 3 0].

34 (a) If s = (2, 3, 1, 0) is the only special solution to Ax = 0, the complete solution is x = cs (a line of solutions!). The rank of A must be 4 - 1 = 3. (b) The fourth variable x4 is not free in s, and R must be [1 0 -2 0; 0 1 -3 0; 0 0 0 1]. (c) Ax = b can be solved for all b, because A and R have full row rank r = 3.

35 For the -1, 2, -1 matrix K (9 by 9) and constant right side b = (10, ..., 10), the solution x = K^-1 b = (45, 80, 105, 120, 125, 120, 105, 80, 45) rises and falls along the parabola x_i = 50i - 5i^2. (A formula for K^-1 comes later in the text.)

36 If Ax = b and Cx = b have the same solutions, A and C have the same shape and the same nullspace (take b = 0). If b = column 1 of A, then x = (1, 0, ..., 0) solves Ax = b, so it solves Cx = b. Then A and C share column 1. Other columns too: A = C!
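Solution 35's parabola can be confirmed numerically. A NumPy sketch (the second-difference matrix is built from shifted identities):

```python
import numpy as np

# Solution 35: K is the 9x9 second-difference (-1, 2, -1) matrix,
# b = (10, ..., 10), and the solution follows x_i = 50*i - 5*i**2.
n = 9
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = 10*np.ones(n)
x = np.linalg.solve(K, b)
expected = np.array([50*i - 5*i**2 for i in range(1, n + 1)], float)
assert np.allclose(x, expected)
print(x)   # 45, 80, 105, 120, 125, 120, 105, 80, 45
```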

Problem Set 3.5, page 178

1 [1 1 1; 0 1 1; 0 0 1][c1; c2; c3] = 0 gives c3 = c2 = c1 = 0. So those 3 column vectors are independent. But [1 1 1 2; 0 1 1 3; 0 0 1 4]c = 0 is solved by c = (1, 1, -4, 1). Then v1 + v2 - 4v3 + v4 = 0 (dependent).

2 v1, v2, v3 are independent (the 1's are in different positions). All six vectors are on the plane (1, 1, 1, 1) . v = 0, so no four of these six vectors can be independent.

3 If a = 0 then column 1 = 0; if d = 0 then b(column 1) - a(column 2) = 0; if f = 0 then all columns end in zero (they are all in the xy plane, so they must be dependent).

4 Ux = [a b c; 0 d e; 0 0 f][x; y; z] = [0; 0; 0] gives z = 0, then y = 0, then x = 0. A square triangular matrix has independent columns (an invertible matrix) when its diagonal has no zeros.

5 (a) [1 2 3; 3 1 2; 2 3 1] -> [1 2 3; 0 -5 -7; 0 -1 -5] -> [1 2 3; 0 -5 -7; 0 0 -18/5]: invertible => independent columns.
(b) [1 2 -3; -3 1 2; 2 -3 1] -> [1 2 -3; 0 7 -7; 0 -7 7] -> [1 2 -3; 0 7 -7; 0 0 0]; A[1; 1; 1] = 0, the columns add to 0.

6 Columns 1, 2, 4 are independent. Also 1, 3, 4 and 2, 3, 4 and others (but not 1, 2, 3). The same column numbers (not the same columns!) work for A.

7 The sum v1 - v2 + v3 = 0 because (w2 - w3) - (w1 - w3) + (w1 - w2) = 0. So the differences are dependent and the difference matrix is singular: A = [0 1 -1; -1 0 1; 1 -1 0].

8 If c1(w2 + w3) + c2(w1 + w3) + c3(w1 + w2) = 0 then (c2 + c3)w1 + (c1 + c3)w2 + (c1 + c2)w3 = 0. Since the w's are independent, c2 + c3 = c1 + c3 = c1 + c2 = 0. The only solution is c1 = c2 = c3 = 0. Only this combination of v1, v2, v3 gives 0.

9 (a) The four vectors in R^3 are the columns of a 3 by 4 matrix A. There is a nonzero solution to Ax = 0 because there is at least one free variable. (b) Two vectors are dependent if [v1 v2] has rank 0 or 1. (It is OK to say "they are on the same line" or "one is a multiple of the other", but not "v2 is a multiple of v1", since v1 might be 0.) (c) A nontrivial combination of v1 and 0 gives 0: 0v1 + 3(0, 0, 0) = 0.

10 The plane is the nullspace of A = [1 2 -3 -1]. Three free variables give three solutions (x, y, z, t) = (-2, 1, 0, 0), (3, 0, 1, 0) and (1, 0, 0, 1). Combinations of those special solutions give more solutions (all solutions).

11 (a) Line in R^3 (b) Plane in R^3 (c) All of R^3 (d) All of R^3.

12 b is in the column space when Ax = b has a solution; c is in the row space when A^T y = c has a solution. False: the zero vector is always in the row space.

13 The column space and row space of A and U all have the same dimension = 2. The row spaces of A and U are the same, because the rows of U are combinations of the rows of A (and vice versa!).

14 v = (1/2)(v + w) + (1/2)(v - w) and w = (1/2)(v + w) - (1/2)(v - w). The two pairs span the same space. They are a basis when v and w are independent. If they are the columns of A, then m is not less than n (m >= n).

15 The n independent vectors span a space of dimension n. They are a basis for that space.

16 These bases are not unique! (a) (1, 1, 1, 1) for the space of all constant vectors (c, c, c, c). (b) (1, -1, 0, 0), (1, 0, -1, 0), (1, 0, 0, -1) for the space of vectors with sum of components = 0. (c) (1, -1, -1, 0), (1, -1, 0, -1) for the space perpendicular to (1, 1, 0, 0) and (1, 0, 1, 1). (d) The columns of I are a basis for its column space; the empty set is a basis (by convention) for N(I) = {zero vector}.

17 The column space of U = [1 0 1 0 1; 0 1 0 1 0] is R^2, so take any basis for R^2; (row 1 and row 2) or (row 1 and row 1 + row 2) are bases for the row space of U.

18 (a) The 6 vectors might not span R^4. (b) The 6 vectors are not independent. (c) Any four might be a basis.

19 n independent columns => rank n. Columns span R^m => rank m. Columns are a basis for R^m => rank = m = n. The rank counts the number of independent columns.

20 One basis is (2, 1, 0), (-3, 0, 1). A basis for the intersection with the xy plane is (2, 1, 0). The normal vector (1, -2, 3) is a basis for the line perpendicular to the plane.

21 (a) The only solution to Ax = 0 is x = 0, because the columns are independent. (b) Ax = b is solvable because the columns span R^5. Key point: a basis gives exactly one solution for every b.

22 (a) True. (b) False, because the basis vectors for R^6 might not be in S.

23 Columns 1 and 2 are bases for the (different) column spaces of A and U; rows 1 and 2 are bases for the (equal) row spaces of A and U; (1, 1, 1) is a basis for the (equal) nullspaces.

24 (a) False: A = [1 1] has dependent columns but an independent row. (b) False: column space != row space for A = [0 1; 0 0]. (c) True: both dimensions = 2 if A is invertible, dimensions = 0 if A = 0, otherwise dimensions = 1. (d) False: the columns may be dependent, and in that case they are not a basis for C(A).

25 A has rank 2 if c = 0 and d = 2; B = [c d; d c] has rank 2 except when c = d or c = -d.

26 (a) [1 0 0; 0 0 0; 0 0 0], [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 0; 0 0 1]. (b) Add [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0], [0 0 0; 0 0 1; 0 1 0]. (c) [0 1 0; -1 0 0; 0 0 0], [0 0 1; 0 0 0; -1 0 0], [0 0 0; 0 0 1; 0 -1 0]. These are simple bases (among many others) for (a) diagonal matrices, (b) symmetric matrices, (c) skew-symmetric matrices. The dimensions are 3, 6, 3.

27 I, [1 0 0; 0 1 0; 0 0 2], [1 0 0; 0 2 0; 0 0 1], [1 1 0; 0 1 0; 0 0 1], [1 0 1; 0 1 0; 0 0 1], [1 0 0; 0 1 1; 0 0 1]; echelon matrices do not form a subspace; they span the upper triangular matrices (not every U is echelon).

28 [1 0 0; 1 0 0], [0 1 0; 0 1 0], [0 0 1; 0 0 1]; and [1 1 0; 1 1 0], [1 0 1; 1 0 1].

29 (a) The invertible matrices span the space of all 3 by 3 matrices. (b) The rank one matrices also span the space of all 3 by 3 matrices. (c) I by itself spans the space of all multiples cI.

30 [1 -2 0; 0 0 0], [1 0 -2; 0 0 0], [0 0 0; 1 -2 0], [0 0 0; 1 0 -2]: this is one basis for the 2 by 3 matrices with (2, 1, 1) in their nullspace (a 4-dimensional subspace).

31 (a) y(x) = constant solves dy/dx = 0: those solutions form the nullspace. (b) y(x) = 3x is one particular solution of dy/dx = 3. (c) y(x) = 3x + C = y_p + y_n is the complete solution of dy/dx = 3.

32 y(0) = 0 requires A + B + C = 0. One basis is cos x - cos 2x and cos x - cos 3x.

33 (a) y(x) = e^(2x) is a basis for all solutions to y' = 2y. (b) y = x is a basis for all solutions to dy/dx = y/x (a first-order linear equation => 1 basis function in the solution space).

34 y1(x), y2(x), y3(x) can be x, 2x, 3x (dim 1) or x, 2x, x^2 (dim 2) or x, x^2, x^3 (dim 3).

35 Basis 1, x, x^2, x^3 for cubic polynomials; basis x - 1, x^2 - 1, x^3 - 1 for the subspace with p(1) = 0.

36 Basis for S: (1, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 1); basis for T: (1, 1, 0, 0) and (0, 0, 2, 1). S ∩ T = multiples of (3, 3, 2, 1): the nullspace of 3 equations in R^4 has dimension 1.

37 The subspace of matrices with AS = SA has dimension three.

38 (a) No, 2 vectors don't span R^3. (b) No, 4 vectors in R^3 are dependent. (c) Yes, a basis. (d) No, these three vectors are dependent.

39 If the 5 by 5 matrix [A b] is invertible, b is not a combination of the columns of A. If [A b] is singular, and the 4 columns of A are independent, then b is a combination of those columns. In that case Ax = b has a solution.

40 (a) The functions y = sin x, y = cos x, y = e^x, y = e^(-x) are a basis for the solutions to d^4y/dx^4 = y(x). (b) A particular solution to d^4y/dx^4 = y(x) + 1 is y(x) = -1. The complete solution is y(x) = -1 + c1 sin x + c2 cos x + c3 e^x + c4 e^(-x) (or use another basis for the nullspace of the 4th derivative).

41 The six 3 by 3 permutation matrices are dependent: I is a combination of the other five. Those five are independent: the 4th has P11 = 1 and cannot be a combination of the others. Then the 2nd cannot be (from P32 = 1) and also the 5th (P32 = 1). Continuing, a nonzero combination of all five could not be zero. Further challenge: how many independent 4 by 4 permutation matrices?

42 The dimension of the space S spanned by all rearrangements of x is (a) zero when x = 0 (b) one when x = (1, 1, 1, 1) (c) three when x = (1, 1, -1, -1), because all rearrangements of this x are perpendicular to (1, 1, 1, 1) (d) four when the components are not all equal and don't add to zero. No x gives dim S = 2. I owe this nice problem to Mike Artin; the answers are the same in higher dimensions: 0, 1, n - 1, n.

43 The problem is to show that the u's, v's, w's together are independent. We know the u's and v's together are a basis for V, and the u's and w's together are a basis for W. Suppose a combination of u's, v's, w's gives 0. To be proved: all coefficients = zero. Key idea: in that combination giving 0, the part x from the u's and v's is in V. So the part from the w's is -x. This part is now in V and also in W. But if -x is in V ∩ W, it is a combination of u's only. Now the combination uses only u's and v's (independent in V!), so all coefficients of u's and v's must be zero. Then x = 0 and the coefficients of the w's are also zero.

44 The inputs to an m by n matrix fill R^n. The outputs (the column space!) have dimension r. The nullspace has n - r special solutions. The formula becomes r + (n - r) = n.

45 If the left side of dim(V) + dim(W) = dim(V ∩ W) + dim(V + W) is greater than n, then dim(V ∩ W) must be greater than zero. So V ∩ W contains nonzero vectors.

46 If A^2 = zero matrix, each column of A is in the nullspace of A. If the column space has dimension r, the nullspace has dimension 10 - r. Then r <= 10 - r, so r <= 5.
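The counting theorem of Solution 44, r + (n - r) = n, is easy to check in SymPy, since nullspace() returns one special solution per free column (the matrix below is an arbitrary rank-2 example):

```python
import sympy as sp

# rank + (number of special solutions) = number of columns.
A = sp.Matrix([[1, 2, 2, 4], [1, 3, 3, 6], [2, 5, 5, 10]])
r = len(A.rref()[1])              # number of pivot columns = rank
assert r == 2
assert r + len(A.nullspace()) == A.cols
print(r, len(A.nullspace()), A.cols)   # 2 2 4
```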

Problem Set 3.6, page 190

1 (a) Row and column space dimensions = 5, nullspace dimension = 4, dim(N(A^T)) = 2; the sum is 16 = m + n. (b) The column space is R^3; the left nullspace contains only 0.

2 A: row space basis = row 1 = (1, 2, 4); nullspace basis (-2, 1, 0) and (-4, 0, 1); column space basis = column 1 = (1, 2); left nullspace basis (-2, 1). B: row space basis = both rows = (1, 2, 4) and (2, 5, 8); column space basis = the two columns = (1, 2) and (2, 5); nullspace basis (-4, 0, 1); left nullspace basis is empty because that space contains only y = 0.

3 Row space basis = the rows of U = (0, 1, 2, 3, 4) and (0, 0, 0, 1, 2); column space basis = pivot columns (of A, not U) = (1, 1, 0) and (3, 4, 1); nullspace basis (1, 0, 0, 0, 0), (0, -2, 1, 0, 0), (0, 2, 0, -2, 1); left nullspace basis (1, 1, 1) = last row of E^-1!

4 (a) [1 0; 1 0] (b) Impossible: r + (n - r) must be 3 (c) [1 1] (d) [9 -3; 3 -1] (e) Impossible.

5 Row space = column space requires m = n. Then m - r = n - r: the nullspaces have the same dimension. Section 4.1 will prove N(A) and N(A^T) orthogonal to the row and column spaces respectively; here those are the same space.

6 A = [1 1 1; 1 2 1] has those rows spanning its row space. B = [1 0 -1] has the same rows spanning its nullspace, and BA^T = 0.


7 Invertible 3 by 3 matrix A: row space basis = column space basis = (1, 0, 0), (0, 1, 0), (0, 0, 1); the nullspace basis and left nullspace basis are empty. Matrix B = [A A]: row space basis (1, 0, 0, 1, 0, 0), (0, 1, 0, 0, 1, 0) and (0, 0, 1, 0, 0, 1); column space basis (1, 0, 0), (0, 1, 0), (0, 0, 1); nullspace basis (-1, 0, 0, 1, 0, 0), (0, -1, 0, 0, 1, 0) and (0, 0, -1, 0, 0, 1); left nullspace basis is empty.

8 [I 0] and [I I; 0 0] and 0 (3 by 2) have row space dimensions 3, 3, 0 = column space dimensions; nullspace dimensions 2, 3, 2; left nullspace dimensions 0, 2, 3.

9 (a) The same row space and nullspace, so the rank (dimension of the row space) is the same. (b) The same column space and left nullspace: same rank (dimension of the column space).

10 For rand(3), almost surely rank = 3; the nullspace and left nullspace contain only (0, 0, 0). For rand(3, 5) the rank is almost surely 3 and the dimension of the nullspace is 2.

11 (a) No solution means that r < m. Always r <= n. We can't compare m and n here. (b) Since m - r > 0, the left nullspace must contain a nonzero vector.

12 A neat choice is [1; 2][1 2 0] = [1 2 0; 2 4 0]. The requested dimensions are impossible: r + (n - r) = n = 3 does not match 2 + 2 = 4. Only v = 0 is in both N(A) and C(A^T).

13 (a) False: usually row space != column space (they do have the same dimension!). (b) True: A and -A have the same four subspaces. (c) False (choose A and B of the same size and invertible: then they have the same four subspaces).

14 Row space basis can be the nonzero rows of U: (1, 2, 3, 4), (0, 1, 2, 3), (0, 0, 1, 2); nullspace basis (0, 1, -2, 1) as for U; column space basis (1, 0, 0), (0, 1, 0), (0, 0, 1) (we happen to have C(A) = C(U) = R^3); the left nullspace has an empty basis.

15 After a row exchange, the row space and nullspace stay the same; (2, 1, 3, 4) is in the new left nullspace after the row exchange.

16 If Av = 0 and v is a row of A, then v . v = 0, so v = 0.

17 Row space = yz plane; column space = xy plane; nullspace = x axis; left nullspace = z axis. For I + A: row space = column space = R^3, and both nullspaces contain only the zero vector.

18 Row 3 - 2(row 2) + row 1 = zero row, so the vectors c(1, -2, 1) are in the left nullspace. The same vectors happen to be in the nullspace (an accident for this matrix).

19 (a) Elimination on Ax = b leads to 0 = b3 - b2 - b1, so (-1, -1, 1) is in the left nullspace. (b) 4 by 3: elimination leads to b3 - 2b1 = 0 and b4 + b2 - 4b1 = 0, so (-2, 0, 1, 0) and (-4, 1, 0, 1) are in the left nullspace. Why? Those vectors multiply the matrix to give zero rows. Section 4.1 will show another approach: Ax = b is solvable (b is in C(A)) exactly when b is orthogonal to the left nullspace.

20 (a) The special solutions (-1, 2, 0, 0) and (-1/4, 0, -3, 1) are perpendicular to the rows of R (and then of ER). (b) A^T y = 0 has 1 independent solution = last row of E^-1 (EA = R has a zero row, which is just the transpose of A^T y = 0).

21 (a) u and w (b) v and z (c) rank < 2 if u and w are dependent or if v and z are dependent (d) the rank of uv^T + wz^T is 2.

22 A = [u w][v z]^T = [1 2; 2 2; 4 1][1 0; 1 1] = [3 2; 4 2; 5 1] has column space spanned by u and w, row space spanned by v and z.
42

Solutions to Exercises

23 As in Problem 22: Row space basis .3; 0; 3/; .1; 1; 2/; column space basis .1; 4; 2/,

24 25

26 27 28

29 30 31

32

.2; 5; 7/; the rank of (3 by 2) times (2 by 3) cannot be larger than the rank of either factor, so rank 2 and the 3 by 3 product is not invertible. AT y D d puts d in the row space of A; unique solution if the left nullspace (nullspace of AT ) contains only y D 0. (a) True (A and AT have the same rank) (b) False A D ? 1 0 ? and AT have very different left nullspaces (c) False (A can be invertible and unsymmetric even if C .A/ D C .AT /) (d) True (The subspaces for A and A are always the same. If AT D A or AT D A they are also the same for AT ) The rows of C D AB are combinations of the rows of B . So rank C rank B . Also rank C rank A, because the columns of C are combinations of the columns of A. b Choose d D bc=a to make a c d a rank-1 matrix. Then the row space has basis .a; b/ and the nullspace has basis . b; a/. Those two vectors are perpendicular ! B and C (checkers and chess) both have rank 2 if p ¤ 0. Row 1 and 2 are a basis for the row space of C , B T y D 0 has 6 special solutions with 1 and 1 separated by a zero; N.C T / has . 1; 0; 0; 0; 0; 0; 0; 1/ and .0; 1; 0; 0; 0; 0; 1; 0/ and columns 3; 4; 5; 6 of I ; N.C / is a challenge. a11 D 1; a12 D 0; a13 D 1; a22 D 0; a32 D 1; a31 D 0; a23 D 1; a33 D 0; a21 D 1. The subspaces for A D uvT are pairs of orthogonal lines (v and v? , u and u? ). If B has those same four subspaces then B D cA with c ¤ 0. (a) AX D 0 if each column of X is a multiple of .1; 1; 1/; dim.nullspace/ D 3. (b) If AX D B then all columns of B add to zero; dimension of the B ’s D 6. (c) 3 C 6 D dim.M 33 / D 9 entries in a 3 by 3 matrix. The key is equal row spaces. First row of A D combination of the rows of B : only possible combination (notice I ) is 1 (row 1 of B ). Same for each row so F D G .

Problem Set 4.1, page 202

1 Both nullspace vectors are orthogonal to the row space vector in R3 . The column space 2

3

4

5

is perpendicular to the nullspace of AT (two lines in R2 because rank D 1). The nullspace of a 3 by 2 matrix with rank 2 is Z (only zero vector) so x n D 0, and row space D R2 . Column space D plane perpendicular to left nullspace D line in R3 . " # " # " # " # " # 1 2 3 2 1 1 1 2 3 1 (b) Impossible, 3 not orthogonal to 1 (c) 1 and 0 in (a) 3 5 2 5 1 1 0 1 C .A/ and N .AT / is impossible: not perpendicular (d) Need A2 D 0; take A D 1 1 1 (e) .1; 1; 1/ in the nullspace (columns add to 0) and also row space; no such matrix. If AB D 0, the columns of B are in the nullspace of A. The rows of A are in the left nullspace of B . If rank D 2, those four subspaces would have dimension 2 which is impossible for 3 by 3. (a) If Ax D b has a solution and AT y D 0, then y is perpendicular to b. bT y D .Ax /T y D x T .AT y / D 0. (b) If AT y D .1; 1; 1/ has a solution, .1; 1; 1/ is in the row space and is orthogonal to every x in the nullspace.

Solutions to Exercises

43

6 Multiply the equations by y1 ; y2 ; y3 D 1; 1; 1. Equations add to 0 D 1 so no solution: 7

8 9 10 11

12 13

14

15 16

19 L? is the 2-dimensional subspace (a plane) in R3 perpendicular to L. Then .L? /? is

spanned by .1; 1; 1/, then S ? is the plane spanned by .1; 1; 0/ and .1; 0; 1/. If S is spanned by .2; 0; 0/ and .0; 0; 3/, then S ? is the line spanned by .0; 1; 0/. 1 5 1 ? . Therefore S ? is a subspace even if S is not. 18 S is the nullspace of A D 2 2 2 a 1-dimensional subspace (a line) perpendicular to L? . In fact .L? /? is L.

17 If S is the subspace of R3 containing only the zero vector, then S ? is R3 . If S is

y D .1; 1; 1/ is in the left nullspace. Ax D b would need 0 D .y T A/x D y T b D 1. Multiply the 3 equations by y D .1; 1; 1/. Then x1 x2 D 1 plus x2 x3 D 1 minus x1 x3 D 1 is 0 D 1. Key point: This y in N .AT / is not orthogonal to b D .1; 1; 1/ so b is not in the column space and Ax D b has no solution. x D x r C x n , where x r is in the row space and x n is in the nullspace. Then Ax n D 0 and Ax D Ax r C Ax n D Ax r . All Ax are in C .A/. Ax is always in the column space of A. If AT Ax D 0 then Ax is also in the nullspace of AT . So Ax is perpendicular to itself. Conclusion: Ax D 0 if AT Ax D 0. (a) With AT D A, the column and row spaces are the same (b) x is in the nullspace and z is in the column space = row space: so these “eigenvectors” have x T z D 0. For A: The nullspace is spanned by . 2; 1/, the row space is spanned by .1; 2/. The column space is the line through .1; 3/ and N .AT / is the perpendicular line through .3; 1/. For B: The nullspace of B is spanned by .0; 1/, the row space is spanned by .1; 0/. The column space and left nullspace are the same as for A. x splits into x r C x n D .1; 1/ C .1; 1/ D .2; 0/. Notice N .AT / is a plane .1; 0/ D .1; 1/=2 C .1; 1/=2 D x r C x n . V T W D zero makes each basis vector for V orthogonal to each basis vector for W . Then every v in V is orthogonal to every w in W (combinations of the basis vectors). x Ax D B b x means that ? A B ? D 0. Three homogeneous equations in four b x unknowns always have a nonzero solution. Here x D .3; 1/ and b x D .1; 0/ and Ax D B b x D .5; 6; 5/ is in both column spaces. Two planes in R3 must share a line. A p-dimensional and a q -dimensional subspace of Rn share at least a line if p C q > n. (The p C q basis vectors of V and W cannot be independent.) AT y D 0 leads to .Ax /T y D x T AT y D 0. Then y ? Ax and N .AT / ? C .A/.

1 2 2 3 21 For example . 5; 0; 1; 1/ and .0; 1; 1; 0/ span S D nullspace of A D . 1 3 3 2 22 .1; 1; 1; 1/ is a basis for P ? . A D 1 1 1 1 has P as its nullspace and P ? as row space.

?

20 If V is the whole space R4 , then V ? contains only the zero vector. Then .V ? /? D

R4 D V .

23 x in V ? is perpendicular to any vector in V . Since V contains all the vectors in S ,

x is also perpendicular to any vector in S . So every x in V ? is also in S ? .

Solutions to Exercises

24 AA^{-1} = I: column 1 of A^{-1} is orthogonal to the space spanned by the 2nd, 3rd, ..., nth rows of A.

25 If the columns of A are unit vectors, all mutually perpendicular, then A^T A = I.

26 This example shows a matrix with perpendicular columns: A = [2 -1 2; 2 2 -1; -1 2 2]. A^T A = 9I is diagonal: (A^T A)_ij = (column i of A) . (column j of A). When the columns are unit vectors, then A^T A = I.

27 The lines 3x + y = b1 and 6x + 2y = b2 are parallel. They are the same line if b2 = 2b1. In that case (b1, b2) is perpendicular to (-2, 1). The nullspace of the 2 by 2 matrix is the line 3x + y = 0. One particular vector in that nullspace is (-1, 3).

28 (a) (1, -1, 0) is in both planes. Normal vectors are perpendicular, but the planes still intersect! (b) We need three orthogonal vectors to span the whole orthogonal complement. (c) Lines can meet at the zero vector without being orthogonal.

29 A = [1 2 3; 2 1 0; 3 0 1] has v = (1, 2, 3) in its row space and column space; B = [1 1 -1; 2 -1 0; 3 0 -1] has v in its column space and nullspace. v can not be in the nullspace and row space, or in the left nullspace and column space: those pairs of spaces are orthogonal and v^T v is not zero.

30 When AB = 0, the column space of B is contained in the nullspace of A. Therefore the dimension of C(B) is at most the dimension of N(A). This means rank(B) <= 4 - rank(A).

31 null(N) produces a basis for the row space of A (perpendicular to N(A)).

32 We need r^T n = 0 and c^T l = 0. All possible examples have the form a c r^T with a not zero.

33 Both r's orthogonal to both n's, both c's orthogonal to both l's, each pair independent. All A's with these subspaces have the form [c1 c2] M [r1 r2]^T for a 2 by 2 invertible M.

Problem Set 4.2, page 214

1 (a) a^T b/a^T a = 5/3; p = 5a/3; e = (-2, 1, 1)/3 (b) a^T b/a^T a = -1; p = -a; e = 0.

2 (a) The projection of b = (cos t, sin t) onto a = (1, 0) is p = (cos t, 0) (b) The projection of b = (1, 1) onto a = (1, -1) is p = (0, 0) since a^T b = 0.

3 P1 = (1/3)[1 1 1; 1 1 1; 1 1 1] and P1 b = (1/3)(5, 5, 5). P2 = (1/11)[1 3 1; 3 9 3; 1 3 1] and P2 b = (1/11)(9, 27, 9). P1 projects onto (1, 1, 1), P2 projects onto (1, 3, 1).

4 P1 = [1 0; 0 0] projects onto (1, 0); P2 = (1/2)[1 1; 1 1] projects onto (1, 1). P1 P2 is not 0 and P1 + P2 is not a projection matrix.

5 P1 = (1/9)[1 -2 -2; -2 4 4; -2 4 4] and P2 = (1/9)[4 4 -2; 4 4 -2; -2 -2 1]. P1 and P2 are the projection matrices onto the lines through a1 = (-1, 2, 2) and a2 = (2, 2, -1). P1 P2 = zero matrix because a1 is perpendicular to a2.

6 p1 = (1/9, -2/9, -2/9), p2 = (4/9, 4/9, -2/9), and p3 = (4/9, -2/9, 4/9). So p1 + p2 + p3 = b = (1, 0, 0).


7 P1 + P2 + P3 = (1/9)[1 -2 -2; -2 4 4; -2 4 4] + (1/9)[4 4 -2; 4 4 -2; -2 -2 1] + (1/9)[4 -2 4; -2 1 -2; 4 -2 4] = I. We can add projections onto orthogonal vectors. This is important.

8 The projections of b = (1, 1) onto (1, 0) and (1, 2) are p1 = (1, 0) and p2 = (0.6, 1.2). Then p1 + p2 is not b.

9 Since A is invertible, P = A(A^T A)^{-1} A^T = A A^{-1}(A^T)^{-1} A^T = I: project onto all of R^2.

10 P2 = [0.2 0.4; 0.4 0.8], P2 a1 = (0.2, 0.4), P1 = [1 0; 0 0], P1 P2 a1 = (0.2, 0). This is not a1 = (1, 0). No, P1 P2 is not equal to (P1 P2)^2.

11 (a) p = A(A^T A)^{-1} A^T b = (2, 3, 0), e = (0, 0, 4), A^T e = 0 (b) p = (4, 4, 6), e = 0.

12 P1 = [1 0 0; 0 1 0; 0 0 0] = projection matrix onto the column space of A (the xy plane). P2 = [0.5 0.5 0; 0.5 0.5 0; 0 0 1] = projection matrix onto the second column space. Certainly (P2)^2 = P2.

13 A = [1 0 0; 0 1 0; 0 0 1; 0 0 0], P = square matrix [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0], and p = P(1, 2, 3, 4) = (1, 2, 3, 0).

14 The projection of this b onto the column space of A is b itself, because b is in that space. But P is not necessarily I: P = (1/21)[5 8 -4; 8 17 2; -4 2 20] and b = (0, 2, 4) gives Pb = p = b.

15 2A has the same column space as A. x-hat for 2A is half of x-hat for A.

16 (1/2)(1, 2, -1) + (3/2)(1, 0, 1) = (2, 1, 1). So b is in the plane. Projection shows Pb = b.

17 If P^2 = P then (I - P)^2 = (I - P)(I - P) = I - P - P + P^2 = I - P. When P projects onto the column space, I - P projects onto the left nullspace.

18 (a) I - P is the projection matrix onto (1, -1), the direction perpendicular to (1, 1) (b) I - P projects onto the plane x + y + z = 0 perpendicular to (1, 1, 1).

19 For any basis vectors in the plane x - y - 2z = 0, say (1, 1, 0) and (2, 0, 1), the matrix P is [5/6 1/6 1/3; 1/6 5/6 -1/3; 1/3 -1/3 1/3].

20 e = (1, -1, -2), Q = e e^T / e^T e = [1/6 -1/6 -1/3; -1/6 1/6 1/3; -1/3 1/3 2/3], and I - Q = [5/6 1/6 1/3; 1/6 5/6 -1/3; 1/3 -1/3 1/3] as in Problem 19.

21 A(A^T A)^{-1} A^T . A(A^T A)^{-1} A^T = A(A^T A)^{-1}(A^T A)(A^T A)^{-1} A^T = A(A^T A)^{-1} A^T. So P^2 = P. Also: Pb is in the column space (where P projects), so its projection P(Pb) is Pb.

22 P^T = (A(A^T A)^{-1} A^T)^T = A((A^T A)^{-1})^T A^T = A(A^T A)^{-1} A^T = P. (A^T A is symmetric!)

23 If A is invertible then its column space is all of R^n. So P = I and e = 0.

24 The nullspace of A^T is orthogonal to the column space C(A). So if A^T b = 0, the projection of b onto C(A) should be p = 0. Check: Pb = A(A^T A)^{-1} A^T b = A(A^T A)^{-1} 0 = 0.
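The projection-matrix identities used throughout this set (P = A(A^T A)^{-1} A^T with P^2 = P and P^T = P) are easy to confirm numerically. A minimal sketch, using the plane x - y - 2z = 0 with the basis vectors (1, 1, 0) and (2, 0, 1) from Problem 19 (NumPy assumed):

```python
import numpy as np

# Columns of A span the plane x - y - 2z = 0
A = np.array([[1., 2.],
              [1., 0.],
              [0., 1.]])

# Projection onto the column space of A
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.round(P, 4))         # entries 5/6, 1/6, 1/3, ... as in Problem 19
print(np.allclose(P @ P, P))  # True: projecting twice changes nothing
print(np.allclose(P.T, P))    # True: P is symmetric
```

Applying P to the plane's normal vector (1, -1, -2) gives the zero vector, which is the I - P statement of Problem 20 in disguise.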


25 The column space of P will be S. Then r = dimension of S = n.

26 A^{-1} exists since the rank is r = m. Multiply A^2 = A by A^{-1} to get A = I.

27 If A^T Ax = 0 then Ax is in the nullspace of A^T. But Ax is always in the column space of A. To be in both of those perpendicular spaces, Ax must be zero. So A and A^T A have the same nullspace.

28 P^2 = P = P^T give P^T P = P. Then the (2, 2) entry of P equals the (2, 2) entry of P^T P, which is the length squared of column 2.

29 A = B^T has independent columns, so A^T A (which is BB^T) must be invertible.

30 (a) The column space is the line through a = (3, 4), so P_C = a a^T / a^T a = (1/25)[9 12; 12 16]. (b) The row space is the line through v = (1, 2, 2) and P_R = v v^T / v^T v. Always P_C A = A (columns of A project to themselves) and A P_R = A. Then P_C A P_R = A!

31 The error e = b - p must be perpendicular to all the a's.

32 Since P1 b is in C(A), P2(P1 b) equals P1 b. So P2 P1 = P1 = a a^T / a^T a where a = (1, 2, 0).

33 If P1 P2 = P2 P1 then S is contained in T or T is contained in S.

34 BB^T is invertible as in Problem 29. Then (A^T A)(BB^T) is a product of r by r invertible matrices, so it has rank r. AB can't have rank < r, since A^T and B^T cannot increase the rank. Conclusion: A (m by r of rank r) times B (r by n of rank r) produces AB of rank r.

Problem Set 4.3, page 226

1 A = [1 0; 1 1; 1 3; 1 4] and b = (0, 8, 8, 20) give A^T A = [4 8; 8 26] and A^T b = (36, 112). A^T A x-hat = A^T b gives x-hat = (1, 4), p = A x-hat = (1, 5, 13, 17), and e = b - p = (-1, 3, -5, 3). E = (norm of e)^2 = 44.

2 This Ax = b is unsolvable. Change b to p = Pb = (1, 5, 13, 17); then x-hat = (1, 4) exactly solves A x-hat = p.

3 In Problem 2, p = A(A^T A)^{-1} A^T b = (1, 5, 13, 17) and e = b - p = (-1, 3, -5, 3). This e is perpendicular to both columns of A. The shortest distance, the norm of e, is the square root of 44.

4 E = (C + 0D)^2 + (C + 1D - 8)^2 + (C + 3D - 8)^2 + (C + 4D - 20)^2. Then dE/dC = 2C + 2(C + D - 8) + 2(C + 3D - 8) + 2(C + 4D - 20) = 0 and dE/dD = 1·2(C + D - 8) + 3·2(C + 3D - 8) + 4·2(C + 4D - 20) = 0. These normal equations are again [4 8; 8 26][C; D] = [36; 112].
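The normal equations for this data set can be solved directly in code. A minimal sketch (NumPy assumed; same t = (0, 1, 3, 4) and b = (0, 8, 8, 20) as above):

```python
import numpy as np

# Least-squares line C + D t through four measurements
t = np.array([0., 1., 3., 4.])
b = np.array([0., 8., 8., 20.])
A = np.column_stack([np.ones_like(t), t])   # columns: ones and t

# Normal equations  A^T A x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
p = A @ x_hat                               # projection (1, 5, 13, 17)
e = b - p                                   # error (-1, 3, -5, 3)

print(x_hat)     # [1. 4.]  -> best line 1 + 4t
print(e @ e)     # 44.0, the minimized squared error E
```

The check A^T e = 0 confirms that the error is perpendicular to both columns of A, which is the whole content of the normal equations.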

5 E = (C - 0)^2 + (C - 8)^2 + (C - 8)^2 + (C - 20)^2. A^T = [1 1 1 1] and A^T A = [4]. A^T b = [36] and (A^T A)^{-1} A^T b = 9 = best height C. Errors e = (-9, -1, -1, 11).

6 a = (1, 1, 1, 1) and b = (0, 8, 8, 20) give x-hat = a^T b / a^T a = 9 and the projection is x-hat a = p = (9, 9, 9, 9). Then e^T a = (-9, -1, -1, 11)^T (1, 1, 1, 1) = 0 and the norm of e is the square root of 204.

7 A = [0 1 3 4]^T, A^T A = [26] and A^T b = [112]. Best D = 112/26 = 56/13.

8 x-hat = 56/13, p = (56/13)(0, 1, 3, 4). (C, D) = (9, 56/13) don't match (C, D) = (1, 4). The columns of A were not perpendicular, so we can't project separately to find C and D.

9 Parabola C + Dt + Et^2: project b = (0, 8, 8, 20) to the 3-dimensional column space of A = [1 0 0; 1 1 1; 1 3 9; 1 4 16]. A^T A x-hat = A^T b is [4 8 26; 8 26 92; 26 92 338][C; D; E] = [36; 112; 400].

10 [1 0 0 0; 1 1 1 1; 1 3 9 27; 1 4 16 64][C; D; E; F] = [0; 8; 8; 20]. Then (C, D, E, F) = (0, 47/3, -28/3, 5/3). Exact cubic, so p = b and e = 0. This Vandermonde matrix gives exact interpolation by a cubic at t = 0, 1, 3, 4.

11 (a) The best line is b = 1 + 4t; it gives the center point b-bar = 9 at t-bar = 2. (b) The first normal equation Cm + D(sum of t_i) = sum of b_i, divided by m, gives C + D t-bar = b-bar.

12 (a) a = (1, ..., 1) has a^T a = m and a^T b = b_1 + ... + b_m. Therefore x-hat = a^T b / m is the mean of the b's. (b) e = b - x-hat a and (norm of e)^2 = sum over i of (b_i - x-hat)^2 = the variance. (c) b = (1, 2, 6) gives p = (3, 3, 3) and e = (-2, -1, 3) with p^T e = 0. P = (1/3)[1 1 1; 1 1 1; 1 1 1].

13 (A^T A)^{-1} A^T (b - Ax) = x-hat - x. When e = b - Ax averages to zero, so does x-hat - x.

14 The matrix (x-hat - x)(x-hat - x)^T is (A^T A)^{-1} A^T (b - Ax)(b - Ax)^T A (A^T A)^{-1}. When the average of (b - Ax)(b - Ax)^T is s^2 I, the average of (x-hat - x)(x-hat - x)^T will be the output covariance matrix (A^T A)^{-1} A^T s^2 I A (A^T A)^{-1}, which simplifies to s^2 (A^T A)^{-1}.

15 When A has one column of ones, Problem 14 gives the expected error (x-hat - x)^2 as s^2/m. By taking m measurements, the variance drops from s^2 to s^2/m.

16 (1/10) b_10 + (9/10) x-hat_9 = (1/10)(b_1 + ... + b_10). Knowing x-hat_9 avoids adding all ten b's.

17 [1 -1; 1 1; 1 2][C; D] = [7; 7; 21]. The solution x-hat = (9, 4) comes from the normal equations [3 2; 2 6][C; D] = [35; 42].

18 p = A x-hat = (5, 13, 17) gives the heights of the closest line. The error is b - p = (2, -6, 4). This error e has Pe = Pb - Pp = p - p = 0.

19 If b = the error e, then b is perpendicular to the column space of A. Projection p = 0.

20 If b = A x-hat = (5, 13, 17) then x-hat = (9, 4) and e = 0 since b is in the column space of A.

21 e is in N(A^T); p is in C(A); x-hat is in C(A^T); N(A) = {0} = zero vector only.


22 The least squares equation is [5 0; 0 10][C; D] = [5; -10]. Solution: C = 1, D = -1. Line b = 1 - t. Symmetric t's give a diagonal A^T A.

23 e is orthogonal to p; then (norm of e)^2 = e^T (b - p) = e^T b = b^T b - b^T p.

24 The derivatives of (norm of Ax - b)^2 = x^T A^T A x - 2 b^T A x + b^T b (this last term is constant) are zero when 2 A^T A x = 2 A^T b, or x = (A^T A)^{-1} A^T b.

25 Three points on a line: equal slopes (b2 - b1)/(t2 - t1) = (b3 - b2)/(t3 - t2). Linear algebra: orthogonal to (1, 1, 1) and (t1, t2, t3) is y = (t2 - t3, t3 - t1, t1 - t2) in the left nullspace, and b is in the column space. Then y^T b = 0 is the same equal-slopes condition written as (b2 - b1)(t3 - t2) = (b3 - b2)(t2 - t1).

26 Fitting the plane C + Dx + Ey: A = [1 1 0; 1 -1 0; 1 0 1; 1 0 -1] has A^T A = [4 0 0; 0 2 0; 0 0 2] and A^T b = (8, -2, -3), so (C, D, E) = (2, -1, -3/2). At x, y = 0, 0 the best plane 2 - x - (3/2)y has height C = 2 = average of 0, 1, 3, 4.

27 The shortest link connecting two lines in space is perpendicular to those lines.

28 Only one plane contains 0, a1, a2 unless a1, a2 are dependent. Same test for a1, ..., a_{n-1}.

29 There is exactly one hyperplane containing the n points 0, a1, ..., a_{n-1} when the n - 1 vectors a1, ..., a_{n-1} are linearly independent. (For n = 3, the vectors a1 and a2 must be independent. Then the three points 0, a1, a2 determine a plane.) The equation of the hyperplane in R^n will be a_n^T x = 0. Here a_n is any nonzero vector on the line (it is only a line!) perpendicular to a1, ..., a_{n-1}.

Problem Set 4.4, page 239

1 (a) Independent (b) Independent and orthogonal (c) Independent and orthonormal. For orthonormal vectors, (a) becomes (1, 0), (0, 1) and (b) is (.6, .8), (.8, -.6).

2 Divide by length 3 to get q1 = (2/3, 2/3, -1/3) and q2 = (-1/3, 2/3, 2/3). Q^T Q = I but QQ^T = [5/9 2/9 -4/9; 2/9 8/9 2/9; -4/9 2/9 5/9].

3 (a) A^T A will be 16I (b) A^T A will be diagonal with entries 1, 4, 9.

4 (a) Q = [1 0; 0 1; 0 0] has QQ^T = [1 0 0; 0 1 0; 0 0 0], not I. Any Q with n < m has QQ^T not equal to I. (b) (1, 0) and (0, 0) are orthogonal, not independent. Nonzero orthogonal vectors are independent. (c) Starting from q1 = (1, 1, 1)/sqrt(3), my favorite is q2 = (1, -1, 0)/sqrt(2) and q3 = (1, 1, -2)/sqrt(6).

5 Orthogonal vectors are (1, -1, 0) and (1, 1, -1). Orthonormal are (1/sqrt(2), -1/sqrt(2), 0) and (1/sqrt(3), 1/sqrt(3), -1/sqrt(3)).

6 Q1 Q2 is orthogonal because (Q1 Q2)^T Q1 Q2 = Q2^T Q1^T Q1 Q2 = Q2^T Q2 = I.

7 When Gram-Schmidt gives Q with orthonormal columns, Q^T Q x-hat = Q^T b becomes x-hat = Q^T b.

8 If q1 and q2 are orthonormal vectors in R^5, then (q1^T b) q1 + (q2^T b) q2 is closest to b.

9 (a) Q = [.8 .6; .6 -.8; 0 0] has P = QQ^T = [1 0 0; 0 1 0; 0 0 0] (b) (QQ^T)(QQ^T) = Q(Q^T Q)Q^T = QQ^T.

10 (a) If q1, q2, q3 are orthonormal, then the dot product of q1 with c1 q1 + c2 q2 + c3 q3 = 0 gives c1 = 0. Similarly c2 = c3 = 0. Independent q's. (b) Qx = 0 implies Q^T Qx = 0 implies x = 0.

11 (a) Two orthonormal vectors are q1 = (1/10)(1, 3, 4, 5, 7) and q2 = (1/10)(-7, 3, 4, -5, 1) (b) Closest in the plane: project QQ^T (1, 0, 0, 0, 0) = (0.5, -0.18, -0.24, 0.4, 0).

12 (a) Orthonormal a's: a1^T b = a1^T (x1 a1 + x2 a2 + x3 a3) = x1 (a1^T a1) = x1 (b) Orthogonal a's: a1^T b = x1 (a1^T a1). Therefore x1 = a1^T b / a1^T a1 (c) x1 is the first component of A^{-1} times b.

13 The multiple to subtract is a^T b / a^T a. Then B = b - (a^T b / a^T a) a = (4, 0) - 2 (1, 1) = (2, -2).

14 [1 4; 1 0] = [q1 q2][norm of a, q1^T b; 0, norm of B] = [1/sqrt(2) 1/sqrt(2); 1/sqrt(2) -1/sqrt(2)][sqrt(2) 2 sqrt(2); 0 2 sqrt(2)] = QR.

15 (a) q1 = (1/3)(1, 2, -2), q2 = (1/3)(2, 1, 2), q3 = (1/3)(2, -2, -1) (b) The nullspace of A^T contains q3 (c) x-hat = (A^T A)^{-1} A^T (1, 2, 7) = (1, 2).

16 The projection p = (a^T b / a^T a) a = 14a/49 = 2a/7 is closest to b; q1 = a / (norm of a) = a/7 is (4, 5, 2, 2)/7. B = b - p = (-1, 4, -4, -4)/7 has norm 1, so q2 = B.

17 p = (a^T b / a^T a) a = (3, 3, 3) and e = (-2, 0, 2). q1 = (1, 1, 1)/sqrt(3) and q2 = (-1, 0, 1)/sqrt(2).

18 A = a = (1, -1, 0, 0); B = b - p = (1/2, 1/2, -1, 0); C = c - p_A - p_B = (1/3, 1/3, 1/3, -1). Notice the pattern in those orthogonal A, B, C. In R^5, D would be (1/4, 1/4, 1/4, 1/4, -1).

19 If A = QR then A^T A = R^T Q^T Q R = R^T R = lower triangular times upper triangular. (This Cholesky factorization of A^T A uses the same R as Gram-Schmidt!) The example A = [-1 1; 2 1; 2 4] = QR has Q = (1/3)[-1 2; 2 -1; 2 2] and R = [3 3; 0 3], and the same R appears in A^T A = [9 9; 9 18] = [3 0; 3 3][3 3; 0 3] = R^T R.

20 (a) True (b) True. Qx = x1 q1 + x2 q2, and (norm of Qx)^2 = x1^2 + x2^2 because q1 . q2 = 0.

21 The orthonormal vectors are q1 = (1, 1, 1, 1)/2 and q2 = (-5, -1, 1, 5)/sqrt(52). Then b = (-4, -3, 3, 0) projects to p = (-7, -3, -1, 3)/2. And b - p = (-1, -3, 7, -3)/2 is orthogonal to both q1 and q2.

22 A = (1, 1, 2), B = (1, -1, 0), C = (-1, -1, 1). These are not yet unit vectors.
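The Gram-Schmidt/QR connection in Problem 19 is easy to verify in code. A minimal sketch of classical Gram-Schmidt (not the book's own code; it assumes A has independent columns, NumPy assumed):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt: A = QR with orthonormal columns in Q."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along an earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract that projection
        R[j, j] = np.linalg.norm(v)       # breakdown here if columns are dependent
        Q[:, j] = v / R[j, j]
    return Q, R

# The worked example from Problem 19
A = np.array([[-1., 1.], [2., 1.], [2., 4.]])
Q, R = gram_schmidt_qr(A)
print(R)                               # [[3. 3.] [0. 3.]]
print(np.allclose(A.T @ A, R.T @ R))   # True: the Cholesky connection
```

The same R that Gram-Schmidt produces is exactly the Cholesky factor of A^T A, as the text points out.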


23 You can see why q1 = (1, 0, 0), q2 = (0, 0, 1), q3 = (0, 1, 0). A = [1 2 4; 0 0 5; 0 3 6] = [1 0 0; 0 0 1; 0 1 0][1 2 4; 0 3 6; 0 0 5] = QR.

24 (a) One basis for the subspace S of solutions to x1 + x2 + x3 - x4 = 0 is v1 = (1, -1, 0, 0), v2 = (1, 0, -1, 0), v3 = (1, 0, 0, 1) (b) Since S contains the solutions to (1, 1, 1, -1)^T x = 0, a basis for S-perp is (1, 1, 1, -1) (c) Split (1, 1, 1, 1) = b1 + b2 by projection on S-perp and S: b2 = (1/2, 1/2, 1/2, -1/2) and b1 = (1/2, 1/2, 1/2, 3/2).

25 This question shows 2 by 2 formulas for QR; breakdown R22 = 0 when A is singular. [1 1; 2 1] = (1/sqrt(5))[1 2; 2 -1] times (1/sqrt(5))[5 3; 0 1]. Singular example: [1 1; 1 1] = (1/sqrt(2))[1 -1; 1 1] times (1/sqrt(2))[2 2; 0 0]. The Gram-Schmidt process breaks down when ad - bc = 0.

26 (q2^T c) q2 = (B^T c / B^T B) B because q2 = B / (norm of B), and the extra q1 component in c is orthogonal to q2.

27 When a and b are not orthogonal, the projections onto these lines do not add to the projection onto the plane of a and b. We must use the orthogonal A and B (or the orthonormal q1 and q2) to be allowed to add 1D projections.

28 There are mn multiplications in (11) and (1/2)m^2 n multiplications in each part of (12).

29 q1 = (1/3)(2, 2, -1), q2 = (1/3)(2, -1, 2), q3 = (1/3)(-1, 2, 2).

30 The columns of the wavelet matrix W are orthonormal. Then W^{-1} = W^T. See Section 7.2 for more about wavelets: a useful orthonormal basis with many zeros.

31 (a) c = 1/2 normalizes all the orthogonal columns to have unit length (b) The projection (a^T b / a^T a) a of b = (1, 1, 1, 1) onto the first column is p1 = (1/2)(-1, 1, 1, 1). To project onto the plane, add p2 = (1/2)(1, -1, 1, 1) to get (0, 0, 1, 1).

32 Q1 = [1 0; 0 -1] reflects across the x axis; Q2 = [1 0 0; 0 0 -1; 0 -1 0] reflects across the plane y + z = 0.

33 Orthogonal and lower triangular means plus or minus 1 on the main diagonal and zeros elsewhere.

34 (a) Qu = (I - 2uu^T)u = u - 2uu^T u. This is -u, provided that u^T u equals 1 (b) Qv = (I - 2uu^T)v = v - 2uu^T v = v, provided that u^T v = 0.

35 Starting from A = (1, -1, 0, 0), the orthogonal (not orthonormal) vectors B = (1, 1, -2, 0) and C = (1, 1, 1, -3) and D = (1, 1, 1, 1) are in the directions of q2, q3, q4. The 4 by 4 and 5 by 5 matrices with integer orthogonal columns (not orthogonal rows, since not an orthonormal Q!) are [A B C D] = [1 1 1 1; -1 1 1 1; 0 -2 1 1; 0 0 -3 1] and [1 1 1 1 1; -1 1 1 1 1; 0 -2 1 1 1; 0 0 -3 1 1; 0 0 0 -4 1].


36 [Q, R] = qr(A) produces from A (m by n of rank n) a "full-size" square Q = [Q1 Q2] and the factor [R; 0]. The columns of Q1 are the orthonormal basis from Gram-Schmidt of the column space of A. The m - n columns of Q2 are an orthonormal basis for the left nullspace of A. Together the columns of Q = [Q1 Q2] are an orthonormal basis for R^m.

37 This question describes the next vector q_{n+1} in Gram-Schmidt using the matrix Q with the columns q1, ..., qn (instead of using those q's separately). Start from a, subtract its projection p = QQ^T a onto the earlier q's, and divide by the length of e = a - QQ^T a to get q_{n+1} = e / (norm of e).

Problem Set 5.1, page 251

1 det(2A) = 2^4 det A = 8; det(-A) = (-1)^4 det A = 1/2; det(A^2) = 1/4; det(A^{-1}) = 2 = det(A^T)^{-1}.

2 det((1/2)A) = (1/2)^3 det A = -1/8 and det(-A) = (-1)^3 det A = 1; det(A^2) = 1; det(A^{-1}) = -1.

3 (a) False: det(I + I) is not 1 + 1 (b) True: the product rule extends to ABC (use it twice) (c) False: det(4A) is 4^n det A (d) False: A = [0 0; 0 1] and B = [0 1; 1 0] give AB - BA = [0 -1; 1 0], which is invertible.

4 Exchange rows 1 and 3 to show |J3| = -1. Exchange rows 1 and 4, then rows 2 and 3, to show |J4| = +1.

5 |J5| = 1, |J6| = -1, |J7| = -1. The determinants 1, 1, -1, -1 repeat in cycles of four, so |J101| = 1.

6 To prove Rule 6, multiply the zero row by t = 2. The determinant is multiplied by 2 (Rule 3) but the matrix is the same. So 2 det(A) = det(A) and det(A) = 0.

7 det(Q) = 1 for a rotation and det(Q) = -1 for a reflection: (1 - 2 sin^2 t)(1 - 2 cos^2 t) - 4 sin^2 t cos^2 t = -1.

8 Q^T Q = I gives |Q|^2 = 1, so |Q| = 1 or -1; Q^n stays orthogonal so its determinant can't blow up.

9 det A = 1 from two row exchanges. det B = 2 (subtract rows 1 and 2 from row 3, then columns 1 and 2 from column 3). det C = 0 (equal rows) even though C = A + B!

10 If the entries in every row add to zero, then (1, 1, ..., 1) is in the nullspace: singular A has det = 0. (The columns add to the zero column, so they are linearly dependent.) If every row adds to one, then the rows of A - I add to zero (not necessarily det A = 1).

11 CD = -DC gives det CD = (-1)^n det DC and not -det DC. If n is even we can have an invertible CD.

12 det(A^{-1}) divides twice by ad - bc (once for each row). This gives (ad - bc)/(ad - bc)^2 = 1/(ad - bc).

13 Pivots 1, 1, 1 give determinant = 1; pivots 1, 2, 3/2 give determinant = 3.

14 det(A) = 36 and the 4 by 4 second difference matrix has det = 5.

15 The first determinant is 0; the second is 1 - 2t^2 + t^4 = (1 - t^2)^2.


16 A singular rank one matrix has determinant = 0. The skew-symmetric K also has det K = 0 (see #17).

17 Any 3 by 3 skew-symmetric K has det(K^T) = det(-K) = (-1)^3 det(K). This is -det(K). But always det(K^T) = det(K). So we must have det(K) = 0 for 3 by 3.

18 |1 a a^2; 1 b b^2; 1 c c^2| = |1 a a^2; 0 b-a b^2-a^2; 0 c-a c^2-a^2| = |b-a b^2-a^2; c-a c^2-a^2| (to reach 2 by 2, eliminate a and a^2 in row 1 by column operations). Factor out b - a and c - a from the 2 by 2: (b - a)(c - a)|1 b+a; 1 c+a| = (b - a)(c - a)(c - b).

19 For triangular matrices, just multiply the diagonal entries: det(U) = 6, det(U^{-1}) = 1/6, and det(U^2) = 36.

20 2 by 2 matrix: det(U) = ad and det(U^2) = a^2 d^2. If ad is not 0, then det(U^{-1}) = 1/ad.

21 det [a - Lc, b - Ld; c - la, d - lb] reduces to (ad - bc)(1 - Ll). The determinant changes if you do two row operations at once.

22 Rules 5 and 3 give Rule 2. (Since Rules 4 and 3 give 5, they also give Rule 2.)

23 det(A) = 3, det(A^{-1}) = 1/3, det(A - tI) = t^2 - 4t + 3. The numbers t = 1 and t = 3 give det(A - tI) = 0. Note to instructor: If you discuss this exercise, you can explain that this is the reason determinants come before eigenvalues. Identify t = 1 and t = 3 as the eigenvalues of A.

24 det(A) = 10, A^2 = [18 7; 14 11], det(A^2) = 100, A^{-1} = (1/10)[3 -1; -2 4] with det 1/10. det(A - tI) = t^2 - 7t + 10 = 0 when t = 2 or t = 5; those are the eigenvalues.

25 Here A = LU with det(L) = 1 and det(U) = -6 = product of pivots, so also det(A) = -6. det(U^{-1} L^{-1}) = -1/6 = 1/det(A) and det(U^{-1} L^{-1} A) is det I = 1.

26 When the i, j entry is i times j, row 2 = 2 times row 1, so det A = 0. When the i, j entry is i + j, row 3 - row 2 = row 2 - row 1, so A is singular: det A = 0.

27 det A = abc, det B = -abcd, det C = a(b - a)(c - b), by doing elimination.

28 (a) True: det(AB) = det(A) det(B) = 0 (b) False: a row exchange gives -det = product of pivots (c) False: A = 2I and B = I have A - B = I but the determinants satisfy 2^n - 1 not equal to 1 (d) True: det(AB) = det(A) det(B) = det(BA).

29 A is rectangular, so det(A^T A) is not (det A^T)(det A): those last two determinants are not defined.

30 Derivatives of f = ln(ad - bc): [df/da df/dc; df/db df/dd] = (1/(ad - bc))[d -b; -c a] = A^{-1}.

31 The Hilbert determinants are 1, 8 x 10^{-2}, 4.6 x 10^{-4}, 1.6 x 10^{-7}, 3.7 x 10^{-12}, 5.4 x 10^{-18}, 4.8 x 10^{-25}, 2.7 x 10^{-33}, 9.7 x 10^{-43}, 2.2 x 10^{-53}. Pivots are ratios of determinants, so the 10th pivot is near 10^{-10}. The Hilbert matrix is numerically difficult (ill-conditioned).
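The small Hilbert determinants quoted in Problem 31 can be reproduced exactly with rational arithmetic, avoiding the ill-conditioning. A minimal sketch using Python's fractions module (plain elimination; pivots are exact fractions):

```python
from fractions import Fraction

def hilbert_det(n):
    """Exact determinant of the n by n Hilbert matrix H[i][j] = 1/(i+j+1)."""
    H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        det *= H[k][k]                     # multiply the pivots
        for i in range(k + 1, n):          # eliminate below the pivot
            factor = H[i][k] / H[k][k]
            for j in range(k, n):
                H[i][j] -= factor * H[k][j]
    return det

for n in (1, 2, 3, 4):
    print(n, hilbert_det(n), float(hilbert_det(n)))
# 1/12 is about 8e-2, 1/2160 about 4.6e-4, 1/6048000 about 1.6e-7
```

The floating-point values match the 8 x 10^{-2}, 4.6 x 10^{-4}, 1.6 x 10^{-7} listed in the solution.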


32 Typical determinants of rand(n) are 10^6, 10^25, 10^79, 10^218 for n = 50, 100, 200, 400. randn(n) with the normal distribution gives 10^31, 10^78, 10^186, Inf, which means at least 2^1024. MATLAB allows 1.999999999999999 times 2^1023, about 1.8 x 10^308, but one more 9 gives Inf!

33 I now know that maximizing the determinant for 1, -1 matrices is Hadamard's problem (1893): see Brenner in American Math. Monthly volume 79 (1972) 626-630. Neil Sloane's wonderful On-Line Encyclopedia of Integer Sequences (research.att.com/~njas) includes the solution for small n (and more references) when the problem is changed to 0, 1 matrices. That sequence A003432 starts from n = 0 with 1, 1, 1, 2, 3, 5, 9. Then the 1, -1 maximum for size n is 2^{n-1} times the 0, 1 maximum for size n - 1 (so (32)(5) = 160 for n = 6 in sequence A003433). To reduce the 1, -1 problem from 6 by 6 to the 0, 1 problem for 5 by 5, multiply the six rows by plus or minus 1 to put +1 in column 1. Then subtract row 1 from rows 2 to 6 to get a 5 by 5 submatrix S of entries -2 and 0, and divide S by -2. Here is an advanced MATLAB code for a 1, -1 matrix with largest det A = 48 for n = 5:

    n = 5; p = (n-1)^2; A0 = ones(n); maxdet = 0;
    for k = 0 : 2^p - 1
        Asub = rem(floor(k ./ 2.^(p-1:-1:0)), 2);   % bits of k fill the submatrix
        A = A0; A(2:n, 2:n) = 1 - 2*reshape(Asub, n-1, n-1);
        if abs(det(A)) > maxdet, maxdet = abs(det(A)); maxA = A; end
    end

Output: maxA is a 5 by 5 matrix of 1's and -1's (first row and column all +1) with maxdet = 48.

34 Reduce B by row operations to [row 3; row 2; row 1]. Then det B = -6 (odd permutation).

Problem Set 5.2, page 263

1 det A = 1 + 18 + 12 - 9 - 4 - 6 = 12, rows are independent; det B = 0, row 1 + row 2 = row 3; det C = -1, independent rows (det C has one term, an odd permutation).

2 det A = -2, independent; det B = 0, dependent; det C = -1, independent.

3 All cofactors of row 1 are zero. A has rank at most 2. Each of the 6 terms in det A is zero. Column 2 has no pivot.

4 a11 a23 a32 a44 gives -1 (because 2 and 3 are exchanged) and a14 a23 a32 a41 gives +1. det A = 1 - 1 = 0; det B = 2·4·4·2 - 1·4·4·1 = 64 - 16 = 48.

5 Four zeros in the same row guarantee det = 0. A = I has 12 zeros (the maximum with det not 0).

6 (a) If a11 = a22 = a33 = 0 then 4 terms are sure zeros (b) 15 terms must be zero.


7 5!/2 = 60 permutation matrices have det = +1. Move row 5 of I to the top; starting from (5, 1, 2, 3, 4), elimination will do four row exchanges.

8 Some term a_{1a} a_{2b} ... a_{nw} in the big formula is not zero! Move rows 1, 2, ..., n into rows a, b, ..., w. Then these nonzero a's will be on the main diagonal.

9 To get +1 for the even permutations, the matrix needs an even number of -1's. To get +1 for the odd P's, the matrix needs an odd number of -1's. So all six terms = +1 in the big formula and det = 6 are impossible: max(det) = 4.

10 The 4!/2 = 12 even permutations are (1, 2, 3, 4), (2, 1, 4, 3), (3, 4, 1, 2), (4, 3, 2, 1), and 8 P's with one number in place and an even permutation of the other three numbers. det(I + P_even) = 16 or 4 or 0 (16 comes from I + I).

11 det B = 1(0) + 2(42) + 3(-35) = -21. C = [0 42 -35; ...] and D = C^T. Puzzle: det D = 441 = (-21)^2. Why?

12 C = [3 2 1; 2 4 2; 1 2 3] and AC^T = [4 0 0; 0 4 0; 0 0 4]. Therefore A^{-1} = (1/4) C^T = C^T / det A.

13 (a) C1 = 0, C2 = -1, C3 = 0, C4 = 1 (b) Cn = -C_{n-2} by cofactors of row 1 and then cofactors of column 1. Therefore C10 = -C8 = C6 = -C4 = C2 = -1.

14 We must choose 1's from column 2 then column 1, column 4 then column 3, and so on. Therefore n must be even to have det An not 0. The number of row exchanges is n/2, so Cn = (-1)^{n/2}.

15 The 1, 1 cofactor of the n by n matrix is E_{n-1}. The 1, 2 cofactor has a single 1 in its first column, with cofactor E_{n-2}; the sign gives -E_{n-2}. So E_n = E_{n-1} - E_{n-2}. Then E1 to E6 is 1, 0, -1, -1, 0, 1 and this cycle of six will repeat: E100 = E4 = -1.

16 The 1, 1 cofactor of the n by n matrix is F_{n-1}. The 1, 2 cofactor has a 1 in column 1, with cofactor F_{n-2}. Multiply by (-1)^{1+2} and also (-1) from the 1, 2 entry to find F_n = F_{n-1} + F_{n-2} (so these determinants are Fibonacci numbers).

17 Expanding along the last row, |B4| = 2|B3| - |B2|. Here |B3| and |B2| are cofactors of row 4 of B4.

18 Rule 3 (linearity in row 1) gives |Bn| = |An| - |A_{n-1}| = (n + 1) - n = 1.

19 Since x, x^2, x^3 are all in the same row, they are never multiplied in det V4. The determinant is zero at x = a or b or c, so det V has factors (x - a)(x - b)(x - c). Multiply by the cofactor V3. The Vandermonde matrix V_ij = (x_i)^{j-1} is for fitting a polynomial p(x) = b at the points x_i. It has det V = product of all (x_k - x_m) for k > m.

20 G2 = -1, G3 = 2, G4 = -3, and Gn = (-1)^{n-1}(n - 1) = (product of the eigenvalues).

21 S1 = 3, S2 = 8, S3 = 21. The rule looks like every second number in Fibonacci's sequence ... 3, 5, 8, 13, 21, 34, 55, ... so the guess is S4 = 55. Following the solution to Problem 30 with 3's instead of 2's confirms S4 = 81 + 1 - 9 - 9 - 9 = 55. Problem 33 directly proves Sn = F_{2n+2}.

22 Changing 3 to 2 in the corner reduces the determinant F_{2n+2} by 1 times the cofactor of that corner entry. This cofactor is the determinant of S_{n-1} (one size smaller), which is F_{2n}. Therefore changing 3 to 2 changes the determinant to F_{2n+2} - F_{2n}, which is F_{2n+1}.


23 (a) If we choose an entry from B we must choose an entry from the zero block; the result is zero. This leaves entries from A times entries from D, leading to (det A)(det D). (b) and (c) Take A = [1 0; 0 0], B = [0 0; 1 0], C = [0 1; 0 0], D = [0 0; 0 1]. See #25.

24 (a) All L's have det = 1; det Uk = det Ak = 2, 6, -6 for k = 1, 2, 3 (b) Pivots are 2, 3, -1.

25 Problem 23 gives det [I 0; -CA^{-1} I] = 1 and det [A B; C D] = |A| times |D - CA^{-1}B|, which is |AD - ACA^{-1}B|. If AC = CA this is |AD - CAA^{-1}B| = det(AD - CB).

26 If A is a row and B is a column, then det M = det AB = dot product of A and B. If A is a column and B is a row, then AB has rank 1 and det M = det AB = 0 (unless m = n = 1).

27 This block matrix is invertible when AB is invertible, which certainly requires m <= n.

28 (a) det A = a11 C11 + ... + a1n C1n. The derivative with respect to a11 is the cofactor C11. (b) Row 1 - 2 row 2 + row 3 = 0, so this matrix is singular.

29 There are five nonzero products, all 1's with a plus or minus sign. Here are the (row, column) numbers and the signs: +(1,1)(2,2)(3,3)(4,4) + (1,2)(2,1)(3,4)(4,3) - (1,2)(2,1)(3,3)(4,4) - (1,1)(2,2)(3,4)(4,3) - (1,1)(2,3)(3,2)(4,4). Total -1.

30 The 5 products in solution 29 change to 16 + 1 - 4 - 4 - 4 = 5, since A has 2's and -1's: (2)(2)(2)(2) + (-1)(-1)(-1)(-1) - (-1)(-1)(2)(2) - (2)(2)(-1)(-1) - (2)(-1)(-1)(2).

31 det P = -1 because the cofactor of P14 = 1 in row one has sign (-1)^{1+4}. The big formula for det P has only one term (1·1·1·1), with a minus sign because three exchanges take (4, 1, 2, 3) into (1, 2, 3, 4). det(P^2) = (det P)(det P) = +1, so squaring cannot detect that sign.

32 The problem is to show that F_{2n+2} = 3 F_{2n} - F_{2n-2}. Keep using Fibonacci's rule: F_{2n+2} = F_{2n+1} + F_{2n} = F_{2n} + F_{2n-1} + F_{2n} = 2 F_{2n} + (F_{2n} - F_{2n-2}) = 3 F_{2n} - F_{2n-2}.

33 The difference from 20 to 19 multiplies its 3 by 3 cofactor = 1: then det drops by 1.

34 (a) The last three rows must be dependent (b) In each of the 120 terms: the choices from the last 3 rows must use 3 different columns; at least one of those choices will be zero.

35 Subtracting 1 from the n, n entry subtracts its cofactor Cnn from the determinant. That cofactor is Cnn = 1 (the smaller Pascal matrix). Subtracting 1 from 1 leaves 0.

Problem Set 5.3, page 279

1 (a) |A| = |2 5; 1 4| = 3, |B1| = |1 5; 2 4| = -6, |B2| = |2 1; 1 2| = 3, so x1 = -6/3 = -2 and x2 = 3/3 = 1 (b) |A| = 4, |B1| = 3, |B2| = -2, |B3| = 1. Therefore x1 = 3/4 and x2 = -1/2 and x3 = 1/4.

2 (a) y = |a 1; c 0| / |a b; c d| = -c/(ad - bc) (b) y = det B2 / det A = (fg - id)/D.

3 (a) x1 = 3/0 and x2 = -2/0: no solution (b) x1 = x2 = 0/0: undetermined.

4 (a) x1 = det [b a2 a3] / det A, if det A is not 0 (b) The determinant is linear in its first column, so det [b a2 a3] = x1 |a1 a2 a3| + x2 |a2 a2 a3| + x3 |a3 a2 a3|. The last two determinants are zero because of repeated columns, leaving x1 |a1 a2 a3|, which is x1 det A.

5 If the first column in A is also the right side b, then det A = det B1. Both B2 and B3 are singular, since a column is repeated. Therefore x1 = |B1|/|A| = 1 and x2 = x3 = 0.

6 (a) and (b) Both inverses come from the cofactor formula; an invertible symmetric matrix has a symmetric inverse.

7 If all cofactors = 0 then A^{-1} would be the zero matrix if it existed; it cannot exist. (And the cofactor formula gives det A = 0.) A = [1 1; 1 1] has no zero cofactors, but it is not invertible.

8 AC^T = [3 0 0; 0 3 0; 0 0 3]. This is (det A)I with det A = 3. The 1, 3 cofactor of A is 0; multiplying that entry by 4 or 100: no change.

9 If we know the cofactors and det A = 1, then C^T = A^{-1} and also det A^{-1} = 1. Now A is the inverse of C^T, so A can be found from the cofactor matrix for C.

10 Take the determinant of AC^T = (det A)I. The left side gives det AC^T = (det A)(det C), while the right side gives (det A)^n. Divide by det A to reach det C = (det A)^{n-1}.

11 The cofactors of A are integers. Division by det A = 1 or -1 gives integer entries in A^{-1}.

12 Both det A and det A^{-1} are integers, since the matrices contain only integers. But det A^{-1} = 1/det A, so det A must be 1 or -1.

13 This A has an integer cofactor matrix C, and A^{-1} = C^T / det A.

14 (a) Lower triangular L has cofactors C21 = C31 = C32 = 0 (b) C12 = C21, C31 = C13, C32 = C23 make S^{-1} symmetric (c) Orthogonal Q has cofactor matrix C = (det Q)(Q^{-1})^T = plus or minus Q, also orthogonal. Note det Q = 1 or -1.

15 For n = 5, C contains 25 cofactors and each 4 by 4 cofactor has 24 terms. Each term needs 3 multiplications: total 1800 multiplications vs. 125 for Gauss-Jordan.

16 (a) Area = |3 2; 1 4| = 10 (b) and (c) Area 10/2 = 5; these triangles are half of the parallelogram in (a).

17 Volume = |det [3 1 1; 1 3 1; 1 1 3]| = 20. Area of faces = length of the cross product = length of (3, 1, 1) x (1, 3, 1) = length of (-2, -2, 8) = 6 sqrt(2).

18 (a) Area = (1/2)|det [2 1 1; 3 4 1; 0 5 1]| = 5 (b) 5 + new triangle area (1/2)|det [2 1 1; 0 5 1; -1 0 1]| = 5 + 7 = 12.

19 |2 1; 2 3| = 4 = |2 2; 1 3| because the transpose has the same determinant. See #22.
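The cofactor formula A^{-1} = C^T / det A behind Problems 7-15 can be checked numerically. A minimal sketch (NumPy assumed; the 2 by 2 matrix from Problem 1(a) is reused, with det A = 3):

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C[i][j] = (-1)^(i+j) times the i,j minor."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2., 5.], [1., 4.]])
C = cofactor_matrix(A)
detA = np.linalg.det(A)

print(np.round(A @ C.T, 6))                        # (det A) I = 3I
print(np.allclose(np.linalg.inv(A), C.T / detA))   # True: A^{-1} = C^T / det A
```

The product A C^T coming out as (det A)I is exactly the identity used in Problems 8 and 10.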


20 The edges of the hypercube have length

57

21

22

23

24

p 1 C 1 C 1 C 1 D 2. The volume det H is 24 D 16. (H=2 has orthonormal columns. Then det.H=2/ D 1 leads again to det H D 16:) The maximum volume L1 L2 L3 L4 is in R4 . preached when the edges are orthogonal 4 With entries 1 and 1 all lengths are 4 D 2. The maximum p 3 determinant is 2 D 16, achieved in Problem 20. For a 3 by 3 matrix, det A D . 3/ can’t be achieved by ˙1. This question is still waiting for a solution! An 18:06 student showed me how to transform the parallelogram for A to the parallelogram for AT , without changing its area. (Edges slide along themselves, so no change in baselength or height or area.) 2 T3 2 T 3 a a a 0 0 det AT A D .kakkb kkc k/2 AT A D 4 bT 5 a b c D 4 0 bT b 0 5 has det A D ˙kakkb kkc k cT 0 0 cTc " # 1 0 0 The box has height 4 and volume D det 0 1 0 D 4. i j D k and .k w/ D 4. 2 3 4

25 The n-dimensional cube has 2ⁿ corners, n·2ⁿ⁻¹ edges, and 2n faces of dimension n − 1. The coefficients come from (2 + x)ⁿ in Worked Example 2.4 A. The cube from 2I has volume 2ⁿ.
26 The pyramid has volume 1/6. The 4-dimensional pyramid has volume 1/24 (and 1/n! in Rⁿ).
27 x = r cos θ, y = r sin θ give J = r. The columns are orthogonal and their lengths are 1 and r.
28 J = det [sin φ cos θ, ρ cos φ cos θ, −ρ sin φ sin θ; sin φ sin θ, ρ cos φ sin θ, ρ sin φ cos θ; cos φ, −ρ sin φ, 0] = ρ² sin φ. This Jacobian is needed for triple integrals inside spheres.
29 From x, y to r, θ: det [∂r/∂x, ∂r/∂y; ∂θ/∂x, ∂θ/∂y] = det [x/r, y/r; −y/r², x/r²] = det [cos θ, sin θ; (−sin θ)/r, (cos θ)/r] = 1/r = 1/(Jacobian in 27).
30 The triangle with corners (0, 0), (6, 0), (1, 4) has area 12. Rotated by θ = 60° the area is unchanged. The determinant of the rotation matrix is J = det [cos θ, −sin θ; sin θ, cos θ] = det [1/2, −√3/2; √3/2, 1/2] = 1.
31 Base area 10, height 2, volume 20.
32 The volume of the box is the 3 by 3 determinant = 20.
33 det [u1 u2 u3; v1 v2 v3; w1 w2 w3] = u1 det [v2 v3; w2 w3] − u2 det [v1 v3; w1 w3] + u3 det [v1 v2; w1 w2]. This is u · (v × w).
34 (w × u) · v = (v × w) · u = (u × v) · w: an even permutation of (u, v, w) keeps the same determinant. Odd permutations reverse the sign.
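Problem 33's identity, the scalar triple product as a 3 by 3 determinant, is easy to spot-check numerically (the vectors below are arbitrary test data, not from the text):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 4.0])
w = np.array([2.0, 0.0, 1.0])

det = np.linalg.det(np.vstack([u, v, w]))   # determinant with rows u, v, w
triple = u @ np.cross(v, w)                  # u . (v x w)
print(np.isclose(det, triple))               # True: same number (11 here)
```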


35 S = (2, 1, −1); area = ‖PQ × PS‖ = ‖(−2, −2, −1)‖ = 3. The other four corners can be (0, 0, 0), (0, 0, 2), (1, 2, 2), (1, 1, 0). The volume of the tilted box is |det| = 1.
36 If (1, 1, 0), (1, 2, 1), (x, y, z) are in a plane, the volume is det [x y z; 1 1 0; 1 2 1] = x − y + z = 0. The "box" with those edges is flattened to zero height.
37 det [x y z; 2 3 1; 1 2 3] = 7x − 5y + z will be zero when (x, y, z) is a combination of (2, 3, 1) and (1, 2, 3). The plane containing those two vectors has equation 7x − 5y + z = 0.

38 Doubling each row multiplies the volume by 2ⁿ. Then 2 det A = det(2A) only if n = 1.
39 ACᵀ = (det A)I gives (det A)(det C) = (det A)ⁿ. Then det A = (det C)^(1/3) with n = 4. With det A⁻¹ = 1/det A, construct A⁻¹ using the cofactors. Invert to find A.
40 The cofactor formula adds 1 by 1 determinants (which are just entries) times their cofactors of size n − 1. Jacobi discovered that this formula can be generalized. For n = 5, Jacobi multiplied each 2 by 2 determinant from rows 1-2 (with columns a < b) times a 3 by 3 determinant from rows 3-5 (using the remaining columns c < d < e). The key question is the + or − sign (as for cofactors). The product is given a + sign when a, b, c, d, e is an even permutation of 1, 2, 3, 4, 5. This gives the correct determinant +1 for that permutation matrix. More than that, all other P that permute a, b and separately c, d, e will come out with the correct sign when the 2 by 2 determinant for columns a, b multiplies the 3 by 3 determinant for columns c, d, e.

41 The Cauchy-Binet formula gives the determinant of a square matrix AB (and AAᵀ in particular) when the factors A, B are rectangular. For (2 by 3) times (3 by 2) there are 3 products of 2 by 2 determinants from A and B.
A = [1 2 3; 1 4 7], B = Aᵀ = [1 1; 2 4; 3 7], AB = [14 30; 30 66].
Check by Cauchy-Binet: (4 − 2)(4 − 2) + (7 − 3)(7 − 3) + (14 − 12)(14 − 12) = 24; directly, (14)(66) − (30)(30) = 24.
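The Cauchy-Binet sum in Problem 41 can be checked in a few lines of NumPy, summing over the three choices of two columns:

```python
import numpy as np
from itertools import combinations

# Problem 41's example: A is 2 by 3 and B = A^T
A = np.array([[1.0, 2.0, 3.0], [1.0, 4.0, 7.0]])
B = A.T

# Cauchy-Binet: det(AB) = sum over column pairs S of det(A[:, S]) * det(B[S, :])
total = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
            for S in combinations(range(3), 2))
print(round(total))                      # 24
print(round(np.linalg.det(A @ B)))       # 24, the direct determinant agrees
```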

Problem Set 6.1, page 293

1 The eigenvalues are 1 and 0.5 for A, 1 and 0.25 for A², 1 and 0 for A^∞. Exchanging the rows of A changes the eigenvalues to 1 and −0.5 (the trace is now 0.2 + 0.3). Singular matrices stay singular during elimination, so λ = 0 does not change.
2 A has λ1 = −1 and λ2 = 5 with eigenvectors x1 = (−2, 1) and x2 = (1, 1). The matrix A + I has the same eigenvectors, with eigenvalues increased by 1 to 0 and 6. That zero eigenvalue correctly indicates that A + I is singular.
3 A has λ1 = 2 and λ2 = −1 (check trace and determinant) with x1 = (1, 1) and x2 = (2, −1). A⁻¹ has the same eigenvectors, with eigenvalues 1/λ = 1/2 and −1.
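Problem 1's row exchange can be checked directly; the textbook's matrix is assumed here to be A = [.8 .3; .2 .7]:

```python
import numpy as np

A = np.array([[0.8, 0.3], [0.2, 0.7]])
print(np.sort(np.linalg.eigvals(A)))            # eigenvalues 0.5 and 1

A_swapped = A[[1, 0], :]                         # exchange the two rows
print(np.sort(np.linalg.eigvals(A_swapped)))     # eigenvalues -0.5 and 1 (trace 0.5)
```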

4 A has λ1 = 3 and λ2 = −2 (check trace = 1 and determinant = −6) with x1 = (3, 2) and x2 = (1, −1). A² has the same eigenvectors as A, with eigenvalues λ1² = 9 and λ2² = 4.
5 A and B have eigenvalues 1 and 3. A + B has λ1 = 3, λ2 = 5. Eigenvalues of A + B are not equal to eigenvalues of A plus eigenvalues of B.

6 A and B have λ1 = 1 and λ2 = 1. AB and BA have λ = 2 ± √3. Eigenvalues of AB are not equal to eigenvalues of A times eigenvalues of B. Eigenvalues of AB and BA are equal (this is proved in Section 6.6, Problems 18-19).
7 The eigenvalues of U (on its diagonal) are the pivots of A. The eigenvalues of L (on its diagonal) are all 1's. The eigenvalues of A are not the same as the pivots.
8 (a) Multiply Ax to see λx, which reveals λ (b) Solve (A − λI)x = 0 to find x.
9 (a) Multiply by A: A(Ax) = A(λx) = λAx gives A²x = λ²x (b) Multiply by A⁻¹: x = A⁻¹Ax = A⁻¹λx = λA⁻¹x gives A⁻¹x = (1/λ)x (c) Add Ix = x: (A + I)x = (λ + 1)x.
10 A has λ1 = 1 and λ2 = 0.4 with x1 = (1, 2) and x2 = (1, −1). A^∞ has λ1 = 1 and λ2 = 0 (same eigenvectors). A¹⁰⁰ has λ1 = 1 and λ2 = (0.4)¹⁰⁰, which is near zero. So A¹⁰⁰ is very near A^∞: same eigenvectors and close eigenvalues.
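Problem 10's claim that A¹⁰⁰ is very near A^∞ can be tested directly. The matrix with eigenvectors (1, 2), (1, −1) and eigenvalues 1, 0.4 is reconstructed below as A = [.6 .2; .4 .8], so treat those entries as an assumption:

```python
import numpy as np

A = np.array([[0.6, 0.2], [0.4, 0.8]])
# A^infinity projects onto the lambda = 1 eigenvector (1, 2)
A_inf = np.array([[1/3, 1/3], [2/3, 2/3]])

A100 = np.linalg.matrix_power(A, 100)
print(np.abs(A100 - A_inf).max())   # about (0.4)^100: essentially zero
```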

11 Columns of A − λ1I are in the nullspace of A − λ2I because M = (A − λ2I)(A − λ1I) = zero matrix [this is the Cayley-Hamilton Theorem in Problem 6.2.32]. Notice that M has zero eigenvalues (λ1 − λ2)(λ1 − λ1) = 0 and (λ2 − λ2)(λ2 − λ1) = 0.
12 The projection matrix P has λ = 1, 0, 1 with eigenvectors (1, 2, 0), (2, −1, 0), (0, 0, 1). Add the first and last vectors: (1, 2, 1) also has λ = 1. Note P² = P leads to λ² = λ, so λ = 0 or 1.
13 (a) Pu = (uuᵀ)u = u(uᵀu) = u so λ = 1 (b) Pv = (uuᵀ)v = u(uᵀv) = 0 (c) x1 = (−1, 1, 0, 0), x2 = (−3, 0, 1, 0), x3 = (−5, 0, 0, 1) all have Px = 0x = 0.
14 Two eigenvectors of this rotation matrix are x1 = (1, i) and x2 = (1, −i) (more generally cx1 and dx2 with cd ≠ 0).
15 The other two eigenvalues are λ = ½(−1 ± i√3); the three eigenvalues are 1, 1, −1.
16 Set λ = 0 in det(A − λI) = (λ1 − λ) ⋯ (λn − λ) to find det A = (λ1)(λ2) ⋯ (λn).
17 λ1 = ½(a + d + √((a − d)² + 4bc)) and λ2 = ½(a + d − √((a − d)² + 4bc)) add to a + d. If A has λ1 = 3 and λ2 = 4 then det(A − λI) = (λ − 3)(λ − 4) = λ² − 7λ + 12.
18 These 3 matrices have λ = 4 and 5, trace 9, det 20: [4 0; 0 5], [3 2; −1 6], [2 2; −3 7].
19 (a) rank = 2 (b) det(BᵀB) = 0 (d) the eigenvalues of (B² + I)⁻¹ are 1, ½, 1/5.
20 A = [0 1; −28 11] has trace 11 and determinant 28, so λ = 4 and 7. Moving to a 3 by 3 companion matrix, C = [0 1 0; 0 0 1; 6 −11 6] has det(C − λI) = −λ³ + 6λ² − 11λ + 6 = (1 − λ)(2 − λ)(3 − λ). Notice the trace 6 = 1 + 2 + 3, the determinant 6 = (1)(2)(3), and also 11 = (1)(2) + (1)(3) + (2)(3).
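Problem 20's companion matrix is easy to verify with NumPy: its eigenvalues come out 1, 2, 3 with the stated trace and determinant:

```python
import numpy as np

# companion matrix for p(lambda) = lambda^3 - 6 lambda^2 + 11 lambda - 6
C = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [6.0, -11.0, 6.0]])
print(np.sort(np.linalg.eigvals(C)))         # eigenvalues 1, 2, 3
print(np.trace(C), round(np.linalg.det(C)))  # trace 6 = 1+2+3, det 6 = 1*2*3
```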


21 (A − λI) has the same determinant as (A − λI)ᵀ because every square matrix has det M = det Mᵀ. But [1 0; 1 0] and [1 1; 0 0] have different eigenvectors.
22 λ = 1 (for Markov), 0 (for singular), −½ (so the sum of eigenvalues = trace = ½).
23 A² is always the zero matrix if λ = 0 and 0, by the Cayley-Hamilton Theorem in Problem 6.2.32.
24 λ = 0, 0, 6 (notice rank 1 and trace 6) with x1 = (0, 2, 1), x2 = (1, 2, 0), x3 = (1, 2, 1).

25 With the same λ's and x's, Ax = c1λ1x1 + ⋯ + cnλnxn equals Bx = c1λ1x1 + ⋯ + cnλnxn for all vectors x. So A = B.
26 The block matrix has λ = 1, 2 from B and 5, 7 from D. All entries of C are multiplied by zeros in det(A − λI), so C has no effect on the eigenvalues.

27 A has rank 1 with eigenvalues 0, 0, 0, 4 (the 4 comes from the trace of A). C has rank 2 (ensuring two zero eigenvalues) and (1, 1, 1, 1) is an eigenvector with λ = 2. With trace 4, the other eigenvalue is also λ = 2, and its eigenvector is (1, −1, 1, −1).
28 B has λ = −1, −1, −1, 3 and C has λ = 1, 1, 1, −3. Both have det = −3.
29 Triangular matrix: λ(A) = 1, 4, 6. λ(B) = 2, √3, −√3. Rank-1 matrix: λ(C) = 0, 0, 6.
30 [a b; c d][1; 1] = [a + b; c + d] = (a + b)[1; 1] when a + b = c + d; λ2 = d − b to produce the correct trace (a + b) + (d − b) = a + d.
31 Eigenvector (1, 3, 4) for A with λ = 11 and eigenvector (3, 1, 4) for PAPᵀ. Eigenvectors with λ ≠ 0 must be in the column space, since Ax is always in the column space and x = Ax/λ.
32 (a) u is a basis for the nullspace; v and w give a basis for the column space (b) x = (0, 1/3, 1/5) is a particular solution. Add any cu from the nullspace (c) If Ax = u had a solution, u would be in the column space: wrong dimension 3.
33 If vᵀu = 0 then A² = u(vᵀu)vᵀ is the zero matrix, so λ = 0, 0 and trace(A) = 0. This zero trace also comes from adding the diagonal entries of A = uvᵀ: A = [u1v1 u1v2; u2v1 u2v2] has trace u1v1 + u2v2 = vᵀu = 0.

34 det(P − λI) = 0 gives the equation λ⁴ = 1. This reflects the fact that P⁴ = I. The solutions of λ⁴ = 1 are λ = 1, i, −1, −i. The real eigenvector x1 = (1, 1, 1, 1) is not changed by the permutation P. Three more eigenvectors are (i, i², i³, i⁴) and (1, −1, 1, −1) and (−i, (−i)², (−i)³, (−i)⁴).
35 3 by 3 permutation matrices: since PᵀP = I gives (det P)² = 1, the determinant is 1 or −1. The pivots are always 1 (but there may be row exchanges). The trace of P can be 3 (for P = I) or 1 (for a row exchange) or 0 (for a double exchange). The possible eigenvalues are 1 and −1 and e^(2πi/3) and e^(−2πi/3).


36 λ1 = e^(2πi/3) and λ2 = e^(−2πi/3) give det = λ1λ2 = 1 and trace = λ1 + λ2 = −1. A = [cos θ, −sin θ; sin θ, cos θ] with θ = 2π/3 has this trace and det. So does every M⁻¹AM!
37 (a) Since the columns of A add to 1, one eigenvalue is λ = 1 and the other is c − 0.6 (to give the correct trace c + 0.4).
(b) If c = 1.6 then both eigenvalues are 1, and all solutions to (A − I)x = 0 are multiples of x = (1, −1).
(c) If c = 0.8, the eigenvectors for λ = 1 are multiples of (3, 1). Since all powers Aⁿ also have column sums = 1, Aⁿ will approach the rank-1 matrix A^∞ = ¼ [3 3; 1 1] with eigenvalues 1, 0 and the correct eigenvectors (3, 1) and (1, −1).
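Problem 37 with c = 0.8 gives a column-stochastic matrix; the entries A = [.8 .6; .2 .4] are reconstructed from the stated trace c + 0.4, so treat them as an assumption:

```python
import numpy as np

A = np.array([[0.8, 0.6], [0.2, 0.4]])   # columns add to 1, trace 1.2
print(np.sort(np.linalg.eigvals(A)))      # eigenvalues c - 0.6 = 0.2 and 1

# powers approach the rank-1 steady-state matrix with columns (3, 1)/4
A_inf = np.array([[0.75, 0.75], [0.25, 0.25]])
print(np.abs(np.linalg.matrix_power(A, 50) - A_inf).max())   # ~ (0.2)^50
```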

Problem Set 6.2, page 307

1 1 1 1 1 ; D 1 3 3 1 Put the eigenvectors in S 1 1 2 2 A D S?S 1 D and eigenvalues in ?. 0 1 0

3 If A D S?S

1

1 2 1 1 1 0 1 D 0 3 0 1 0 3 0

1 3 0 5

4 (a) False: don’t know ’s 5 With S D I; A D S?S

then the eigenvalue matrix for A C 2I is ? C 2I and the eigenvector matrix is still S . A C 2I D S.? C 2I /S 1 D S?S 1 C S.2I /S 1 D A C 2I . (b) True (c) True (d) False: need eigenvectors of S

1

0 0 0 4 1 0

" 3

1 2 3 D . 1 0 5

4 1 4

1 4 1 4

#

.

triangular, so S?S

1

D ? is a diagonal matrix. If S is triangular, then S is also triangular. 1 1 1 2 1 1 1 =2 D 1 1 C 2 1 2

1

is

1

6 The columns of S are nonzero multiples of .2;1/ and .0;1/: either order. Same for A 7 A D S?S

1

b for any a and b . a 1 1 1 1 2 1 0 1 2 1 8 A D S?S D D . S?k S 1 0 1 1 0 2 1 1 1 2 k 1 2nd component is Fk 1 2 1 0 1 2 1 D . 1 1 1 0 .k k 2 / 0 k 1 2 1 1 2 /=.1 2 :5 :5 with x 1 D .1; 1/, x 2 D .1; 2/ 9 (a) A D has 1 D 1, 2 D 1 2 1 0 # " # n " 2 1 2 1 1 1 1 0 3 3 3 3 n 1 (b) A D !A D 2 1 1 1 1 2 0 . :5/n a b

3 3 3 3

D

.

1 1

1 2 =2 D 1 C 2

1

D

10 The rule F(k+2) = F(k+1) + F(k) produces the pattern: even, odd, odd, even, odd, odd, …
11 (a) True (no zero eigenvalues) (b) False (a repeated λ = 2 may have only one line of eigenvectors) (c) False (a repeated λ may have a full set of eigenvectors)
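Problem 10's parity pattern follows the Fibonacci recurrence, which powers of [1 1; 1 0] generate (Problem 8's matrix):

```python
import numpy as np

F = np.array([[1, 1], [1, 0]])

def fib(k):
    # the (0, 1) entry of F^k is the Fibonacci number F_k
    return int(np.linalg.matrix_power(F, k)[0, 1])

print([fib(k) for k in range(1, 10)])    # 1, 1, 2, 3, 5, 8, 13, 21, 34
print([fib(k) % 2 for k in range(9)])    # 0, 1, 1, 0, 1, 1, ...: even, odd, odd
```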


12 (a) False: don’t know 13 A D

Solutions to Exercises

(b) True: an eigenvector is missing (c) True. 8 3 9 4 10 5 only eigenvectors (or other), A D , AD ; 3 2 4 1 5 0 are x D .c; c/:

14 The rank of A − 3I is r = 1. Changing any entry except a12 = 1 makes A diagonalizable (A will have two different eigenvalues).

17

18

19

1 k approaches zero if and only if every jj < 1; Ak 1 ! A1 ; A2 ! 0 . 1 1 1 0 1 1 1 0 2 2 : steady ?D and S D I ?k ! and S?k S 1 ! 1 1 0 :2 1 1 0 0 2 2 state. :9 0 3 3 3 3 10 3 10 3 10 10 ?D , S D ; A2 D .:9/ , A2 D .:3/ , 0 :3 1 1 1 1 1 1 6 3 3 6 3 3 A10 D .:9/10 C .:3/10 because is the sum of C . 2 0 1 1 0 1 1 1 1 1 1 2 1 1 1 0 1 1 1 1 0 k D and A D 1 2 1 0 3 1 1 1 1 0 3k 2 1 2 k 1 1C3 1 3k 1 1 . . Multiply those last three matrices to get Ak D 1 1 2 1 3k 1 C 3k k k 1 1 5 0 1 1 5 5k 4k k B D D . 0 1 0 4 0 1 0 4k 1

20 det A = (det S)(det Λ)(det S⁻¹) = det Λ = λ1 ⋯ λn. This proof works when A is diagonalizable.

21 trace(ST) = (aq + bs) + (cr + dt) is equal to (qa + rc) + (sb + td) = trace(TS). Diagonalizable case: the trace of SΛS⁻¹ = trace of (ΛS⁻¹)S = trace of Λ = the sum of the λ's.
22 AB − BA = I is impossible since trace(AB) − trace(BA) = zero ≠ trace(I). AB − BA = C is possible when trace(C) = 0. For example E = [1 0; 1 1] has EEᵀ − EᵀE = C with trace zero.
23 If A = SΛS⁻¹ then B = [A 0; 0 2A] = [S 0; 0 S][Λ 0; 0 2Λ][S 0; 0 S]⁻¹. So B has the additional eigenvalues 2λ1, …, 2λn.
24 The A's form a subspace since cA and A1 + A2 all have the same S. When S = I the A's with those eigenvectors give the subspace of diagonal matrices. The dimension is 4.

25 If A has columns x1, …, xn then, column by column, A² = A means every Axi = xi. All vectors in the column space (combinations of those columns xi) are eigenvectors with λ = 1. Always the nullspace has λ = 0 (A might have dependent columns, so there could be fewer than n eigenvectors with λ = 1). Dimensions of those spaces add to n by the Fundamental Theorem, so A is diagonalizable (n independent eigenvectors altogether).
26 Two problems: the nullspace and column space can overlap, so x could be in both. There may not be r independent eigenvectors in the column space.


27 R = S√ΛS⁻¹ = [2 1; 1 2] has R² = A. √B needs λ = √9 and √(−1); its trace is not real. Note that −I = [−1 0; 0 −1] has the real square root [0 1; −1 0], with λ = i and −i and trace 0.

28 Aᵀ = A gives xᵀABx = (Ax)ᵀ(Bx) ≤ ‖Ax‖ ‖Bx‖ by the Schwarz inequality. Bᵀ = −B gives −xᵀBAx = (Bx)ᵀ(Ax) ≤ ‖Ax‖ ‖Bx‖. Add to get Heisenberg's Uncertainty Principle when AB − BA = I. Position-momentum, also time-energy.
29 The factorizations of A and B into SΛS⁻¹ are the same. So A = B. (This is the same as Problem 6.1.25, expressed in matrix form.)
30 A = SΛ1S⁻¹ and B = SΛ2S⁻¹. Diagonal matrices always give Λ1Λ2 = Λ2Λ1. Then AB = BA from SΛ1S⁻¹SΛ2S⁻¹ = SΛ1Λ2S⁻¹ = SΛ2Λ1S⁻¹ = SΛ2S⁻¹SΛ1S⁻¹ = BA.
31 (a) A = [a b; 0 d] has λ = a and λ = d: (A − aI)(A − dI) = [0 b; 0 d−a][a−d b; 0 0] = [0 0; 0 0]. (b) A = [1 1; 1 0] has A² = [2 1; 1 1], and A² − A − I = 0 is true, matching λ² − λ − 1 = 0 as the Cayley-Hamilton Theorem predicts.
32 When A = SΛS⁻¹ is diagonalizable, the matrix A − λjI = S(Λ − λjI)S⁻¹ will have 0 in the j, j diagonal entry of Λ − λjI. In the product p(A) = (A − λ1I) ⋯ (A − λnI), each inside S⁻¹ cancels S. This leaves S times (the product of diagonal matrices Λ − λjI) times S⁻¹. That product is the zero matrix because the factors produce a zero in each diagonal position. Then p(A) = zero matrix, which is the Cayley-Hamilton Theorem. (If A is not diagonalizable, one proof is to take a sequence of diagonalizable matrices approaching A.)
Comment I have also seen this reasoning but I am not convinced: Apply the formula ACᵀ = (det A)I from Section 5.3 to A − λI with variable λ. Its cofactor matrix C will be a polynomial in λ, since cofactors are determinants: (A − λI) cof(A − λI)ᵀ = det(A − λI)I = p(λ)I.

“For fixed A, this is an identity between two matrix polynomials.” Set λ = A to find the zero matrix on the left, so p(A) = zero matrix on the right—which is the Cayley-Hamilton Theorem. I am not certain about the key step of substituting a matrix for λ. If other matrices B are substituted, does the identity remain true? If AB ≠ BA, even the order of multiplication seems unclear…
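Problem 32's statement p(A) = 0 can be tested numerically on a small diagonalizable matrix (chosen here purely for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
# characteristic polynomial p(lambda) = (lambda - 1)(lambda - 3) = lambda^2 - 4 lambda + 3
P = A @ A - 4 * A + 3 * np.eye(2)
print(np.abs(P).max())                    # 0: p(A) is the zero matrix
```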

33 D 2; 1; 0 are in ? and the eigenvectors are in S (below). Ak D S ?k S

1

is 1 1 1 #

"

2 1 1

1 1 1

# " 0 1 2 k 1 ? 2 6 0 1

1 2 3

# " 1 2k 4 2 D 2 6 2 3

# " 2 2 1 . 1/k 1 1 C 1 3 1 1 1

1 1 1

Check k = 4. The (2, 2) entry of A⁴ is 2⁴/6 + (−1)⁴/3 = 18/6 = 3. The 4-step paths that begin and end at node 2 are 2 to 1 to 1 to 1 to 2, 2 to 1 to 2 to 1 to 2, and 2 to 1 to 3 to 1 to 2. It is much harder to find the eleven 4-step paths that start and end at node 1.


diagonal b D c D 0 . The nullspace forthe following equation is 2-dimensional: 1 0 a b a b 1 0 0 b 0 0 AB BA D D D . The 0 2 c d c d 0 2 c 0 0 0 coef?cient matrix has rank 4 2 D 2. p 35 B has D i and i , so B 4 has 4 D 1 and 1 and B 4 D I . C has D .1 ˙ 3i /=2. This is exp.˙ i=3/ so 3 D 1 and 1. Then C 3 D I and C 1024 D C . cos sin 36 The eigenvalues of A D are D e i and e i (trace 2 cos and sin cos det D 1). Their eigenvectors are .1; i / and .1; i /: 1 1 e i n i 1 n n 1 A D S? S D =2i i i i 1 e i n i n .e C e i n /=2 cos n sin n D D : sin n cos n .e i n e i n /=2i Geometrically, n rotations by give one rotation by n .

37 Columns of S times rows of ?S

1

34 If AB D BA, then B has the same eigenvectors .1; 0/ and .0; 1/ as A. So B is also

38 Note that ones.n/ ones.n/ D n ones.n/. This leads to C D 1=.n C 1/.

will give r rank-1 matrices .r D rank of A/.

AA

1

D eye.n/ C .1 C C C C n/ ones.n/ D eye.n/:

D .eye.n/ C ones.n// .eye.n/ C C ones.n//

Problem Set 6.3, page 325

1 u1 = e^(4t) (1, 0) and u2 = e^(t) (1, 1). If u(0) = (5, 2), then u(t) = 3e^(4t) (1, 0) + 2e^(t) (1, 1).
2 z(t) = 2e^(t); then dy/dt = 4y − 6e^(t) with y(0) = 5 gives y(t) = 3e^(4t) + 2e^(t), as in Problem 1.

3 (a) If every column of A adds to zero, this means that the rows add to the zero row. So the rows are dependent, A is singular, and λ = 0 is an eigenvalue.
(b) The eigenvalues of A = [−2 3; 2 −3] are λ1 = 0 with eigenvector x1 = (3, 2) and λ2 = −5 (to give the trace −5) with x2 = (1, −1). Then the usual 3 steps:
1. Write u(0) = (4, 1) as a combination x1 + x2 = (3, 2) + (1, −1)
2. Follow those eigenvectors by e^(0t) x1 and e^(−5t) x2
3. The solution u(t) = x1 + e^(−5t) x2 has steady state x1 = (3, 2).
4 d(v + w)/dt = (w − v) + (v − w) = 0, so the total v + w is constant. A = [−1 1; 1 −1] has λ1 = 0 with x1 = (1, 1) and λ2 = −2 with x2 = (1, −1); v(1) = 20 + 10e⁻², w(1) = 20 − 10e⁻², and v(∞) = w(∞) = 20.


5 d/dt [v; w] = [1 1; 1 1][v; w] has λ = 0 and +2: v(t) = 20 + 10e^(2t) → ∞ as t → ∞.
6 A = [a 1; 1 a] has real eigenvalues a + 1 and a − 1. These are both negative if a < −1, and the solutions of u′ = Au approach zero. B = [b −1; 1 b] has complex eigenvalues b + i and b − i. These have negative real parts if b < 0, and all solutions of v′ = Bv approach zero.

7 A projection matrix has eigenvalues λ = 1 and λ = 0. Eigenvectors with Px = x fill the subspace that P projects onto: here x = (1, 1). Eigenvectors with Px = 0 fill the perpendicular subspace: here x = (1, −1). For the solution to u′ = −Pu:
u(0) = (3, 1) = (2, 2) + (1, −1); u(t) = e^(−t) (2, 2) + e^(0t) (1, −1) approaches (1, −1).

8

9 10

11

12

13 (a) y.t / D cos 3t and sin 3t solve y 00D

6 2 2 1 has 1 D 5, x 1 D , 2 D 2, x 2 D ; rabbits r.t / D 20e 5t C 10e 2t , 2 1 1 2 w.t / D 10e 5t C 20e 2t . The ratio of rabbits to wolves approaches 20=10; e 5t dominates. 4 1 1 1 1 4 cos t (a) D2 C2 . (b) Then u.t / D 2e it C2e it D . 0 i i i i 4 sin t 0 d y y 0 1 y 0 1 D .AD has det.A I / D 2 5 4 D 0. 0 D y 00 4 5 y0 4 5 dt y Directly substituting y D e t into y 00 D 5y 0 C 4y also gives 2 D 5 C 4 and the same p two values of . Those values are 1 .5 ˙ 41/ by the quadratic formula. 2 0 1 1 t y.t / 1 t y.0/ At e D I Ct C zeros D . Then D 0 0 0 1 y 0 .t / 0 1 y 0 .0/ 0 y.0/ C y .0/t . This y.t / D y.0/ C y 0 .0/t solves the equation. y 0 .0/ 0 1 AD has trace 6, det 9, D 3 and 3 with one independent eigenvector .1; 3/. 9 6 9y . It is 3 cos 3t that starts with y.0/ D 3 0 1 and y 0 .0/ D 0. (b) A D has det D 9: D 3i and 3i with x D .1; 3i / 9 0 1 1 3 cos 3t 3 3it 3it e C e D . and .1; 3i /. Then u.t / D 3 2 2 3i 3i 9 sin 3t up D 4 1 0 4 and u.t / D c1 e t C c2 e t C . 2 t 1 2

14 When A is skew-symmetric, ‖u(t)‖ = ‖e^(At) u(0)‖ = ‖u(0)‖. So e^(At) is orthogonal.
15 u_p = 4 and u(t) = ce^(t) + 4.
16 Substituting u = e^(ct) v gives ce^(ct) v = Ae^(ct) v − e^(ct) b, or (A − cI)v = b, or v = (A − cI)⁻¹ b = particular solution. If c is an eigenvalue then A − cI is not invertible.

66 1 0


0 1 0 1 1 17 (a) (b) (c) . These show the unstable cases 1 0 1 1 1 (a) 1 < 0 and 2 > 0 (b) 1 > 0 and 2 > 0 (c) D a ˙ i b with a > 0

1 1 1 1

This is exactly Ae , the derivative we expect. 1 Bt 2 19 e D I C Bt (short series with B D 0) D 0

21

18 d=dt .e At / D A C A2 t C 2 A3 t 2 C 6 A4 t 3 C D A.I C At C 2 A2 t 2 C 6 A3 t 3 C /.

At

20 The solution at time t + T is also e^(A(t+T)) u(0). Thus e^(At) times e^(AT) equals e^(A(t+T)).
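Problem 20's law e^(At) e^(AT) = e^(A(t+T)) (same A in both factors) checks out numerically; the matrix below is arbitrary test data, and the exponential is computed via diagonalization rather than a library routine:

```python
import numpy as np

def expm(M):
    # matrix exponential via eigendecomposition (fine for diagonalizable M)
    w, S = np.linalg.eig(M)
    return (S * np.exp(w)) @ np.linalg.inv(S)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
t, T = 0.7, 1.3
print(np.allclose(expm(A * t) @ expm(A * T), expm(A * (t + T))))   # True
```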

4t 0 . Derivative D 1 0 et 0 0 1 1 0

4 D B. 0

1 0

4 1 D 0 0

4 1

1 0 0 0

1 0

4 1 ; 1 0

4 1

e 4.e 1/ 1 4 B 23 e D from 21 and e D from 19. By direct multiplication 0 1 0 1 e 0 e A e B ¤ e B e A ¤ e ACB D . 0 1 t 1 3t 1 .e et / e 1 1 1 1 1 0 1 At 2 2 . 24 A D D D 1 . Then e 0 3 0 2 0 3 0 0 e 3t 2 2 1 3 1 3 2 25 The matrix has A D D D A. Then all An D A. So e At D 0 0 0 0 t e 3.e t 1/ I C .t C t 2 =2? C /A D I C .e t 1/A D as in Problem 22. 0 0

A

22 A2 D A gives e At D I C At C 1 At 2 C 1 At 3 C D I C .e t 2 6

t 4 e 4e t 4 D . 1 0 1 t e et 1 1/A D . 0 1

26 (a) The inverse of e At is e

At

2 . It does have the same eigenvalues as the original matrix. 0 1 ?t 1 1 28 Centering produces U nC1 D U n . At ?t D 1, has D 1 0 ?t 1 .?t /2 e i=3 and e i=3 . Both eigenvalues have 6 D 1 so A6 D I . Therefore U 6 D A6 U 0 comes exactly back to U 0 . 1 2n 2n First A has D ˙i and A4 D I . n 29 A D . 1/n Linear growth. 2n 2n C 1 Second A has D 1; 1 and 1 1 a2 2a U n. 30 With a D ?t =2 the trapezoidal step is U nC1 D 2a 1 a2 1 C a2 .y; x/ is

31 (a) .cos A/x D .cos /x

27 .x; y/ D .e 4t ; e 4t / is a growing solution. The correct matrix for the exchanged u D

To see e At x , write .I C At C 2 4

1 2 2 A t 2

(b) If Ax D x then e At x D e t x and e t ¤ 0. C /x D .1 C t C 1 2 t 2 C /x D e t x . 2

That matrix has orthonormal columns ) orthogonal matrix ) kU nC1 k D kU n k

(b) .A/ D 2 and 0 so cos D 1; 1 and cos A D I (c) u.t / D 3.cos 2 t /.1; 1/C1.cos 0t /.1; 1/ ? u 0 D Au has exp; u 00 D Au has cos ?


Problem Set 6.4, page 337

Note A way to complete the proof at the end of page 334 (perturbing the matrix to produce distinct eigenvalues) is now on the course website, “Proofs of the Spectral Theorem”: math.mit.edu/linearalgebra.
1 A = ½(A + Aᵀ) + ½(A − Aᵀ) = symmetric + skew-symmetric.
2 (AᵀCA)ᵀ = AᵀCᵀ(Aᵀ)ᵀ = AᵀCA. When A is 6 by 3, C will be 6 by 6 and the triple product AᵀCA is 3 by 3.
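Problem 1's split A = ½(A + Aᵀ) + ½(A − Aᵀ) works for every square matrix; a quick NumPy sketch with arbitrary entries:

```python
import numpy as np

A = np.array([[1.0, 3.0], [6.0, 2.0]])
S = (A + A.T) / 2          # symmetric part
K = (A - A.T) / 2          # skew-symmetric part

print(np.allclose(S, S.T))      # True: S is symmetric
print(np.allclose(K, -K.T))     # True: K is skew-symmetric
print(np.allclose(S + K, A))    # True: the parts rebuild A
```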

3 D 0; 4; 2; unit vectors ˙.0; 1; 1/= 2 and ˙.2; 1; 1/= 6 and ˙.1; 1; 1/= 3. 4 D 10 and

5

6

7

8

9 If is complex then is also an eigenvalue .Ax D x /. Always C is real. The

10 0 1 2 5 in ? D ,xD and have to be normalized to unit 0 5 2 1 1 1 2 vectors in Q D p . 2 1 5 " # 2 1 2 1 The columns of Q are unit eigenvectors of A 2 2 1 . QD Each unit eigenvector could be multiplied by 1 3 1 2 2 9 12 A D has D 0 and 25 so the columns of Q are the two eigenvectors: 12 16 :8 :6 QD or we can exchange columns or reverse the signs of any column. :6 :8 1 2 (a) has D 1 and 3 (b) The pivots have the same signs as the ’s (c) trace 2 1 D 1 C 2 D 2, so A can’t have two negative eigenvalues. 0 1 3 3 If A D 0 then all D 0 so all D 0 as in A D . If A is symmetric then 0 0 A3 D Q?3 QT D 0 requires ? D 0. The only symmetric A is Q 0 QT D zero matrix. trace is real so the third eigenvalue of a 3 by 3 real matrix must be real. " 3 1 D2 1 3

1 2 1 2 1 2 1 2

p

p

p

11

:48 :36 :48 C25 :36 :48 :64 " # xT 1 T 12 ? x 1 x 2 ? is an orthogonal matrix so P1 C P2 D x 1 x T C x x D ? x x ? D I; 2 2 1 2 1 xT 2 T T P1 P2 D x 1 .x1 x2 /x2 D 0. Second proof: P1 P2 D P1 .I P1 / D P1 P1 D 0 since 2 P1 D P1 . 0 b A 0 0 A 13 A D has D i b and i b . The block matrices and are b 0 0 A A 0 also skew-symmetric with D i b (twice) and D i b (twice). C4

1 2 1 2 1 2 1 2

10 If x is not real then λ = xᵀAx/xᵀx is not always real. Can't assume real eigenvectors!

#

"

# 9 12 :64 ; D0 12 16 :48


14 M is skew-symmetric and orthogonal; the λ's must be i, −i, i, −i to have trace zero.
15 A = [i 1; 1 −i] has λ = 0, 0 and only one independent eigenvector x = (i, 1). The good property for complex matrices is not Aᵀ = A (symmetric) but conjugate(A)ᵀ = A (Hermitian, with real eigenvalues and orthogonal eigenvectors: see Problem 20 and Section 10.2).
16 (a) If Az = λy and Aᵀy = λz, then B[y; −z] = [−Az; Aᵀy] = −λ[y; −z]. So −λ is also an eigenvalue of B. (b) AᵀAz = Aᵀ(λy) = λ²z. (c) λ = 1, 1, −1, −1; x1 = (1, 0, 1, 0), x2 = (0, 1, 0, 1), x3 = (1, 0, −1, 0), x4 = (0, 1, 0, −1).
17 The eigenvalues of B = [0 0 1; 0 0 1; 1 1 0] are 0, √2, −√2 by Problem 16, with x1 = (1, −1, 0), x2 = (1, 1, √2), x3 = (1, 1, −√2).

18 1. y is in the nullspace of A and x is in the column space = row space because A = Aᵀ. Those spaces are perpendicular, so yᵀx = 0. 2. If Ax = λx and Ay = βy then shift by β: (A − βI)x = (λ − β)x and (A − βI)y = 0 and again x ⊥ y.
19 The eigenvectors are perpendicular for A (since Aᵀ = A) but not perpendicular for B (since Bᵀ ≠ B).
20 A = [1, 3+4i; 3−4i, 1] is a Hermitian matrix. Its eigenvalues 6 and −4 are real. Adjust equations (1)-(2) in the text to prove that λ is always real when A is Hermitian: Ax = λx leads to conjugate equations; transposing and using the Hermitian property gives both x̄ᵀAx = λ x̄ᵀx and x̄ᵀAx = λ̄ x̄ᵀx. So λ = λ̄ is real.
21 (a) False: A = [1 2; 0 1] has real eigenvalues and eigenvectors but is not symmetric (b) True from Aᵀ = QΛQᵀ (c) True from A⁻¹ = QΛ⁻¹Qᵀ (d) False!
22 A and Aᵀ have the same λ's but the order of the x's can change. A = [0 1; −1 0] has λ1 = i and λ2 = −i, with x1 = (1, i) first for A but x1 = (1, −i) first for Aᵀ.
23 A is invertible, orthogonal, a permutation, diagonalizable, Markov; B is a projection, diagonalizable, Markov. A allows QR, SΛS⁻¹, and QΛQᵀ; B allows SΛS⁻¹ and QΛQᵀ.

24 Symmetry gives QΛQᵀ if b = 1; a repeated λ and no S if b = −1; singular if b = 0.
25 Orthogonal and symmetric requires |λ| = 1 and λ real, so every λ = ±1. Then A = ±I or
A = QΛQᵀ = [cos θ, −sin θ; sin θ, cos θ][1 0; 0 −1][cos θ, sin θ; −sin θ, cos θ] = [cos 2θ, sin 2θ; sin 2θ, −cos 2θ] (a reflection).
26 Eigenvectors (1, 0) and (1, 1) give a 45° angle even with Aᵀ very close to A.


27 The roots of λ² + bλ + c = 0 are ½(−b ± √(b² − 4c)), so λ1 − λ2 = √(b² − 4c). For det(A + tB − λI) we have b = −3 − 8t and c = 2 + 16t − t². The minimum of b² − 4c is 1/17 at t = 2/17. Then λ2 − λ1 = 1/√17.
28 A = [4, 2+i; 2−i, 0] is Hermitian, with real eigenvalues λ = 5 and −1 (trace = 4 and det = −5). The solution to Problem 20 proves that λ is real when A is Hermitian; I did not intend to repeat this part.

29 (a) A = QΛQᵀ times Aᵀ = QΛᵀQᵀ equals Aᵀ times A because ΛΛᵀ = ΛᵀΛ (diagonal!) (b) Step 2: the 1, 1 entries of T·conjugate(T)ᵀ and conjugate(T)ᵀ·T are |a|² and |a|² + |b|². This makes b = 0 and T = Λ.
30 a11 = λ1|q11|² + ⋯ + λn|q1n|² ≤ λmax(|q11|² + ⋯ + |q1n|²) = λmax.
31 (a) xᵀ(Ax) = (Ax)ᵀx = xᵀAᵀx = −xᵀAx, so xᵀAx = 0 (b) z̄ᵀAz is pure imaginary: its real part is xᵀAx + yᵀAy = 0 + 0 (c) det A = λ1 ⋯ λn ≥ 0: the λ's come in pairs ib, −ib.
32 Since A is diagonalizable with eigenvalue matrix Λ = 2I, the matrix A itself has to be SΛS⁻¹ = S(2I)S⁻¹ = 2I. (The unsymmetric matrix [2 1; 0 2] also has λ = 2, 2.)

Problem Set 6.5, page 350

1 Suppose a > 0 and ac > b², so that also c > b²/a > 0. (i) The eigenvalues have the same sign because λ1λ2 = det = ac − b² > 0. (ii) That sign is positive because λ1 + λ2 > 0 (it equals the trace a + c > 0).
2 Only A4 = [1 10; 10 101] has two positive eigenvalues. xᵀA1x = 5x1² + 12x1x2 + 7x2² is negative, for example, when x1 = 4 and x2 = −3: A1 is not positive definite, as its determinant confirms.
3 [1 b; b 9] = [1 0; b 1][1 0; 0 9−b²][1 b; 0 1] = LDLᵀ is positive definite for −3 < b < 3. [2 4; 4 c] = [1 0; 2 1][2 0; 0 c−8][1 2; 0 1] = LDLᵀ is positive definite for c > 8.
4 f(x, y) = x² + 4xy + 9y² = (x + 2y)² + 5y²; x² + 6xy + 9y² = (x + 3y)².
5 x² + 4xy + 3y² = (x + 2y)² − y² = a difference of squares; it is negative at x = 2, y = −1, where the first square is zero.
6 A = [0 1; 1 0] produces f(x, y) = [x y] A [x; y] = 2xy. A has λ = 1 and −1. Then A is an indefinite matrix and f(x, y) = 2xy has a saddle point.
7 The first two products RᵀR are positive definite; the third RᵀR is singular (and positive semidefinite). The first two R's have independent columns. The 2 by 3 R cannot have full column rank 3, with only 2 rows.
8 A = [3 6; 6 16] = [1 0; 2 1][3 0; 0 4][1 2; 0 1] = LDLᵀ: pivots 3, 4 outside the squares, multipliers inside. xᵀAx = 3(x + 2y)² + 4y².
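The 2 by 2 tests in Problems 1-3 (positive determinants/pivots, positive eigenvalues, existence of a Cholesky factor) agree with each other; Problem 3's matrix [1 b; b 9] is positive definite exactly for −3 < b < 3:

```python
import numpy as np

def is_posdef(A):
    try:
        np.linalg.cholesky(A)   # succeeds exactly when A is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

for b in [-4, 0, 2, 3, 5]:
    A = np.array([[1.0, b], [b, 9.0]])
    # Cholesky test and eigenvalue test give the same answer
    print(b, is_posdef(A), bool(np.all(np.linalg.eigvalsh(A) > 0)))
```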

9 A = [4 4 8; 4 4 8; 8 8 16] has only one pivot = 4, rank A = 1, eigenvalues 24, 0, 0, det A = 0. B = [2 −1 0; −1 2 −1; 0 −1 2] has pivots 2, 3/2, 4/3.
10 A = [2 −1 −1; −1 2 −1; −1 −1 2] is singular: A(1, 1, 1) = 0.
11 Corner determinants |A1| = 2, |A2| = 6, |A3| = 30. The pivots are 2/1, 6/2, 30/6.
12 A is positive definite for c > 1: determinants c, c² − 1, and (c − 1)²(c + 2) > 0. B is never positive definite (determinants d − 4 and −4d + 12 are never both positive).
13 A = [1 5; 5 10] is an example with a + c > 2b but ac < b², so A is not positive definite.
14 The eigenvalues of A⁻¹ are positive because they are 1/λ(A). And the entries of A⁻¹ pass the determinant tests. And xᵀA⁻¹x = (A⁻¹x)ᵀA(A⁻¹x) > 0 for all x ≠ 0.
15 Since xᵀAx > 0 and xᵀBx > 0, we have xᵀ(A + B)x = xᵀAx + xᵀBx > 0 for all x ≠ 0. Then A + B is a positive definite matrix. The second proof uses the test A = RᵀR (independent columns in R): if A = RᵀR and B = SᵀS pass this test, then A + B = [R; S]ᵀ[R; S] also passes, and must be positive definite.
16 xᵀAx is zero when (x1, x2, x3) = (0, 1, 0) because of the zero on the diagonal. Actually xᵀAx goes negative for x = (1, 10, 0) because the second pivot is negative.
17 If ajj were smaller than all λ's, A − ajjI would have all eigenvalues > 0 (positive definite). But A − ajjI has a zero in the (j, j) position: impossible by Problem 16.

18 If Ax D x then x T Ax D x T x . If A is positive de?nite this leads to D x T Ax =x T x > 19 All cross terms are x T i x j D 0 because symmetric matrices have orthogonal eigenvec20 (a) The determinant is positive; all > 0

0 (ratio of positive numbers). So positive energy ) positive eigenvalues. tors. So positive eigenvalues ) positive energy.

21 A is positive de?nite when s > 8; B is positive de?nite when t > 5 by determinants.

2

(b) All projection matrices except I are singular (c) The diagonal entries of D are its eigenvalues (d) A D I has det D C1 when n is even. 1 4 1

3 2p 15 4 9 1

22 R D

p 2

p 54 1

32

1 15 2 1 4 0 3 1 1 1 T D ; R D Q Q D . p 1 2 0 2 1 3 2

3

1=b so a D 1= 1 and b D 1= 2 . The ellipse 9x C 16y D 1 has axes with 1 half-lengths a D 3 and b D 1 . The points . 1 ; 0/ and .0; 1 / are at the ends of the axes. 4 3 4 p p p 24 The ellipse x 2 C xy C y 2 D 1 has axes with half-lengths 1= D 2 and 2=3. 9 3 4 8 1 0 4 0 1 2 2 4 T 25 A D C C D ; D and C D 3 5 8 25 2 1 0 9 0 1 0 3

23 x 2 =a2 C y 2 =b 2 p is x T Ax when A p D diag.1=a2 ; 1=b 2 /. Then 1 D 1=a2 and 2 D 2 2 2


p T 26 The Cholesky factors C D L D D 2 # 1 3 0 0 0 1 2 and C D 4 0 0 0 2 0 T T square roots of the pivots from D . Note again C C D LDL D A. "

27 Writing out xᵀAx = xᵀLDLᵀx gives ax² + 2bxy + cy² = a(x + (b/a)y)² + ((ac − b²)/a)y². So the LDLᵀ from elimination is exactly the same as completing the square. The example 2x² + 8xy + 10y² = 2(x + 2y)² + 2y² has pivots 2, 2 outside the squares and multiplier 2 inside.
28 det A = (1)(10)(1) = 10; λ = 2 and 5; x1 = (cos θ, sin θ), x2 = (−sin θ, cos θ); the λ's are positive. So A is positive definite.
29 H1 is semidefinite; f1 = (½x² + y)² = 0 on the curve ½x² + y = 0. H2 = [0 1; 1 0] is indefinite at (0, 1), where the first derivatives = 0. This is a saddle point of the function f2(x, y).
30 ax² + 2bxy + cy² has a saddle point if ac < b². The matrix is indefinite (λ < 0 and λ > 0) because the determinant ac − b² is negative.
31 If c > 9 the graph of z is a bowl; if c < 9 the graph has a saddle point. When c = 9 the graph of z = (2x + 3y)² is a “trough” staying at zero along the line 2x + 3y = 0.
32 Orthogonal matrices, exponentials e^(At), and matrices with det = 1 are groups. Examples of subgroups are orthogonal matrices with det = 1 and exponentials e^(An) for integer n. Another subgroup: lower triangular elimination matrices E with diagonal 1's.
33 A product AB of symmetric positive definite matrices comes into many applications. The “generalized” eigenvalue problem Kx = λMx has AB = M⁻¹K. (Often we use eig(K, M) without actually inverting M.) All eigenvalues λ are positive: ABx = λx gives (Bx)ᵀABx = λ(Bx)ᵀx. Then λ = xᵀBᵀABx / xᵀBx > 0.

34 The five eigenvalues of K are λ = 2 - 2cos(kπ/6) = 2-√3, 2-1, 2, 2+1, 2+√3. The product of those eigenvalues is 6 = det K.

35 Put parentheses in xᵀAᵀCAx = (Ax)ᵀC(Ax). Since C is assumed positive definite, this energy can drop to zero only when Ax = 0. Since A is assumed to have independent columns, Ax = 0 only happens when x = 0. Thus AᵀCA has positive energy and is positive definite. My textbooks Computational Science and Engineering and Introduction to Applied Mathematics start with many examples of AᵀCA in a wide range of applications. I believe this is a unifying concept from linear algebra.
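Solution 34 can be verified with NumPy, assuming K is the 5 by 5 second-difference matrix with entries -1, 2, -1:

```python
import numpy as np

# K = 5x5 tridiagonal (-1, 2, -1) matrix
n = 5
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Predicted eigenvalues 2 - 2cos(k*pi/6) for k = 1..5
predicted = np.array([2 - 2*np.cos(k*np.pi/6) for k in range(1, 6)])

computed = np.sort(np.linalg.eigvalsh(K))
assert np.allclose(computed, np.sort(predicted))

# Their product equals det K = 6
assert np.isclose(np.prod(computed), np.linalg.det(K))
assert np.isclose(np.linalg.det(K), 6.0)
```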

Problem Set 6.6, page 360

1 B = GCG⁻¹ = GF⁻¹AFG⁻¹ so M = FG⁻¹. C similar to A and C similar to B imply A similar to B.

2 A = [1 0; 0 3] is similar to B = [3 0; 0 1] = M⁻¹AM with M = [0 1; 1 0].

3 B = M⁻¹AM: for example [0 1; 1 0]⁻¹ [1 2; 3 4] [0 1; 1 0] = [4 3; 2 1].

4 A has no repeated λ so it can be diagonalized: S⁻¹AS = Λ makes A similar to Λ.

5 [1 1; 0 0], [0 0; 1 1], [1 0; 1 0], [0 1; 0 1] are similar (they all have eigenvalues 1 and 0). [1 0; 0 1] is by itself, and also [0 1; 1 0] is by itself with eigenvalues 1 and -1.

6 Eight families of similar matrices: six matrices have λ = 0, 1 (one family); three matrices have λ = 1, 1 and three have λ = 0, 0 (two families each!); one has λ = 1, -1; one has λ = 2, 0; two matrices have λ = ½(1 ± √5) (they are in one family).

7 (a) (M⁻¹AM)(M⁻¹x) = M⁻¹(Ax) = M⁻¹0 = 0 (b) The nullspaces of A and of M⁻¹AM have the same dimension. Different vectors and different bases.

8 Same Λ. But A = [0 1; 0 0] and B = [0 2; 0 0] have the same line of eigenvectors and the same eigenvalues λ = 0, 0.

9 A² = [1 2; 0 1], A³ = [1 3; 0 1], every Aᵏ = [1 k; 0 1]. A⁰ = [1 0; 0 1] and A⁻¹ = [1 -1; 0 1].

10 J² = [c² 2c; 0 c²]; J⁰ = I and J⁻¹ = [1/c -1/c²; 0 1/c]; Jᵏ = [cᵏ kc^(k-1); 0 cᵏ].

11 u(0) = (v(0), w(0)) = (5, 2). The equation du/dt = Ju has dv/dt = v + w and dw/dt = w. Then w(t) = 2eᵗ and v(t) must include 2teᵗ (this comes from the repeated λ). To match v(0) = 5, the solution is v(t) = 2teᵗ + 5eᵗ.

12 If M⁻¹JM = K then JM = MK:
[m21 m22 m23 m24; 0 0 0 0; m41 m42 m43 m44; 0 0 0 0] = [0 m12 m13 0; 0 m22 m23 0; 0 m32 m33 0; 0 m42 m43 0].
That means m21 = m22 = m23 = m24 = 0. Then M is not invertible, so J is not similar to K.

13 The five 4 by 4 Jordan forms with λ = 0, 0, 0, 0 are J₁ = zero matrix and
J₂ = [0 1 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0], J₃ = [0 1 0 0; 0 0 1 0; 0 0 0 0; 0 0 0 0],
J₄ = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 0 0], J₅ = [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0].


Problem 12 showed that J₃ and J₄ are not similar, even with the same rank. Every matrix with all λ = 0 is "nilpotent" (its nth power is Aⁿ = zero matrix). You see J⁴ = 0 for these matrices. How many possible Jordan forms for n = 5 and all λ = 0?

14 (1) Choose Mᵢ = reverse diagonal matrix to get Mᵢ⁻¹JᵢMᵢ = Jᵢᵀ in each block (2) M₀ has those diagonal blocks Mᵢ to get M₀⁻¹JM₀ = Jᵀ (3) Aᵀ = (M⁻¹)ᵀJᵀMᵀ equals (M⁻¹)ᵀM₀⁻¹JM₀Mᵀ = (MM₀Mᵀ)⁻¹A(MM₀Mᵀ), and Aᵀ is similar to A.

15 det(M⁻¹AM - λI) = det(M⁻¹AM - M⁻¹λIM). This is det(M⁻¹(A - λI)M). By the product rule, the determinants of M and M⁻¹ cancel to leave det(A - λI).

16 [a b; c d] is similar to [d c; b a]; [b a; d c] is similar to [c d; a b]. So two pairs of similar matrices, but [1 0; 0 1] is not similar to [0 1; 1 0]: different eigenvalues!

17 (a) False: diagonalize a nonsymmetric A = SΛS⁻¹. Then Λ is symmetric and similar to A. (b) True: a singular matrix has λ = 0. (c) False: [0 1; 1 0] and [0 -1; -1 0] are similar (they have λ = ±1). (d) True: adding I increases all eigenvalues by 1.

18 AB = B⁻¹(BA)B so AB is similar to BA. If ABx = λx then BA(Bx) = λ(Bx).

19 Diagonal blocks 6 by 6, 4 by 4; AB has the same eigenvalues as BA plus 6 - 4 zeros.

20 (a) A = M⁻¹BM gives A² = (M⁻¹BM)(M⁻¹BM) = M⁻¹B²M. So A² is similar to B². (b) A² equals (-A)² but A may not be similar to B = -A (it could be!). (c) [3 1; 0 4] is diagonalizable to [3 0; 0 4] because λ₁ ≠ λ₂, so these matrices are similar (d) [4 1; 0 4] has only one eigenvector, so it is not diagonalizable (e) PAPᵀ is similar to A.

21 J² has three 1's down the second superdiagonal, and two independent eigenvectors for λ = 0. Its 5 by 5 Jordan form is [J₃ 0; 0 J₂] with J₃ = [0 1 0; 0 0 1; 0 0 0] and J₂ = [0 1; 0 0].

Note to professors: An interesting question: Which matrices A have (complex) square roots R² = A? If A is invertible, no problem. But the Jordan block sizes for λ = 0, n₁ ≥ n₂ ≥ … ≥ nₖ ≥ n_(k+1) = 0, must come in pairs like 3 and 2 in this example: n₁ = (n₂ or n₂ + 1) and n₃ = (n₄ or n₄ + 1) and so on.

A list of all 3 by 3 Jordan forms could be [a 0 0; 0 b 0; 0 0 c], [a 1 0; 0 a 0; 0 0 b], [a 1 0; 0 a 1; 0 0 a] (for any numbers a, b, c) with 3, 2, 1 independent eigenvectors. A 4 by 4 list could be diag(a, b, c, d), then blocks [a 1; 0 a] with [b] and [c], then [a 1; 0 a] with [b 1; 0 b], then [a 1 0; 0 a 1; 0 0 a] with [b], then one full 4 by 4 block with a on the diagonal, giving 4, 3, 2, 2, 1 independent eigenvectors.
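The closing question of Solution 13 has a tidy answer: n by n Jordan forms with all λ = 0 correspond to the partitions of n into block sizes, so for n = 5 there are 7. A small counting sketch (the function name is mine):

```python
def partitions(n, largest=None):
    """Number of ways to write n as a sum of positive integers (order ignored)
    = number of n by n Jordan forms with all eigenvalues zero."""
    if largest is None:
        largest = n
    if n == 0:
        return 1
    # choose the first (largest) block size k, then partition the rest
    return sum(partitions(n - k, k) for k in range(1, min(n, largest) + 1))

assert partitions(4) == 5   # matches the five 4 by 4 forms J1..J5 in Problem 13
assert partitions(5) == 7
```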


22 If all roots are λ = 0, this means that det(A - λI) must be just (-λ)ⁿ. The Cayley-Hamilton Theorem in Problem 6.2.32 immediately says that Aⁿ = zero matrix. The key example is a single n by n Jordan block (with n - 1 ones above the diagonal): check directly that Jⁿ = zero matrix.

23 Certainly Q₁R₁ is similar to R₁Q₁ = Q₁⁻¹(Q₁R₁)Q₁. Then A₁ = Q₁R₁ - cs²I is similar to A₂ = R₁Q₁ - cs²I.

24 A could have eigenvalues λ = 2 and λ = ½ (A could be diagonal). Then A⁻¹ has the same two eigenvalues (and is similar to A).
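Solution 23 is the key step of the shifted QR algorithm: one QR step with any shift c produces a similar matrix, so the eigenvalues are preserved. A sketch with an illustrative symmetric matrix:

```python
import numpy as np

A1 = np.array([[3.0, 1.0], [1.0, 2.0]])
c = 0.5                                 # any shift works

Q1, R1 = np.linalg.qr(A1 - c*np.eye(2))
A2 = R1 @ Q1 + c*np.eye(2)              # one shifted QR step

# A2 = Q1^T A1 Q1 is similar to A1: same eigenvalues
assert np.allclose(np.sort(np.linalg.eigvalsh(A1)),
                   np.sort(np.linalg.eigvalsh(A2)))
```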

Problem Set 6.7, page 371

1 A = UΣVᵀ = [u₁ u₂][√50 0; 0 0][v₁ v₂]ᵀ = (1/√10)[1 3; 3 -1] · [√50 0; 0 0] · ((1/√5)[1 2; 2 -1])ᵀ.

2 This A = [1 2; 3 6] is a 2 by 2 matrix of rank 1. Its row space has basis v₁, its nullspace has basis v₂, its column space has basis u₁, its left nullspace has basis u₂: Row space v₁ = (1/√5)(1, 2), Nullspace v₂ = (1/√5)(2, -1), Column space u₁ = (1/√10)(1, 3), N(Aᵀ) u₂ = (1/√10)(3, -1).

3 If A has rank 1 then so does AᵀA. The only nonzero eigenvalue of AᵀA is its trace, which is the sum of all aᵢⱼ². (Each diagonal entry of AᵀA is the sum of aᵢⱼ² down one column, so the trace is the sum down all columns.) Then σ₁ = the square root of this sum, and σ₁² = this sum of all aᵢⱼ².

4 AᵀA = AAᵀ = [2 1; 1 1] has eigenvalues σ₁² = (3 + √5)/2 and σ₂² = (3 - √5)/2. But A is indefinite: σ₁ = (1 + √5)/2 = λ₁(A) and σ₂ = (√5 - 1)/2 = -λ₂(A); u₁ = v₁ but u₂ = -v₂.

5 A proof that eigshow finds the SVD. When V₁ = (1, 0), V₂ = (0, 1) the demo finds AV₁ and AV₂ at some angle θ. A 90° turn by the mouse to V₂, -V₁ finds AV₂ and -AV₁ at the angle 180° - θ. Somewhere between, the constantly orthogonal v₁ and v₂ must produce Av₁ and Av₂ at angle 90°. Those orthogonal directions give u₁ and u₂.

6 AAᵀ = [2 1; 1 2] has σ₁² = 3 with u₁ = (1/√2)(1, 1) and σ₂² = 1 with u₂ = (1/√2)(1, -1). AᵀA = [1 1 0; 1 2 1; 0 1 1] has σ₁² = 3 with v₁ = (1/√6)(1, 2, 1), σ₂² = 1 with v₂ = (1/√2)(1, 0, -1); and v₃ = (1/√3)(1, -1, 1). Then [1 1 0; 0 1 1] = [u₁ u₂][√3 0 0; 0 1 0][v₁ v₂ v₃]ᵀ.
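Solutions 1-3 can be confirmed numerically for the rank-1 matrix A = [1 2; 3 6]:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 6.0]])
U, s, Vt = np.linalg.svd(A)

assert np.isclose(s[0], np.sqrt(50))        # sigma_1 = sqrt(50)
assert np.isclose(s[1], 0)                  # rank 1: sigma_2 = 0
assert np.isclose(s[0]**2, np.sum(A**2))    # sigma_1^2 = sum of all a_ij^2

# v1 spans the row space, u1 the column space (signs may flip)
assert np.allclose(np.abs(Vt[0]), np.array([1, 2]) / np.sqrt(5))
assert np.allclose(np.abs(U[:, 0]), np.array([1, 3]) / np.sqrt(10))
```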


7 The matrix A in Problem 6 had σ₁ = √3 and σ₂ = 1 in Σ. The smallest change to rank 1 is to make σ₂ = 0. In the factorization A = UΣVᵀ = u₁σ₁v₁ᵀ + u₂σ₂v₂ᵀ, this change σ₂ → 0 will leave the closest rank-1 matrix as u₁σ₁v₁ᵀ. See Problem 14 for the general case of this problem.

8 The number σ_max(A⁻¹)σ_max(A) is the same as σ_max(A)/σ_min(A). This is certainly ≥ 1. It equals 1 if all σ's are equal, and then A = UΣVᵀ is a multiple of an orthogonal matrix. The ratio σ_max/σ_min is the important condition number of A studied in Section 9.2.

9 A = UVᵀ since all σⱼ = 1, which means that Σ = I.

10 A rank-1 matrix with Av = 12u would have u in its column space, so A = uwᵀ for some vector w. I intended (but didn't say) that w is a multiple of the unit vector v = ½(1, 1, 1, 1) in the problem. Then A = 12uvᵀ to get Av = 12u when vᵀv = 1.

11 If A has orthogonal columns w₁, …, wₙ of lengths σ₁, …, σₙ, then AᵀA will be diagonal with entries σ₁², …, σₙ². So the σ's are definitely the singular values of A (as expected). The eigenvectors of that diagonal matrix AᵀA are the columns of I, so V = I in the SVD. Then the uᵢ are Avᵢ/σᵢ, which is the unit vector wᵢ/σᵢ. The SVD of this A with orthogonal columns is A = UΣVᵀ = (AΣ⁻¹)(Σ)(I).

12 Since Aᵀ = A we have σ₁² = λ₁² and σ₂² = λ₂². But λ₂ is negative, so σ₁ = 3 and σ₂ = 2. The unit eigenvectors of A are the same u₁ = v₁ as for AᵀA = AAᵀ, and u₂ = -v₂ (notice the sign change because λ₂ = -2, as in Problem 4).

13 Suppose the SVD of R is R = UΣVᵀ. Then multiply by Q to get A = QR. So the SVD of this A is (QU)ΣVᵀ. (Orthogonal Q times orthogonal U = orthogonal QU.)

14 The smallest change in A is to set its smallest singular value σ₂ to zero. See #7.

15 The singular values of A + I are not σⱼ + 1. They come from the eigenvalues of (A + I)ᵀ(A + I).

16 This simulates the random walk used by Google on billions of sites to solve Ap = p. It is like the power method of Section 9.3 except that it follows the links in one "walk" where the vector pₖ = Aᵏp₀ averages over all walks.

17 A = UΣVᵀ = [cosines including u₄] diag(√(2 - √2), √2, √(2 + √2)) [sine matrix]ᵀ. AV = UΣ says that differences of sines in V are cosines in U times σ's. The SVD of the derivative on [0, π] with f(0) = 0 has u = sin nx, σ = n, v = cos nx!
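Solutions 7 and 14 (zero out the smallest σ) are the rank-1 case of the Eckart-Young theorem. A sketch using the matrix from Problem 6:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])          # the matrix from Problem 6
U, s, Vt = np.linalg.svd(A)
assert np.allclose(np.sort(s), np.sort([np.sqrt(3), 1.0]))

# Closest rank-1 matrix: keep sigma_1 u1 v1^T, set sigma_2 to zero
A1 = s[0] * np.outer(U[:, 0], Vt[0])
assert np.linalg.matrix_rank(A1) == 1

# The error in the 2-norm is exactly the dropped singular value sigma_2 = 1
assert np.isclose(np.linalg.norm(A - A1, 2), s[1])
```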

Problem Set 7.1, page 380

1 With w = 0, linearity gives T(v + 0) = T(v) + T(0). Thus T(0) = 0. With c = -1, linearity gives T(-0) = -T(0). This is a second proof that T(0) = 0.

2 Combining T(cv) = cT(v) and T(dw) = dT(w) with addition gives T(cv + dw) = cT(v) + dT(w). Then one more addition gives cT(v) + dT(w) + eT(u).

3 (d) is not linear.


4 (a) S(T(v)) = v (b) S(T(v₁) + T(v₂)) = S(T(v₁)) + S(T(v₂)).

5 Choose v = (1, 1) and w = (-1, 0). Then T(v) + T(w) = (v + w) but T(v + w) = (0, 0).

6 (a) T(v) = v/‖v‖ does not satisfy T(v + w) = T(v) + T(w) or T(cv) = cT(v) (b) and (c) are linear (d) satisfies T(cv) = cT(v).

7 (a) T(T(v)) = v (b) T(T(v)) = v + (2, 2) (c) T(T(v)) = -v (d) T(T(v)) = T(v).

8 (a) The range of T(v₁, v₂) = (v₁ - v₂, 0) is the line of vectors (c, 0). The nullspace is the line of vectors (c, c). (b) T(v₁, v₂, v₃) = (v₁, v₂) has range R², kernel {(0, 0, v₃)} (c) T(v) = 0 has range {0}, kernel R² (d) T(v₁, v₂) = (v₁, v₁) has range = multiples of (1, 1), kernel = multiples of (1, -1).

9 If T(v₁, v₂, v₃) = (v₂, v₃, v₁) then T(T(v)) = (v₃, v₁, v₂); T³(v) = v; T¹⁰⁰(v) = T(v).

10 (a) T(1, 0) = 0 (b) (0, 0, 1) is not in the range (c) T(0, 1) = 0.

11 For multiplication T(v) = Av: V = Rⁿ, W = Rᵐ; the outputs fill the column space; v is in the kernel if Av = 0.

12 T(v) = (4, 4); (2, 2); (2, 2); if v = (a, b) = b(1, 1) + ((a - b)/2)(2, 0) then T(v) = b(2, 2) + (0, 0).

13 The distributive law (page 69) gives A(M₁ + M₂) = AM₁ + AM₂. The distributive law over c's gives A(cM) = c(AM).

14 This A is invertible. Multiply AM = 0 and AM = B by A⁻¹ to get M = 0 and M = A⁻¹B. The kernel contains only the zero matrix M = 0.

15 This A is not invertible. AM = I is impossible. A[2 2; -1 -1] = [0 0; 0 0]. The range contains only matrices AM whose columns are multiples of (1, 3).

16 No matrix A gives A[0 0; 1 0] = [0 1; 0 0]. To professors: Linear transformations on matrix space come from 4 by 4 matrices. Those in Problems 13-15 were special.

17 For T(M) = Mᵀ: (a) T² = I is True (b) True (c) True (d) False.

18 T(I) = 0 but M = [0 b; 0 0] = T(M): these M's fill the range. Every M = [a 0; c d] is in the kernel. Notice that dim(range) + dim(kernel) = 1 + 3 = dim(input space of 2 by 2 M's).

19 T(T⁻¹(M)) = M so T⁻¹(M) = A⁻¹MB⁻¹.

20 (a) Horizontal lines stay horizontal, vertical lines stay vertical (b) The house squashes onto a line (c) Vertical lines stay vertical because T(1, 0) = (a₁₁, 0).

21 A = [2 0; 0 1] doubles the width of the house. A = [.7 .7; .3 .3] projects the house (since A² = A from trace = 1 and λ = 0, 1). The projection is onto the column space of A = the line through (.7, .3). U = [1 1; 0 1] will shear the house horizontally: the point at (x, y) moves over to (x + y, y).


22 (a) A = [a 0; 0 d] with d > 0 leaves the house AH sitting straight up (b) A = 3I expands the house by 3 (c) A = [cos θ -sin θ; sin θ cos θ] rotates the house.

23 T(v) = -v rotates the house by 180° around the origin. Then the affine transformation T(v) = -v + (1, 0) shifts the rotated house one unit to the right.

24 A code to add a chimney will be gratefully received!

25 This code needs a correction: add spaces between -10 10 -10 10.

26 [1 0; 0 .1] compresses vertical distances by 10 to 1. [.5 .5; .5 .5] projects onto the 45° line. [.5 .5; -.5 .5] rotates by 45° clockwise and contracts by a factor of √2 (the columns have length 1/√2). [1 1; 1 0] has determinant -1 so the house is "flipped and sheared." One way to see this is to factor the matrix as LDLᵀ: [1 1; 1 0] = [1 0; 1 1][1 0; 0 -1][1 1; 0 1] = (shear)(flip left-right)(shear).

27 Also Problem 30 emphasizes that circles are transformed to ellipses (see the figure in Section 6.7).

28 A code that adds two eyes and a smile will be included here with public credit given!

29 (a) ad - bc = 0 (b) ad - bc > 0 (c) |ad - bc| = 1. If the vectors to two corners transform to themselves then by linearity T = I. (Fails if one corner is (0, 0).)

30 The circle through v₁ and v₂ transforms to the ellipse by rotating 30° and stretching the first axis by 2.

31 Linear transformations keep straight lines straight! And two parallel edges of a square (edges differing by a fixed v) go to two parallel edges (edges differing by T(v)). So the output is a parallelogram.

Problem Set 7.2, page 395

1 For Sv = d²v/dx² with basis v₁, v₂, v₃, v₄ = 1, x, x², x³: Sv₁ = Sv₂ = 0, Sv₃ = 2v₁, Sv₄ = 6v₂. The matrix for S is B = [0 0 2 0; 0 0 0 6; 0 0 0 0; 0 0 0 0].

2 Sv = d²v/dx² = 0 for linear functions v(x) = a + bx. All (a, b, 0, 0) are in the nullspace of the second derivative matrix B.

3 (Matrix A)² = B when (transformation T)² = S and output basis = input basis.
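Solutions 1-3 in matrix form: with the basis 1, x, x², x³, the first-derivative matrix A satisfies A² = B, and B² = 0 since the fourth derivative of a cubic is zero. A sketch, representing each polynomial by its coefficient vector:

```python
import numpy as np

# Columns give d/dx of the basis 1, x, x^2, x^3 in that same basis:
# (1)' = 0, (x)' = 1, (x^2)' = 2x, (x^3)' = 3x^2
A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

B = A @ A     # second-derivative matrix
assert np.allclose(B, [[0, 0, 2, 0],
                       [0, 0, 0, 6],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])

assert np.allclose(B @ B, np.zeros((4, 4)))   # fourth derivative of a cubic is 0
```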


4 The third derivative matrix has 6 in the (1, 4) position, since the third derivative of x³ is 6. This matrix also comes from AB. The fourth derivative of a cubic is zero, and B² is the zero matrix.

5 T(v₁ + v₂ + v₃) = 2w₁ + w₂ + 2w₃; A times (1, 1, 1) gives (2, 1, 2).

6 v = c(v₂ - v₃) gives T(v) = 0; the nullspace is (0, c, -c); solutions (1, 0, 0) + (0, c, -c).

7 (1, 0, 0) is not in the column space of the matrix A, and w₁ is not in the range of the linear transformation T. Key point: the column space of the matrix matches the range of the transformation.

8 We don't know T(w) unless the w's are the same as the v's. In that case the matrix is A².

9 Rank of A = 2 = dimension of the range of T. The outputs Av (column space) match the outputs T(v) (the range of T). The "output space" W is like Rᵐ: it contains all outputs but may not be filled up.

10 The matrix for T is A = [1 0 0; 1 1 0; 1 1 1]. For the output (1, 0, 0) choose the input v = A⁻¹(1, 0, 0) = (1, -1, 0). This means: for the output w₁, choose the input v₁ - v₂.

11 A⁻¹ = [1 0 0; -1 1 0; 0 -1 1] so T⁻¹(w₁) = v₁ - v₂, T⁻¹(w₂) = v₂ - v₃, T⁻¹(w₃) = v₃. The columns of A⁻¹ describe T⁻¹ from W back to V. The only solution to T(v) = 0 is v = 0.

12 (c) T⁻¹(T(w₁)) = w₁ is wrong because w₁ is not generally in the input space.

13 (a) T(v₁) = v₂, T(v₂) = v₁ is its own inverse (b) T(v₁) = v₁, T(v₂) = 0 has T² = T (c) If T² = I for part (a) and T² = T for part (b), then T must be I.

14 (a) [3 1; 5 2] (b) [2 -1; -5 3] must be the inverse of (a) (c) the matrix must be 2A.

15 (a) M = [r s; t u] transforms (1, 0) and (0, 1) to (r, t) and (s, u); this is the "easy" direction. (b) N = [a b; c d]⁻¹ transforms in the inverse direction, back to the standard basis vectors. (c) ad = bc will make the forward matrix singular and the inverse impossible.

16 M = W⁻¹V is the change of basis matrix.

17 Reordering the basis vectors is done by a permutation matrix. Changing lengths is done by a positive diagonal matrix.

18 (a, b) = (cos θ, -sin θ). The minus sign comes from Q⁻¹ = Qᵀ.

19 M = [1 1; 4 5]; [a; b] = [5; -4] = first column of M⁻¹ = the coordinates of [1; 0] in the basis (1, 4), (1, 5).

20 w₂(x) = 1 - x²; w₃(x) = ½(x² - x); y = 4w₁ + 5w₂ + 6w₃.

21 The w's-to-v's matrix and the v's-to-w's matrix are inverses of each other. The key idea: the matrix multiplies the coordinates in the v basis to give the coordinates in the w basis.

22 The 3 equations to match 4, 5, 6 at x = a, b, c are [1 a a²; 1 b b²; 1 c c²][A; B; C] = [4; 5; 6]. This Vandermonde determinant equals (b - a)(c - a)(c - b). So a, b, c must be distinct to have det ≠ 0 and one solution A, B, C.

23 The matrix M with these nine entries must be invertible.

24 Start from A = QR. Column 2 is a₂ = r₁₂q₁ + r₂₂q₂. This gives a₂ as a combination of the q's. So the change of basis matrix is R.

25 Start from A = LU. Row 2 of A is ℓ₂₁(row 1 of U) + ℓ₂₂(row 2 of U). The change of basis matrix is always invertible, because basis goes to basis.

26 The matrix for T(vᵢ) = λᵢvᵢ is Λ = diag(λ₁, λ₂, λ₃).

27 If T is not invertible, T(v₁), …, T(vₙ) is not a basis. We couldn't choose wᵢ = T(vᵢ).

28 (a) [0 3; 0 0] gives T(v₁) = 0 and T(v₂) = 3v₁. (b) [1 0; 0 0] gives T(v₁) = v₁ and T(v₁ + v₂) = v₁ (which combine into T(v₂) = 0 by linearity).

29 T(x, y) = (x, -y) is reflection across the x-axis. Then reflection across the y-axis gives S(x, y) = (-x, y). Thus ST = -I.

30 S takes (x, y) to (-y, x). S(T(v)) = (-1, 2). S(v) = (-2, 1) and T(S(v)) = (1, -2).

31 Multiply the two reflections to get [cos 2(θ - φ) -sin 2(θ - φ); sin 2(θ - φ) cos 2(θ - φ)], which is rotation by 2(θ - φ). In words: (1, 0) is reflected to have angle 2φ, and that is reflected again to angle 2θ - 2φ.

32 False: we will not know T(v) for every v unless the n v's are linearly independent.

33 To find coordinates in the wavelet basis, multiply by
W⁻¹ = [¼ ¼ ¼ ¼; ¼ ¼ -¼ -¼; ½ -½ 0 0; 0 0 ½ -½].
Then e₁ = ¼w₁ + ¼w₂ + ½w₃, and (1, -1, 1, -1) = w₃ + w₄. Notice again: W tells us how the bases change, W⁻¹ tells us how the coordinates change.

34 The last step writes 6, 6, 2, 2 as the overall average 4, 4, 4, 4 plus the difference 2, 2, -2, -2. Therefore c₁ = 4 and c₂ = 2 and c₃ = 1 and c₄ = 1.
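Solutions 33-34 in code: the wavelet coordinates solve Wc = v. The input vector v = (7, 5, 3, 1) is my assumption for Problem 34; its averaging step produces 6, 6, 2, 2 and the coordinates 4, 2, 1, 1.

```python
import numpy as np

# Wavelet basis vectors are the columns of W
W = np.array([[1,  1,  1,  0],
              [1,  1, -1,  0],
              [1, -1,  0,  1],
              [1, -1,  0, -1]], dtype=float)

v = np.array([7.0, 5.0, 3.0, 1.0])    # assumed input vector for Problem 34
c = np.linalg.solve(W, v)             # coordinates in the wavelet basis
assert np.allclose(c, [4, 2, 1, 1])   # c1 = 4, c2 = 2, c3 = 1, c4 = 1
```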


35 The wavelet basis is (1, 1, 1, 1, 1, 1, 1, 1), the long wavelet (1, 1, 1, 1, -1, -1, -1, -1), two medium wavelets (1, 1, -1, -1, 0, 0, 0, 0) and (0, 0, 0, 0, 1, 1, -1, -1), and 4 short wavelets with a single pair 1, -1.

36 If Vb = Wc then b = V⁻¹Wc. The change of basis matrix is V⁻¹W.

37 Multiplying by [a b; c d] gives T(v₁) = av₁ + cv₃. Similarly T(v₂) = av₂ + cv₄ and T(v₃) = bv₁ + dv₃ and T(v₄) = bv₂ + dv₄. The matrix for T in this basis is [a 0 b 0; 0 a 0 b; c 0 d 0; 0 c 0 d].

38 The matrix for T in this basis is A = [1 0 0 0; 0 1 0 0; 0 0 0 0].
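Solution 37's 4 by 4 matrix is exactly a Kronecker product: with the basis ordering v₁, v₂, v₃, v₄ = E₁₁, E₁₂, E₂₁, E₂₂ (row-major flattening), the map M ↦ AM is represented by kron(A, I). A sketch:

```python
import numpy as np

a, b, c, d = 2.0, 3.0, 5.0, 7.0       # arbitrary entries of A
A = np.array([[a, b], [c, d]])

# Matrix of T(M) = A M in the basis E11, E12, E21, E22
T = np.kron(A, np.eye(2))
assert np.allclose(T, [[a, 0, b, 0],
                       [0, a, 0, b],
                       [c, 0, d, 0],
                       [0, c, 0, d]])

# Check on a sample M: flattening row by row turns T(M) = AM into T @ vec(M)
M = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose((T @ M.flatten()).reshape(2, 2), A @ M)
```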

Problem Set 7.3, page 406

1 AᵀA = [10 20; 20 40] has λ = 50 and 0, v₁ = (1/√5)(1, 2), v₂ = (1/√5)(2, -1); σ₁ = √50.

2 Orthonormal bases: v₁ for the row space, v₂ for the nullspace, u₁ for the column space, u₂ for N(Aᵀ). All matrices with those four subspaces are multiples cA, since the subspaces are just lines. Normally many more matrices share the same 4 subspaces. (For example, all n by n invertible matrices share Rⁿ.)

3 A = QH = (1/√50)[7 -1; 1 7] · (1/√50)[10 20; 20 40]. H is semidefinite because A is singular.

4 A⁺ = VΣ⁺Uᵀ = (1/50)[1 3; 2 6]; A⁺A = [.2 .4; .4 .8] and AA⁺ = [.1 .3; .3 .9].

5 AᵀA = [10 -8; -8 10] has λ = 18 and 2, v₁ = (1/√2)(1, -1), v₂ = (1/√2)(1, 1); σ₁ = √18 and σ₂ = √2.

6 AAᵀ = [18 0; 0 2] has u₁ = (1, 0), u₂ = (0, 1). The same √18 and √2 go into Σ.

7 [u₁ u₂][σ₁ 0; 0 σ₂][v₁ᵀ; v₂ᵀ] = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ. In general this is σ₁u₁v₁ᵀ + … + σᵣuᵣvᵣᵀ.

8 A = UΣVᵀ splits into QK (polar): Q = UVᵀ = (1/√2)[1 -1; 1 1] and K = VΣVᵀ = V[√18 0; 0 √2]Vᵀ = √2[2 -1; -1 2].

9 A⁺ is A⁻¹ because A is invertible. The pseudoinverse equals the inverse when A⁻¹ exists!

10 AᵀA = [9 12 0; 12 16 0; 0 0 0] has λ = 25, 0, 0 and v₁ = (.6, .8, 0), v₂ = (.8, -.6, 0), v₃ = (0, 0, 1). Here A = [3 4 0] has rank 1 and AAᵀ = [25], and σ₁ = 5 is the only singular value in Σ = [5 0 0].
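Solution 4's pseudoinverse can be checked against NumPy directly:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 6.0]])
Aplus = np.linalg.pinv(A)
assert np.allclose(Aplus, np.array([[1, 3], [2, 6]]) / 50)

# A+ A projects onto the row space, A A+ onto the column space
assert np.allclose(Aplus @ A, [[.2, .4], [.4, .8]])
assert np.allclose(A @ Aplus, [[.1, .3], [.3, .9]])
```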


11 A = [1][5 0 0]Vᵀ and A⁺ = V[.2; 0; 0][1] = [.12; .16; 0]; A⁺A = [.36 .48 0; .48 .64 0; 0 0 0] and AA⁺ = [1].

12 The zero matrix has no pivots or singular values. Then Σ is the same 2 by 3 zero matrix and the pseudoinverse is the 3 by 2 zero matrix.

13 If det A = 0 then rank(A) < n; thus rank(A⁺) < n and det A⁺ = 0.

14 A must be symmetric and positive definite, if Σ = Λ and U = V = eigenvector matrix Q.

15 (a) AᵀA is singular (b) This x⁺ in the row space does give AᵀAx⁺ = Aᵀb (c) If (1, -1) in the nullspace of A is added to x⁺, we get another solution to AᵀAx̂ = Aᵀb. But this x̂ is longer than x⁺ because the added part is orthogonal to the x⁺ in the row space.

16 x⁺ in the row space of A is perpendicular to x̂ - x⁺ in the nullspace of AᵀA = nullspace of A. The right triangle has c² = a² + b².

17 AA⁺p = p, AA⁺e = 0, A⁺Axᵣ = xᵣ, A⁺Axₙ = 0.

18 A⁺ = VΣ⁺Uᵀ = (1/5)[.6 .8] = [.12 .16] and A⁺A = [1] and AA⁺ = [.36 .48; .48 .64] = projection.

19 L is determined by ℓ₂₁. Each eigenvector in S is determined by one number. The counts are 1 + 3 for LU, 1 + 2 + 1 for LDU; 1 + 3 for QR, 1 + 2 + 1 for UΣVᵀ; 2 + 2 + 0 for SΛS⁻¹. LDLᵀ and QΛQᵀ are determined by 1 + 2 + 0 numbers because A is symmetric.

20 Column times row multiplication gives A = UΣVᵀ = Σᵢ σᵢuᵢvᵢᵀ and also A⁺ = VΣ⁺Uᵀ = Σᵢ (1/σᵢ)vᵢuᵢᵀ.

21 Multiplying A⁺A and using the orthogonality of each uᵢ to all other uⱼ leaves the projection matrix A⁺A = Σ₁ʳ vᵢvᵢᵀ. Similarly AA⁺ = Σ₁ʳ uᵢuᵢᵀ from VᵀV = I.

22 Keep only the r by r corner Σᵣ of Σ (the rest is all zero). Then A = UΣVᵀ has the required form A = Û M₁ΣᵣM₂ᵀ V̂ᵀ with an invertible M = M₁ΣᵣM₂ᵀ in the middle.

23 [0 A; Aᵀ 0][u; v] = [Av; Aᵀu] = σ[u; v]: the singular values σ of A are eigenvalues of this block matrix.


Problem Set 8.1, page 418

1 A₀ᵀC₀A₀ = [c₁+c₂, -c₂, 0; -c₂, c₂+c₃, -c₃; 0, -c₃, c₃+c₄]. Its determinant c₁c₂c₃ + c₁c₂c₄ + c₁c₃c₄ + c₂c₃c₄ comes by direct calculation. Set c₄ = 0 to find det A₁ᵀC₁A₁ = c₁c₂c₃.

2 (A₁ᵀC₁A₁)⁻¹ = [1 0 0; 1 1 0; 1 1 1] diag(1/c₁, 1/c₂, 1/c₃) [1 1 1; 0 1 1; 0 0 1] = [1/c₁, 1/c₁, 1/c₁; 1/c₁, 1/c₁+1/c₂, 1/c₁+1/c₂; 1/c₁, 1/c₁+1/c₂, 1/c₁+1/c₂+1/c₃].


3 The rows of the free-free matrix in equation (9) add to [0 0 0], so the right side needs f₁ + f₂ + f₃ = 0. f = (-1, 0, 1) gives c₂u₁ - c₂u₂ = -1, c₃u₂ - c₃u₃ = -1, 0 = 0. Then u_particular = (-1/c₂ - 1/c₃, -1/c₃, 0). Add any multiple of u_nullspace = (1, 1, 1).

4 ∫₀¹ (d/dx)(c(x) du/dx) dx = [c(x) du/dx] from 0 to 1 = 0 (boundary conditions), so we need ∫₀¹ f(x) dx = 0.

5 -dy/dx = f(x) gives y(x) = C - ∫₀ˣ f(t) dt. Then y(1) = 0 gives C = ∫₀¹ f(t) dt and y(x) = ∫ₓ¹ f(t) dt. If the load is f(x) = 1 then the displacement is y(x) = 1 - x.

6 Multiply A₁ᵀC₁A₁ as columns of A₁ᵀ times c's times rows of A₁. The first 3 by 3 "element matrix" E₁ = [1 0 0]ᵀ c₁ [1 0 0] has c₁ in the top left corner.

7 For 5 springs and 4 masses, the 5 by 4 A has two nonzero diagonals: all aᵢᵢ = 1 and aᵢ₊₁,ᵢ = -1. With C = diag(c₁, c₂, c₃, c₄, c₅) we get K = AᵀCA, symmetric tridiagonal with diagonal entries Kᵢᵢ = cᵢ + cᵢ₊₁ and off-diagonals Kᵢ₊₁,ᵢ = -cᵢ₊₁. With C = I this K is the -1, 2, -1 matrix, and K(2, 3, 3, 2) = (1, 1, 1, 1) shows that u = (2, 3, 3, 2) solves Ku = ones(4, 1).

8 The solution to -u″ = 1 with u(0) = u(1) = 0 is u(x) = ½(x - x²). At x = 1/5, 2/5, 3/5, 4/5 this gives u = 2, 3, 3, 2 (the discrete solution in Problem 7) times (Δx)² = 1/25.

9 -u″ = mg has complete solution u(x) = A + Bx - ½mgx². From u(0) = 0 we get A = 0. From u′(1) = 0 we get B = mg. Then u(x) = ½mg(2x - x²), which at x = 1/3, 2/3, 1 equals 5mg/18, 4mg/9, mg/2. This u(x) is not proportional to the discrete u = (3mg, 5mg, 6mg) at the meshpoints. This imperfection is because the discrete problem uses a 1-sided difference, less accurate at the free end. Perfect accuracy is recovered by a centered difference (discussed on page 21 of my CSE textbook). The fixed case is (2.2
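Solution 7 in code, assuming C = I for the 5 springs:

```python
import numpy as np

# 5 by 4 difference matrix: a_ii = 1, a_{i+1,i} = -1
A = np.zeros((5, 4))
for i in range(4):
    A[i, i] = 1.0
    A[i + 1, i] = -1.0

K = A.T @ np.eye(5) @ A          # with C = I this is the -1, 2, -1 matrix
assert np.allclose(K, 2*np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1))

u = np.linalg.solve(K, np.ones(4))
assert np.allclose(u, [2, 3, 3, 2])   # K(2, 3, 3, 2) = (1, 1, 1, 1)
```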
