Linear Algebra Exercises

My answers to the exercises in “The Dark Art of Linear Algebra: An Intuitive Geometric Approach”

Author: Natnael Getahun

Published: March 9, 2026

Chapter 1 (Vectors)

Exercises (page 5)

1.

False

2.

The idea is to reverse the direction of **w** and then add.

3.

  1. Division

  2. The operation * where \[a * b = \frac{a + b}{2}\]

4.

  1. In order for the figure to be a parallelogram, each pair of opposite side vectors must be parallel and equal in length. We can see in the figure that they point in opposite directions, so each pair sums to \(\mathbf{0}\), giving:

\[\mathbf{a} + \mathbf{b} + \mathbf{c} + \mathbf{d} = \mathbf{0}\]

Let the two dashed vectors be \(\mathbf{v}\) and \(\mathbf{w}\). From (a), we know \(\mathbf{a} + \mathbf{b} + \mathbf{c} + \mathbf{d} = \mathbf{0}\); we want to show \(\mathbf{v} + \mathbf{w} = \mathbf{0}\). \[\begin{aligned} \mathbf{v} &= \frac{1}{2}\mathbf{a} + \frac{1}{2}\mathbf{b} \\ \mathbf{w} &= \frac{1}{2}\mathbf{c} + \frac{1}{2}\mathbf{d} \\ \mathbf{v} + \mathbf{w} &= \frac{1}{2}\mathbf{a} + \frac{1}{2}\mathbf{b} + \frac{1}{2}\mathbf{c} + \frac{1}{2}\mathbf{d} \\ 2\left(\mathbf{v} + \mathbf{w} \right) &= \mathbf{a} + \mathbf{b} + \mathbf{c} + \mathbf{d} = \mathbf{0} \\ \mathbf{v} + \mathbf{w} &= \mathbf{0}\\ \end{aligned}\]

5.

  1. associative property… commutative property… associative property

  2. scalar identity… distributive property

\[\begin{aligned} \mathbf{v} + \left(4\mathbf{w} + 2\mathbf{v}\right) &= \mathbf{v} + \left(2\mathbf{v} + 4\mathbf{w}\right)...\quad commutative \\ &= \left(\mathbf{v} + 2\mathbf{v} \right) + 4\mathbf{w}...\quad associative \\ &= \left(1\mathbf{v} + 2\mathbf{v}\right) + 4\mathbf{w}...\quad scalar identity \\ &= \left(1 + 2\right)\mathbf{v} + 4\mathbf{w}...\quad distributive\\ &= 3\mathbf{v} + 4\mathbf{w}\\ \end{aligned}\]
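The identity derived above can be sanity-checked numerically. This is a small sketch with arbitrary sample vectors (the values are mine, not from the book):

```python
# Numerical check of the identity v + (4w + 2v) = 3v + 4w,
# using arbitrary sample vectors.
def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

v = [1.0, -2.0, 3.0]
w = [4.0, 0.5, -1.0]

lhs = add(v, add(scale(4, w), scale(2, v)))   # v + (4w + 2v)
rhs = add(scale(3, v), scale(4, w))           # 3v + 4w
assert lhs == rhs
```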

Exercises (page 11)

6.

  1. No. \(\vec{PQ}\) is twice as long as \(\vec{RS}\).

  2. Yes.

  3. \[\lVert\vec{PQ}\rVert = 2\sqrt{3}\] \[\lVert\vec{RS}\rVert = \sqrt{3}\]

7.

  1. False. (They need to have the same direction.)

  2. True

  3. True

  4. False. \[\lVert\mathbf{v}\rVert = \sqrt{\sum_{i=1}^{100} 1^2} = \sqrt{100} = 10\]

8.

Let \(\mathbf{v} = (a, b, ... , n)\)

\(\frac{\mathbf{v}}{\lVert\mathbf{v}\rVert} = \left(\frac{a}{\lVert\mathbf{v}\rVert}, \frac{b}{\lVert\mathbf{v}\rVert},..., \frac{n}{\lVert\mathbf{v}\rVert}\right)\)

\[\begin{aligned} \left\lVert\frac{\mathbf{v}}{\lVert\mathbf{v}\rVert}\right\rVert &= \sqrt{\left(\frac{a}{\lVert\mathbf{v}\rVert}\right)^2 + \left(\frac{b}{\lVert\mathbf{v}\rVert}\right)^2 +...+ \left(\frac{n}{\lVert\mathbf{v}\rVert}\right)^2} \\ &= \sqrt{\frac{a^2 + b^2 + ... + n^2}{\lVert\mathbf{v}\rVert^2}}\\ &= \sqrt{\frac{a^2 + b^2 + ... + n^2}{a^2 + b^2 + ... + n^2}}\\ &= \sqrt{1} = 1 \end{aligned}\]

We make the “non-zero” distinction because the length of the zero vector is zero, and we cannot divide by it.
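The result above (any normalized non-zero vector has length 1) can be spot-checked; the sample vector here is arbitrary:

```python
import math

# Normalizing a non-zero vector always yields a vector of length 1.
def norm(v):
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    n = norm(v)
    if n == 0:
        # the "non-zero" distinction: the zero vector can't be normalized
        raise ValueError("cannot normalize the zero vector")
    return [x / n for x in v]

u = normalize([3.0, 4.0, 12.0])   # this vector has length 13
assert abs(norm(u) - 1.0) < 1e-12
```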

9.

  1. (-4, 4, 1)

  2. (3, 17, 12)

  3. (1, 1, 1)

  4. \(\sqrt{11}\)

  5. \(\frac{1}{2}\sqrt{33}\)

  6. (-1/\(\sqrt{30}\), 5/\(\sqrt{30}\), 2/\(\sqrt{30}\))

  7. 1

  8. 1

10.

8

11.

  1. Two lines that have the same x- and y-coordinates, but where the z-coordinate of the second line is one more than that of the first.

  2. For two lines to be parallel they must lie in the same plane and have the same direction. For randomly picked lines, that is vanishingly unlikely.

  3. Two planes with z=0 and w=0 can intersect at the point (0, 0, 0, 0).

12.

“Non-vertical” was written because vertical lines have an undefined slope. They can still be written in ax + by = c form by taking b=0.

“Vertical” planes (ones perpendicular to the xy-plane) can’t be described in terms of changes to z with respect to changes in x and y (undefined slopes). Taking c to be zero, we can express them in ax + by + cz = d form.

\(w = 6 + 2x + y + 7z\)

The intersection of two lines in 2-D is a point. The intersection of two planes in 3-D is a line. The intersection of two 3-D hyperplanes in 4-D is a 2-D plane.

The intersection of two 3-D hyperplanes in 5-D is a line.

Exercises (page 17)

13.

\(\mathbf{v}\cdot \mathbf{w} = 2 (2 + 6) = 16\)

\(\mathbf{a} \cdot \mathbf{b} = (-4) 3 = -12\)

14.

Positive: acute angle. Think of as the two vectors working together.

Negative: obtuse angle. Think of it as the two vectors working against one another.

15.

  1. \(77.3\textdegree\)

  2. \(119.7\textdegree\)

  3. \(65.1\textdegree\)

16.

Yes.

17.

Any vector \(\mathbf{w} = a_1\mathbf{e_1} + a_2\mathbf{e_2} + a_3\mathbf{e_3} + a_4\mathbf{e_4} + a_5\mathbf{e_5}\) that satisfies: \[a_1 + 2a_2 - 3a_3 -a_4 + 2a_5 = 0 \]

\(\mathbf{w} = (0, 1, 1, -1, 0)\) can be one example.

18.

  1. The projection of a vector that is perpendicular to another vector onto that second vector is clearly zero. The converse also holds.

  2. This can easily be seen by drawing two vectors with an obtuse angle between them and then rotating the paper. In either orientation, one of the vectors projects onto the other in the opposite direction.

19.

In the perpendicular case, \(\cos\theta\) is zero, so the projection is the zero vector.

In the obtuse case, \(\cos\theta\) is negative, making the projection of \(\mathbf{v}\) onto \(\mathbf{w}\) point opposite to \(\mathbf{w}\).

20.

Let’s take the standard basis vectors \(\mathbf{i}, \mathbf{j}, \mathbf{k}\).

\[\mathbf{i} \cdot \mathbf{j} = \mathbf{i} \cdot \mathbf{k} = 0\] This doesn’t mean \(\mathbf{j} = \mathbf{k}\).

21.

The results of the dot products in the brackets will be a scalar. After that, a dot product between a scalar and vector doesn’t make sense.

22.

\[\begin{aligned} -1 &\leq cos\theta \leq 1 \\ -1 &\leq \frac{\mathbf{v} \cdot \mathbf{w}}{\lVert\mathbf{v}\rVert \lVert\mathbf{w}\rVert} \leq 1 \\ - \lVert\mathbf{v}\rVert \lVert\mathbf{w}\rVert &\leq \mathbf{v} \cdot \mathbf{w} \leq \lVert\mathbf{v}\rVert \lVert\mathbf{w}\rVert\\ \lvert \mathbf{v} \cdot \mathbf{w} \rvert &\leq \lVert\mathbf{v}\rVert \lVert\mathbf{w}\rVert \end{aligned}\]

The two sides can be equal when the two vectors are parallel. By adding the restriction that the two vectors are not parallel and neither is the zero vector, we can strengthen the Cauchy–Schwarz inequality to a strict inequality.
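The inequality derived above can be spot-checked on random vectors. This is illustrative only (the algebraic argument above is the real proof); the dimensions and ranges are arbitrary choices:

```python
import math
import random

# Spot-check of the Cauchy-Schwarz inequality |v.w| <= ||v|| ||w||.
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

random.seed(0)
for _ in range(100):
    v = [random.uniform(-5, 5) for _ in range(4)]
    w = [random.uniform(-5, 5) for _ in range(4)]
    # small tolerance guards against floating-point round-off
    assert abs(dot(v, w)) <= norm(v) * norm(w) + 1e-9
```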

23.

\[\begin{aligned} \lVert \mathbf{v} + \mathbf{w} \rVert^2 &=(\mathbf{v} + \mathbf{w}) \cdot (\mathbf{v} + \mathbf{w}) \\ &=(\mathbf{v} + \mathbf{w}) \cdot \mathbf{v} + (\mathbf{v} + \mathbf{w}) \cdot \mathbf{w}\\ &=\mathbf{v} \cdot \mathbf{v} + \mathbf{w} \cdot \mathbf{w} + 2\mathbf{v} \cdot \mathbf{w}\\ &\leq \lVert \mathbf{v} \rVert^2 + 2 \lVert \mathbf{v} \rVert \lVert \mathbf{w} \rVert + \lVert \mathbf{w} \rVert^2\\ &= (\lVert \mathbf{v} \rVert + \lVert \mathbf{w} \rVert)^2 \end{aligned}\]

The two sides are equal only if the two vectors are parallel and point in the same direction.
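The triangle inequality and its equality case can be checked numerically; the sample vectors below are mine, chosen so one pair is parallel with the same direction:

```python
import math

# ||v + w|| <= ||v|| + ||w||, with equality when v and w point the same way.
def norm(v):
    return math.sqrt(sum(x * x for x in v))

v, w = [1.0, 2.0, 2.0], [2.0, 4.0, 4.0]   # parallel, same direction (w = 2v)
s = [a + b for a, b in zip(v, w)]
assert abs(norm(s) - (norm(v) + norm(w))) < 1e-12   # equality case

v, w = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]   # not parallel
s = [a + b for a, b in zip(v, w)]
assert norm(s) < norm(v) + norm(w)                  # strict inequality
```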

Chapter 2 (Vocabulary)

Exercises (page 25)

1.

  1. True. We can be sure that \(c_i=0\) for every \(i\) will lead to \(\mathbf{0}\).

  2. True. For the same reason as (a).

  3. True. Since \(\pi\mathbf{i} - e\mathbf{j}\) lies in a two-dimensional plane.

  4. True. It can happen if two of the three vectors are linearly independent.

  5. False. Even if the two vectors are linearly independent, they can only span a 2D plane.

  6. False. The first two linearly independent vectors will span a 2D plane. For the third vector to be linearly independent of those two, it can’t lie on the plane they span. But in a 2D space, where would it go?

  7. True. The first two linearly independent vectors will span a 2D plane, and a third vector that is linearly independent of the two can sit outside this plane in a 3D space. In fact, if we pick the third vector randomly, it has a high chance of not lying on this plane.

  8. True. For the same reasons as (g): the three linearly independent vectors will span a 3D hyperplane. If we wanted, we could even have a fourth linearly independent vector. In fact, if we pick the fourth vector at random, it has an even higher chance of not lying in the 3D hyperplane spanned by the three linearly independent vectors.

  9. False. For the spans to be the same, \(\mathbf{d}\) would have to be expressible as a linear combination of the other vectors, making the four vectors linearly dependent.

2.

  1. No. \(\mathbf{u} = 4\mathbf{v} + 3\mathbf{w}\)

  2. Yes. The only way to get \(\mathbf{0}\) as linear combinations of the three vectors is if all weights are 0.

  3. No. \(\mathbf{0} = 2\mathbf{i} + 0\mathbf{j} - \left(2\mathbf{i} + 3\mathbf{k}\right) + 3\mathbf{k}\)

3.

No. It can always be expressed as a linear combination of the other vector(s) where all the coefficients of the combination are 0.

4.

  1. A line.

  2. The entire \(\mathbb{R}^2\) plane.

  3. The entire \(\mathbb{R}^2\) plane.

  4. A line.

  5. A point.

5.

  1. (0,0), (1,3), (2,6), (3,9), (4,12)

  2. (0,0,1), (0,1,0), (0,2,2)

  3. (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (2,0,0,0)

6.

  1. \(\frac{31}{8}\mathbf{v} + \frac{3}{8}\mathbf{w} = \mathbf{u}\)

  2. \(a_1\mathbf{a} + a_2\mathbf{b} + a_3\mathbf{c} = \mathbf{d}\) will be true for any \(a_i\) values that obey:

\[a_1 + a_2 = 2\] and \[a_2 + 2a_3 = 1\]

To name four example triples \((a_1, a_2, a_3)\):

\[3, -1, 1\] \[5, -3, 2\] \[-1, 3, -1\] \[1, 1, 0\]

Exercises (Page 29)

7.

  1. False. Not all pairs of vectors in the plane are linearly independent, and not all pairs span \(\mathbb{R}^2\). We also have to think of \(\mathbf{0}\).

  2. False. Same reason as above. (e.g. (1,1), (2,2): linearly dependent, and they span only a line)

  3. True. Any two linearly independent vectors in the plane also span \(\mathbb{R}^2\).

  4. True. It is closed under additon and scalar multiplication in 2D space.

  5. False. The plane itself is also a subspace, as is the origin.

  6. False. We need at least n linearly independent vectors to span \(\mathbb{R}^n\).

  7. True. That is literally the definition.

  8. False. Three linearly independent vectors can’t exist in 2D.

  9. False. They must be linearly independent.

  10. True. We have to be careful to only remove redundant ones though.

  11. False. They must span the subspace.

  12. True. We can add linearly independent vectors until the number of vectors equals the dimension of the subspace.

  13. True. That is the origin.

8.

  1. {the zero vector, infinitely many lines, a plane}

  2. {the zero vector, infinitely many lines, infinitely many planes, a 3D hyperplane}

  3. {the zero vector, infinitely many lines, infinitely many planes, infinitely many 3D hyperplanes, a 4D hyperplane}

  4. {the zero vector, infinitely many lines, infinitely many planes, infinitely many 3D hyperplanes, infinitely many 4D hyperplanes, a 5D hyperplane}

9.

  1. Yes.

  2. Yes.

  3. Yes.

  4. No.

10.

  1. \(\mathbf{b_1}\) and \(\mathbf{b_2}\) span a plane, and \(\mathbf{b_3}\) doesn’t lie on this plane.

  2. (2, 10, 0)

11.

It is necessary. For a counterexample, take the union of the two lines \(y = x\) and \(y = -x\) (the graph of the absolute value function together with its reflection). This set is closed under scalar multiplication, but not under addition.

12.

Closed under addition. Closed under multiplication. Closed under differentiation. Not closed under integration.

13.

  1. Yes. Linearly independent and span \(\mathbb{R}^2\)

  2. Yes. Same reason as a.

  3. No. They are linearly dependent and don’t span \(\mathbb{R}^2\).

  4. Yes. Same reason as a.

  5. Yes. Same reason as a.

  6. No. They are not linearly independent.

  7. No. They span \(\mathbb{R}^2\), but v and y are linearly dependent.

  8. No. Doesn’t span \(\mathbb{R}^2\).

  9. No. They are not linearly independent.

14.

  1. Yes. The basis can be (1,0,0) and (0,1,0). It identifies a plane.

  2. No. Doesn’t contain the zero-vector.

  3. Yes. The basis can be (1, 0, -1) and (0, 1, -1). It identifies a plane.

  4. No. Doesn’t contain the zero-vector.

  5. No. Not closed under addition or scalar multiplication.

  6. Yes. The basis can be (1, 0, 2) and (0, 1, -3). It identifies a plane.

  7. No. Not closed under addition or scalar multiplication. This is a sphere. Doesn’t contain the zero-vector.

  8. Yes. This has only one solution: the zero-vector. The basis is the empty set.

Exercises (page 33)

15.

\[\begin{aligned} \mathbf{A} &= \mathbf{v} + 2\mathbf{w_1} - \mathbf{w_2}\\ \mathbf{B} &= \mathbf{v} - \mathbf{w_1} + 2\mathbf{w_2}\\ \mathbf{C} &= \mathbf{v} + 3\mathbf{w_1} + 3\mathbf{w_2} \end{aligned}\]

16.

  1. Line. Affine.

  2. Line. Affine.

  3. Line. Affine.

  4. Plane. Subspace.

  5. Plane. Affine.

  6. 3D hyperplane. Subspace.

  7. Plane. Affine.

  8. Line. Subspace.

  9. Plane. Affine.

17.

This is so because we can pick any point on the line to serve as the base point. Changing the base point also changes the direction vector we add, and there are many choices of direction vector parallel to the line: any non-zero scalar multiple of it gives another valid parametric equation.

(0,2) + t (-3, 2), (-3, 4) + t(3, -2), (0, 2) + t(3, -2)

18.

x = 3 + 4t

y = 1 -2t

x = 3 + 4t

y = 1 - t

z = 2 + t

x = 3 + t - 5s

y = 1 - 2t + 2s

z = 2 - t - s

19.

x = 2t + 6s

y = t + s

z = t

Plane.

x = 2t

y = t

z = t

Line.

20.

  1. 4

  2. m+1

  3. No, not all will determine a unique plane. If the points are collinear, they won’t define a unique plane. (0,0,0), (1,1,1), (2,2,2) can be an example.

Chapter 3 (Linear Transformations and Matrices)

Exercises (page 41)

1.

  1. True. When dealing with linear transformations we always start from the standard basis vectors, which are position vectors starting at the origin. For every linear transformation the scalar multiplication property must hold, and if we let the scalar be 0, we always get the origin. A map that doesn’t fix the origin wouldn’t be a linear transformation, but an affine one.

  2. False. The identity transformation is one example. Also, if we project a 3D space onto the xy-plane, any point already sitting on that plane doesn’t move.

2.

  1. \(\begin{pmatrix} -1 &0 \\ 0 &1\end{pmatrix}\) \(\begin{pmatrix}2 \\3\end{pmatrix}\) = \(\begin{pmatrix}-2\\3\end{pmatrix}\)

  2. \(\begin{pmatrix} 1 &0 \\ 0 &-1\end{pmatrix}\) \(\begin{pmatrix}2 \\3\end{pmatrix}\) = \(\begin{pmatrix}2\\-3\end{pmatrix}\)

  3. The line y=1 doesn’t pass through the origin, so we can’t represent reflection across it with a 2-by-2 matrix. We would have to use homogeneous coordinates, converting (2,3) into (2,3,1), and continue that way.

  4. \(\begin{pmatrix} cos\theta &-sin\theta \\ sin\theta &cos\theta\end{pmatrix}\)

  5. \(\begin{pmatrix} cos(30\textdegree) &-sin(30\textdegree) \\ sin(30\textdegree) &cos(30\textdegree)\end{pmatrix}\) \(\begin{pmatrix}2 \\3\end{pmatrix}\) \(\approx\) \(\begin{pmatrix}0.23\\3.60\end{pmatrix}\)

  6. \(\begin{pmatrix} cos(-45\textdegree) &-sin(-45\textdegree) \\ sin(-45\textdegree) &cos(-45\textdegree)\end{pmatrix} \begin{pmatrix}2 \\3\end{pmatrix} \approx \begin{pmatrix}3.54\\0.71\end{pmatrix}\)

  7. \(\begin{pmatrix} 1&0\\0&1\end{pmatrix} \begin{pmatrix} 2\\3\end{pmatrix} = \begin{pmatrix} 2\\3\end{pmatrix}\)
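The rotations in items 4–6 can be cross-checked numerically; the sample point (2, 3) comes from the exercise:

```python
import math

# Rotate a point counterclockwise by an angle given in degrees,
# using the standard 2D rotation matrix from item 4.
def rotate(theta_deg, v):
    t = math.radians(theta_deg)
    x, y = v
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

x, y = rotate(30, (2, 3))     # item 5
assert abs(x - 0.23) < 0.01 and abs(y - 3.60) < 0.01

x, y = rotate(-45, (2, 3))    # item 6
assert abs(x - 3.54) < 0.01 and abs(y - 0.71) < 0.01
```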

4.

  1. \(\begin{pmatrix} 1 &0.5\\0 &1\end{pmatrix}\)

  2. \(\begin{pmatrix}21\\22\end{pmatrix}\)

  3. \(\begin{pmatrix}-1\\22\end{pmatrix}\)

  4. No, it doesn’t. The height of the parallelogram remains the length of \(\mathbf{j}\).

  5. No, it doesn’t. The area of the parallelogram enclosing the sheep remains the same, so the sheep’s area doesn’t change either. Mathematically, the determinant of the transformation matrix is 1, keeping the areas of the sheared sheep and of the parallelogram constant.
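The determinant claim in item 5 is easy to verify for the shear matrix from item 1:

```python
# Determinant of a 2x2 matrix; the shear from item 1 has determinant 1,
# so it preserves area.
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

shear = [[1, 0.5],
         [0, 1]]
assert det2(shear) == 1
```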

5.

  1. \(\begin{pmatrix} 1&2\\0&0\end{pmatrix}\), \(\begin{pmatrix} 1&3\\0&0\end{pmatrix}\)

  2. \(\begin{pmatrix} 1&2\\2&4\end{pmatrix}\), \(\begin{pmatrix} 1&3\\2&6\end{pmatrix}\)

  3. \(\begin{pmatrix} 1&2&3\\0&0&0\end{pmatrix}\), \(\begin{pmatrix} 2&3&4\\0&0&0\end{pmatrix}\)

6.

It linearly maps all of \(\mathbb{R}^n\) to a single point (the origin).

7.

  1. \(\begin{pmatrix} 1&0&0\\0&1&0\\0&0&-1\end{pmatrix}\)

  2. \(\begin{pmatrix} -1&0&0\\0&-1&0\\0&0&-1\end{pmatrix}\)

  3. \(\begin{pmatrix} -1&0&\cdots&0\\0&-1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&-1\end{pmatrix}_{n\times n}\)

  4. \(\begin{pmatrix} 1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1\end{pmatrix}_{n\times n}\)

9.

  1. \(\begin{pmatrix}23\\34\end{pmatrix}\)

  2. \(\begin{pmatrix}12\\-9\\5\end{pmatrix}\)

  3. \(\begin{pmatrix}-3\\-1\\4\\-4\end{pmatrix}\)

Exercises (page 42)

10.

n

11.

From \(\mathbb{R}^n\) to \(\mathbb{R}^m\)

12.

  1. From \(\mathbb{R}^2\) onto a plane in \(\mathbb{R}^3\)

  2. From \(\mathbb{R}^3\) onto a plane in \(\mathbb{R}^2\)

  3. From \(\mathbb{R}^3\) onto a line in \(\mathbb{R}^2\)

  4. From \(\mathbb{R}^2\) onto the origin in \(\mathbb{R}^3\)

  5. From \(\mathbb{R}^3\) onto a hyperplane in \(\mathbb{R}^4\)

  6. From \(\mathbb{R}^4\) onto a plane in \(\mathbb{R}^2\) (which will be all of \(\mathbb{R}^2\))

Exercises (page 44)

14.

  1. The n-dimensional volume of the grid cells formed by \(2A\) is \(2^n\) times that of \(A\).

\[\begin{aligned} A(c\mathbf{v}) &= (\text{$i$th row of } A) \cdot c\mathbf{v}\\ &= c\left((\text{$i$th row of } A) \cdot \mathbf{v}\right)\\ &= c(A\mathbf{v})\end{aligned}\]

\[\begin{aligned} A(c\mathbf{v} + d\mathbf{w}) &= A(c\mathbf{v}) + A(d\mathbf{w})\\ &= c(A\mathbf{v}) + d(A\mathbf{w})\end{aligned}\]

15.

  1. (0,0,0,0,1)

  2. (0,1,1,0,0)

  3. (0,0,0,3,0)

  4. (-5, 0, 0, 3, 0)

Exercises (page 48)

16.

  1. \(\begin{pmatrix} -3&5\\-1&5\end{pmatrix}\)

  2. \(\begin{pmatrix} 4&2\\1&-2\end{pmatrix}\)

  3. \(\begin{pmatrix} 0&6&0\\1&0&3\\0&6&0\end{pmatrix}\)

17.

This can happen if one of the transformations does nothing, i.e. one of the matrices is the identity matrix.

Two reflections, both across the line y=x, will commute. \(\begin{pmatrix}0&1\\1&0\end{pmatrix}\)

A counterclockwise rotation followed by a clockwise rotation of the same angle also commutes. \(\begin{pmatrix} cos\theta&-sin\theta\\sin\theta&cos\theta\end{pmatrix}\) with the other matrix using \(-\theta\).

18.

The identity matrix does no transformation. So doing no transformation followed by a transformation, doing a transformation followed by no transformation, and just doing the transformation are all the same.

19.

  1. In AB, we first apply transformation B, which maps \(\mathbb{R}^3\) to \(\mathbb{R}^2\). Then A maps \(\mathbb{R}^2\) to \(\mathbb{R}^5\). But in BA, we would first apply transformation A, which maps \(\mathbb{R}^2\) to a plane in \(\mathbb{R}^5\). Then B would have to start from \(\mathbb{R}^3\), which is not possible.

  2. \(5\times 3\)

  3. Yes. \(2\times 3\)

20.

\(\begin{pmatrix}4&-1&1\\6&-9&3\\-3&7&-2\end{pmatrix}\), \(\begin{pmatrix}-9&5\\4&3\end{pmatrix}\), not defined for a non-square matrix, not defined for a non-square, \(\begin{pmatrix}-13&32&-9\\10&-10&4\end{pmatrix}\)

21.

  1. \(\approx\begin{pmatrix}-0.5&-0.866\\-0.866&0.5\end{pmatrix}\)

  2. \(\approx\begin{pmatrix}-8.428\\6.598\end{pmatrix}\)

  3. \(\begin{pmatrix}1&0\\0&1\end{pmatrix}\). If we rotate 120\(\textdegree\) counterclockwise, reflect across the horizontal axis, rotate counterclockwise by the same angle, and reflect across the horizontal axis again, we return to where we started: each reflection flips the sense of rotation, so the two rotations cancel.

22.

\[\begin{aligned} \left((cA)(dB)\right)_{ij} &= (c \cdot \text{$A$'s $i$th row})\cdot(d \cdot \text{$B$'s $j$th column})\\ &= cd\,(\text{$A$'s $i$th row} \cdot \text{$B$'s $j$th column})\\ &= cd\,(AB)_{ij}\end{aligned}\]

23.

  1. It rotates the grid by \(90\textdegree\) counterclockwise. \(\begin{pmatrix}0&-1\\1&0\end{pmatrix}\)

  2. If they aren’t square matrices, the two identity matrices produced by composing in the two orders would live in different dimensions. So unless they are square matrices, their products can’t both be the identity.

\(\begin{aligned} AB &= I = AC\\ B(AB) &= B(AC)\\ (BA)B &= (BA)C\\ B &= C \quad (\text{since } BA = I \text{ as well}) \end{aligned}\)

  1. Square matrices that lower the dimension of the transformation (by having linearly dependent columns) aren’t invertible, as no square matrix can bring the collapsed space back up to the full dimension.

  2. Think of socks and shoes: you put on socks and then shoes; to undo that, you must take off the shoes first and then the socks. The inverse reverses the order of the operations.

\(\begin{aligned} (AB)^{-1}AB&=B^{-1}A^{-1}AB\\ &= B^{-1}(A^{-1}A)B\\ &= B^{-1}IB\\ &= B^{-1}B\\ &= I \end{aligned}\)

  1. \(\begin{pmatrix}1&0\\0&-1\end{pmatrix}\) \(\begin{pmatrix}cos\theta &-sin\theta\\sin\theta &cos\theta \end{pmatrix}\) \(\begin{pmatrix}1&0\\0&-1\end{pmatrix}\) = \(\begin{pmatrix}cos\theta &sin\theta\\-sin\theta &cos\theta \end{pmatrix}\) = \(\begin{pmatrix}-0.5 &0.866\\-0.866 &-0.5 \end{pmatrix}\), We can also find this by just thinking of how to do a clockwise rotation.

  2. The scalar scales the transformation matrix. The new grid will consist of parallelepipeds whose volume is \(2^n\) times the original. The inverse of the scalar reverses this scaling, returning the original grid.
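The identity \((AB)^{-1} = B^{-1}A^{-1}\) proved above can also be checked numerically; the two \(2\times 2\) matrices below are arbitrary invertible examples:

```python
# Numeric check of (AB)^{-1} = B^{-1} A^{-1} for 2x2 matrices.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    (a, b), (c, d) = M
    det = a * d - b * c            # assumed non-zero here
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, 3.0], [0.0, 1.0]]

lhs = inv(mul(A, B))
rhs = mul(inv(B), inv(A))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```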

Exercises (page 50)

24.

  1. Purely white parts will remain white, and purely black ones will remain black, but the remaining pixels get darker.

  2. Purely white parts will remain white, and purely black ones will remain black, but the remaining pixels get lighter.

  3. It is a reflection across the main diagonal.

  4. Black pixels will become white, and white pixels will become black. Darker shades of grey will become their lighter counterparts, and vice versa.

  5. It will be vertically flipped.

  6. It will be horizontally flipped.

  7. The image will be cropped at the bottom.

  8. The cropped image will be horizontally flipped.

25.

\[\begin{aligned} (A+B)\mathbf{v} &= (\text{$A$'s $i$th row} + \text{$B$'s $i$th row}) \cdot \mathbf{v}\\ &= (\text{$A$'s $i$th row}) \cdot \mathbf{v} + (\text{$B$'s $i$th row}) \cdot \mathbf{v}\\ &= A\mathbf{v} + B\mathbf{v} \end{aligned}\]

\[\begin{aligned} (A+B)C &= (\text{$A$'s $i$th row} + \text{$B$'s $i$th row}) \cdot \text{$C$'s $j$th column}\\ &= \text{$A$'s $i$th row} \cdot \text{$C$'s $j$th column} + \text{$B$'s $i$th row} \cdot \text{$C$'s $j$th column}\\ &= AC + BC \end{aligned}\]

  1. \[\begin{aligned} A(B+C) &= \text{$A$'s $i$th row} \cdot (\text{$B$'s $j$th column} + \text{$C$'s $j$th column})\\ &= \text{$A$'s $i$th row} \cdot \text{$B$'s $j$th column} + \text{$A$'s $i$th row} \cdot \text{$C$'s $j$th column}\\ &= AB + AC \end{aligned}\]

26.

  1. \(\mathbf{v}^T\mathbf{w} = \sum_i v_i w_i = \mathbf{v}\cdot \mathbf{w}\)

  2. \((M + N)^T_{ij} = (M_{ji} + N_{ji}) = (M^T + N^T)_{ij}\), so \((M+N)^T = M^T + N^T\).

  3. \((cM)^T_{ij} = c\,M_{ji} = c\,(M^T)_{ij}\), so \((cM)^T = cM^T\).

  4. \((MN)^T_{ij} = (MN)_{ji} = (\text{row}_j M) \cdot (\text{col}_i N) = (\text{col}_j M^T) \cdot (\text{row}_i N^T) = (\text{row}_i N^T) \cdot (\text{col}_j M^T) = (N^T M^T)_{ij}\)

  5. \((ABC)^T = (A(BC))^T = (BC)^TA^T = C^TB^TA^T\)
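The transpose identity from part 4, \((MN)^T = N^TM^T\), can be spot-checked on small matrices (the two matrices here are arbitrary examples):

```python
# Check (MN)^T = N^T M^T on small sample matrices.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(M):
    # transpose: rows become columns
    return [list(row) for row in zip(*M)]

M = [[1, 2], [3, 4], [5, 6]]   # 3x2
N = [[7, 8, 9], [0, 1, 2]]     # 2x3
assert T(mul(M, N)) == mul(T(N), T(M))
```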

27.

  1. The transpose of an m \(\times\) n matrix will be n \(\times\) m. These two matrices can only be equal if m = n, which defines a square matrix.

  2. \(\begin{pmatrix} 1&0\\0&1\end{pmatrix}\), \(\begin{pmatrix} 1&2&3\\2&55&4\\3&4&5\end{pmatrix}\)

  3. The results in both cases will be symmetric matrices.

  4. \((MM^T)^T = (M^T)^TM^T=MM^T\)

Exercises (page 56)

1.

  1. \(\begin{pmatrix} 1 & 3 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 2 \end{pmatrix}\)

  2. \(\begin{pmatrix} 0 & 2 & 4 \\ 1 & 3 & 5 \\ 3 & 7 & 7 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -5 \\ -2 \\ 6 \end{pmatrix}\)

  3. \(\begin{pmatrix} 1 & -2 & -2 \\ 3 & -6 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}\)

  4. \(\begin{pmatrix} 1 & 1 \\ -1 & 1 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix}\)

2.

  1. Unique
  2. Unique
  3. Infinite
  4. None

3.

  1. \(x + 2y = 5\)
    \(3x + 4y = 6\)

  2. \(3r + s + 4t = 2\)
    \(r + 5s + 9t = 6\)

  3. \(x_1 + 7x_2 = 4\)
    \(x_1 + 8x_2 = 5\)
    \(2x_1 + 8x_2 = 9\)

4.

  1. \(x + y = 1\)
    \(3x + 4y = 5\)

  2. \(x + y = 1\)
    \(3x + 4y = 5\)
    \(x + 2y = 9\)

  3. \(x + y = 1\)

5.

  1. \(x + y + z = 1\)
    \(3x + 4y + z = 5\)
    \(x + 8y + 2z = 9\)

  2. \(x + y + z = 1\)
    \(3x + 4y + z = 5\)
    \(x + 8y + 2z = 9\)
    \(8x + 4y + 9z = 2\)

  3. \(x + y + z = 1\)

6.

Think of two planes that pass through the origin whose line of intersection is horizontal, plus a third plane \(z = c\) (with \(c \neq 0\)) parallel to the xy-plane. The three planes form a kind of infinite triangular prism with a “hole” down the middle: no point lies on all three.
It should have 3 unknowns, 3 equations, and yet no solution.

7.

\(A\mathbf{s}_1 = A\mathbf{s}_2 = \mathbf{b}\)
The line joining \(\mathbf{s}_1\) and \(\mathbf{s}_2\) can be expressed as \(\mathbf{s}_1 + t(\mathbf{s}_2 - \mathbf{s}_1)\), where \(t\) is a scalar parameter and \(\mathbf{s}_2 - \mathbf{s}_1\) is the vector connecting the tips of \(\mathbf{s}_1\) and \(\mathbf{s}_2\).
\[A(s_1 + t(s_2 - s_1)) = A(s_1) + A(t(s_2 - s_1))\] \[= \mathbf{b} + t A(s_2 - s_1) = \mathbf{b} + t(A(s_2) - A(s_1))\] \[= \mathbf{b} + t(\mathbf{0}) = \mathbf{b}\]
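The argument above can be illustrated on a tiny hypothetical system, \(x + y = 2\), which has the two obvious solutions \((2,0)\) and \((0,2)\):

```python
# If A s1 = A s2 = b, every point on the line s1 + t(s2 - s1) is a solution.
def apply(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1.0, 1.0]]              # the single equation x + y = 2
b = [2.0]
s1, s2 = [2.0, 0.0], [0.0, 2.0]
assert apply(A, s1) == b and apply(A, s2) == b

for t in (-1.0, 0.5, 3.0):
    p = [x1 + t * (x2 - x1) for x1, x2 in zip(s1, s2)]
    assert apply(A, p) == b   # every point on the line solves the system
```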

Exercises (page 57)

8.

  1. \(\left( \begin{array}{cc|c} 1 & 3 & 5 \\ 2 & -1 & 2 \end{array} \right)\)
  2. \(\left( \begin{array}{ccc|c} 0 & 2 & 4 & -5 \\ 1 & 3 & 5 & -2 \\ 3 & 7 & 7 & 6 \end{array} \right)\)
  3. \(\left( \begin{array}{ccc|c} 1 & -2 & -2 & 3 \\ 3 & -6 & -2 & 2 \end{array} \right)\)
  4. \(\left( \begin{array}{cc|c} 1 & 1 & 2 \\ -1 & 1 & 2 \\ 2 & 1 & 1 \end{array} \right)\)

9.

  1. \(\left( \begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 6 \end{array} \right)\)
  2. \(\left( \begin{array}{ccc|c} 3 & 1 & 4 & 2 \\ 1 & 5 & 9 & 6 \end{array} \right)\)
  3. \(\left( \begin{array}{cc|c} 2 & 7 & 4 \\ 1 & 8 & 5 \\ 2 & 8 & 9 \end{array} \right)\)

10.

  1. \(x + 2y = 3\)
    \(4x + 5y = 6\)

  2. \(7x + 8y = -3\)
    \(9x = 5\)
    \(x + 2y = 4\)

  3. \(y + 4z = 1\)
    \(-2x + 2y + 3z = 0\)
    \(5x - y + 6z = 8\)

11.

  1. \(\begin{pmatrix} 1 & 2 \\ 4 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 6 \end{pmatrix}\)
  2. \(\begin{pmatrix} 7 & 8 \\ 9 & 0 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -3 \\ 5 \\ 4 \end{pmatrix}\)
  3. \(\begin{pmatrix} 0 & 1 & 4 \\ -2 & 2 & 3 \\ 5 & -1 & 6 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 8 \end{pmatrix}\)

Exercise (page 61)

12.

  1. \(\left( \begin{array}{cc|c} 1 & 2 & 3 \\ 2 & 3 & 5 \end{array} \right)\)

  2. \(\left( \begin{array}{ccc|c} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 9 & 12 \end{array} \right)\)

  3. \(\begin{pmatrix} 2 & 3 & 25 & 1 & 2 \\ 0 & 1 & 8 & -3 & 7 \\ 0 & -2 & -10 & 6 & 1 \end{pmatrix} \xrightarrow{+2R_2} \begin{pmatrix} 2 & 3 & 25 & 1 & 2 \\ 0 & 1 & 8 & -3 & 7 \\ 0 & 0 & 6 & 0 & 15 \end{pmatrix}\)

  4. \(\begin{pmatrix} 1 & -2 & 3 & -4 & 5 \\ 6 & 2 & 11 & -10 & -2 \end{pmatrix} \xrightarrow{-6R_1} \begin{pmatrix} 1 & -2 & 3 & -4 & 5 \\ 0 & 14 & -7 & 14 & -32 \end{pmatrix}\)
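Row operations like these are easy to script. This sketch reproduces the operation \(R_2 := R_2 - 6R_1\) from item 4:

```python
# Apply the row operation M[target] := M[target] + factor * M[source].
def row_op(M, target, source, factor):
    M = [row[:] for row in M]   # work on a copy
    M[target] = [a + factor * b for a, b in zip(M[target], M[source])]
    return M

M = [[1, -2, 3, -4, 5],
     [6, 2, 11, -10, -2]]
assert row_op(M, 1, 0, -6) == [[1, -2, 3, -4, 5],
                               [0, 14, -7, 14, -32]]
```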

13.

  1. \(\begin{pmatrix} 1 & -3 & 1 \\ -2 & -1 & 4 \\ 3 & 2 & 5 \\ -1 & 2 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & -3 & 1 \\ 0 & -7 & 6 \\ 0 & 11 & 2 \\ 0 & -1 & 1 \end{pmatrix}\)

  2. \(\begin{pmatrix} 1 & -4 & 5 & 0 \\ 0 & 1 & -2 & 6 \\ 0 & 1/2 & -3 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & -3 & 24 \\ 0 & 1 & -2 & 6 \\ 0 & 0 & -2 & 1 \end{pmatrix}\)

14.

  1. \(\begin{pmatrix} 2 & 3 & 0 \\ 4 & 5 & 0 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}\), the solution is \((0,0)\)

  2. \(\begin{pmatrix} 4 & 3 & 2 \\ 7 & 5 & 3 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \end{pmatrix}\), the solution is \((-1, 2)\)

  3. \(\begin{pmatrix} 1 & 2 & 3 & 1 \\ 2 & 4 & 7 & 2 \\ 3 & 7 & 11 & 8 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 & -9 \\ 0 & 1 & 0 & 5 \\ 0 & 0 & 1 & 0 \end{pmatrix}\), the solution is \((-9, 5, 0)\)

  4. \(\begin{pmatrix} 1 & 2 & 3 & 8 \\ 1 & 3 & 3 & 10 \\ 1 & 2 & 4 & 9 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \end{pmatrix}\), the solution is \((1, 2, 1)\)
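The reductions above can be automated with a minimal Gauss–Jordan sketch. This version assumes every pivot is non-zero (no row swaps), which holds for item 2's system \(4x + 3y = 2\), \(7x + 5y = 3\):

```python
# Minimal Gauss-Jordan elimination on an augmented n x (n+1) matrix.
# Assumes non-zero pivots in order (no partial pivoting).
def solve(aug):
    n = len(aug)
    A = [row[:] for row in aug]
    for i in range(n):
        p = A[i][i]
        A[i] = [x / p for x in A[i]]            # scale pivot row to 1
        for r in range(n):
            if r != i and A[r][i] != 0:
                f = A[r][i]
                A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    return [row[-1] for row in A]               # last column holds the solution

x, y = solve([[4.0, 3.0, 2.0],
              [7.0, 5.0, 3.0]])
assert abs(x - (-1.0)) < 1e-12 and abs(y - 2.0) < 1e-12
```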

15.

It has no solution. A row of the form \(0 = c\) with non-zero \(c\) is a contradiction: the last unknown scaled by zero is always zero, never a non-zero \(c\). Since we can’t solve for the last variable, we can’t continue with back substitution.

16.

  1. Neither
  2. RREF
  3. Neither
  4. RREF

Exercise (page 67)

17.

  1. \(\begin{pmatrix} 15/8 \\ 1/4 \\ 1/8 \end{pmatrix}\) a point in \(\mathbb{R}^3\)

  2. \(\begin{pmatrix} 5 \\ 0 \\ 0 \\ 0 \end{pmatrix} + s \begin{pmatrix} -3 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} 9 \\ 0 \\ 5 \\ 4 \end{pmatrix}\), a plane in \(\mathbb{R}^4\)

  3. No solution

  4. \(\begin{pmatrix} 0 \\ 1/2 \\ 0 \end{pmatrix} + t \begin{pmatrix} 7/4 \\ -1/4 \\ 1 \end{pmatrix}\), a line in \(\mathbb{R}^3\)

  5. \(\begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix}\), a point in \(\mathbb{R}^3\)

  6. \(\begin{pmatrix} 0 \\ 0 \\ 1 \\ 2 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + s \begin{pmatrix} -1 \\ 0 \\ 1 \\ -2 \\ 1 \\ 0 \end{pmatrix} + v \begin{pmatrix} 1 \\ 0 \\ -1 \\ 1 \\ 0 \\ 1 \end{pmatrix}\), a 3-D hyperplane in \(\mathbb{R}^6\)

18.

  1. Infinite solutions, a plane in \(\mathbb{R}^4\)
  2. No solution
  3. Infinite solutions, a plane in \(\mathbb{R}^3\)
  4. Unique solution, the origin in \(\mathbb{R}^2\)
  5. No solution
  6. Infinite solutions, a 4-D hyperplane in \(\mathbb{R}^7\)

19.

No; for example, \(\begin{pmatrix} 1 & -4 & 2 & 3 \\ 0 & 3 & 5 & -7 \\ -2 & 8 & -4 & 3 \end{pmatrix}\) has no solution.

20.

No, it wouldn’t change. This indicates a redundant equation.

21.

  1. Linearly independent
  2. Linearly independent
  3. Linearly dependent

22.

  1. \(\mathbf{w}\) lies in the span of the three vectors:
    \(\mathbf{w} = (22 - 2C)\mathbf{v}_1 + (25 - 3C)\mathbf{v}_2 + C\mathbf{v}_3\),
    where \(C\) is any scalar.

24.

  1. \(\frac{7}{6}x^2 - \frac{1}{2}x - \frac{2}{3}\)
  2. \(3x^3 - x^2 - 3x + 1\)
  3. No quadratic polynomial fits; there are infinitely many such quartic polynomials.

25.

\(a = d - 20\)
\(b = d - 120\)
\(c = 270 - d\)
\(120 \le d \le 270\)

|   | min | max |
|---|-----|-----|
| a | 100 | 250 |
| b | 0   | 150 |
| c | 0   | 150 |
| d | 120 | 270 |

26.

Let \(x_1\) be the number of roosters bought, \(x_2\) the number of hens, and \(x_3\) the number of chicks.
Then:
\(x_1 = 4/3 x_3 - 100\)
\(x_2 = 200 - 7/3 x_3\)
where \(75 \le x_3 \le 85\) and \(x_3\) is divisible by 3
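The relations above can be enumerated directly; this only uses the formulas stated in the answer (exact rational arithmetic avoids rounding issues with the thirds):

```python
from fractions import Fraction as F

# Enumerate integer solutions of x1 = (4/3)x3 - 100, x2 = 200 - (7/3)x3,
# with 75 <= x3 <= 85 and x3 divisible by 3.
solutions = []
for x3 in range(75, 86):
    if x3 % 3:
        continue
    x1 = F(4, 3) * x3 - 100
    x2 = 200 - F(7, 3) * x3
    if x1 >= 0 and x2 >= 0:
        solutions.append((int(x1), int(x2), x3))

assert solutions == [(0, 25, 75), (4, 18, 78), (8, 11, 81), (12, 4, 84)]
assert all(a + b + c == 100 for a, b, c in solutions)   # 100 birds in total
```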

Exercises (page 78)

28.

The inverses of the given \(3 \times 3\) matrices:

\(A^{-1} = \begin{pmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}\)

\(B^{-1} = \begin{pmatrix} 1/3 & 1/4 & 1/3 \\ -1/2 & 1/2 & 0 \\ 1/6 & -1/2 & -1/3 \end{pmatrix}\)

\(C^{-1} = \begin{pmatrix} 8/15 & -1/5 & -1/15 \\ 2/3 & 0 & -1/3 \\ 7/30 & 1/10 & 1/30 \end{pmatrix}\)

29.

  1. Proof that \(A A^{-1} = I\): \[\begin{pmatrix} a & c \\ b & d \end{pmatrix} \begin{pmatrix} \frac{d}{ad-bc} & \frac{-c}{ad-bc} \\ \frac{-b}{ad-bc} & \frac{a}{ad-bc} \end{pmatrix} = \begin{pmatrix} \frac{ad-bc}{ad-bc} & \frac{ac-ac}{ad-bc} \\ \frac{bd-bd}{ad-bc} & \frac{ad-bc}{ad-bc} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I\]

  2. Inverses of \(2 \times 2\) matrices: \(A^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\) \(B^{-1} = \begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}\) \(C^{-1} = \begin{pmatrix} 2 & -3 \\ -7 & 8 \end{pmatrix}\) \(D^{-1} = \begin{pmatrix} 4 & -6 \\ -2 & 3 \end{pmatrix}\)
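The \(2 \times 2\) inverse formula from part 1 can be wrapped in a small helper and checked (using the usual row-major convention \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\); the sample matrix is arbitrary):

```python
# 2x2 inverse via the ad - bc formula, then verify M * inv2(M) = I.
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[2.0, 3.0], [1.0, 2.0]]          # det = 1
assert mul2(M, inv2(M)) == [[1.0, 0.0], [0.0, 1.0]]
```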

30.

  1. A diagonal matrix scales vectors along the coordinate axes. If one diagonal element is zero, we are collapsing one dimension entirely, which cannot be inverted.

  2. To invert a diagonal matrix, all we have to do is reverse the scaling along each coordinate axis.

  3. To invert a scaling by \(c\), we scale by \(1/c\). \(A^{-1} = \begin{pmatrix} -1/2 & 0 & 0 & 0 & 0 \\ 0 & 1/3 & 0 & 0 & 0 \\ 0 & 0 & 1/5 & 0 & 0 \\ 0 & 0 & 0 & 1/4 & 0 \\ 0 & 0 & 0 & 0 & 1/2 \end{pmatrix}\)

  4. \((A^{-1})_{ii} = \frac{1}{A_{ii}}\) when \(A\) is a diagonal matrix (all off-diagonal entries remain zero).

  5. The \(n^{th}\) power of a diagonal matrix can be thought of as scaling along each coordinate axis \(n\) times. That is the same as raising each diagonal element to the \(n^{th}\) power.
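Item 5 can be illustrated by comparing repeated diagonal multiplication with entry-wise powers (diagonal matrices are represented here by their diagonal entries only):

```python
# For a diagonal matrix, the n-th matrix power equals raising each
# diagonal entry to the n-th power.
def diag_power(entries, n):
    return [e ** n for e in entries]

def mat_power_diag(entries, n):
    # multiply the diagonal matrix by itself n-1 times (diagonals only,
    # since diagonal * diagonal is entry-wise on the diagonal)
    out = list(entries)
    for _ in range(n - 1):
        out = [a * b for a, b in zip(out, entries)]
    return out

d = [2, -3, 5]
assert mat_power_diag(d, 4) == diag_power(d, 4) == [16, 81, 625]
```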

Exercises (page 74)

32.

No. (Example provided: \(\begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix} \xrightarrow{R_2 = 2R_1, \: update \: by -3R_2} \begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix}\), \(R_2 \neq 2R_1\)).

33.

  1. The \(42^{nd}\) column of \(\text{rref}(A)\) is still three times the \(6^{th}\) column plus twice its \(7^{th}\) column.

  2. No. \(A\)’s columns are not linearly independent.

Exercises (page 79)

34.

  1. Kernel: only the zero vector. Image: all of \(\mathbb{R}^2\). Rank: 2, Nullity: 0.

  2. Image: The origin in \(\mathbb{R}^3\). Kernel: all of \(\mathbb{R}^3\), since every vector is sent to zero. Rank: 0, Nullity: 3.

  3. Image: A line (\(y=x\)) in \(\mathbb{R}^2\). Kernel: The line perpendicular to \(y=x\) is sent to zero. That is \(y = -x\). Rank: 1, Nullity: 1.

  4. Image: Hyperplane in \(\mathbb{R}^4\). Kernel: None. Rank: 4, Nullity: 0.

  5. Image: Hyperplane in \(\mathbb{R}^n\). Kernel: None. Rank: \(n\), Nullity: 0.

  6. Image: The first coordinate axis (the \(x\)-axis). Kernel: the \(y\)-\(z\) plane, since the \(y\) and \(z\) components are set to zero. Rank: 1, Nullity: 2.
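Item 3 above (projection onto \(y = x\)) can be checked numerically, assuming the standard projection matrix for that line:

```python
import numpy as np

# Orthogonal projection onto the line y = x (a standard example; the book's
# matrices are not reproduced here). Image: the line y = x. Kernel: y = -x.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

rank = np.linalg.matrix_rank(P)
print(rank)        # 1
print(2 - rank)    # 1  (nullity, by rank-nullity)

# A vector on y = -x is sent to zero:
print(P @ np.array([1.0, -1.0]))  # [0. 0.]
```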

35

  1. Depending on which row you scale, the corresponding \(x\) or \(y\) component of every output vector is multiplied by the constant you choose, so the image changes.
  2. Row operations leave the solution set of \(A\mathbf{x} = \mathbf{0}\) unchanged, so they preserve the kernel (and hence, by the rank–nullity theorem, the rank), even though they can change the image.

36

  1. No kernel. \(\text{im}(A) = t_1 \begin{pmatrix} 2 \\ 1 \end{pmatrix} + t_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix}\).

  2. \(\text{ker}(A) = t \begin{pmatrix} -2 \\ 0 \end{pmatrix}\). \(\text{im}(A) = t \begin{pmatrix} 3 \\ 2 \end{pmatrix}\).

  3. \(\text{ker}(A) = t \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}\). \(\text{im}(A) = t_1 \begin{pmatrix} 1 \\ 4 \\ 7 \end{pmatrix} + t_2 \begin{pmatrix} 2 \\ 5 \\ 8 \end{pmatrix}\).

  4. \(\text{ker}(A) = t \begin{pmatrix} -1 \\ -2 \\ 1 \end{pmatrix}\). \(\text{im}(A) = t_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + t_2 \begin{pmatrix} 3 \\ 4 \end{pmatrix}\).

  5. \(\text{ker}(A)\): none. \(\text{im}(A) = t_1 \begin{pmatrix} 4 \\ 3 \\ 1 \\ 2 \end{pmatrix} + t_2 \begin{pmatrix} 1 \\ 1 \\ 2 \\ 0 \end{pmatrix}\).

  6. \(\text{ker}(A) = t_1 \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} + t_2 \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}\). \(\text{im}(A) = t_1 \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} + t_2 \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \end{pmatrix} + t_3 \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} + t_4 \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}\).

Thinking geometrically:

  1. This map is \(\mathbb{R}^2 \rightarrow \mathbb{R}^2\) with linearly independent columns, suggesting a nullity of zero.

  2. This is a mapping onto a line in \(\mathbb{R}^2\), so the columns are linearly dependent and the nullity is one.

  3. The middle column is the average of the two outer columns, so we can expect a nullity of at least 1.

  4. This map is \(\mathbb{R}^3 \rightarrow \mathbb{R}^2\), so we can expect a nullity of at least 1.

  5. This map is \(\mathbb{R}^2 \rightarrow \mathbb{R}^4\) with linearly independent columns, so we can expect a nullity of zero.

  6. It is clear that all columns are linearly independent.
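The geometric observation in item 3 can be checked numerically. The matrix below is a hypothetical example (assumed here, since the book's matrices are not reproduced) whose middle column is the average of the outer two, which forces \(c_1 - 2c_2 + c_3 = \mathbf{0}\):

```python
import numpy as np

# Middle column = average of the outer two, so (1, -2, 1)^T is in the kernel.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank-nullity theorem
print(rank, nullity)                 # 2 1
print(A @ np.array([1.0, -2.0, 1.0]))  # [0. 0. 0.]
```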

37

  1. True
  2. False
  3. True
  4. False
  5. True
  6. False

38

Column Space: because every point in the image is a linear combination of the matrix’s columns. Null Space: because it is the space of vectors that are “nullified,” i.e. sent to zero.

39

The columns are pairwise linearly dependent; in other words, they are all scalar multiples of a single non-zero vector. The rank is therefore 1, with only one pivot column in rref, and the kernel is a plane. Geometrically, the image is a line.

40

Any collapse in dimension is irreversible.
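NumPy reflects this: inverting a rank-deficient (dimension-collapsing) matrix raises an error. The matrix below is a made-up example:

```python
import numpy as np

# Second column is twice the first, so this matrix collapses a dimension.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as e:
    print("not invertible:", e)
```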

Exercises (page 82)

41

  1. \(\text{ker}(A) = t \begin{pmatrix} -3 \\ 1 \end{pmatrix}\), \(\text{im}(A) = t \begin{pmatrix} 2 \\ 1 \end{pmatrix}\)

  2. \(\begin{pmatrix} 2 \\ 0 \end{pmatrix} + t \begin{pmatrix} -3 \\ 1 \end{pmatrix}\)

  3. \(\begin{pmatrix} 0 \\ -1 \end{pmatrix} + t \begin{pmatrix} -3 \\ 1 \end{pmatrix}\)

  4. They are all sent to \(\begin{pmatrix} 14 \\ 7 \end{pmatrix}\). The line can be written as: \(\begin{pmatrix} 1 \\ 2 \end{pmatrix} + t \begin{pmatrix} -3 \\ 1 \end{pmatrix}\)

Verification: \(\begin{pmatrix} 2 & 6 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} 1-3t \\ 2+t \end{pmatrix} = \begin{pmatrix} 2-6t+12+6t \\ 1-3t+6+3t \end{pmatrix} = \begin{pmatrix} 14 \\ 7 \end{pmatrix}\)
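The same verification can be run numerically for several values of \(t\), using the matrix from the check above:

```python
import numpy as np

A = np.array([[2.0, 6.0],
              [1.0, 3.0]])

# Every point on the line (1, 2) + t(-3, 1) maps to (14, 7):
for t in (-1.0, 0.0, 2.5):
    x = np.array([1.0, 2.0]) + t * np.array([-3.0, 1.0])
    print(A @ x)  # [14.  7.] each time
```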

42

  1. \(\text{im}(M) = t_1 \begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix} + t_2 \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}\)

43

  1. \(\text{ker}(A) = t \begin{pmatrix} -2 \\ 1 \end{pmatrix}\), \(\text{im}(A) = t \begin{pmatrix} 1 \\ 2 \end{pmatrix}\)

  2. \(\text{ker}(B) = t \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}\), \(\text{im}(B) = t_1 \begin{pmatrix} 3 \\ 5 \\ 5 \end{pmatrix} + t_2 \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}\)

  3. \(\text{ker}(C) = t_1 \begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \end{pmatrix} + t_2 \begin{pmatrix} -2 \\ 0 \\ 0 \\ 1 \end{pmatrix}\) \(\text{im}(C) = t_1 \begin{pmatrix} 1 \\ 4 \end{pmatrix} + t_2 \begin{pmatrix} 4 \\ 1 \end{pmatrix}\)

  4. \(\text{ker}(D) = 0\), \(\text{im}(D) = t_1 \begin{pmatrix} 4 \\ 0 \\ 2 \\ 0 \end{pmatrix} + t_2 \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t_3 \begin{pmatrix} 1 \\ 2 \\ 4 \\ 5 \end{pmatrix}\)

  5. \(\text{ker}(E) = t_1 \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + t_2 \begin{pmatrix} 8 \\ 0 \\ 1 \end{pmatrix}\) \(\text{im}(E) = t \begin{pmatrix} 1 \end{pmatrix}\), i.e. all of \(\mathbb{R}\).

44

  1. \(\begin{pmatrix} 1 \\ 1 \end{pmatrix} + t \begin{pmatrix} 3 \\ 1 \end{pmatrix}\)

  2. \(\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}\)

  3. \(\begin{pmatrix} 1 \\ -1/2 \\ 0 \\ 0 \end{pmatrix} + t_1 \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t_2 \begin{pmatrix} -1/2 \\ -1/2 \\ 3 \\ 1 \end{pmatrix}\)