2  Vector Spaces

\(\newcommand{\vlist}[2]{#1_1,#1_2,\ldots,#1_#2}\) \(\newcommand{\vectortwo}[2]{\begin{bmatrix} #1 \\ #2\end{bmatrix}}\) \(\newcommand{\vectorthree}[3]{\begin{bmatrix} #1 \\ #2 \\ #3\end{bmatrix}}\) \(\newcommand{\vectorfour}[4]{\begin{bmatrix} #1 \\ #2 \\ #3 \\ #4\end{bmatrix}}\) \(\newcommand{\vectorfive}[5]{\begin{bmatrix} #1 \\ #2 \\ #3 \\ #4 \\ #5 \end{bmatrix}}\) \(\newcommand{\lincomb}[3]{#1_1 \vec{#2}_1+#1_2 \vec{#2}_2+\cdots + #1_m \vec{#2}_#3}\) \(\newcommand{\norm}[1]{\left|\left |#1\right|\right |}\) \(\newcommand{\ip}[1]{\left \langle #1\right \rangle}\) \(\newcommand{\plim}[2]{\lim_{\footnotesize\begin{array}{c} \\[-10pt] #1 \\[0pt] #2 \end{array}}}\)

In mathematics, a linear space is a vector space over a field of scalars, such as the real or complex numbers, in which vectors can be added and multiplied by scalars subject to certain axioms. Linear spaces are very important in mathematics and physics because many physical problems can be reduced to problems about linear spaces. In this book, we will give a complete introduction to linear spaces.

Linear spaces, also called vector spaces, are mathematical structures that allow the addition of vectors and the multiplication of vectors by scalars. Every linear space has a basis: a set of vectors that spans the space and is linearly independent. The dimension of a linear space is the number of vectors in a basis. Linear spaces are important in physics because they allow physicists to model physical phenomena such as electromagnetism and gravity.

Linear spaces also have applications in engineering and computer science. In engineering, linear spaces are used to design structural elements such as bridges and buildings. In computer science, linear spaces are used to represent data points in multidimensional space.

A linear space has the following properties:

  1. Vector addition is commutative: For any two vectors \(u\) and \(v\) in the linear space, we have \(u + v = v + u\).
  2. Vector addition is associative: For any three vectors \(u, v,\) and \(w\) in the linear space, we have \((u + v) + w = u + (v + w).\)
  3. There is a zero vector: There is a vector \(0\) in the linear space such that for any vector \(u\) in the linear space, we have \(0 + u = u.\)
  4. Every vector has an additive inverse: For any vector \(u\) in the linear space, there is a vector \(-u\) such that \(u + (-u) = 0.\)
  5. Multiplication by a scalar is left distributive: For any scalar \(c\) and any two vectors \(u\) and \(v\) in the linear space, we have \(c(u + v) = cu + cv.\)
  6. Multiplication by a scalar is right distributive: For any scalars \(a\) and \(b\) and any vector \(u\) in the linear space, we have \((a+b) u = au + bu.\)
  7. Multiplication by a scalar is associative: For any two scalars \(c\) and \(d\) and any vector \(u\) in the linear space, we have \((cd) u = c(du).\)
  8. There is a multiplicative identity: There is a scalar \(1\) such that for any vector \(u\) in the linear space, we have \(1 u = u.\)

A basis for a linear space is a set of vectors that spans the space and is linearly independent. A set of vectors is linearly independent if no vector in the set can be written as a linear combination of the other vectors. The dimension of a linear space is the number of vectors in any basis for the space.
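
A quick way to test linear independence in practice is to compute a matrix rank. The following sketch (ours, in Python with NumPy; nothing here is prescribed by the text) places the vectors as columns of a matrix and checks whether the rank equals the number of vectors.

```python
import numpy as np

# Vectors are linearly independent exactly when the matrix having them
# as columns has rank equal to the number of vectors.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])   # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A) == A.shape[1])   # False: only 2 independent columns
```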

For example, consider the set of all real numbers as a vector space over itself. This set has dimension one because every real number is a scalar multiple of the single basis vector \(1\). Similarly, the set of all two-dimensional vectors has dimension two because any two-dimensional vector can be written as a linear combination of the vectors (1,0) and (0,1).

The dimension of a linear space is also the number of degrees of freedom of the space. For example, a point in three-dimensional space has three degrees of freedom because it can be described by three coordinates. A point in four-dimensional space has four degrees of freedom because it can be described by four coordinates.

The dimension of a linear space is also the number of free parameters needed to describe an element of the space. For example, consider the set of all real numbers. An element of this set is described by a single parameter \(x\), so the space is said to have dimension one.

The dimension of a linear space is also the number of vectors in any basis for the space. So, in the previous example, the set of all real numbers has dimension one because any basis for this space must have only one vector.

The dimension of a linear space is also the smallest number of coordinates needed to describe its elements. For example, consider the set of all two-dimensional vectors. This space can be embedded in three-dimensional space by adding a third coordinate that is always zero, but two coordinates already suffice. So, the dimension of the set of all two-dimensional vectors is two.

A linear map is a function that preserves the structure of a linear space. In other words, a linear map is a function that takes vectors in one linear space and maps them to vectors in another linear space such that the following properties are satisfied:

  1. The map is additive: For any two vectors \(u\) and \(v\) in the first linear space, we have \(f(u + v) = f(u) + f(v).\)
  2. The map is homogeneous: For any scalar \(c\) and any vector \(u\) in the first linear space, we have \(f(cu) = cf(u).\)

Together, these two properties imply that \(f\) preserves all linear combinations: \(f(cu + dv) = cf(u) + df(v)\) for any scalars \(c, d\) and any vectors \(u, v\).

A linear map is also called a linear transformation; when it maps a linear space into itself, it is called a linear operator. The set of all linear maps from one linear space to another is itself a linear space. This space is called the space of linear maps from the first space to the second space.

In this book, we’ll introduce the concept of linear maps and explore some of their properties. We’ll also discuss some of the ways in which they can be used.

Every linear space has a basis. This means that every element in the space can be written as a linear combination of the elements in the basis. The coordinates of an element are the coefficients of this linear combination.

In other words, they tell us how much of each element in the basis we need to add together to get to our desired element. For example, consider the vector \(v=(2,3,5)\). We can write this vector as a linear combination of the vectors \(e_1=(1,0,0)\), \(e_2=(0,1,0)\), and \(e_3=(0,0,1)\) like so: \(v=2e_1+3e_2+5e_3\). Therefore, the coordinates of \(v\) with respect to this basis are \((2,3,5).\)
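
To make the coordinate computation concrete, here is a small sketch of ours in Python with NumPy (the vectors are the ones from the example above): solving \(E\vec{x}=\vec{v}\), where \(E\) has the basis vectors as columns, recovers the coordinates.

```python
import numpy as np

# The coordinates of v in the standard basis are recovered by solving
# E x = v, where E has the basis vectors as columns.
e1, e2, e3 = np.eye(3)            # each row of the identity is a basis vector
E = np.column_stack([e1, e2, e3])
v = np.array([2.0, 3.0, 5.0])

print(np.linalg.solve(E, v))      # [2. 3. 5.]
```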

Linear maps take vectors in one space and transform them into vectors in a (possibly different) space. The matrix of a linear map is a way of representing this transformation: it records how the map transforms each basis vector.

The matrix of a linear map is a rectangular array of numbers whose columns define how the coordinates of one space are transformed into the coordinates of another space. By understanding how linear maps work, we can better understand the relationships between objects in space.

For example, let’s say we have the linear map \(f(x)=Ax\) where \(A\) is the matrix \(\begin{bmatrix} 1 & -2 \\ 3 & 4 \end{bmatrix}\). This map takes vectors in \(\mathbb{R}^2\) and transforms them into vectors in \(\mathbb{R}^2\). The columns of \(A\) tell us how \(f\) does this transformation: \(f\) takes the first basis vector \((1,0)\) to the first column \((1,3)\), and the second basis vector \((0,1)\) to the second column \((-2,4)\). So for any vector \((x,y)\) in \(\mathbb{R}^2\), we have \(f(x,y)=x(1,3)+y(-2,4)=(x-2y,\ 3x+4y)\).
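
This column-by-column reading of a matrix is easy to check numerically. The sketch below (ours, in NumPy) applies \(A\) to each standard basis vector and recovers the columns of \(A\).

```python
import numpy as np

# The j-th column of A is the image of the j-th standard basis vector
# under f(x) = A x.
A = np.array([[1, -2],
              [3,  4]])
e1 = np.array([1, 0])
e2 = np.array([0, 1])

print(A @ e1)   # [1 3]  -> first column of A
print(A @ e2)   # [-2 4] -> second column of A
```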

We can use matrices to represent any linear map, no matter how complicated. This makes them a very powerful tool for mathematical analysis.

In this chapter, we’ve introduced the concepts of linear spaces and linear maps. We’ve discussed how these maps can be represented by matrices, and we’ve seen how they can be used to understand relationships between objects in space.

This is just the beginning of our exploration into the world of linear algebra. In the chapters ahead, we’ll delve deeper into the properties of linear maps and matrices. We’ll also see how they can be used in a variety of applications.

2.1 Introduction to Vector Spaces

Definition 2.1 Let \(\mathbb{F}\) be a field and let \(n\) be a positive integer. The collection of all vectors, as \(n\times 1\) matrices over \(\mathbb{F}\), is called a vector space over the field of scalars \(\mathbb{F}\). We denote this vector space by \(V\).

Since vectors are just \(n\times 1\) matrices, the proofs of the next two theorems follow immediately from \(\ref{PropertiesofMatrixAddition}\) and \(\ref{Properties of Scalar Multiplication}\).

Theorem 2.1 Let \(V\) be a vector space. Then the following hold.

  • For all \(\vec{u}, \vec{v}\in V\), \(\vec{u}+\vec{v}=\vec{v}+\vec{u}\).
  • For all \(\vec{u}, \vec{v}, \vec{w} \in V\), \((\vec{u}+\vec{v})+\vec{w}=\vec{u}+(\vec{v}+\vec{w})\).
  • For all \(\vec{v}\in V\), there exists \(\vec{0} \in V\) such that \(\vec{v}+\vec{0}=\vec{v}\).
  • For every \(\vec{v}\in V\), there exists \(\vec{w}\in V\) such that \(\vec{v}+\vec{w}=\vec{0}\).

Proof. The proof is left for the reader as Exercise \(\ref{ex:PropertiesofVectorAddition1}\).

Theorem 2.2 Let \(V\) be a vector space. Then the following hold.

  • For all \(\vec{v}\in V\), \(1\vec{v}=\vec{v}\).
  • For all \(a, b\in k\) and \(\vec{v}\in V\), \((a b) \vec{v}=a (b \vec{v})\).
  • For all \(a \in k\) and \(\vec{u}, \vec{v}\in V\), \(a (\vec{u}+\vec{v})=a \vec{u}+ a\vec{v}\).
  • For all \(a, b \in k\) and \(\vec{u}\in V\), \((a+b)\vec{u}=a \vec{u}+ b \vec{u}\).

Proof. The proof is left for the reader as Exercise \(\ref{ex:PropertiesofVectorAddition2}\).

Theorem 2.3 Let \(V\) be a vector space. Then the following hold.

  • There exists a unique additive identity (denoted by \(\vec{0}\)).
  • Every \(\vec{v}\in V\) has a unique additive inverse (denoted by \(-\vec{v}\)).

Proof. Let \(\vec{u}_1\) and \(\vec{u}_2\) be additive identities in \(V\), then \(\vec{v}+\vec{u}_1=\vec{v}\) and \(\vec{v}+\vec{u}_2=\vec{v}\) for every \(\vec{v}\in V\). Then, \[ \vec{u}_1 =\vec{u}_1+\vec{u}_2 =\vec{u}_2+\vec{u}_1 =\vec{u}_2 \] as desired.

Let \(\vec{v}_1\) and \(\vec{v}_2\) be additive inverses of \(\vec{w}\) in \(V\), then \(\vec{w}+\vec{v}_1=\vec{0}\) and \(\vec{w}+\vec{v}_2=\vec{0}\). Then, \[ \vec{v}_1 =\vec{v}_1+\vec{0} =\vec{v}_1+(\vec{w}+\vec{v}_2) =(\vec{v}_1+\vec{w})+\vec{v}_2 =(\vec{w}+\vec{v}_1)+\vec{v}_2 =\vec{0}+\vec{v}_2=\vec{v}_2 \] as desired.

Theorem 2.4 Let \(V\) be a vector space. Then the following hold.

  • If \(\vec{v}\in V\), then \(0\, \vec{v}=\vec{0}\).
  • If \(a\in k\), then \(a\, \vec{0}=\vec{0}\).

Proof. Let \(\vec{v}\in V\), then \[ \vec{v}=1 \vec{v}=(1+0) \vec{v}= 1 \vec{v}+0 \vec{v}= \vec{v}+0\vec{v} \] which shows that \(0 \vec{v}\) is the additive identity of \(V\), namely \(0 \vec{v}=\vec{0}\).

Let \(a\in k\), then \[ a \vec{0} =a(\vec{0}+\vec{0}) =a\vec{0}+a\vec{0} \] which shows that \(a \vec{0}\) is the additive identity of \(V\), namely \(a \vec{0}=\vec{0}\).

Theorem 2.5 Let \(V\) be a vector space. Then the following hold.

  • If \(\vec{v}\in V\), then \(-(-\vec{v})=\vec{v}\).
  • If \(\vec{v}\in V\), then \((-1)\, \vec{v}=-\vec{v}\).

Proof. Since \(-\vec{v}\) is the unique additive inverse of \(\vec{v}\), \(\vec{v}+(-\vec{v})=\vec{0}\). Then \((-\vec{v})+\vec{v}=\vec{0}\) shows that \(\vec{v}\) is the unique additive inverse of \(-\vec{v}\), namely, \(\vec{v}=-(-\vec{v})\) as desired.

Let \(\vec{v}\in V\), then \[ \vec{v}+(-1)\vec{v} =1 \vec{v}+(-1) \vec{v} =(1+(-1)) \vec{v} =0 \vec{v} =\vec{0} \] which shows that \((-1)\vec{v}\) is the unique additive inverse of \(\vec{v}\), namely, \((-1)\vec{v}=-\vec{v}\).

Theorem 2.6 Let \(V\) be a vector space with \(a\in k\) and \(\vec{v}\in V\). If \(a\,\vec{v}=\vec{0}\), then \(a=0\) or \(\vec{v}=\vec{0}\).

Proof. Suppose \(a\neq 0\). If \(a \vec{v} =\vec{0}\) then \[ \vec{v}=1 \vec{v} =(a^{-1} a) \vec{v} =a^{-1} (a \vec{v}) =a^{-1} \vec{0} =\vec{0}. \] Otherwise \(a=0\) as desired.

Exercise 2.1 Determine whether the following collection of vectors in \(\mathbb{R}^3\) are linearly independent or linearly dependent.

  • \((0,1,1), (1,2,1), (0,4,6), (1,0,-1)\)
  • \((0,1,0), (1,2,1), (0,-4,6), (-1,1,-1)\)

Exercise 2.2 Determine whether the following collection of vectors in \(\mathbb{R}^4\) are linearly independent or linearly dependent.

  • \((0,1,1,1), (1,2,1,1), (0,4,6,2), (1,0,-1, 2)\)
  • \((0,1,0,1), (1,2,1,3), (0,-4,6,-2), (-1,1,-1, 2)\)

Exercise 2.3 Show that the given vectors do not form a basis for the vector space \(V\).

  • \((21,-7), (-6, 1)\); \(V=\mathbb{R}^2\)
  • \((21,-7,14), (-6, 1,-4), (1,0,0)\); \(V=\mathbb{R}^3\)
  • \((48,24,108,-72), (-24, -12,-54,36), (1,0,0,0), (1,1,0,0)\); \(V=\mathbb{R}^4\)

Exercise 2.4 Reduce the vectors to a basis of the vector space \(V\).

  • \((1,0), (1,2), (2,4)\), \(V=\mathbb{R}^2\)
  • \((1,2,3), (-1, -10, 15), (1, 2, -3), (2,0,6), (1, -2, 3)\), \(V=\mathbb{R}^3\)

Exercise 2.5 Which of the following collections of vectors in \(\mathbb{R}^3\) are linearly dependent? For those that are, express one vector as a linear combination of the rest.

  • \((1,1,0), (0,2,3), (1,2,3)\)
  • \((1,1,0), (3,4,2), (0,2,3)\)

Exercise 2.6 Prove \(\ref{PropertiesofVectorAddition1}\).

Exercise 2.7 Prove \(\ref{PropertiesofVectorAddition2}\).

Exercise 2.8 Let \(S=\{v_1, v_2, ..., v_k\}\) be a set of vectors in a vector space \(V\). Prove that \(S\) is linearly dependent if and only if one of the vectors in \(S\) is a linear combination of the other vectors in \(S\).

Exercise 2.9 Suppose that \(S=\{v_1, v_2, v_3\}\) is a linearly independent set of vectors in a vector space \(V\). Prove that \(T=\{u_1, u_2, u_3\}\) is also linearly independent where \(u_1=v_1\), \(u_2=v_1+v_2\), and \(u_3=v_1+v_2+v_3\).

Exercise 2.10 Which of the following sets of vectors form a basis for the vector space \(V\)?

  • \((1,3), (1,-1)\); \(V=\mathbb{R}^2\)
  • \((1,3),(-2,6)\); \(V=\mathbb{R}^2\)
  • \((3,2,2), (-1,2,1), (0,1,0)\); \(V=\mathbb{R}^3\)
  • \((3,2,2), (-1,2,0), (1,1,0)\); \(V=\mathbb{R}^3\)
  • \((2,2,2,2), (3,3,3,2), (1,0,0,0), (0,1,0,0)\); \(V=\mathbb{R}^4\)
  • \((1,1,2,0), (2,2,4,0), (1,2,3,1), (2,1,3,-1), (1,2,3,-1)\); \(V=\mathbb{R}^4\)

Exercise 2.11 Find a basis for the subspace of the vector space \(V\).

  • All vectors of the form \((a,b,c)\) where \(b=a+c\) where \(V=\mathbb{R}^3\).
  • All vectors of the form \((a,b,c)\) where \(b=a-c\) where \(V=\mathbb{R}^3\).
  • All vectors of the form \(\vectorfour{b-a}{a+c}{b+c}{c}\) where \(V=\mathbb{R}^4\).

Exercise 2.12 Let \(\vec{v}_1=\vectorthree{0}{1}{1}\), \(\vec{v}_2=\vectorthree{1}{0}{0}\) and \(S=\text{span}(\vec{v}_1,\vec{v}_2)\).

  • Is \(S\) a subspace of \(\mathbb{R}^3\)?
  • Find a vector \(\vec{u}\) in \(S\) other than \(\vec{v}_1\), \(\vec{v}_2\).
  • Find scalars which verify that \(3\vec{u}\) is in \(S\).
  • Find scalars which verify that \(\vec{0}\) is in \(S\).

Exercise 2.13 Let \(\vec{u}_1=\vectorthree{0}{2}{2}\), \(\vec{u}_2=\vectorthree{2}{0}{0}\) and \(T=\text{span}(\vec{u}_1,\vec{u}_2)\). Show \(S=T\) by showing \(S\subseteq T\) and \(T\subseteq S\) where \(S\) is defined in Exercise \(\ref{vecex1}\).

Exercise 2.14 Prove that the non-empty intersection of two subspaces of \(\mathbb{R}^3\) is a subspace of \(\mathbb{R}^3\).

Exercise 2.15 Let \(S\) and \(T\) be subspaces of \(\mathbb{R}^3\) defined by \[ S=\text{span}\left(\vectorthree{1}{0}{2},\vectorthree{0}{2}{1}\right) \qquad \text{and} \qquad T=\text{span}\left(\vectorthree{2}{-2}{3},\vectorthree{3}{-4}{4}\right). \] Show they are the same subspace of \(\mathbb{R}^3\).

Exercise 2.16 Let \(\{\vec{v}_1,\vec{v}_2, \vec{v}_3\}\) be a linearly independent set of vectors. Show that if \(\vec{v}_4\) is not a linear combination of \(\vec{v}_1, \vec{v}_2, \vec{v}_3\), then \(\{\vec{v}_1,\vec{v}_2, \vec{v}_3,\vec{v}_4\}\) is a linearly independent set of vectors.

Exercise 2.17 If
\(\{\vec{v}_1,\vec{v}_2, \vec{v}_3\}\) is a linearly independent set of vectors in \(V\), show that
\(\{\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_1+\vec{v}_2+\vec{v}_3\}\) is also a linearly independent set of vectors in \(V\).

Exercise 2.18 If
\(\{\vec{v}_1,\vec{v}_2, \vec{v}_3\}\) is a linearly independent set of vectors in \(V\), show that
\(\{\vec{v}_1+\vec{v}_2,\vec{v}_2+\vec{v}_3, \vec{v}_3+\vec{v}_1\}\) is also a linearly independent set of vectors in \(V\).

Exercise 2.19 Let \(\{\vec{v}_1,\vec{v}_2, \vec{v}_3\}\) be a linearly dependent set. Show that at least one of the \(\vec{v}_i\) is a linear combination of the others.

Exercise 2.20 Prove or provide a counterexample to the following statement. If a set of vectors \(T\) spans the vector space \(V\), then \(T\) is linearly independent.

Exercise 2.21 Which of the following are not a basis for \(\mathbb{R}^3\)?

  • \(\vec{v_1}=\vectorthree{1}{0}{0}, \vec{v_2}=\vectorthree{0}{1}{1}, \vec{v_3}=\vectorthree{1}{-1}{-1}\)
  • \(\vec{u_1}=\vectorthree{0}{0}{1}, \vec{u_2}=\vectorthree{1}{0}{1}, \vec{u_3}=\vectorthree{2}{3}{4}\)

Exercise 2.22 Let \(S\) be the space spanned by the vectors \[ \vec{v}_1=\vectorfour{1}{0}{1}{1} \quad \vec{v}_2=\vectorfour{-1}{-3}{1}{0} \quad \vec{v}_3=\vectorfour{2}{3}{0}{1} \quad \vec{v}_4=\vectorfour{2}{0}{2}{2} \] Find the dimension of \(S\) and a subset of \(\{\vec{v}_1,\vec{v}_2,\vec{v}_3,\vec{v}_4\}\) which could serve as a basis for \(S\).

Exercise 2.23 Let \(\{\vec{v}_1, \vec{v}_2, ..., \vec{v}_n\}\) be a basis for \(V\), and suppose that \(\vec{u} =a_1 \vec{v_1}+a_2 \vec{v_2}+\cdots + a_n \vec{v_n}\) with \(a_1\neq 0\). Prove that \(\{\vec{u}, \vec{v}_2, ..., \vec{v}_n\}\) is also a basis for \(V\).

Exercise 2.24 Let \(S=\text{span}(\vec{v}_1,\vec{v}_2,\vec{v}_3)\) and \(T=\text{span}(\vec{u}_1,\vec{u}_2,\vec{u}_3)\) where \(\vec{v}_i\) and \(\vec{u}_i\) are defined as follows. \[ \vec{v}_1=\vectorfour{1}{-1}{2}{0} \quad \vec{v}_2=\vectorfour{2}{1}{1}{1} \quad \vec{v}_3=\vectorfour{3}{-1}{2}{-1} \qquad \vec{u}_1=\vectorfour{3}{0}{3}{1} \quad \vec{u}_2=\vectorfour{1}{2}{-1}{1} \quad \vec{u}_3=\vectorfour{4}{-1}{5}{1} \] Is one of these two subspaces strictly contained in the other or are they equal?

Exercise 2.25 Let \(S=\text{span}(\vec{v}_1,\vec{v}_2,\vec{v}_3)\) where \(\vec{v}_i\) are defined as follows. \[ \vec{v}_1=\vectorfour{1}{2}{3}{1} \qquad \vec{v}_2=\vectorfour{2}{-1}{1}{-3} \qquad \vec{v}_3=\vectorfour{1}{3}{4}{2} \qquad \text{and}\qquad \vec{u}=\vectorfour{1}{2}{3}{1} \] Is the vector \(\vec{u}\) in \(S\)?

Exercise 2.26 If possible, find a value of \(a\) so that the vectors \[ \vectorthree{1}{2}{a} \qquad \vectorthree{0}{1}{a-1} \qquad \vectorthree{3}{4}{5} \qquad \] are linearly independent.

Exercise 2.27 Let \(S=\text{span}(\vec{v}_1,\vec{v}_2,\vec{v}_3)\) where \(\vec{v}_i\) are defined as follows. \[ \vec{v}_1=\vectorfour{1}{-1}{2}{3} \qquad \vec{v}_2=\vectorfour{1}{0}{1}{0} \qquad \vec{v}_3=\vectorfour{3}{-2}{5}{7} \qquad \text{and}\qquad \vec{u}=\vectorfour{1}{1}{0}{-1} \] Find a basis of \(S\) which includes the vector \(\vec{u}\).

Exercise 2.28 Find a vector \(\vec{u}\) in \(\mathbb{R}^4\) such that \(\vec{u}\) and the vectors \[ \vec{v}_1=\vectorfour{1}{-1}{-1}{1} \qquad \vec{v}_2=\vectorfour{1}{0}{1}{1} \qquad \vec{v}_3=\vectorfour{1}{2}{1}{1} \] form a basis of \(\mathbb{R}^4\).

Exercise 2.29 Show that every subspace of an \(n\)-dimensional vector space \(V\) has no more than \(n\) linearly independent vectors.

Exercise 2.30 Find two bases of \(\mathbb{R}^4\) that have only the vectors \(\vec{e}_3\) and \(\vec{e}_4\) in common.

Exercise 2.31 Prove that if a list of vectors is linearly independent so is any sublist.

Exercise 2.32 Suppose \(\vec{v}_1,\vec{v}_2, \vec{v}_3\) and \(\vec{v}_1, \vec{v}_2, \vec{v}_4\) are two sets of linearly dependent vectors, and suppose that \(\vec{v}_1\) and \(\vec{v}_2\) are linearly independent. Prove that any set of three vectors chosen from \(\vec{v}_1, \vec{v}_2, \vec{v}_3, \vec{v}_4\) is linearly dependent.

Exercise 2.33 If \(\vec{u}\) and \(\vec{v}\) are linearly independent vectors in \(V\), prove that the vectors \(a\vec{u}+b\vec{v}\) and \(c\vec{u}+d\vec{v}\) are also linearly independent if and only if \(ad-bc\neq 0\).

Exercise 2.34 Complete the proof of \(\ref{prop:spnlinbasis}\).

Exercise 2.35 Let \(U\) be the collection of vectors that satisfy the equations \(x+y+z=0\) and \(x+2y-z=0\). Show \(U\) is a subspace of \(\mathbb{R}^3\), find a basis for \(U\), and find \(\dim(U)\).

Exercise 2.36 Let \(U\) be the collection of vectors that satisfy the equations \(x+y+z=0\), \(x+2y-z=0\), and \(y-2z=0\). Show \(U\) is a subspace of \(\mathbb{R}^3\), find a basis for \(U\), and find \(\dim(U)\).

Exercise 2.37 Show that the only subspaces of \(\mathbb{R}\) are \(\{\vec{0}\}\) and \(\mathbb{R}\) itself.

Exercise 2.38 Show that the only subspaces of \(\mathbb{R}^2\) are \(\{\vec{0}\}\), \(\mathbb{R}^2\) itself, and any set consisting of all scalar multiples of a nonzero vector. Describe these subspaces geometrically.

Exercise 2.39 Determine the various types of subspaces of \(\mathbb{R}^3\) and describe them geometrically.

Exercise 2.40 For \(\vec{b}\neq\vec{0}\), show that the set of solutions of the \(n\times m\) linear system \(A \vec{x}=\vec{b}\) is not a subspace of \(V\).

Exercise 2.41 Suppose that \(\vec{v}_1, \vec{v}_2, ..., \vec{v}_n\) are linearly independent in \(\mathbb{R}^n\). Show that if \(A\) is an \(n\times n\) matrix with \(\text{rref}(A)=I_n\), then \(A\vec{v}_1, A\vec{v}_2, ..., A\vec{v}_n\) are also linearly independent in \(\mathbb{R}^n\).

Exercise 2.42 Let \(S=\{\vlist{v}{s}\}\) and \(T=\{\vlist{u}{t}\}\) be two sets of vectors in \(V\) where each \(\vec{u}_i\), \((i=1,2,...,t)\) is a linear combination of the vectors in \(S\). Show that \(\vec{w}=\lincomb{a}{u}{t}\) is a linear combination of the vectors in \(S\).

Exercise 2.43 Let \(S=\{\vlist{v}{m}\}\) be a set of non-zero vectors in a vector space \(V\) such that every vector in \(V\) can be written uniquely as a linear combination of the vectors in \(S\). Prove that \(S\) is a basis for \(V\).

Exercise 2.44 Find a basis for the solution space of the homogeneous system \((\lambda I_n-A)\vec{x}=\vec{0}\) for the given \(\lambda\) and \(A\).

  • \(\lambda=1, A=\begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & -3 \\ 0 & 1 & 3 \end{bmatrix}\)
  • \(\lambda=2, A=\begin{bmatrix} -2 & 0 & 0 \\ 0 & -2 & -3 \\ 0 & 4 & 5 \end{bmatrix}\)

Exercise 2.45 Prove \(\ref{prop:roweqtoidn}\).

Exercise 2.46 Prove \(\ref{cor:explincomb}\).

Exercise 2.47 Prove \(\ref{sumprop}\).

2.2 Subspaces

Definition 2.2 A subset \(U\) of a vector space \(V\) is called a subspace of \(V\) if it has the following three properties

  • \(U\) contains the zero vector in \(V\),
  • \(U\) is closed under addition: if \(\vec{u}\) and \(\vec{v}\) are in \(U\) then so is \(\vec{u}+\vec{v}\), and
  • \(U\) is closed under scalar multiplication: if \(\vec{v}\) is in \(U\) and \(a\) is any scalar, then \(a\vec{v}\) is in \(U\).

Example 2.1 Let \(U\) be the subset of \(\mathbb{R}^5\) defined by \[ U=\{(x_1,x_2,x_3,x_4,x_5)\in\mathbb{R}^5 \, \mid \,x_1=3x_2 \text{ and } x_3=7x_4\}. \] Show \(U\) is a subspace of \(\mathbb{R}^5.\) The zero vector \(\vec{0}=(0,0,0,0,0)\) is in \(U\) since \(0=3(0)\) and \(0=7(0)\). Let \(\vec{u}=(u_1, u_2, u_3, u_4, u_5)\) and \(\vec{v}=(v_1, v_2, v_3, v_4, v_5)\) be vectors in \(U\) and let \(a\) be a scalar. Then \[ \vec{u}+\vec{v}=(u_1+v_1, u_2+v_2, u_3+v_3, u_4+v_4, u_5+v_5) \] is in \(U\) since \[ u_1=3 u_2 \text{ and } v_1=3 v_2 \text{ imply } u_1+v_1=3 (u_2+v_2)\] and \[ u_3=7 u_4 \text{ and } v_3=7 v_4 \text{ imply } u_3+v_3=7 (u_4+v_4).\] Also, \(a \vec{u}\) is in \(U\) since \(u_1=3u_2\) and \(u_3=7u_4\) imply \(a u_1=3 (a u_2)\) and \(a u_3=7 (a u_4)\). By \(\ref{subdef}\), \(U\) is a subspace of \(\mathbb{R}^5\).

Example 2.2 Give an example of a nonempty subset \(U\) of \(\mathbb{R}^2\) such that \(U\) is closed under addition and under taking additive inverses, but \(U\) is not a subspace of \(\mathbb{R}^2\). The subset \(\mathbb{Z}^2\) of \(\mathbb{R}^2\) is closed under additive inverses and addition; however, \(\mathbb{Z}^2\) is not a subspace of \(\mathbb{R}^2\), since \(\sqrt{2}\in \mathbb{R}\) and \((1,1)\in \mathbb{Z}^2\) but \(\sqrt{2}\,(1,1)=(\sqrt{2} , \sqrt{2}) \not \in \mathbb{Z}^2\).

Example 2.3 Give an example of a nonempty subset \(U\) of \(\mathbb{R}^2\) such that \(U\) is closed under scalar multiplication, but \(U\) is not a subspace of \(\mathbb{R}^2\). The set \(\{(x_1,x_2)\in \mathbb{R}^2 \mid x_1 x_2=0\}=M\) is closed under scalar multiplication because, if \(\lambda\in \mathbb{R}\) and \((x_1,x_2)\in M\), then \((\lambda x_1, \lambda x_2)\in M\) holds since \(\lambda x_1 \lambda x_2=0\). However, \(M\) is not a subspace because \((0,1)+(1,0)=(1,1)\not \in M\) even though \((0,1),(1,0)\in M\).

Example 2.4 Show that the set of all solutions of an \(m\times n\) homogeneous linear system of equations is a subspace of \(V\) (called the null space ). Let \(A\vec{x}=\vec{0}\) be an \(m\times n\) homogeneous system of linear equations and let \(U\) be the set of solutions to this system. Of course \(A\vec{0}=\vec{0}\) and so the zero vector is in \(U\). Let \(\vec{u}\) and \(\vec{v}\) be in \(U\) and let \(a\) be a scalar. Then \[ A(\vec{u}+\vec{v})=A\vec{u}+A\vec{v}=\vec{0}+\vec{0}=\vec{0} \] and \[ A(a \vec{u})=a(A\vec{u})=a\vec{0}=\vec{0} \] shows \(\vec{u}+\vec{v}\) and \(a\vec{u}\) are in \(U\). By \(\ref{subdef}\), \(U\) is a subspace of \(V\).
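
A basis for a null space can also be computed symbolically. The following sketch (our illustration, in Python with SymPy; the matrix is made up) finds such a basis for a small homogeneous system and verifies that each basis vector is indeed a solution.

```python
from sympy import Matrix

# The solutions of A x = 0 form a subspace; nullspace() returns a basis.
A = Matrix([[1, 2, -1],
            [2, 4, -2]])          # second row is twice the first

for b in A.nullspace():
    print(b.T, '->', (A * b).T)   # each basis vector really solves A x = 0
```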

Definition 2.3 Let \(\vec{v}_1, \vec{v}_2, ..., \vec{v}_m\) be vectors in the vector space \(V\). The set of all linear combinations \[ \text{span}(\vlist{v}{m}) =\left\{ \lincomb{c}{v}{m} \mid \vlist{c}{m}\in k \right\} \] is called the span of the vectors \(\vlist{v}{m}\).

Example 2.5 Show that the span of the vectors \(\vlist{v}{m}\) in \(V\) is a subspace of \(V\).
Let \(U=\text{span}(\vlist{v}{m})\). Notice \(\vec{0}\in U\) since \(\vec{0}=\lincomb{0}{v}{m}\) where \(0\in k\). Let \(\vec{u}\) and \(\vec{v}\) be vectors in \(U\) and let \(a\) be a scalar. By \(\ref{spandef}\), there exist scalars \(\vlist{c}{m}\) and scalars
\(\vlist{d}{m}\) such that \[ \vec{u} =\lincomb{c}{v}{m} \quad \text{ and } \quad \vec{v} =\lincomb{d}{v}{m} \] Then \[ \vec{u}+\vec{v}=\sum_{i=1}^m c_i \vec{v}_i+\sum_{i=1}^m d_i \vec{v}_i=\sum_{i=1}^m (c_i+d_i) \vec{v}_i \] and \[ a\vec{u}=a\left(\sum_{i=1}^m c_i \vec{v}_i\right)= \sum_{i=1}^m (a c_i) \vec{v}_i \] show \(\vec{u}+\vec{v}\) and \(a\vec{u}\) are in \(U\); and thus \(U\) is a subspace of \(V\).

We say that a vector \(\vec{v}_i\) in the list \(\vec{v}_1 , \ldots, \vec{v}_i, ..., \vec{v}_m\) is redundant if \(\vec v_i\) can be written as a linear combination of the other vectors in the list. An equation of the form \(a_1 \vec{v}_1 + \cdots +a_m \vec{v}_m = \vec{0}\) is called a linear relation among the vectors \(\vec{v}_1 , ..., \vec{v}_m\); it is called a nontrivial relation if at least one of the \(a_i\)’s is nonzero.

Definition 2.4 The vectors \(\vlist{v}{m}\) in \(V\) are called linearly independent if the only choice for \[ \lincomb{a}{v}{m}= \vec{0} \] is \(a_1 = a_2=\cdots =a_m = 0\). Otherwise the vectors \(\vlist{v}{m}\) are called linearly dependent.

Lemma 2.1 A set \(S=\{\vlist{v}{m}\}\) of vectors in \(V\) is a linearly dependent set of vectors if and only if at least one of the vectors in the set can be written as a linear combination of the others.

Proof. Assume the vectors in the set \(S\) are linearly dependent. By \(\ref{lindepdef}\), there exist scalars \(c_1, c_2, ..., c_m\) (not all zero) such that \(\lincomb{c}{v}{m}=\vec{0}\). Let \(i\) be the least index such that \(c_i\) is nonzero. Thus \(c_1=c_2=\cdots =c_{i-1}=0\), and so \[ c_i \vec{v}_i=-c_{i+1}\vec{v}_{i+1}-\cdots -c_m \vec{v}_m. \] Since \(c_i\neq 0\) and \(c_i\in k\), \(c_i^{-1}\) exists and thus \[ \vec{v}_i =\left(\frac{-c_{i+1}}{c_i}\right)\vec{v}_{i+1} + \cdots + \left(\frac{-c_m}{c_i}\right)\vec{v}_m \] which shows \(\vec{v}_i\) is a linear combination of the others, via \[ \vec{v}_i= 0\vec{v}_1+\cdots+0\vec{v}_{i-1}+\left(\frac{-c_{i+1}}{c_i}\right)\vec{v}_{i+1} + \cdots + \left(\frac{-c_m}{c_i}\right)\vec{v}_m. \] Now assume one of the vectors in the set \(S\) can be written as a linear combination of the others, say \[ \vec{v}_k =c_1 \vec{v}_1+\cdots +c_{k-1} \vec{v}_{k-1} +c_{k+1} \vec{v}_{k+1} +\cdots + c_m \vec{v}_m \] where \(c_1, c_2, \ldots, c_m\) are scalars. Thus, \[ \vec{0}=c_1 \vec{v}_1+\cdots + c_{k-1} \vec{v}_{k-1} +(-1) \vec{v}_k +c_{k+1} \vec{v}_{k+1} +\cdots + c_m \vec{v}_m \] and so by \(\ref{lindepdef}\), \(\vec{v}_1, ..., \vec{v}_m\) are linearly dependent.

For a list of vectors \(\vlist{v}{m}\) in \(V\), the following statements are equivalent; each follows from the appropriate definitions:

  • vectors \(\vlist{v}{m}\) are linearly independent,
  • none of the vectors \(\vlist{v}{m}\) are redundant,
  • none of the vectors \(\vlist{v}{m}\) can be written as a linear combination of the other vectors in the list,
  • there is only the trivial relation among the vectors \(\vlist{v}{m}\),
  • the only solution to the equation \(\lincomb{a}{v}{m}=\vec{0}\) is \(a_1 = a_2=\cdots =a_m= 0\), and
  • \(\text{rank}(A)=m\) where \(A\) is the \(n\times m\) matrix whose columns are the vectors \(\vlist{v}{m}\).

Example 2.6 Determine whether the following vectors \(\vec{u}\), \(\vec{v}\), and \(\vec{w}\) are linearly independent. \[ \vec{u}=\vectorfour{1}{1}{1}{1} \qquad \vec{v}=\vectorfour{1}{2}{3}{4} \qquad \vec{w}=\vectorfour{1}{4}{7}{10} \] Without interchanging rows, we use elementary row operations to find \[ \text{rref} \begin{bmatrix} \vec{u} & \vec{v} & \vec{w} \end{bmatrix} =\begin{bmatrix}1 & 0 & -2 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \] From this we infer the nontrivial relation \(\vec 0=(-2)\vec{u}+(3)\vec{v}+(-1)\vec{w}.\) Therefore the given vectors are linearly dependent.
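
The row reduction above can be reproduced mechanically. In this sketch (ours, in SymPy), the third column of the reduced row echelon form supplies the coefficients expressing \(\vec w\) in terms of \(\vec u\) and \(\vec v\).

```python
from sympy import Matrix

# rref of [u v w] exposes the relation w = -2u + 3v.
u = Matrix([1, 1, 1, 1])
v = Matrix([1, 2, 3, 4])
w = Matrix([1, 4, 7, 10])

R, pivots = Matrix.hstack(u, v, w).rref()
print(R)                     # third column reads (-2, 3)
print((-2*u + 3*v - w).T)    # the zero vector, confirming the relation
```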

Theorem 2.7 Let \[ \vec{v}_1=\vectorfour{a_{11}}{a_{21}}{\vdots}{a_{n1}} \quad \vec{v}_2=\vectorfour{a_{12}}{a_{22}}{\vdots}{a_{n2}} \quad \cdots \quad \vec{v}_m=\vectorfour{a_{1m}}{a_{2m}}{\vdots}{a_{nm}} \] be \(m\) vectors in \(V\). These vectors are linearly dependent if and only if there exists a solution to the system of linear equations \[\begin{equation} \label{lincomsys} \begin{cases} a_{11}x_1+a_{12}x_2+\cdots+a_{1m}x_m=0 \\ a_{21}x_1+a_{22}x_2+\cdots+a_{2m}x_m=0 \\ \qquad \qquad \vdots \\ a_{n1}x_1+a_{n2}x_2+\cdots+a_{nm}x_m=0 \\ \end{cases} \end{equation}\] different from \(x_1=x_2=\cdots=x_m=0\).

Proof. Assume \(\vlist{v}{m}\) are linearly dependent. By \(\ref{lindepdef}\), there exist scalars \(\vlist{c}{m}\), not all zero, such that \[\begin{equation} \label{lincomeq} \lincomb{c}{v}{m}=\vec{0}. \end{equation}\] \(\ref{lincomeq}\) yields a system \[ \begin{cases} a_{11}c_1+a_{12}c_2+\cdots+a_{1m}c_m=0 \\ a_{21}c_1+a_{22}c_2+\cdots+a_{2m}c_m=0 \\ \qquad \qquad \vdots \\ a_{n1}c_1+a_{n2}c_2+\cdots+a_{nm}c_m=0 \\ \end{cases} \] with solution \(x_1=c_1\), \(x_2=c_2\), …, \(x_m=c_m\). Since not all \(c_i\)’s are zero we have a solution different from \(x_1=x_2=\cdots=x_m=0\). Conversely, assume the system in \(\ref{lincomsys}\) has a solution \(\vec{x}^*\neq \vec{0}\), say with \(x_i^*\neq 0\). By \(\ref{colvecmat}\) we can write \[\begin{equation} \vec{0}=A\vec{x}^*= \lincomb{x^*}{v}{m} \end{equation}\] Isolating the term \(x_i^*\vec{v}_i\) yields \[\begin{equation} x_i^* \vec{v}_i=-x_1^*\vec{v}_1-\cdots -x_{i-1}^*\vec{v}_{i-1}-x_{i+1}^*\vec{v}_{i+1} -\cdots -x_m^*\vec{v}_m \end{equation}\] and since \(x_i^*\neq 0\), \((x_i^*)^{-1}\) exists. Therefore, \[\begin{equation} \vec{v}_i=\left(-\frac{x_1^*}{x^*_i}\right)\vec{v}_1+\cdots +\left(-\frac{x_{i-1}^*}{x^*_i}\right)\vec{v}_{i-1}+\left(-\frac{x_{i+1}^*}{x^*_i}\right)\vec{v}_{i+1}+\cdots + \left(-\frac{x_{m}^*}{x^*_i}\right)\vec{v}_{m} \end{equation}\] shows \(\vec{v}_i\) is a linear combination of the other vectors, and so by \(\ref{lindepothers}\), the vectors \(\vlist{v}{m}\) are linearly dependent.

Theorem 2.8 The \(n\times m\) linear system of equations \(A\vec{x}=\vec{b}\) has a solution if and only if the vector \(\vec{b}\) is contained in the subspace of \(V\) generated by the column vectors of \(A\).

Proof. Let \(\vec{x}\) be a solution to \(A\vec{x}=\vec{b}\) with \(A=\begin{bmatrix}\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_m \end{bmatrix}\). By \(\ref{colvecmat}\), \(\vec{b}=A\vec{x}=\lincomb{x}{v}{m}\) and thus \(\vec{b}\in \text{span}(\vlist{v}{m})\) as needed. Conversely, assume \(\vec{b}\) is in the subspace generated by the column vectors of \(A\); that is, assume \(\vec{b}\in \text{span}(\vlist{v}{m})\). By \(\ref{spandef}\), there exist scalars \(\vlist{c}{m}\) such that \(\vec{b}=\lincomb{c}{v}{m}\). By \(\ref{colvecmat}\), \(\vec{b}=\lincomb{c}{v}{m}=A\vec{c}\) where the components of \(\vec{c}\) are the \(c_i\)’s. Thus the system \(A\vec{x}=\vec{b}\) has a solution, namely \(\vec{c}\).
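
We can illustrate the theorem on a small made-up system (our own example, in SymPy): the same coefficient matrix \(A\) admits a solution for a right-hand side inside its column space and none for a right-hand side outside it.

```python
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])
b_in  = Matrix([2, 3, 5])    # 2*(column 1) + 3*(column 2), so solvable
b_out = Matrix([2, 3, 6])    # not in the column space

x1, x2 = symbols('x1 x2')
print(linsolve((A, b_in), [x1, x2]))    # {(2, 3)}
print(linsolve((A, b_out), [x1, x2]))   # EmptySet
```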

Example 2.7 Let \(U\) and \(V\) be finite subsets of a vector space \(W\) with \(U\subseteq V\).

  • If \(U\) is linearly dependent, then so is \(V\).
  • If \(V\) is linearly independent, then so is \(U\).

Let \(U=\{\vlist{u}{s}\}\) and \(V=\{\vlist{v}{t}\}\).

  • If \(U\) is linearly dependent, then there exists a vector, say \(\vec{u}_k\), such that \(\vec{u}_k\) is a linear combination of the other \(\vec{u}_i\)’s. Since \(U\subseteq V\), all \(\vec{u}_i\)’s are in \(V\). Thus we have a vector \(\vec{u}_k\) in \(V\) that is a linear combination of other vectors in \(V\). Therefore, by \(\ref{lindepothers}\), \(V\) is linearly dependent.
  • Let \(\vlist{c}{s}\) be scalars such that \[\begin{equation} \label{lincombcus} \lincomb{c}{u}{s}=\vec{0}. \end{equation}\] Since \(U\subseteq V\), we know \(\vec{u}_i\in V\) for \(1\leq i \leq s\). Since \(V\) is linearly independent, \(\ref{lincombcus}\) implies \(c_1=c_2=\cdots =c_s=0\). By \(\ref{lindepdef}\), \(U\) is linearly independent as well.

Corollary 2.1 Any vector \(\vec{v}\) in \(V\), written as a column matrix, can be expressed (uniquely) as a linear combination of \(\vec{v}_1, \vec{v}_2, ..., \vec{v}_m\) if and only if \(A \vec{x}=\vec{v}\) has a (unique) solution, where \(A=\begin{bmatrix}\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_m\end{bmatrix}\). When there is a solution, the components \(x_1, x_2, ..., x_m\) of \(\vec{x}\) give the coefficients for the linear combination.

Proof. This proof is left for the reader as Exercise \(\ref{ex:explincomb}\).

Theorem 2.9 The vectors \(\vlist{v}{n}\) in \(V\) form a linearly independent set of vectors if and only if \(\begin{bmatrix}\vec{v}_1& \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}\) is row equivalent to \(I_n\).

Proof. This proof is left for the reader as Exercise \(\ref{ex:roweqtoidn}\).

Theorem 2.10 Let \(V\) be a vector space and assume that the vectors \(\vlist{v}{n}\) are linearly independent and \(\text{span}(\vlist{s}{m})=V\). Then \(n\leq m\).

Proof. We are given \[ \text{span}(\vlist{s}{m})=V \quad \text{and} \quad \vlist{v}{n} \text{ are linearly independent.} \] Since \(\vec{v}_1\) is a linear combination of the vectors \(\vec{s}_1\), \(\vec{s}_2\), …, \(\vec{s}_m\), we can exchange \(\vec{v}_1\) for one of the \(\vec{s}_i\)’s (renumbering them if necessary) and obtain \[ \text{span}(\vec{v}_1,\vec{s}_2,...,\vec{s}_m)=V \quad \text{and} \quad \vec{v}_2, ..., \vec{v}_n \text{ are linearly independent.} \] Since \(\vec{v}_2\) is a linear combination of \(\vec{v}_1\), \(\vec{s}_2\), …, \(\vec{s}_m\), we can likewise obtain \[ \text{span}(\vec{v}_1,\vec{v}_2,\vec{s}_3,...,\vec{s}_m)=V \quad \text{and} \quad \vec{v}_3, ..., \vec{v}_n \text{ are linearly independent.} \] Now if \(m<n\) then repeating this process will eventually exhaust the \(\vec{s}_i\)’s and lead to \[ \text{span}(\vec{v}_1,\vec{v}_2,...,\vec{v}_m)=V \quad \text{and} \quad \vec{v}_{m+1}, ..., \vec{v}_n \text{ are linearly independent.} \] This is a contradiction, since then \(\vec{v}_{m+1}\in\text{span}(\vlist{v}{m})\), so that \(\vec{v}_1, ..., \vec{v}_{m+1}\) would be linearly dependent; whence \(n\leq m\).

Theorem 2.11 A set \(S=\{\vlist{v}{m}\}\) of vectors in \(V\) is linearly independent if and only if every vector \(\vec{u}\) that can be written as a linear combination of the vectors in \(S\) has exactly one such representation.

Proof. Assume the vectors in \(S\) are linearly independent and assume \(\vec{u}\) is an arbitrary vector with \[\begin{equation} \vec{u}=\lincomb{a}{v}{m} \qquad \text{and} \qquad \vec{u}=\lincomb{b}{v}{m} \end{equation}\] as both representations of \(\vec{u}\) as linear combinations of the vectors in \(S\). Then \[ \vec{0}=\vec{u}-\vec{u} =(a_1-b_1)\vec{v}_1+(a_2-b_2)\vec{v}_2+\cdots +(a_m-b_m)\vec{v}_m. \] Since \(S\) is linearly independent, \(a_1-b_1=a_2-b_2=\cdots =a_m-b_m=0\) and thus \(a_1=b_1\), \(a_2=b_2\), …, \(a_m=b_m\). Therefore, the representation of \(\vec{u}\) as a linear combination of the vectors in \(S\) is unique. Conversely, assume that for any vector \(\vec{u}\) which can be written as a linear combination of the vectors in \(S\), the representation is unique. If \(\vlist{c}{m}\) are scalars such that \(\lincomb{c}{v}{m}=\vec{0}\), then \(c_1=c_2=\cdots =c_m=0\) must hold since \(0\vec{v}_1+0\vec{v}_2+\cdots +0\vec{v}_m=\vec{0}\) and this representation is unique. Therefore, the vectors in \(S\) are linearly independent.

Definition 2.5 The vectors \(\vlist{v}{m}\) in \(V\) are called a basis of \(V\) if they span \(V\) and are linearly independent.

Example 2.8 Find a basis of \(k^n\) for \(n=1,2,3,\ldots\). For \(n=2\), the vectors \(\vectortwo{1}{0}\), \(\vectortwo{0}{1}\) form a basis for \(k^2\). For \(n=3\), the vectors \(\vectorthree{1}{0}{0}\), \(\vectorthree{0}{1}{0}\), \(\vectorthree{0}{0}{1}\) form a basis for \(k^3\). In general, for a positive integer \(n\), the following \(n\) vectors of \(k^n\) form a basis (called the standard basis ) of \(k^n\). \[\begin{equation} \label{stba} \vec{e}_1=\vectorfour{1}{0}{\vdots}{0} \qquad \vec{e}_2=\vectorfour{0}{1}{\vdots}{0} \qquad \cdots \qquad \vec{e}_n=\vectorfour{0}{0}{\vdots}{1} \end{equation}\] The vectors in the standard basis are linearly independent by \(\ref{prop:roweqtoidn}\). Given any vector \(\vec{v}\) in \(k^n\) with components \(v_i\), we can write \[ \vec{v}=\lincomb{v}{e}{n}, \] and thus \(k^n=\text{span}(\vlist{e}{n})\), which shows that the standard basis is in fact a basis.

Example 2.9 Show the following vectors \(\vec{v}_1\), \(\vec{v}_2\), \(\vec{v}_3\), and \(\vec{v}_4\) form a basis for \(\mathbb{R}^4\). \[ \vec v_1=\begin{bmatrix} 1 \\ 1\\ 1 \\ 1 \end{bmatrix} \qquad \vec v_2=\begin{bmatrix} 1 \\ -1\\ 1 \\ -1 \end{bmatrix} \qquad \vec v_3=\begin{bmatrix} 1 \\ 2\\ 4 \\ 8 \end{bmatrix}\qquad \vec v_4=\begin{bmatrix} 1 \\ -2\\ 4 \\ -8 \end{bmatrix} \] We determine \(\text{rref}(A)=I_4\) where \(A\) is the matrix with column vectors \(\vec v_1, \vec v_2, \vec v_3, \vec v_4\). By \(\ref{prop:roweqtoidn}\), \(\vec v_1, \vec v_2, \vec v_3, \vec v_4\) are linearly independent. Since \(\vec v_1, \vec v_2, \vec v_3, \vec v_4\) also span \(\mathbb{R}^4\), they form a basis of \(\mathbb{R}^4\).
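
The rref computation in this example is easy to verify by machine; here is our own check in SymPy.

```python
from sympy import Matrix, eye

# rref(A) = I_4 certifies the four columns are a basis of R^4.
A = Matrix([[1,  1, 1,  1],
            [1, -1, 2, -2],
            [1,  1, 4,  4],
            [1, -1, 8, -8]])

R, _ = A.rref()
print(R == eye(4))   # True
```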

Example 2.10 Let \(U\) be the subspace of \(\mathbb{R}^5\) defined by \[ U=\{(x_1,x_2,x_3,x_4,x_5)\in\mathbb{R}^5 \mid x_1=3x_2 \text{ and } x_3=7x_4\}. \] Find a basis of \(U\). The following vectors belong to \(U\) and are linearly independent in \(\mathbb{R}^5\). \[ v_1=\vectorfive{3}{1}{0}{0}{0} \qquad v_2=\vectorfive{0}{0}{7}{1}{0} \qquad v_3=\vectorfive{0}{0}{0}{0}{1} \] If \(u\in U\), then the representation \[ u =\vectorfive{u_1}{u_2}{u_3}{u_4}{u_5} =\vectorfive{3u_2}{u_2}{7u_4}{u_4}{u_5} =u_2\vectorfive{3}{1}{0}{0}{0}+u_4\vectorfive{0}{0}{7}{1}{0}+u_5\vectorfive{0}{0}{0}{0}{1} \] shows that they also span \(U\), and thus form a basis of \(U\) by \(\ref{basisdef}\).
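
The basis above can be re-derived by viewing \(U\) as the null space of the matrix encoding the two constraints; a sketch of ours in SymPy:

```python
from sympy import Matrix

# U is the null space of the constraints x1 - 3*x2 = 0 and x3 - 7*x4 = 0.
C = Matrix([[1, -3, 0,  0, 0],
            [0,  0, 1, -7, 0]])

for b in C.nullspace():
    print(b.T)   # (3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1)
```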

Theorem 2.12 Let \(S=\{\vlist{v}{n}\}\) be a set of vectors in a vector space \(V\) and let \(W=\text{span}(S)\). Then some subset of \(S\) is a basis for \(W\).

Proof. Assume \(W=\text{span}(S)\) and suppose \(S\) is a linearly independent set of vectors. Then, in this case, \(S\) is a basis of \(W\), by \(\ref{basisdef}\). So we can assume \(S\) is a linearly dependent set of vectors. By \(\ref{lindepothers}\), there exists \(i\) such that \(1\leq i \leq n\) and \(\vec{v}_i\) is a linear combination of the other vectors in \(S\). It is left for Exercise \(\ref{ex:spnlinbasis}\) to show that \[ W=\text{span}(S)=\text{span}(S_1) \] where \(S_1=S\setminus\{\vec{v}_i\}\). If \(S_1\) is a linearly independent set of vectors, then \(S_1\) is a basis of \(W\). Otherwise, \(S_1\) is a linearly dependent set and we can delete a vector from \(S_1\) that is a linear combination of the other vectors in \(S_1\). We obtain another subset \(S_2\) of \(S\) with \[ W=\text{span}(S)=\text{span}(S_1)=\text{span}(S_2). \] Since \(S\) is finite, if we continue, we find a linearly independent subset of \(S\) and thus a basis of \(W\).

Corollary 2.2 All bases of a subspace \(U\) of a vector space \(V\) consist of the same number of vectors.

Proof. Let \(S=\{\vlist{v}{n}\}\) and \(T=\{\vlist{w}{m}\}\) be bases of a subspace \(U\). Then \(\text{span}(S)=U\) and \(T\) is a linearly independent set of vectors. By \(\ref{inpspanine}\), \(m\leq n\). Similarly, since \(\text{span}(T)=U\) and \(S\) is a linearly independent set of vectors, \(n\leq m\). Therefore, \(m=n\) as desired.

Corollary 2.3 The vectors \(\vec{v}_1, \vec{v}_2, ..., \vec{v}_n\) form a basis of \(V\) if and only if the reduced row echelon form of the \(n\times n\) matrix \(\begin{bmatrix}\vec{v}_1& \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}\) is \(I_n\).

Proof. Suppose the vectors \(\vlist{v}{n}\) form a basis of \(V\) and consider the \(n\times n\) linear system \[ \begin{cases} v_{11} x_1+v_{12} x_2+\cdots +v_{1n} x_n=0 \\ v_{21} x_1+v_{22} x_2+\cdots +v_{2n} x_n=0 \\ \qquad \qquad \vdots \\ v_{n1} x_1+v_{n2} x_2+\cdots +v_{nn} x_n=0 \end{cases} \] where the \(v_{ij}\)’s are the components of the \(\vec{v}_j\)’s. Since \(\{\vlist{v}{n}\}\) is a basis, the vectors are linearly independent, so this linear system has only the trivial solution \(x_1=x_2=\cdots =x_n=0\). By \(\ref{cor:linsystmecor2}\), \(\text{rref}(A)=I_n\) where \(A=\begin{bmatrix}\vec{v}_1& \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}\).

Definition 2.6 The number of vectors in a basis of a subspace \(U\) of \(V\) is called the dimension of \(U\), and is denoted by \(\text{dim} U\).

Example 2.11 Find a basis of the subspace of \(\mathbb{R}^4\) that consists of all vectors perpendicular to both of the following vectors \(\vec{v}_1\) and \(\vec v_2\). \[ \vec v_1=\vectorfour{1}{0}{-1}{1} \qquad \vec v_2=\vectorfour{0}{1}{2}{3} \] We need to find all vectors \(\vec x\) in \(\mathbb{R}^4\) such that \(\vec x \cdot \vec v_1=0\) and \(\vec x \cdot \vec v_2=0\). We solve both \[ \vectorfour{x_1}{x_2}{x_3}{x_4}\cdot \vectorfour{1}{0}{-1}{1}=0 \qquad \text{and} \qquad \vectorfour{x_1}{x_2}{x_3}{x_4}\cdot \vectorfour{0}{1}{2}{3}=0 \] which leads to the system and matrix \[ \begin{cases} x_1-x_3+x_4 =0 \\ x_2+2x_3+3x_4 =0 \end{cases} \qquad \text{and} \qquad A= \begin{bmatrix} 1 & 0 & -1 & 1 \\ 0 & 1 & 2 & 3 \end{bmatrix}. \] All solutions are given by \[ \vectorfour{x_1}{x_2}{x_3}{x_4} =t \vectorfour{-1}{-3}{0}{1}+u\vectorfour{1}{-2}{1}{0} \quad \text{ where $t, u\in\mathbb{R}$}. \] It follows the vectors \(\vectorfour{-1}{-3}{0}{1}\), \(\vectorfour{1}{-2}{1}{0}\) form a basis of the desired subspace.
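
As a quick sanity check (ours, in NumPy), both basis vectors found above are orthogonal to \(\vec v_1\) and \(\vec v_2\).

```python
import numpy as np

v1 = np.array([1, 0, -1, 1])
v2 = np.array([0, 1,  2, 3])
b1 = np.array([-1, -3, 0, 1])
b2 = np.array([ 1, -2, 1, 0])

for b in (b1, b2):
    print(b @ v1, b @ v2)   # 0 0 for both basis vectors
```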

Theorem 2.13 Let \(U\) be a subspace of \(k^m\) with \(\dim U=n\). Then

  • any list of linearly independent vectors in \(U\) contains at most \(n\) elements,
  • any list of vectors that spans \(U\) contains at least \(n\) elements,
  • if \(n\) vectors are linearly independent then they form a basis, and
  • if \(n\) vectors span \(U\), then they form a basis of \(U\).

Proof. The proof is left for the reader as Exercise \(\ref{ex:sumprop}\).

Example 2.12 Determine the values of \(a\) for which the following vectors \(\vec{u}_1\), \(\vec{u}_2\), \(\vec{u}_3\), and \(\vec{u}_4\) form a basis of \(\mathbb{R}^4\). \[ \vec{u}_1=\vectorfour{1}{0}{0}{4} \qquad \vec{u}_2=\vectorfour{0}{1}{0}{6} \qquad \vec{u}_3=\vectorfour{0}{0}{1}{8} \qquad \vec{u}_4=\vectorfour{4}{5}{6}{a}\] Let \(A=\begin{bmatrix}\vec{u}_1 & \vec{u}_2 & \vec{u}_3& \vec{u}_4\end{bmatrix}\). Using row operations we find the row-echelon form of \(A\) to be the following matrix. \[ \begin{bmatrix} 1 & 0 & 0 & 4 \\ 0 & 1 & 0 & 5 \\ 0 & 0 & 1 & 6\\ 0 & 0 & 0 & a-94 \end{bmatrix} \] Thus, \(\text{rref}(A)=I_4\) if and only if \(a\neq 94\). Therefore, by \(\ref{rrefbasis}\), \(B=\{\vec{u}_1, \vec{u}_2, \vec{u}_3, \vec{u}_4\}\) is a basis if and only if \(a\neq 94\).
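
The conclusion can be confirmed symbolically (our own check, in SymPy): the determinant of \(A\) works out to \(a-94\), so the columns form a basis exactly when \(a\neq 94\).

```python
from sympy import Matrix, symbols

a = symbols('a')
A = Matrix([[1, 0, 0, 4],
            [0, 1, 0, 5],
            [0, 0, 1, 6],
            [4, 6, 8, a]])

print(A.det())   # a - 94, nonzero exactly when a != 94
```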

Theorem 2.14 The dimension of the row space of a matrix \(A\) is equal to the dimension of the column space of \(A\).

Proof. See Gerber, p. 226.


2.3 Introduction to Linear Spaces

Example 2.13 Show that the set of solutions to a homogeneous system forms a linear space with the standard operations.

Example 2.14 Show that the set of vectors \(\vec{b}\) for which a particular linear system \(A\vec{x}=\vec{b}\) has a solution forms a linear space.

Definition 2.7 Let \(\mathbb{F}\) be a field (whose elements are called scalars ) and let \(V\) be a nonempty set (whose elements are called vectors) on which two operations, called addition and scalar multiplication, have been defined. The addition operation (denoted by \(+\)), assigns to each pair \((u,v)\in V\times V\), a unique vector \(u+v\) in \(V\). The scalar multiplication operation (denoted by juxtaposition), assigns to each pair \((a,v)\in \mathbb{F}\times V\) a unique vector \(a v\) in \(V\).
We call \(V\) a linear space if the following axioms (A1)-(A8) are also satisfied.

  1. For all \(u, v\in V\), \(u+v=v+u\).
  2. For all \(u, v, w \in V\), \((u+v)+w=u+(v+w)\).
  3. There exists \(0\in V\) such that \(v+0=v\) for all \(v\in V\).
  4. For every \(v\in V\), there exists \(w\in V\) such that \(v+w=0\).
  5. For all \(v\in V\), \(1 v=v\).
  6. For all \(a, b\in \mathbb{F}\) and \(v\in V\), \((a b) v=a (b v)\).
  7. For all \(a \in \mathbb{F}\) and \(u, v\in V\), \(a(u+v)=a u+av\).
  8. For all \(a, b \in \mathbb{F}\) and \(u\in V\), \((a+b)u=a u+ b u\).

If \(\mathbb{F}=\mathbb{R}\) then \(V\) is called a real linear space . If \(\mathbb{F}=\mathbb{C}\) then \(V\) is called a complex linear space . We denote the zero vector in (A3) by \(\mathbf{0}\), to distinguish it from the zero \(0\) in the field of scalars.
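
The axioms can be spot-checked numerically for a candidate space. The sketch below (entirely ours, in Python) randomly tests (A1), (A6), and (A8) for \(\mathbb{R}^2\) with the usual operations; random testing can expose a violation but cannot prove the axioms hold.

```python
import random

def add(u, v):   return (u[0] + v[0], u[1] + v[1])
def smul(a, v):  return (a * v[0], a * v[1])
def close(u, v): return all(abs(x - y) < 1e-9 for x, y in zip(u, v))

for _ in range(100):
    u = (random.random(), random.random())
    v = (random.random(), random.random())
    a, b = random.random(), random.random()
    assert close(add(u, v), add(v, u))                         # (A1)
    assert close(smul(a * b, v), smul(a, smul(b, v)))          # (A6)
    assert close(smul(a + b, u), add(smul(a, u), smul(b, u)))  # (A8)
print("axioms (A1), (A6), (A8) spot-checked on 100 random samples")
```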

Example 2.15 Let \(V=\{(x,y)\mid y=mx\}\), where \(m\) is a fixed real number and \(x\) is an arbitrary real number. Show that \(V\) is a linear space.

Example 2.16 Let \(V=\{(x,y,z)\mid ax+by+cz=0\}\) where \(a, b\), and \(c\) are fixed real numbers. Show that \(V\) is a linear space with the standard operations.

::: {#exm- } [Matrix Space] Show that the set \(M_{m\times n}\) of all \(m\times n\) matrices, with ordinary addition of matrices and scalar multiplication, forms a linear space.

:::

::: {#exm- } [Polynomial Space] Show that the set \(P(t)\) of all polynomials with real coefficients, under the ordinary operations of addition of polynomials and multiplication of a polynomial by a scalar, forms a linear space. Show that the set of all polynomials with real coefficients of degree less than or equal to \(n\), under the ordinary operations of addition of polynomials and multiplication of a polynomial by a scalar, forms a linear space.

:::

::: {#exm- } [Function Space] Show that the set \(F(x)\) of all functions that map the real numbers into themselves is a linear space. Show that the set \(F[a,b]\) of all functions on the interval \([a,b]\) using the standard operations is a linear space.

:::

::: {#exm- } [The Space of Infinite Sequences] Show that the set of all infinite sequences of real numbers is a linear space, where addition and scalar multiplication are defined term by term.

:::

::: {#exm- } [The Space of Linear Equations] Show that the set \(L_n\) of all linear equations in \(n\) variables forms a linear space.

:::

Lemma 2.2 Every linear space \(V\) has a unique additive identity (denoted by \(0\)).

Proof. Let \(u_1\) and \(u_2\) be additive identities in \(V\), then \(v+u_1=v\) and \(v+u_2=v\) for every \(v\in V\). Thus, \(u_1=u_1+u_2=u_2+u_1=u_2\) as desired.

Lemma 2.3 Every \(v\in V\) has a unique additive inverse (denoted by \(-v\)).

Proof. Let \(v_1\) and \(v_2\) be additive inverses of \(w\) in \(V\), then \(w+v_1=\mathbf{0}\) and \(w+v_2=\mathbf{0}\). Thus, \[ v_1=v_1+\mathbf{0}=v_1+(w+v_2)=(v_1+w)+v_2=(w+v_1)+v_2=\mathbf{0}+v_2=v_2 \] as desired.

Lemma 2.4 If \(v\in V\), then \(0\, v=\mathbf{0}\).

Proof. Let \(v\in V\). Then \(v=1 v=(1+0) v= 1 v+0 v= v+0v\). Adding \(-v\) to both sides yields \(0\,v=\mathbf{0}\).

Lemma 2.5 If \(a\in \mathbb{F}\), then \(a\, \mathbf{0}=\mathbf{0}\).

Proof. Let \(a\in \mathbb{F}\). Then \[ a \mathbf{0}=a(\mathbf{0}+\mathbf{0})=a\mathbf{0}+a\mathbf{0}. \] Adding \(-(a\mathbf{0})\) to both sides yields \(a \mathbf{0}=\mathbf{0}\).

Lemma 2.6 If \(v\in V\), then \((-1)\, v=-v\).

Proof. Let \(v\in V\), then \[ v+(-1)v=1 v+(-1) v=(1+(-1)) v=0 v= \mathbf{0} \] which shows that \((-1)v\) is the unique additive inverse of \(v\); namely, \((-1)v=-v\).

Lemma 2.7 If \(v\in V\), then \(-(-v)=v\).

Proof. Since \(-v\) is the unique additive inverse of \(v\), \(v+(-v)=\mathbf{0}\). Then \((-v)+v=\mathbf{0}\) shows that \(v\) is the unique additive inverse of \(-v\); namely, \(v=-(-v)\) as desired.

Lemma 2.8 If \(a\,v=\mathbf{0}\), then \(a=0\) or \(v=\mathbf{0}\).

Proof. Suppose \(a\neq 0\). If \(a v =\mathbf{0}\), then \(v=1 v=(a^{-1} a) v=a^{-1} (a v)=a^{-1} \mathbf{0}=\mathbf{0}\). Hence either \(a=0\) or \(v=\mathbf{0}\), as desired.

Example 2.17 Let \(V\) be a linear space with \(u\in V\) and let \(a\) and \(b\) be scalars. Prove that if \(a u=bu\) and \(u\neq 0\), then \(a=b\).

Let \(V\) be a linear space and \(U\) a nonempty subset of \(V\). If \(U\) is a linear space with respect to the operations on \(V\), then \(U\) is called a subspace of \(V\).

Theorem 2.15 A subset \(U\) of \(V\) is a linear subspace of \(V\) if and only if \(U\) has the following properties:

  • \(U\) contains the zero vector of \(V\),
  • \(U\) is closed under the addition defined on \(V\), and
  • \(U\) is closed under the scalar multiplication defined on \(V\).

Proof. See Roman, p. 103.
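The three criteria translate directly into a computational spot-check. The sketch below assumes NumPy is available; the plane \(x+y+z=0\) in \(\mathbb{R}^3\) is an illustrative choice, not an example from the text. Random sampling can refute closure but never prove it, so this complements rather than replaces the algebraic argument.

```python
import numpy as np

# Spot-check the three subspace criteria of Theorem 2.15 for
# U = {(x, y, z) in R^3 : x + y + z = 0}.
rng = np.random.default_rng(0)

def in_U(v, tol=1e-9):
    # Membership test for U: the coordinates must sum to zero.
    return abs(v.sum()) < tol

def random_u():
    # Generate a member of U: pick x, y freely and set z = -x - y.
    x, y = rng.standard_normal(2)
    return np.array([x, y, -x - y])

assert in_U(np.zeros(3))                  # U contains the zero vector
for _ in range(1000):
    u, v, c = random_u(), random_u(), rng.standard_normal()
    assert in_U(u + v)                    # closed under addition
    assert in_U(c * u)                    # closed under scalar multiplication
```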

More generally, a subset \(U\) of \(V\) is called a subspace of \(V\) if \(U\) is itself a vector space using the same addition and scalar multiplication as on \(V\). Any vector space is a subspace of itself, and the set containing just the zero vector is a subspace of any vector space. Given any vector space \(V\) with a nonzero vector \(v\), the set of all scalar multiples of \(v\) is a subspace of \(V\), denoted by \(\langle v \rangle\). Because any linear space \(V\) has \(V\) and \(\{0\}\) as subspaces, these two are called the trivial subspaces of \(V\); all other subspaces are called proper subspaces of \(V\).

Example 2.18 Give an example of a real linear space \(V\) and a nonempty subset \(S\) of \(V\) such that, whenever \(u\) and \(v\) are in \(S\), \(u+v\) is in \(S\), but \(S\) is not a subspace of \(V\).

Example 2.19 Give an example of a real linear space \(V\) and a nonempty subset \(S\) of \(V\) such that, whenever \(u\) is in \(S\), \(c u\) is in \(S\) for every scalar \(c\), but \(S\) is not a subspace of \(V\).

Example 2.20 Show that \(P_n[0,1]\) is a proper subspace of \(C[0,1]\).

Example 2.21 Show that \(C'[0,1]\) (continuous first derivative) is a proper subspace of \(C[0,1]\).

Example 2.22 Show that \(C[0,1]\) is a proper subspace of \(R[0,1]\), the space of Riemann integrable functions on \([0,1]\).

Example 2.23 Show that \(D[0,1]\) (differentiable functions) is a proper subspace of \(C[0,1]\).

Definition 2.8 A linear combination of a list of vectors \((\vlist{v}{m})\) in \(V\) is a vector of the form \(\lincomb{a}{v}{m}\) where \(\vlist{a}{m} \in \mathbb{F}\).

Lemma 2.9 Let \(U\) be a nonempty subset of a vector space \(V\). Then \(U\) is a subspace of \(V\) if and only if every linear combination of vectors in \(U\) is also in \(U\).

Proof. If \(U\) is a subspace of \(V\), then \(U\) is a vector space and so is closed under linear combinations by the definition of a vector space. Conversely, suppose every linear combination of vectors in \(U\) is also in \(U\); thus for any scalars \(a, b\) and any \(u, v\in U\), \(a u+b v \in U\). Taking \(a=b=1\) gives \(u+v\in U\), so \(U\) is closed under addition. Taking \(b=0\) gives \(a u\in U\) for every scalar \(a\), so \(U\) is closed under scalar multiplication; in particular, \(a=-1\) gives \(-u\in U\) for every \(u\in U\). Taking \(v=u\), \(a=1\), and \(b=-1\) gives \(u+(-u)=0\in U\), so \(U\) contains the zero vector. The remaining axioms hold in \(U\) because they hold in \(V\).

Definition 2.9 The intersection and union of subspaces is just the intersection and union of the subspaces as sets. The sum of subspaces \(\vlist{U}{m}\) of a vector space \(V\) is defined by
\[ U_1+ U_2+\cdots +U_m = \{ u_1 + u_2+\cdots + u_m \mid u_i \in U_i \text{ for } 1\leq i \leq m \}. \]

Lemma 2.10 Let \(V\) be a linear space over a field \(k\). The intersection of any collection of subspaces of \(V\) is a subspace of \(V\).

Proof. Let \(\{U_i\, |\, i \in I\}\) be a collection of subspaces where \(I\) is some index set. Let \(a,b\in k\) and \(u,v\in \cap_{i\in I} U_i\). Since each \(U_i\) is a subspace of \(V\), \(a u +b v\in U_i\) for every \(i\in I\). Thus \(a u+b v\in \cap_{i\in I} U_i\) and therefore \(\cap_{i\in I} U_i\) is a subspace of \(V\).

Example 2.24 Show that the \(x\)-axis and the \(y\)-axis are subspaces of \(\mathbb{R}^2\), yet the union of these axes is not.

Lemma 2.11 Let \(V\) be a linear space over a field \(k\). The union of two subspaces of \(V\) is a subspace of \(V\) if and only if one of the subspaces is contained in the other.

Proof. Suppose \(U\) and \(W\) are subspaces of \(V\) with \(U\subseteq W\). Then \(U\cup W=W\) and so \(U\cup W\) is also a subspace of \(V\). Conversely, suppose \(U\), \(W\), and \(U\cup W\) are subspaces of \(V\). If \(U\subseteq W\) we are done, so assume there exists \(u\in U\) with \(u\not\in W\); we show \(W\subseteq U\). If \(w\in W\), then \(u+w \in U\cup W\), so either \(u+w\in U\) or \(u+w\in W\). But \(u+w\in W\) together with \(w\in W\) would yield \(u\in W\), a contradiction. Thus \(u+w\in U\), and so \(w=(u+w)-u\in U\), which yields \(W\subseteq U\) as desired.

Lemma 2.12 Let \(V\) be a linear space over a field \(k\). The sum \(U_{1}+ U_2+\cdots +U_{m}\) is the smallest subspace containing each of the subspaces \(\vlist{U}{m}\).

Proof. The sum of two subspaces is a subspace, since it is closed under linear combinations, and by induction so is the sum of finitely many. Thus \(U_1+U_2+\cdots+U_m\) is a subspace containing \(U_i\) for each \(1\leq i \leq m\). Let \(U\) be another subspace containing \(U_{i}\) for each \(1\leq i \leq m\). If \(u\in U_{1}+ \cdots +U_{m}\), then \(u\) has the form \(u=u_1+\cdots + u_m\) where each \(u_i\in U_i\subseteq U\). Since \(U\) is a subspace, \(u\in U\), and so \(U_{1}+U_{2}+ \cdots +U_{m}\) is the smallest such subspace.

Definition 2.10 If \(\{\vlist{v}{m}\}\) is a subset of a linear space \(V\), then the subspace of all linear combinations of these vectors is called the subspace generated (or spanned) by \(\vlist{v}{m}.\) The span of the list of vectors \((\vlist{v}{m})\) in \(V\) is denoted by \[ \text{span}(\vlist{v}{m})= \{\lincomb{a}{v}{m} \mid \vlist{a}{m}\in \mathbb{F} \}. \]

Lemma 2.13 The span of a list of vectors in \(V\) is the smallest subspace of \(V\) containing all the vectors in the list.

Proof. Let \((\vlist{v}{n})\) be a list of vectors in \(V\) and let \(S\) denote \(\text{span}(\vlist{v}{n})\). Clearly, \(S\) contains \(v_i\) for each \(1\leq i \leq n\). Let \(u,v \in S\) and \(a,b\in k\). Then there exist \(\vlist{a}{n}\) in \(k\) and \(\vlist{b}{n}\) in \(k\) such that \(u=a_1 v_1+\cdots + a_n v_n\) and \(v=b_1 v_1+ \cdots + b_n v_n\).
Then \[ a u+b v =(a a_1 +b b_1) v_1+ \cdots +(a a_n+b b_n) v_n, \] which shows \(a u+b v\in S\) since \(a a_i+b b_i \in k\) for each \(1\leq i \leq n\). Thus \(S\) is a subspace containing each of the \(v_i\). Let \(T\) be a subspace containing \(v_i\) for \(1 \leq i \leq n\). If \(s\in S\), then there exist \(\vlist{c}{n} \in k\) such that \(s=c_1 v_1+\cdots + c_n v_n\). Since \(v_i\in T\) for each \(i\) and \(T\) is closed under linear combinations (being a subspace), \(s\in T\). Hence \(S \subseteq T\), so indeed \(S\) is the smallest subspace of \(V\) containing all the vectors \(v_i\).

Definition 2.11 For the empty set \(\emptyset\), we define \(\text{span}(\emptyset)=\{0\}\).

Example 2.25 Let \(A=\begin{bmatrix}1 & 1 \\ 0 & 0 \end{bmatrix}\). Show that \(S=\{X\in M_{2\times 2} \mid AX=XA\}\) is a subspace of \(M_{2\times 2}\) under the standard operations.

Example 2.26 Let \(f_1=x^2+1, f_2=3x-1, f_3=2\). Determine the subspace generated by \(f_1, f_2, f_3\) in \(P_4\).

Example 2.27 Let \(A_1=\begin{bmatrix} 1 & 0 & 0 & 3 \\ 0 & 0 & 2 & 0\end{bmatrix}\) and \(A_2=\begin{bmatrix} 0 & 2 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.\) Determine the subspace generated by \(A_1\) and \(A_2\) in \(M_{2\times 4}\).

Example 2.28 Describe \(\text{span}(0)\).

Example 2.29 Consider the subset \(S=\{x^3-2x^2+x-3, 2x^3-3x^2+2x+5, 4x^3-7x^2+4x-1, 4x^2+x-3\}\) of \(P\). Show that \(3x^3-8x^2+2x+16\) is in \(\text{span} (S)\) by expressing it as a linear combination of the elements of \(S\).
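Finding the coefficients in Example 2.29 amounts to solving a linear system in the coordinates relative to the basis \((x^3, x^2, x, 1)\). Below is a sketch assuming SymPy is available; because \(S\) turns out to be linearly dependent, the solver returns a one-parameter family of representations rather than a unique one.

```python
from sympy import Matrix, linsolve, symbols

# Columns are the coefficient vectors of the four elements of S in the
# basis (x^3, x^2, x, 1); the right-hand side is 3x^3 - 8x^2 + 2x + 16.
coeffs = Matrix([[ 1,  2,  4,  0],
                 [-2, -3, -7,  4],
                 [ 1,  2,  4,  1],
                 [-3,  5, -1, -3]])
target = Matrix([3, -8, 2, 16])

a, b, c, d = symbols('a b c d')
# A nonempty solution set shows the target lies in span(S).  Since S is
# dependent, linsolve returns a one-parameter family, e.g.
# (a, b, c, d) = (-1, 2, 0, -1) when the free variable c is 0.
print(linsolve((coeffs, target), a, b, c, d))
```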

Example 2.30 Determine if the matrices \[ \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}, \begin{bmatrix} -4 & 2 \\ 3 & 0 \end{bmatrix}, \begin{bmatrix} -1 & 0 \\ 2 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 3 \end{bmatrix} \] span \(M_{2\times 2}\).
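Example 2.30 reduces to a rank computation: flatten each matrix into a vector in \(\mathbb{R}^4\); since \(M_{2\times 2}\) has dimension \(4\), the four matrices span it exactly when the flattened vectors have rank \(4\). A sketch assuming NumPy:

```python
import numpy as np

# Each row is a 2x2 matrix flattened in row-major order.
rows = np.array([[ 2, -1,  0,  2],
                 [-4,  2,  3,  0],
                 [-1,  0,  2,  1],
                 [ 0,  0,  0,  3]])
print(np.linalg.matrix_rank(rows))  # 4, so the matrices do span M_{2x2}
```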

2.4 Linear Independence and Bases

Let \(V\) be a linear space over a field \(\mathbb{F}\). A subset \(S\) of \(V\) is said to be linearly dependent if there exist distinct vectors \(\vlist{v}{n}\) in \(S\) and scalars \(\vlist{a}{n}\) in \(\mathbb{F}\), not all zero, such that \(a_1 v_1+a_2 v_2+\cdots+a_n v_n=0.\) If the set \(S\) contains only finitely many vectors \(v_i\), we sometimes say that \(\vlist{v}{n}\) are dependent.

Definition 2.12 A list of vectors \((\vlist{v}{m})\) in \(V\) is called linearly independent if the only choice of scalars \(\vlist{a}{m}\in \mathbb{F}\) with \(\lincomb{a}{v}{m}=0\) is \(a_1=\cdots=a_m=0\). A list of vectors that is not linearly independent is called linearly dependent.

Notice from the definition we can conclude that any set which contains a linearly dependent set is linearly dependent. Any subset of a linearly independent set is linearly independent. Any set which contains the zero vector is linearly dependent. A set \(S\) of vectors is linearly independent if and only if each finite subset of \(S\) is linearly independent. (Show!)

::: {#lem- } [Linear Dependence Lemma] If \((\vlist{v}{m})\) is linearly dependent in \(V\) and \(v_1\neq 0\), then there exists \(j\in \{2,\ldots,m\}\) such that the following hold: \(v_j\in\text{span}(v_1,\ldots,v_{j-1})\), and if the \(j^{th}\) term is removed from \((\vlist{v}{m})\), the span of the remaining list equals \(\text{span}(\vlist{v}{m})\). :::

Proof. The proof is left for the reader as Exercise \(\ref{ex:Linear Dependence Lemma}\).

Example 2.31 Show that the following subset of \(M_{2\times 2}\) is a linearly dependent set. \[\begin{equation} \label{lindeexample} \left\{ \begin{bmatrix} 2 & 3 \\ -1 & 4 \end{bmatrix}, \begin{bmatrix} -11 & 3 \\ -2 & 2 \end{bmatrix}, \begin{bmatrix} 6 & -1 \\ 3 & 4 \end{bmatrix}, \begin{bmatrix} -1 & 0\\2 & 2 \end{bmatrix} \right\} \end{equation}\]

Example 2.32 Suppose that \(S\) is the subset \[ S=\{2x^3-x+3, 3x^3+2x-2, x^3-4x+8, 4x^3+5x-7\} \] of \(P_3\).

  • Show that \(S\) is linearly dependent.
  • Show that every three-element subset of \(S\) is linearly dependent.
  • Show that every two-element subset of \(S\) is linearly independent (a rank computation supporting all three claims is sketched below).
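Since no element of \(S\) has an \(x^2\) term, \(S\) lies in the three-dimensional subspace spanned by \(x^3\), \(x\), and \(1\). The first two claims then follow from the rank of the coefficient matrix, and pairwise independence can be read off by checking that no row is a multiple of another. A sketch assuming SymPy:

```python
from sympy import Matrix

# Rows are the coefficient vectors of the elements of S in the
# basis (x^3, x, 1).
M = Matrix([[2, -1,  3],
            [3,  2, -2],
            [1, -4,  8],
            [4,  5, -7]])
print(M.rank())  # 2, so S and every three-element subset are dependent
# Pairwise independence holds because no row is a scalar multiple of another.
```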

Example 2.33 Show that no linearly independent set can contain the zero vector.

::: {#lem- } [Linear Independence Lemma] Let \(S\) be a linearly independent subset of a vector space \(V\). Suppose \(v\) is a vector in \(V\) which is not in the subspace spanned by \(S\). Then the set obtained by adjoining \(v\) to \(S\) is linearly independent. :::

Proof. Suppose \(u_1,\ldots,u_n\) are distinct vectors in \(S\) and that \(a_1 u_1+\cdots + a_n u_n+a v =0.\) Then \(a=0\); for otherwise, \[ v=\left (-\frac{a_1}{a}\right ) u_1+\cdots + \left (-\frac{a_n}{a} \right )u_n \] and \(v\) is in the subspace spanned by \(S\). Thus \(a_1 u_1+\cdots + a_n u_n=0\), and since \(S\) is a linearly independent set each \(a_i=0\).

Theorem 2.16 If \(S\) is a set of vectors in a vector space \(V\) over a field \(\mathbb{F}\), then the following are equivalent.

  • (i) \(S\) is linearly independent and spans \(V\).
  • (ii) For every vector \(v\in V\), there is a unique set of vectors \(v_1,\ldots,v_n\) in \(S\), along with a unique set of scalars \(a_1,\ldots,a_n\) in \(\mathbb{F}\), for which \(v=a_1 v_1+\cdots+a_n v_n\).
  • (iii) \(S\) is a minimal spanning set, in the sense that \(S\) spans \(V\) and no proper subset of \(S\) spans \(V\).
  • (iv) \(S\) is a maximal linearly independent set, in the sense that \(S\) is linearly independent and no proper superset of \(S\) is linearly independent.

Proof. \((i) \Leftrightarrow (ii)\): If \(S\) is an independent spanning set, then every \(v\in V\) is a linear combination of vectors in \(S\), and independence forces that representation to be unique; conversely, unique representation implies that \(S\) spans \(V\), and the uniqueness of the representation of \(0\) forces independence.

\((i) \Leftrightarrow (iii)\): Suppose (i) holds; then \(S\) is a spanning set. If some proper subset \(S'\) of \(S\) also spanned \(V\), then any vector in \(S-S'\) would be a linear combination of the vectors in \(S'\), contradicting the fact that the vectors in \(S\) are linearly independent. Conversely, if \(S\) is a minimal spanning set, then it must be linearly independent; for if not, some vector \(s\in S\) would be a linear combination of the other vectors in \(S\), and so \(S-\{s\}\) would be a proper spanning subset of \(S\), which is not possible.

\((i) \Leftrightarrow (iv)\): Suppose (i) holds; then \(S\) is linearly independent. If \(S\) were not maximal, there would be a vector \(v\in V-S\) for which the set \(S\cup \{v\}\) is linearly independent. But then \(v\) is not in the span of \(S\), contradicting the fact that \(S\) is a spanning set. Hence, \(S\) is a maximal linearly independent set, and so (i) implies (iv). Conversely, if \(S\) is a maximal linearly independent set, then it must span \(V\); for if not, we could find a vector \(v\in V\) that is not a linear combination of the vectors in \(S\), and then \(S\cup \{v\}\) would be linearly independent by the Linear Independence Lemma, contradicting maximality.

Definition 2.13 A basis of \(V\) is a list of vectors in \(V\) that is linearly independent and spans \(V\).

Example 2.34 Find a basis for the space of all \(2\times 2\) matrices \(S\) such that \[ \begin{bmatrix} 1 & 1 \\ 1 & 1\end{bmatrix}S=S. \] Let \(S=\begin{bmatrix} a & b \\ c & d \end{bmatrix}\). Then \[ \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \] meaning \[ \begin{bmatrix} a+c & b+d \\ a+c & b+d \end{bmatrix}=\begin{bmatrix} a & b \\ c & d \end{bmatrix}. \] So \(a+c=a\), \(b+d=b\), \(a+c=c\), and \(b+d=d\). These imply, respectively, that \(c=0\), \(d=0\), \(a=0\), and \(b=0\). Hence the only such matrix is the zero matrix, so the space is \(\{0\}\) and its basis is the empty list.

Corollary 2.4 A list \((v_1,\ldots,v_m)\) of vectors in \(V\) is a basis of \(V\) if and only if every \(v\in V\) can be written uniquely in the form \(v=a_1 v_1+\cdots+a_m v_m\) where \(\vlist{a}{m}\in \mathbb{F}\).

Proof.

Corollary 2.5 Any \(n\) linearly independent vectors in a linear space \(V\) of dimension \(n\) constitute a basis for \(V\).

Proof.

Example 2.35 Show that the space of all \(m\times n\) matrices over a field \(\mathbb{F}\) has dimension \(mn\).

Corollary 2.6 Any finitely generated linear space, generated by a set of nonzero vectors, has a basis.

Proof.

The zero linear space has no basis, because any nonempty subset contains the zero vector and must be linearly dependent.

Example 2.36 Show that \[ B=\{[-1,1,1,-1],[3,2,-1,0]\} \] is a maximal linearly independent subset of \[ S=\{[1,4,1,-2],[-1,1,1,-1],[3,2,-1,0],[2,3,0,-1]\}. \] Determine \(\dim\text{span}(S)\) and determine whether or not \(\text{span}(S)=\mathbb{R}^4\).
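The claims can be checked with row reduction. A sketch assuming SymPy, in which the pivot columns of the transposed matrix pick out one maximal independent subset of \(S\):

```python
from sympy import Matrix

# Rows are the vectors of S; the columns of S_mat.T are those vectors.
S_mat = Matrix([[ 1, 4,  1, -2],
                [-1, 1,  1, -1],
                [ 3, 2, -1,  0],
                [ 2, 3,  0, -1]])
_, pivots = S_mat.T.rref()
print(S_mat.rank())  # 2, so dim span(S) = 2 and span(S) != R^4
print(pivots)        # (0, 1): the first two vectors of S form one maximal
                     # independent subset; B is another, since its two
                     # vectors are independent and dim span(S) = 2
```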

2.5 Finite-dimensional Linear Spaces

Definition 2.14 The vector space \(V\) is a direct sum of subspaces \(\vlist{U}{m}\) of \(V\), written \(V=U_{1} \oplus \cdots \oplus U_{m}\), if each element of \(V\) can be written uniquely as a sum \(u_1 + u_2+\cdots + u_m\) where each \(u_j\in U_j\).

Lemma 2.14 If \(\vlist{U}{m}\) are subspaces of \(V\), then \(V=U_{1} \oplus \cdots \oplus U_{m}\) if and only if both conditions hold:

  • \(V=U_{1} + \cdots + U_{m}\).
  • the only way to write \(0\) as a sum \(u_{1} + u_{2}+\cdots+ u_{m}\) where each \(u_{j} \in U_{j}\), is by taking all the \(u_{j}\)’s equal to \(0\).

Proof. If \(V=U_{1} \oplus U_2 \oplus \cdots \oplus U_{m}\), then every element of \(V\) can be written uniquely in the form \(u_1+ u_2+\cdots +u_m\) where each \(u_j\in U_j\); in particular \(V=U_1+\cdots+U_m\), and the unique such expression for \(0\) is the one with every \(u_j=0\). Thus both conditions listed above are satisfied. Conversely, suppose both conditions hold and assume \(u=u_1 + \cdots +u_m\) and \(u=v_1 + \cdots + v_m\) where \(u_i, v_i \in U_i\). Then \[ 0=u-u=(u_1 + \cdots +u_m)-(v_1 + \cdots +v_m)=(u_1-v_1)+\cdots +(u_m-v_m), \] where each \(u_i-v_i\in U_i\), so the second condition gives \(u_i=v_i\) for \(1\leq i \leq m\); uniqueness is established.

Example 2.37 Let \(V\) be the linear space of all functions from \(\mathbb{R}\) to \(\mathbb{R}\) and let \(V_e\) and \(V_o\) be the set of all even functions (\(f(-x)=f(x)\)) and the set of all odd functions (\(f(-x)=-f(x)\)), respectively. (A decomposition sketch in code follows this list.)

  • Prove that \(V_e\) and \(V_o\) are subspaces of \(V\).
  • Prove that \(V_e+V_o=V\).
  • Prove that \(V_e\cap V_o=\{0\}\).
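The key to the second item is the decomposition \(f=f_e+f_o\) with \(f_e(x)=\tfrac{1}{2}(f(x)+f(-x))\) and \(f_o(x)=\tfrac{1}{2}(f(x)-f(-x))\). The sketch below (assuming NumPy) implements it and spot-checks the defining identities on \(f=e^x\), whose even and odd parts are \(\cosh\) and \(\sinh\).

```python
import numpy as np

# Even and odd parts of f, so that f = f_e + f_o pointwise.
def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

f = lambda x: np.exp(x)             # exp = cosh + sinh
x = np.linspace(-2, 2, 5)
fe, fo = even_part(f), odd_part(f)
assert np.allclose(fe(x) + fo(x), f(x))   # f = f_e + f_o
assert np.allclose(fe(-x), fe(x))         # f_e is even
assert np.allclose(fo(-x), -fo(x))        # f_o is odd
```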

Lemma 2.15 If \(U\) and \(W\) are subspaces of \(V\), then \(V=U \oplus W\) if and only if \(V=U+W\) and \(U \cap W= \{0\}\).

Proof. If \(V=U \oplus W\), then \(V=U+W\) is immediate. Suppose \(v\in U \cap W\). Then \(v=v+0\) with \(v\in U\) and \(0\in W\), and also \(v=0+v\) with \(0\in U\) and \(v\in W\); by the uniqueness of such representations, \(v=0\). Hence \(U \cap W= \{0\}\). Conversely, suppose \(V=U+W\) and \(U \cap W= \{0\}\). If \(0=u+w\) with \(u\in U\) and \(w\in W\), then \(u=-w\in W\), and together with \(u\in U\) this yields \(u=0\); thus \(w=0\) also. So the only way to write \(0\) as a sum \(u+w\) is to take \(u=w=0\), and therefore \(V=U \oplus W\) by Lemma 2.14.

Example 2.38 Prove or give a counterexample: if \(U_1, U_2\), and \(W\) are subspaces of \(V\) such that \(U_1 +W=U_2+W\), then \(U_1=U_2\). False; here is a counterexample. Let \(V=\mathbb{R}^2=W\) with \(U_1=\{(x,0) \mid x \in \mathbb{R}\}\) and \(U_2=\{(0,y) \mid y\in \mathbb{R}\}\). Then \(U_1+W=U_2+W=\mathbb{R}^2\). However, \(U_1\neq U_2\).

Example 2.39 Prove or give a counterexample: if \(U_1, U_2, W\) are subspaces of \(V\) such that \(V=U_1 \oplus W\) and \(V=U_2 \oplus W\), then \(U_1=U_2\). False; here is a counterexample. Let \(V=\mathbb{R}^2\) with \(W=\{(x,x) \mid x\in \mathbb{R}\}\), \(U_1=\{(x,0) \mid x \in \mathbb{R}\}\), and \(U_2=\{(0,y) \mid y\in \mathbb{R}\}\). All of these sets are subspaces. Let \(u\in U_1\cap W\); then \(u=(x,0)\) and \(u=(z,z)\), so \(z=0\) and \(x=0\), which implies \(u=(0,0)\). In fact \(U_1\cap W=\{0\}\). Thus, \(\mathbb{R}^2=U_1\oplus W\). Also \(\mathbb{R}^2=U_2\oplus W\). However, \((1,0)\in U_1\) and \((1,0)\not \in U_2\), showing \(U_1\neq U_2\).

Example 2.40 Suppose \(m\) is a positive integer. Is the set consisting of 0 and all polynomials with coefficients in \(F\) and with degree equal to \(m\) a subspace of \(\mathcal{P}(\mathbb{F})\)?

Definition 2.15 We call a linear space \(V\) a finite-dimensional linear space if there is a finite list of vectors \((v_1, \ldots, v_m)\) with \(\text{span}(v_1, \ldots, v_m)=V.\) If a linear space is not finite-dimensional, it is called an infinite-dimensional linear space.

Lemma 2.16 Let \(V\) be a finite-dimensional vector space. Then the length of every linearly independent list of vectors in \(V\) is less than or equal to the length of every spanning list of vectors in \(V\).

Proof.

Lemma 2.17 Let \(V\) be a finite-dimensional vector space. Then every subspace of \(V\) is finite-dimensional.

Proof.

Theorem 2.17 Let \(V\) be a finite-dimensional vector space. Then every spanning list \((v_1,\ldots,v_m)\) of \(V\) can be reduced to a basis of \(V\).

Proof.

Theorem 2.18 Every finite-dimensional vector space has a basis.

Proof. By definition, every finite-dimensional vector space has a finite spanning set. By \(\ref{spanning reduce lemma}\), every spanning set can be reduced to a basis, and so every finite-dimensional vector space does indeed have a basis.

Theorem 2.19 Let \(V\) be a finite-dimensional vector space. If \(W\) is a subspace of \(V\), then every linearly independent subset of \(W\) is finite and is part of a finite basis for \(W\).

Proof.

Theorem 2.20 Let \(V\) be a finite-dimensional vector space. Then every linearly independent list of vectors in \(V\) can be extended to a basis of \(V\).

Proof.
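One concrete way to extend an independent list \((v_1,\ldots,v_k)\) in \(\mathbb{F}^n\) to a basis is to append a spanning list (say the standard basis) and keep the pivot columns of the combined matrix; the first \(k\) columns are pivots because they are independent. A sketch assuming SymPy, with \(v_1, v_2 \in \mathbb{R}^4\) chosen arbitrarily for illustration:

```python
from sympy import Matrix, eye

# An independent pair in R^4 to be extended to a basis.
v1 = Matrix([1, 0, 2, 0])
v2 = Matrix([0, 1, 1, 0])

# Append the standard basis; the pivot columns of the combined matrix
# give a basis of R^4 that begins with v1 and v2.
combined = Matrix.hstack(v1, v2, eye(4))
_, pivots = combined.rref()
basis = [combined.col(j) for j in pivots]
print([list(b) for b in basis])   # v1, v2 extended by two standard vectors
```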

Theorem 2.21 Let \(V\) be a finite-dimensional vector space. If \(U\) is a subspace of \(V\), then there is a subspace \(W\) of \(V\) such that \(V=U \oplus W\).

Proof.

Theorem 2.22 Let \(V\) be a finite-dimensional vector space. Then any two bases of \(V\) have the same length.

Proof.

Definition 2.16 The dimension of a finite-dimensional vector space is defined to be the length of any basis of the vector space.

Theorem 2.23 Let \(V\) be a finite-dimensional vector space. If \(U\) is a subspace of \(V\), then \(\text{dim}\, U \leq \text{dim}\, V\).

Proof.

Theorem 2.24 Let \(V\) be a finite-dimensional vector space. Then every spanning list of vectors in \(V\) of length \(\text{dim}\, V\) is a basis of \(V\).

Proof.

Theorem 2.25 Let \(V\) be a finite-dimensional vector space. Then every linearly independent list of vectors in \(V\) of length \(\text{dim}\, V\) is a basis of \(V\).

Proof.

Theorem 2.26 If \(U_1\) and \(U_2\) are subspaces of a finite-dimensional vector space, then \[ \text{dim} (U_1+U_2)= \text{dim}\, U_1 + \text{dim}\, U_2 -\text{dim} (U_1 \cap U_2). \]

Proof.
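The formula can be spot-checked numerically. In the sketch below (assuming NumPy; the two subspaces of \(\mathbb{R}^4\) are arbitrary illustrative choices), \(\dim(U_1+U_2)\) is the rank of the stacked bases, and because the columns of each block form a basis, \(\dim(U_1\cap U_2)\) equals the nullity of \([U_1 \mid -U_2]\).

```python
import numpy as np

# Bases of two subspaces of R^4, stored as columns.
U1 = np.array([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])
U2 = np.array([[1., 0.], [0., 0.], [1., 0.], [0., 1.]])

dim_U1 = np.linalg.matrix_rank(U1)                      # 2
dim_U2 = np.linalg.matrix_rank(U2)                      # 2
dim_sum = np.linalg.matrix_rank(np.hstack([U1, U2]))    # dim(U1 + U2) = 3
# Pairs (a, b) with U1 a = U2 b correspond bijectively to vectors of
# U1 ∩ U2, so dim(U1 ∩ U2) is the nullity of the 4-column matrix [U1 | -U2].
dim_cap = 4 - np.linalg.matrix_rank(np.hstack([U1, -U2]))
assert dim_sum == dim_U1 + dim_U2 - dim_cap             # 3 == 2 + 2 - 1
```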

Theorem 2.27 If \(V\) is finite-dimensional, and \(U_1, \ldots, U_m\) are subspaces of \(V\) such that

  • \(V=U_1+\cdots+U_m\) and
  • \(\text{dim} V=\text{dim} U_1+\cdots+\text{dim} U_m\),

then \(V=U_1 \oplus \cdots \oplus U_m.\)

Proof. Choose a basis for each \(U_j\). Put these bases together in one list, forming a list that spans \(V\) and has length \(\text{dim} (V)\). Thus this list is a basis of \(V\) and in particular it is linearly independent. Now suppose that \(u_j\in U_j\) for each \(j\) are such that \(0=u_1+\cdots + u_m\). We can write each \(u_j\) as a linear combination of the basis vectors of \(U_j\). Substituting these expressions above, we have written \(0\) as a linear combination of the basis vectors of \(V\). Thus all scalars used in the linear combinations must be zero. Thus each \(u_j=0\) which proves \(V=U_1 \oplus \cdots \oplus U_m.\)

2.6 Infinite-dimensional Linear Spaces

Example 2.41 Prove that \(\mathbb{F}^\infty\) is infinite-dimensional. Suppose \(\mathbb{F}^\infty\) is finite-dimensional with dimension \(n\). Let \(v_i\) be the vector in \(\mathbb{F}^\infty\) consisting of all \(0\)'s except a \(1\) in the \(i\)-th position, for \(i=1,\ldots,n\). The vectors \((v_1,\ldots,v_n)\) are linearly independent in \(\mathbb{F}^\infty\), so they must form a basis; however, they do not span \(\mathbb{F}^\infty\), since the vector \(v_{n+1}\), consisting of all \(0\)'s except a \(1\) in the \((n+1)\)-th position, is not in their span. Thus \(\mathbb{F}^\infty\) cannot be finite-dimensional.

Example 2.42 Prove that the real vector space consisting of all continuous real-valued functions on the interval \([0,1]\) is infinite-dimensional. Notice that \(f(x)=x^n\) is continuous on \([0,1]\) for every positive integer \(n\). If this vector space were finite-dimensional, of dimension \(n\) say, then the list of vectors \((1,x,x^2,\ldots,x^{n-1})\) would form a basis, since these are linearly independent and there are \(n\) of them. However, \(x^n\) is not in the span of this list, so the list cannot be a basis. This contradiction shows that the vector space must be infinite-dimensional.

Example 2.43 Prove that \(V\) is infinite-dimensional if and only if there is a sequence \(v_1, v_2,\ldots\) of vectors in \(V\) such that \((v_1, \ldots, v_n)\) is linearly independent for every positive integer \(n\). Suppose \(V\) is infinite-dimensional; then no finite list spans \(V\). Pick \(v_1\neq 0\). Given \(v_1,\ldots,v_n\), the span of \((v_1,\ldots,v_n)\) is not all of \(V\), so we may choose \(v_{n+1}\not\in \text{span}(v_1,\ldots,v_n)\); by the Linear Independence Lemma, the list \((v_1,\ldots,v_{n+1})\) is linearly independent. Conversely, suppose there is a sequence \(v_1,v_2,\ldots\) of vectors in \(V\) such that \((v_1,\ldots,v_n)\) is linearly independent for every positive integer \(n\). If \(V\) were finite-dimensional, it would have a spanning list with \(M\) elements, and by Lemma 2.16 every linearly independent list would have at most \(M\) elements; but \((v_1,\ldots,v_{M+1})\) is linearly independent with \(M+1\) elements, a contradiction. Therefore, \(V\) is infinite-dimensional.

Theorem 2.28 Every vector space has a basis. Moreover, any two bases have the same cardinality.

Proof. Let \(V\) be a nonzero vector space and consider the collection \(A\) of all linearly independent subsets of \(V\). This collection is nonempty, since any single nonzero vector forms a linearly independent set. Now, if \(I_1\subset I_2 \subset \cdots\) is a chain of linearly independent subsets of \(V\), then the union of the chain is also a linearly independent set. Hence, every chain in \(A\) has an upper bound in \(A\), and according to Zorn's lemma, \(A\) must contain a maximal element; that is, \(V\) has a maximal linearly independent set, which is a basis for \(V\).

We may assume that all bases for \(V\) are infinite sets, for if any basis is finite, then \(V\) has a finite spanning set and so is a finite-dimensional vector space. Let \(B\) be a basis for \(V\). We may write \(B=\{b_i \mid i\in I\}\) where \(I\) is some index set used to index the vectors in \(B\); note that \(|I|=|B|\). Now let \(C\) be another basis for \(V\). Any vector \(c\in C\) can be written as a finite linear combination of the vectors in \(B\) with all coefficients nonzero, say \(c=\sum_{i\in U_c} r_i b_i\), where \(U_c\) is a finite subset of the index set \(I\). Because \(C\) is a basis for \(V\), the union of the \(U_c\)'s as \(c\) varies over \(C\) must be all of \(I\); in symbols, \(\bigcup_{c\in C} U_c=I\). For if some index \(k\in I\) belonged to none of the \(U_c\)'s, then every vector of \(C\) would be a finite linear combination of the vectors in \(B-\{b_k\}\), and since \(C\) spans \(V\), the proper subset \(B-\{b_k\}\) of the basis \(B\) would span \(V\), which is not the case. Therefore \(|B|=|I|\leq |C|\cdot\aleph_0=|C|\), since \(I\) is a union of \(|C|\) finite sets and \(C\) is infinite. Reversing the roles of \(B\) and \(C\) gives the reverse inequality, and therefore \(|B|=|C|\), as desired.