Vector Spaces (1) - Subspaces and Sums

  1. Introduction
  2. Vector Spaces
  3. Basic Properties
  4. Examples
  5. Subspaces and Sums

Introduction

It's almost ridiculous that I would expose you to free abelian groups before talking about vector spaces and linear algebra. Vector spaces and free abelian groups have a lot in common, but vector spaces are more familiar, more ubiquitous and easier to compute with.

At their core, vector spaces are very simple and their definition will closely mimic that of groups and topological spaces. Recall that a group is a set with an associative binary operation, an identity and inverses for all its elements. A topological space is a set together with a collection of "open sets" which includes the set itself and the empty set and is closed under finite intersections and arbitrary unions. Vector spaces are defined in a similar manner.

A vector space is a special kind of set containing elements called vectors, which can be added together and scaled in all the ways one would generally expect.

You have likely encountered the idea of a vector before as some sort of arrow, anchored to the origin in euclidean space with some well-defined magnitude and direction.

This is the sort of vector encountered in introductory physics classes. However, such arrows are not the only mathematical objects that can be added and scaled, so it would be silly to restrict our attention only to them. We will make a more abstract and inclusive definition of vector spaces, which are the main objects of study in linear algebra.

We would like our definition to include some way to scale vectors so that we can expand or shrink them in magnitude while preserving their direction. Since in general there is no concept of vector multiplication, we will need to bring in additional elements by which we are allowed to multiply our vectors to achieve a scaling effect. These scalars will be the elements of a field, which we have encountered before in my posts on constructing the rational numbers, but I will give the definition again because it is so important.

Definition. A field is a set $\F$ of elements called scalars, together with two binary operations:

Addition
which assigns to any pair of scalars $a,b\in\F$ the scalar $a+b\in\F$,

Multiplication
which assigns to any pair of scalars $a,b\in\F$ the scalar $ab\in\F$.

Any field and its operations must satisfy the following properties:

Additive Identity
There exists $0\in\F$ such that $0+a=a$ for every $a\in\F$.

Additive Inverses
For every $a\in\F$, there exists $b\in\F$ for which $a+b=0$.

Commutative Property of Addition
For all $a,b\in\F$, we have that $a+b=b+a$.

Associative Property of Addition
For all $a,b,c\in\F$, we have that $(a+b)+c=a+(b+c)$.

Multiplicative Identity
There exists $1\in\F$, with $1\ne 0$, such that $1a=a$ for every $a\in\F$.

Multiplicative Inverses
For every $a\in\F$ with $a\ne 0$, there exists $b\in\F$ for which $ab=1$.

Commutative Property of Multiplication
For all $a,b\in\F$, we have that $ab=ba$.

Associative Property of Multiplication
For all $a,b,c\in\F$, we have that $(ab)c=a(bc)$.

Distributive Property
For all $a,b,c\in\F$, we have that $a(b+c)=ab+ac$.

It is not too difficult to verify that the set $\R$ of real numbers is a field when considered with the usual addition and multiplication. Similarly, the sets $\C$ of complex numbers and $\Q$ of rational numbers form fields when equipped with their usual arithmetic.

There are other examples of fields besides these familiar ones. For example, the set $\Z_3$ of integers modulo $3$ is a field when considered with addition and multiplication modulo $3$. The tables below describe addition and multiplication in $\Z_3$, so you can check for yourself that the field axioms hold.
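$$\begin{array}{c|ccc}
+ & 0 & 1 & 2 \\ \hline
0 & 0 & 1 & 2 \\
1 & 1 & 2 & 0 \\
2 & 2 & 0 & 1
\end{array}
\qquad
\begin{array}{c|ccc}
\cdot & 0 & 1 & 2 \\ \hline
0 & 0 & 0 & 0 \\
1 & 0 & 1 & 2 \\
2 & 0 & 2 & 1
\end{array}$$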

In fact, there are many more examples of finite fields. For any prime number $p$, the set $\Z_p$ of integers modulo $p$ forms a field. However, for our purposes we will usually only be interested in the fields $\R$ and $\C$.
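Before we leave finite fields behind, note that for any particular candidate like $\Z_p$, the field axioms can be verified exhaustively by a computer. Here is a minimal Python sketch of such a brute-force check (the function name `is_field_Zp` is invented for the example):

```python
from itertools import product

def is_field_Zp(p):
    """Exhaustively check the field axioms for Z_p, the integers
    modulo p under modular addition and multiplication."""
    elems = range(p)
    add = lambda a, b: (a + b) % p
    mul = lambda a, b: (a * b) % p

    # Additive and multiplicative identities (0 and 1).
    if not all(add(0, a) == a and mul(1, a) == a for a in elems):
        return False
    # Every element has an additive inverse.
    if not all(any(add(a, b) == 0 for b in elems) for a in elems):
        return False
    # Every nonzero element has a multiplicative inverse.
    if not all(any(mul(a, b) == 1 for b in elems) for a in elems if a != 0):
        return False
    # Commutativity, associativity and distributivity.
    for a, b, c in product(elems, repeat=3):
        if add(a, b) != add(b, a) or mul(a, b) != mul(b, a):
            return False
        if add(add(a, b), c) != add(a, add(b, c)):
            return False
        if mul(mul(a, b), c) != mul(a, mul(b, c)):
            return False
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)):
            return False
    return True

print(is_field_Zp(3))  # True
print(is_field_Zp(4))  # False: 2 has no multiplicative inverse modulo 4
```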

Vector Spaces

With all of this in mind, we can move forward with defining the concept of a vector space over a field. This definition ensures that vectors interact with scalars and other vectors in a reasonable way.

Definition. A vector space over a field $\F$ is a set $V$ of elements called vectors, together with two operations:

Vector Addition
which assigns to any pair of vectors $\vec{u},\vec{v}\in V$ the vector $\vec{u}+\vec{v}\in V$,

Scalar Multiplication
which assigns to any scalar $a\in\F$ and any vector $\vec{v}\in V$ the vector $a\vec{v}\in V$.

Any vector space and its operations must satisfy the following properties:

Zero Vector
There exists $\vec{0}\in V$ such that $\vec{0}+\vec{v}=\vec{v}$ for every $\vec{v}\in V$.

Additive Inverses
For every $\vec{u}\in V$, there exists $\vec{v}\in V$ for which $\vec{u}+\vec{v}=\vec{0}$.

Commutative Property of Addition
For all $\vec{u},\vec{v}\in V$, we have that $\vec{u}+\vec{v}=\vec{v}+\vec{u}$.

Associative Property of Addition
For all $\vec{u},\vec{v}, \vec{w}\in V$, we have that $(\vec{u}+\vec{v})+\vec{w}=\vec{u}+(\vec{v}+\vec{w})$.

Compatibility with Field Multiplication
For all $a,b\in\F$ and $\vec{v}\in V$, we have that $(ab)\vec{v}=a(b\vec{v})$.

Scalar Multiplicative Identity
For every $\vec{v}\in V$, we have that $1\vec{v}=\vec{v}$.

First Distributive Property
For all $a,b\in\F$ and $\vec{v}\in V$, we have that $(a+b)\vec{v}=a\vec{v}+b\vec{v}$.

Second Distributive Property
For all $a\in\F$ and $\vec{u},\vec{v}\in V$, we have that $a(\vec{u}+\vec{v})=a\vec{u}+a\vec{v}$.

Although the choice of field is important when defining a particular vector space, it is often ignored when talking generally about vector spaces. This is because many results hold true for all vector spaces, regardless of the field over which they are defined. Whenever this information is important, it will be specified. Otherwise, we will often refer simply to a vector space $V$, with the understanding that some field $\F$ is lurking in the background.
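Because the axioms are all universally quantified equations, they can be checked exhaustively whenever the field and the set of vectors are both finite. As a concrete illustration, the following minimal Python sketch verifies every axiom for the set of pairs of elements of $\Z_3$ under component-wise operations, an instance of the $\F^n$ construction we will meet shortly:

```python
from itertools import product

P = 3  # the field Z_3, with arithmetic modulo 3

# Vectors are pairs of elements of Z_3; operations are component-wise.
vectors = list(product(range(P), repeat=2))
scalars = range(P)
zero = (0, 0)

def vadd(u, v):
    return tuple((a + b) % P for a, b in zip(u, v))

def smul(a, v):
    return tuple((a * x) % P for x in v)

# Zero vector and additive inverses.
assert all(vadd(zero, v) == v for v in vectors)
assert all(any(vadd(u, v) == zero for v in vectors) for u in vectors)

# Commutative and associative properties of vector addition.
for u, v in product(vectors, repeat=2):
    assert vadd(u, v) == vadd(v, u)
for u, v, w in product(vectors, repeat=3):
    assert vadd(vadd(u, v), w) == vadd(u, vadd(v, w))

# Compatibility, scalar identity and both distributive properties.
for v in vectors:
    assert smul(1, v) == v
for a, b in product(scalars, repeat=2):
    for v in vectors:
        assert smul((a * b) % P, v) == smul(a, smul(b, v))
        assert smul((a + b) % P, v) == vadd(smul(a, v), smul(b, v))
for a in scalars:
    for u, v in product(vectors, repeat=2):
        assert smul(a, vadd(u, v)) == vadd(smul(a, u), smul(a, v))

print("all eight vector space axioms hold for (Z_3)^2 over Z_3")
```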

Basic Properties

Now we will verify five basic facts that are true of all vector spaces. They may seem obvious, and many of them closely mimic analogous results we've already seen in group theory. Nonetheless, without proof we could not in good faith use them in our later arguments.

The first important property is that a vector space only has one zero vector.

Theorem. In any vector space $V$, the zero vector is unique.

Proof. Suppose there exist two zero vectors, $\vec{0}_1,\vec{0}_2\in V$. Then, using the definition of a zero vector and the commutative property, we have that

\begin{align}
\vec{v} &= \vec{0}_1+\vec{v}, \\
\vec{v} &= \vec{v}+\vec{0}_2,
\end{align}

for every $\vec{v}\in V$. Taking $\vec{v}=\vec{0}_2$ in the first equation and $\vec{v}=\vec{0}_1$ in the second, we see that

\begin{align}
\vec{0}_1 &= \vec{0}_1 + \vec{0}_2 \\
&= \vec{0}_2.
\end{align}

It follows that the zero vector is unique.

The next fact that we will prove is that each vector in a vector space has only one additive inverse. Its proof is extremely similar to the above.

Theorem. In any vector space $V$, every vector $\vec{u}\in V$ has a unique additive inverse.

Proof. Suppose $\vec{u}$ has two additive inverses, $\vec{v}_1,\vec{v}_2\in V$. Then, using the definition of an additive inverse and the commutative property, we have that

\begin{align}
\vec{v}_1+\vec{u} &= \vec{0}, \\
\vec{u}+\vec{v}_2 &= \vec{0}.
\end{align}

In light of these equations,

\begin{align}
\vec{v}_1 &= \vec{v}_1 + \vec{0} & \scriptstyle\textit{zero vector}\\
&= \vec{v}_1 + (\vec{u}+\vec{v}_2) & \scriptstyle\textit{additive inverses}\\
&= (\vec{v}_1 + \vec{u}) + \vec{v}_2 & \scriptstyle\textit{associativity}\\
&= \vec{0} + \vec{v}_2 & \scriptstyle\textit{additive inverses}\\
&= \vec{v}_2. & \scriptstyle\textit{zero vector}
\end{align}

It follows that $\vec{u}$ has a unique additive inverse.

Because of this theorem, no confusion arises if we write $-\vec{v}$ to denote the additive inverse of $\vec{v}$. We will adopt this notation for the rest of time.

We will not take the time to do this, but it should be clear how to modify the above two proofs to show that in any field $\F$, additive and multiplicative identities are unique, as well as additive and multiplicative inverses.

Next, we show that the scalar product of a field's additive identity $0$ with any vector yields the zero vector.

Theorem. In any vector space $V$, we have that $0\vec{v}=\vec{0}$ for every $\vec{v}\in V$.

Proof. We proceed via the following computation:

\begin{align}
0\vec{v} + 0\vec{v} &= (0+0)\vec{v} & \scriptstyle\textit{first distributive property}\\
&= 0\vec{v} & \scriptstyle\textit{additive identity}\\
&= 0\vec{v} + \vec{0}. & \scriptstyle\textit{zero vector}
\end{align}

Adding $-0\vec{v}$ to both sides yields $0\vec{v} = \vec{0}$, the desired equality.

It is very important to realize that in the above proof, the scalar $0$ on the left is the additive identity in the underlying field $\F$, while the vector $\vec{0}$ on the right is the zero vector in the vector space $V$. Remember that we do not have any concept of multiplying vectors.

We will now prove a similarly obvious result, whose proof closely mirrors the one above.

Theorem. In any vector space $V$ over a field $\F$, we have that $a\vec{0}=\vec{0}$ for every $a\in\F$.

Proof. We proceed via the following computation:

\begin{align}
a\vec{0} + a\vec{0} &= a(\vec{0}+\vec{0}) & \scriptstyle\textit{second distributive property}\\
&= a\vec{0} & \scriptstyle\textit{zero vector}\\
&= a\vec{0} + \vec{0}. & \scriptstyle\textit{zero vector}
\end{align}

Adding $-a\vec{0}$ to both sides yields $a\vec{0} = \vec{0}$, the desired equality.

There is one obvious fact left to prove, namely that the scalar product of $-1$ with any vector yields the additive inverse of that vector.

Theorem. In any vector space $V$, we have that $(-1)\vec{v}=-\vec{v}$ for every $\vec{v}\in V$.

Proof. We proceed via the following computation:

\begin{align}
(-1)\vec{v}+\vec{v} &= (-1)\vec{v}+1\vec{v} & \scriptstyle\textit{scalar multiplicative identity}\\
&= (-1+1)\vec{v} & \scriptstyle\textit{first distributive property}\\
&= 0\vec{v} & \scriptstyle\textit{additive inverses}\\
&= \vec{0}. & \scriptstyle\textit{previous theorem}
\end{align}

Adding $-\vec{v}$ to both sides yields $(-1)\vec{v} = -\vec{v}$, the desired equality.

Examples

We are now armed with a number of facts about abstract vector spaces and their interactions with scalars, but we have yet to exhibit a single actual example of a vector space. We will now examine some vector spaces that are important throughout the entire subject of linear algebra.

Example. It isn't too hard to see that the set $\{\0\}$, which contains only the zero vector, is a vector space over any field. We are forced to define vector addition and scalar multiplication in the only possible way, and $\0$ must act as both the zero vector and its own additive inverse. This is analogous to the trivial group in group theory, and is not a particularly interesting example.

Example. For any field $\F$, it happens that $\F$ is a vector space over itself, taking vector addition to be the same as field addition and scalar multiplication to be the same as field multiplication. This, again, is easy to check directly from the definitions. As specific examples, $\R$ and $\C$ are both vector spaces over themselves.

For the next example, recall the definition of cartesian product. In particular, recall that $\F^n$ is the set of all ordered tuples of length $n$ with components in $\F$. For instance, $(i, 3+2i, 4)$ is an element of $\C^3$ and $(0, -\pi, 8, \sqrt{2})$ is an element of $\R^4$.

Example. For any field $\F$, the set $\F^n$ is a vector space over $\F$ if we define vector addition and scalar multiplication in the following (obvious) way.

Given $\x=(x_1,x_2,\ldots,x_n)$ and $\y=(y_1,y_2,\ldots,y_n)$ in $\F^n$, we define addition component-wise in terms of field addition as follows:

$$\begin{align}
\x+\y &= (x_1,x_2,\ldots,x_n)+(y_1,y_2,\ldots,y_n) \\
&= (x_1+y_1,x_2+y_2,\ldots,x_n+y_n).
\end{align}$$

Similarly, given $a\in\F$ and $\x=(x_1,x_2,\ldots,x_n)\in\F^n$, we define scalar multiplication component-wise in terms of field multiplication as follows:

$$\begin{align}
a\x &= a(x_1,x_2,\ldots,x_n) \\
&= (ax_1,ax_2,\ldots,ax_n).
\end{align}$$

It is not terribly difficult to show that $\F^n$ does in fact constitute a vector space over $\F$ with these operations. For instance, clearly $\0=(0,0,\ldots,0)$ acts as the zero vector in $\F^n$, and $-\x=(-x_1,-x_2,\ldots,-x_n)$ acts as the additive inverse of $\x=(x_1,x_2,\ldots,x_n)$.
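These component-wise operations translate directly into code. Here is a minimal Python sketch, modeling $\F=\R$ with floating-point numbers (which is, of course, only an approximation of the real field):

```python
def vadd(x, y):
    """Component-wise vector addition in F^n."""
    assert len(x) == len(y), "vectors must have the same length"
    return tuple(a + b for a, b in zip(x, y))

def smul(a, x):
    """Scalar multiplication in F^n."""
    return tuple(a * c for c in x)

x = (0.0, -3.0, 8.0, 1.5)   # an element of R^4
y = (1.0, 2.0, 3.0, 4.0)

print(vadd(x, y))       # (1.0, -1.0, 11.0, 5.5)
print(smul(2.0, x))     # (0.0, -6.0, 16.0, 3.0)
print(smul(-1.0, x))    # (-0.0, 3.0, -8.0, -1.5), the additive inverse of x
```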

Our next example of a vector space may be slightly surprising at first, but it is both highly important and an excellent example of a vector space whose vectors are certainly not arrows of any kind. First, we will need the following definition.

Definition. Let $\F$ denote a field and let $n$ be a natural number. A polynomial with coefficients $a_0,a_1,\ldots,a_n$ in $\F$ is a function $p:\F\to\F$ of the form

$$\begin{align}
p(x) &= \sum_{i=0}^n a_ix^i \\
&= a_0 + a_1x+a_2x^2+\cdots+a_nx^n
\end{align}$$

for all $x\in\F$. If $a_n\ne 0$, we say that $n$ is the degree of $p$. We write $\mathscr{P}(\F)$ to denote the set of all polynomials with coefficients in $\F$.

As a concrete example, the function $p:\R\to\R$ defined by

$$p(x)=3+2x+8x^2$$

is a polynomial of degree two with coefficients in $\R$, namely $a_0=3$, $a_1=2$ and $a_2=8$. It is thus an element of $\mathscr{P}(\R)$.

If $p(x)=0$ for all $x\in\F$, we call $p$ the zero polynomial, whose degree is undefined. However, for the purposes of adding and multiplying polynomials, it is sometimes useful to formally treat the degree of the zero polynomial as $-\infty$.

Example. It is easy to see that for any field $\F$, the set $\mathscr{P}(\F)$ forms a vector space over $\F$ when equipped with the following definitions of vector addition and scalar multiplication.

For any two vectors $\p,\q\in\mathscr{P}(\F)$, we define their sum $\p+\q\in\mathscr{P}(\F)$ in terms of function addition. That is,

$$(\p+\q)(x)=\p(x)+\q(x)$$

for all $x\in\F$. Similarly, for any $a\in\F$ and $\p\in\mathscr{P}(\F)$, we define scalar multiplication as expected:

$$(a\p)(x)=a\p(x)$$

for all $x\in\F$. It is clear that the zero polynomial must act as the zero vector. Again, checking that $\mathscr{P}(\F)$ is actually a vector space with these definitions is easy but fairly time consuming, so I won't do it here.
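Concretely, a polynomial can be represented by its finite list of coefficients. The following minimal Python sketch implements evaluation, addition and scalar multiplication this way (the helper names `peval`, `padd` and `psmul` are invented for the example); the addition is exactly the coefficient-matching we will use in the subspace proof below:

```python
from itertools import zip_longest

# Represent a polynomial by its list of coefficients [a_0, a_1, ..., a_n],
# so that [3, 2, 8] stands for p(x) = 3 + 2x + 8x^2.

def peval(p, x):
    """Evaluate the polynomial p at the point x (Horner's rule)."""
    result = 0
    for coeff in reversed(p):
        result = result * x + coeff
    return result

def padd(p, q):
    """Vector addition in P(F): add coefficients, padding with zeros."""
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def psmul(a, p):
    """Scalar multiplication in P(F): scale every coefficient."""
    return [a * coeff for coeff in p]

p = [3, 2, 8]        # 3 + 2x + 8x^2
q = [1, 0, 0, 5]     # 1 + 5x^3

print(padd(p, q))    # [4, 2, 8, 5], i.e. 4 + 2x + 8x^2 + 5x^3
print(psmul(2, p))   # [6, 4, 16]
print(peval(p, 2))   # 3 + 2*2 + 8*4 = 39
```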

Subspaces and Sums

It often happens that a vector space contains a subset which also acts as a vector space under the same operations of addition and scalar multiplication. For instance, the vector space $\{\0\}$ is a (fairly boring) subset of any vector space. This phenomenon is so important that we give it a name.

Definition. A subset $U$ of a vector space $V$ is a subspace of $V$ if it is itself a vector space under the same operations of vector addition and scalar multiplication.

It should go without saying that $U$ and $V$ are defined over the same field.

Note the unfortunate naming conflict with subspaces of topological spaces. Normally they are discussed in different contexts and so this causes no confusion. In cases where confusion may arise, we will sometimes refer to them as topological subspaces and linear subspaces.

Using this definition, certainly $\{\0\}$ constitutes a subspace of every vector space. At the other extreme, every vector space is a subspace of itself (because every set is a subset of itself). However, the concept of a subspace wouldn't be very interesting if these were the only possibilities. Before demonstrating some nontrivial proper subspaces, we provide a more straightforward method for determining whether a subset of a vector space constitutes a subspace.

Proposition. A subset $U$ of a vector space $V$ is a subspace of $V$ if and only if the following three conditions hold:

  1. The zero vector is in $U$.
  2. The set $U$ is closed under addition of vectors.
  3. The set $U$ is closed under scalar multiplication.

Proving this would consist largely of statements like "the associative property holds in $U$ because it holds in $V$." Therefore, I will omit the proof of this proposition.
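These three conditions are also easy to spot-check numerically. The following Python sketch is only a heuristic, not a proof: random sampling can expose a violation of the conditions, but it can never establish that they hold. It tests a candidate subset of $\R^2$ described by a membership predicate, together with a way of sampling its members:

```python
import random

def spot_check_subspace(contains, sample, trials=1000):
    """Heuristically test the three subspace conditions for a subset
    of R^2. `contains` is a membership predicate and `sample` draws a
    random member. Returns a counterexample message, or None."""
    if not contains((0.0, 0.0)):
        return "zero vector is missing"
    for _ in range(trials):
        u, v = sample(), sample()
        w = (u[0] + v[0], u[1] + v[1])
        if not contains(w):
            return f"not closed under addition: {u} + {v} = {w}"
        a = random.uniform(-10.0, 10.0)
        if not contains((a * u[0], a * u[1])):
            return f"not closed under scaling: {a} * {u}"
    return None

# The horizontal axis U = {(x, 0)} passes (see the next example).
axis_member = lambda: (random.uniform(-10.0, 10.0), 0.0)
print(spot_check_subspace(lambda p: p[1] == 0.0, axis_member))  # None

def disk_member():
    # Rejection sampling from the closed unit disk.
    while True:
        p = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
        if p[0] ** 2 + p[1] ** 2 <= 1.0:
            return p

# The unit disk contains the zero vector but is not closed under
# addition or scaling, so a counterexample is found almost surely.
print(spot_check_subspace(lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0,
                          disk_member))
```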

Example. Consider the set $U=\{(x,0)\in\F^2\mid x\in\F\}$, where $\F$ is any field. This is the set of all ordered pairs with entries in $\F$ whose second component is the additive identity.

The zero vector $(0,0)$ is certainly in $U$. From the definition of addition in $\F^2$, the sum of two vectors in $U$ must be of the form

$$(x,0) + (y,0) = (x+y,0)$$

which is certainly in $U$. Similarly, the scalar product of any $a\in\F$ with any vector in $U$ must be of the form

$$a(x,0)=(ax,0)$$

which is also in $U$. We have shown that all three conditions are met, so $U$ is a subspace of $\F^2$.

Example. If $\F$ is a field and $n$ is a natural number, then the set of all polynomials whose degree is at most $n$ forms a subspace of $\P(\F)$. We denote this subspace $\P_n(\F)$.

To see that this is truly a subspace, we will verify the three conditions of the above proposition. We have already agreed to treat the degree of the zero polynomial as $-\infty$, which is certainly less than $n$, so $\0\in\P_n(\F)$.

Next, suppose that $\p,\q\in\P_n(\F)$ are nonzero (if either is the zero polynomial, the sum is simply the other polynomial, which certainly lies in $\P_n(\F)$). Then there exist degrees $j,k\le n$ and coefficients $a_0,\ldots,a_j,b_0,\ldots,b_k\in\F$ for which

$$\begin{align}
\p(x) &= \sum_{i=0}^j a_ix^i, \\
\q(x) &= \sum_{i=0}^k b_ix^i
\end{align}$$

for every $x\in\F$. Since $j$ is the degree of $\p$ and $k$ is the degree of $\q$, we know by definition that $a_j,b_k\ne 0$. Suppose without loss of generality that $j\le k$. It follows that

$$\begin{align}
(\p+\q)(x) &= \sum_{i=0}^j a_ix^i + \sum_{i=0}^k b_ix^i \\
&= \sum_{i=0}^j (a_i+b_i)x^i + \sum_{i=j+1}^k b_ix^i
\end{align}$$

for all $x\in\F$. Note that we have combined like terms until the smaller polynomial ran out, then added terms from the higher-degree polynomial until it was also depleted.

For each $0\le i\le k$, define

$$c_i =
\begin{cases}
a_i + b_i & \text{if } i\le j, \\
b_i & \text{if } i>j.
\end{cases}$$

Then $c_0,\ldots,c_k\in\F$ are the coefficients of the polynomial $\p+\q$. That is,

$$(\p+\q)(x)=\sum_{i=0}^k c_i x^i.$$

If $c_k\ne 0$ then $\p+\q$ is a polynomial of degree $k$. If $c_k = 0$ then $\p+\q$ is a polynomial of degree less than $k$. Since $k\le n$, it follows either way that $\p+\q\in\P_n(\F)$.

Next, suppose that $a\in\F$ and $\p\in\P_n(\F)$. Then there exists a degree $j\le n$ and coefficients $b_0,\ldots,b_j\in\F$ for which

$$\p(x)=\sum_{i=0}^j b_i x^i$$

for all $x\in\F$. Thus,

$$\begin{align}
a\p(x) &= a\sum_{i=0}^j b_i x^i \\
&= \sum_{i=0}^j ab_i x^i
\end{align}$$

for every $x\in\F$. If $a=0$ then clearly $a\p$ is the zero polynomial and thus $a\p=\0\in\P_n(\F)$. If $a\ne 0$ then for each $0\le i\le j$, define $c_i=ab_i$. Then $c_0,\ldots,c_j$ are the coefficients of $a\p$. That is,

$$a\p(x)=\sum_{i=0}^j c_i x^i.$$

Therefore, $a\p$ is a polynomial of degree at most $j\le n$, so $a\p\in\P_n(\F)$. We have established (rather painstakingly) that all three conditions hold, and so we conclude that $\P_n(\F)$ is in fact a subspace of $\P(\F)$.
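The bookkeeping in this proof is just coefficient padding, and the interesting case is $c_k=0$, where the degree of the sum drops. A quick Python sketch of the phenomenon:

```python
from itertools import zip_longest

def padd(p, q):
    # Coefficient-wise sum, padding the shorter list with zeros:
    # this is precisely the c_i construction from the proof above.
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def pdegree(p):
    """Degree of a coefficient list; None for the zero polynomial."""
    for i in reversed(range(len(p))):
        if p[i] != 0:
            return i
    return None

p = [1, 2, 0, 7]     # degree 3
q = [5, -2, 0, -7]   # degree 3, chosen so the leading terms cancel
print(pdegree(padd(p, q)))  # 0: only the constant term 6 survives
print(pdegree([0, 0, 0]))   # None: the zero polynomial
```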

We will now work toward defining sums of subspaces. We will begin with the following theorem, which establishes that the intersection of subspaces is always a subspace.

Theorem. If $U$ and $W$ are subspaces of a vector space $V$, their intersection $U\cap W$ is also a subspace of $V$.

Proof. We will again show that the conditions of the proposition hold. Since $U$ and $W$ are subspaces, we know that they both contain the zero vector and are closed under vector addition and scalar multiplication.

Since $\0\in U$ and $\0\in W$, certainly $\0\in U\cap W$.

For any vectors $\u,\v\in U\cap W$, we have that $\u,\v\in U$ and $\u,\v\in W$. Since both subspaces are closed under vector addition, $\u+\v\in U$ and $\u+\v\in W$. Thus, $\u+\v\in U\cap W$.

Lastly, suppose $a\in\F$ and $\v\in U\cap W$. Then $\v\in U$ and $\v\in W$, so $a\v\in U$ and $a\v\in W$ because both subspaces are closed under scalar multiplication. It follows that $a\v\in U\cap W$, completing the proof.

Using an inductive argument, it is easy to show that this result can be extended to any finite intersection of subspaces.

Naturally, we might now consider the question of whether the union of subspaces is a subspace. A moment's thought will probably be enough to convince you that this is not true in general. The following example shows why.

Example. Consider the subspaces

$$\begin{align}
U &= \{(x,0)\in\R^2\mid x\in\R\}, \\
W &= \{(0,y)\in\R^2\mid y\in\R\}
\end{align}$$

of $\R^2$. We can picture $U$ as the space of all vectors lying on the horizontal axis of the cartesian plane, and $W$ as the space of all vectors lying on the vertical axis. The union of these subspaces, $U\cup W$, is not closed under vector addition. To illustrate this, notice that $(1,0)\in U$ and $(0,1)\in W$, and that $(1,0)+(0,1)=(1,1)$. However, $(1,1)\notin U$ and $(1,1)\notin W$, so certainly $(1,1)\notin U\cup W$. Therefore, $U\cup W$ is not a subspace of $\R^2$.

This is somewhat unfortunate behavior, because intersections and unions of sets are very natural and easy to work with. We would therefore like a way of combining subspaces that produces a new subspace as close as possible to their union, in the sense that it is the smallest subspace containing that union. To this end, we define the sum of subspaces in the following manner.

Definition. Let $U$ and $W$ be subspaces of a vector space $V$. The sum of $U$ and $W$ is the set

$$U+W=\{\u+\w\in V\mid\u\in U,\w\in W\}.$$

Similarly, the sum of subspaces $U_1,\ldots,U_n$ of $V$ is the set

$$\sum_{i=1}^n U_i = \{\u_1 + \cdots + \u_n\in V\mid \u_1\in U_1,\ldots,\u_n\in U_n\}.$$

This definition certainly fixes the issue we had with unions of subspaces, because $U+W$ contains the sum of any vector in $U$ and any vector in $W$ by construction. Of course, we should make sure that the sum of subspaces is actually a subspace.
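Over a finite field we can compute the sum of two subspaces exhaustively, which makes the contrast with the union vivid. Here is a small Python sketch using the two axes of $\F^2$ with $\F=\Z_3$, in the spirit of the earlier union example:

```python
from itertools import product

P = 3  # work in F^2 with F = Z_3, so each subspace is a finite set

U = {(x, 0) for x in range(P)}   # the "horizontal axis"
W = {(0, y) for y in range(P)}   # the "vertical axis"

def subspace_sum(A, B):
    """The sum of two subspaces: all pairwise sums of their elements."""
    return {tuple((a + b) % P for a, b in zip(u, w))
            for u, w in product(A, B)}

print(sorted(U | W))                 # the union: only 5 of the 9 vectors
print(sorted(subspace_sum(U, W)))    # the sum: all 9 vectors of F^2
print((1, 1) in (U | W))             # False
print((1, 1) in subspace_sum(U, W))  # True: (1, 1) = (1, 0) + (0, 1)
```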

Theorem. If $U$ and $W$ are subspaces of a vector space $V$, their sum $U+W$ is also a subspace of $V$.

Proof. Since $U$ and $W$ are subspaces, we know that they both contain the zero vector and are closed under vector addition and scalar multiplication.

Since $\0\in U$ and $\0\in W$, clearly $\0\in U+W$ because $\0=\0+\0$, i.e., it is the sum of a vector in $U$ and a vector in $W$.

Next, let $\v_1,\v_2\in U+W$. Then

$$\begin{align}
\v_1 &= \u_1+\w_1, \\
\v_2 &= \u_2+\w_2
\end{align}$$

for some vectors $\u_1,\u_2\in U$ and $\w_1,\w_2\in W$. Since $U$ and $W$ are closed under vector addition, we know that $\u_1+\u_2\in U$ and $\w_1+\w_2\in W$. Therefore,

$$\begin{align}
\v_1+\v_2 &= (\u_1+\w_1) + (\u_2+\w_2) \\
&= (\u_1+\u_2) + (\w_1+\w_2) \\
&\in U+W
\end{align}$$

since it is the sum of a vector in $U$ and a vector in $W$.

Finally, let $a\in\F$ and $\v\in U+W$, so that $\v=\u+\w$ for some $\u\in U$ and some $\w\in W$. Since $U$ and $W$ are closed under scalar multiplication, we know that $a\u\in U$ and $a\w\in W$. Thus,

$$\begin{align}
a\v &= a(\u+\w) \\
&= a\u + a\w \\
&\in U+W
\end{align}$$

since it is the sum of a vector in $U$ and a vector in $W$.

Now that we have established that the sum of two subspaces is a subspace, we will prove our assertion above — that the sum of two subspaces is the smallest subspace containing their union. We do this by showing that any subspace containing their union must also contain their sum.

Theorem. If $U_1$ and $U_2$ are subspaces of a vector space $V$, then every subspace containing $U_1\cup U_2$ also contains $U_1+U_2$.

Proof. Suppose $W$ is a subspace of $V$ for which $U_1\cup U_2\subseteq W$, and let $\v\in U_1+U_2$. Then $\v=\u_1+\u_2$ for some $\u_1\in U_1$ and some $\u_2\in U_2$. Since $U_1\cup U_2\subseteq W$, both $\u_1$ and $\u_2$ lie in $W$, and $W$ is closed under vector addition because it is a subspace of $V$. Hence $\v\in W$, and thus $U_1+U_2\subseteq W$, completing the proof.

The above is equivalent to saying that $U_1+U_2$ is the intersection of all subspaces containing $U_1\cup U_2$. Note that $U_1+U_2$ itself contains $U_1\cup U_2$, since for instance any $\u\in U_1$ can be written as $\u+\0$ with $\0\in U_2$. Again, this result extends to any finite number of subspaces via a simple inductive argument.

This verifies that the sum of subspaces acts as we originally intended, in that it is as close to their union as possible while remaining a subspace. Now that we have demonstrated this nice characterization of sums, I will provide an example before moving on.

Example. Consider the subspaces

$$\begin{align}
U_1 &= \{(x,0,0)\in\R^3\mid x\in\R\}, \\
U_2 &= \{(0,y,0)\in\R^3\mid y\in\R\}, \\
U_3 &= \{(0,0,z)\in\R^3\mid z\in\R\}
\end{align}$$

of the vector space $\R^3$. These correspond to the subspaces of vectors lying along the $x$, $y$ and $z$ axes, respectively. The sum of these three subspaces is $\R^3$ itself, because for any vector $(x,y,z)\in\R^3$, we may write

$$(x,y,z)=(x,0,0)+(0,y,0)+(0,0,z),$$

which is a sum of vectors in $U_1$, $U_2$ and $U_3$, respectively. This sum has a particularly nice property — namely that every vector in $\R^3$ has a unique representation as a sum of vectors in $U_1$, $U_2$ and $U_3$. Put another way, this sum is special because we are forced to express $(x,y,z)$ as above — there is no other way to break it down as such a sum. This is really why we choose to express points in $\R^3$ using these coordinates. In fact, this property of sums is so important that it has a name. But I will discuss direct sums next time, since it's very late and I'm a sleepyhead.