If you're the sort of person who cowers in fear whenever the word "tensor" is mentioned, this post is for you. We'll pick up right where we left off last time in our discussion of the dual space, and discover how tensor products are a natural extension of the ideas developed in that post.
Since I haven't posted for a while, I decided to break up my rants about homology with some posts on linear (and multilinear) algebra. In this post, we will (as usual) deal only with finite-dimensional vector spaces.
If you already knew some linear algebra before reading my posts, you might be wondering where the heck all the matrices are. The goal of this post is to connect the theory of linear maps and vector spaces to the theory of matrices and computation.
Just as we decided to study continuous functions between topological spaces and homomorphisms between groups, much of linear algebra is dedicated to the study of linear maps between vector spaces.
Bases for vector spaces are similar to bases for topological spaces. The idea is that a basis is a small, easy-to-understand set of vectors from which it is possible to recover pretty much everything about the vector space as a whole.
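As a quick illustration (my example, not part of the original discussion), consider the standard basis of $\mathbb{R}^2$: every vector is a linear combination of the two basis vectors, and the coefficients are uniquely determined.

```latex
% The standard basis of R^2:
e_1 = (1, 0), \qquad e_2 = (0, 1).
% Every vector of R^2 is a unique linear combination of e_1 and e_2:
(a, b) = a\,e_1 + b\,e_2 \quad \text{for all } (a, b) \in \mathbb{R}^2.
```

The "extrapolation" promised above is exactly this: once you know what a linear map does to $e_1$ and $e_2$, linearity determines what it does to every vector.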
I left off last time with an example of a sum of subspaces with a rather important property: every vector in the sum had a unique representation as a sum of vectors from the component subspaces.
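To make the uniqueness concrete, here is the simplest instance of such a sum (a standard example, assuming the decomposition of $\mathbb{R}^2$ into its coordinate axes):

```latex
% The two coordinate axes of R^2, as subspaces:
U = \{ (x, 0) : x \in \mathbb{R} \}, \qquad
W = \{ (0, y) : y \in \mathbb{R} \}.
% Every vector in the sum U + W = R^2 decomposes in exactly one way:
(a, b) = (a, 0) + (0, b).
```

Contrast this with taking $W' = \{ (y, y) : y \in \mathbb{R} \}$ alongside $U$: the sum is still all of $\mathbb{R}^2$, and the representation is still unique here, but if a vector could be split in two different ways, the sum would fail the property above.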
A vector space is a special kind of set containing elements called vectors, which can be added together and scaled in all the ways one would generally expect.
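For the record, $\mathbb{R}^n$ is the prototypical example of "adding and scaling in all the ways one would expect": both operations are defined componentwise.

```latex
% Componentwise operations on R^n:
(x_1, \dots, x_n) + (y_1, \dots, y_n)
  = (x_1 + y_1, \dots, x_n + y_n),
\qquad
\lambda \cdot (x_1, \dots, x_n)
  = (\lambda x_1, \dots, \lambda x_n).
```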