Linear Maps

Let V and W denote two vector spaces over a field F. A linear map T is a function from V to W that satisfies the following properties:

  • Additivity: T(u+v) = Tu + Tv for all u, v ∈ V
  • Homogeneity: T(λv) = λ(Tv) for all v ∈ V and λ ∈ F
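As a quick numerical sketch of the two defining properties, we can check them for a concrete map on R^2 (the particular map and vectors below are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# An example linear map T: R^2 -> R^3, chosen as T(x, y) = (x + y, 2x, 3y).
def T(v):
    x, y = v
    return np.array([x + y, 2 * x, 3 * y])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
lam = 4.0

# Additivity: T(u + v) equals T(u) + T(v)
print(np.allclose(T(u + v), T(u) + T(v)))   # True
# Homogeneity: T(lam * v) equals lam * T(v)
print(np.allclose(T(lam * v), lam * T(v)))  # True
```

A function failing either check (e.g. T(x, y) = (x + 1, y)) would not be a linear map.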

Some authors refer to linear maps as linear transformations. We may also see the notation T(v) instead of Tv; both are correct, and Tv emphasizes viewing T as an operator.

The set of all linear maps from V to W is denoted by L(V,W). Examples of linear maps are the identity map I ∈ L(V,V) (which maps each element to itself), differentiation, integration, etc.

An important class of linear maps is from F^n to F^m, given by the transformation T(x_1, x_2, …, x_n) = (A_{1,1}x_1 + ⋯ + A_{1,n}x_n, …, A_{m,1}x_1 + ⋯ + A_{m,n}x_n) for (x_1, x_2, …, x_n) ∈ F^n, where the A_{j,k} ∈ F.
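In coordinates, this transformation is exactly matrix-vector multiplication. A minimal sketch with an arbitrary example matrix A (m = 2, n = 3):

```python
import numpy as np

# T(x_1, ..., x_n) = (sum_k A[0,k] x_k, ..., sum_k A[m-1,k] x_k)
# is matrix-vector multiplication A @ x.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])   # an arbitrary 2x3 example

def T(x):
    return A @ x

x = np.array([1.0, 1.0, 2.0])
print(T(x))   # [3. 5.]  (row 1: 1+2+0, row 2: 0-1+6)
```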

Suppose v_1, …, v_n is a basis of V and w_1, …, w_n are any vectors in W. Then there exists a unique linear map T ∈ L(V,W) such that Tv_j = w_j for j = 1, 2, …, n.
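For V = W = R^2 we can build this unique map explicitly: if the basis vectors are the columns of a matrix Vmat and their prescribed images are the columns of Wmat, then the map's matrix is Wmat·Vmat⁻¹. The particular basis and images below are illustrative choices:

```python
import numpy as np

# Basis of R^2 (columns of Vmat) and chosen images w_j (columns of Wmat).
Vmat = np.array([[1.0, 1.0],
                 [1.0, 0.0]])    # v1 = (1,1), v2 = (1,0)
Wmat = np.array([[2.0, 0.0],
                 [0.0, 3.0]])    # require T v1 = (2,0), T v2 = (0,3)

# The unique linear map with T v_j = w_j has matrix M = Wmat @ inv(Vmat).
M = Wmat @ np.linalg.inv(Vmat)

print(np.allclose(M @ Vmat[:, 0], Wmat[:, 0]))  # True
print(np.allclose(M @ Vmat[:, 1], Wmat[:, 1]))  # True
```

Uniqueness follows because a linear map is determined by its values on a basis.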

Properties

We can define addition and scalar multiplication on the set of linear maps L(V,W) as follows: (S+T)(v) = Sv + Tv and (λT)(v) = λ(Tv) for all S, T ∈ L(V,W), λ ∈ F, and v ∈ V.

With these addition and scalar multiplication operations, the set of linear maps L(V,W) is itself a vector space.
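In matrix form, these operations are ordinary matrix addition and scaling; a quick check with arbitrary example maps on R^2:

```python
import numpy as np

S = np.array([[1.0, 0.0], [2.0, 1.0]])   # arbitrary example maps on R^2
T = np.array([[0.0, 1.0], [1.0, 0.0]])
v = np.array([3.0, -1.0])
lam = 2.5

# (S + T)(v) = Sv + Tv, and (lam T)(v) = lam (Tv)
print(np.allclose((S + T) @ v, S @ v + T @ v))    # True
print(np.allclose((lam * T) @ v, lam * (T @ v)))  # True
```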

Product of Linear Maps

The product of linear maps is defined by (ST)(u) = S(Tu) for all u ∈ U, where T ∈ L(U,V) and S ∈ L(V,W), so that ST ∈ L(U,W). Thus ST is only defined when T maps into the domain of S. Note that ST ≠ TS in general: for the equality to hold, both sides of the equation must make sense, and the products must indeed be equal.
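For maps on F^n, the product of linear maps corresponds to matrix multiplication, and non-commutativity is easy to exhibit with a small example (the matrices below are an illustrative choice):

```python
import numpy as np

# (ST)(u) = S(Tu) corresponds to (S @ T) @ u == S @ (T @ u).
S = np.array([[0.0, 1.0], [0.0, 0.0]])
T = np.array([[1.0, 0.0], [0.0, 0.0]])
u = np.array([1.0, 1.0])

print(np.allclose((S @ T) @ u, S @ (T @ u)))  # True: the product is composition
print(np.allclose(S @ T, T @ S))              # False: ST != TS here
```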

Additionally, linear maps satisfy several algebraic properties:

  • Associativity: (T_1 T_2)T_3 = T_1(T_2 T_3) whenever both T_1 T_2 and T_2 T_3 make sense; the three maps are in general defined on different spaces.
  • Identity: TI = IT = T for T ∈ L(V,W), where the first identity map is on V while the second identity map is on W, so that the products make sense.
  • Distributive property: (S_1 + S_2)T = S_1 T + S_2 T and S(T_1 + T_2) = ST_1 + ST_2 whenever the products make sense, with T, T_1, T_2 ∈ L(U,V) and S, S_1, S_2 ∈ L(V,W).

Null Space

The null space is the subset of V that gets mapped to 0. Mathematically, null T = {v ∈ V : Tv = 0} for T ∈ L(V,W).

We can easily verify that the null space is a subspace of V: it contains the additive identity (T0 = 0), it is closed under addition (if Tu = Tv = 0, then T(u+v) = Tu + Tv = 0), and it is closed under scalar multiplication (if Tv = 0, then T(λv) = λ(Tv) = 0).

The dimension of the null space (the number of vectors in a basis of the null space) is called the nullity. Some authors refer to the null space as the kernel of the linear map.
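Numerically, a basis of the null space of a matrix map can be read off from the singular value decomposition: the right-singular vectors whose singular values are (numerically) zero span null A. A sketch with an example rank-1 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so nullity = 3 - 1 = 2

def null_basis(A, tol=1e-10):
    # Rows of Vt beyond the rank span the null space of A.
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T                 # columns form a basis of null A

N = null_basis(A)
print(N.shape[1])                      # 2  (the nullity)
print(np.allclose(A @ N, 0))           # True: every basis vector maps to 0
```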

Injective or one-to-one

A linear map is injective if it maps distinct elements of V to distinct elements of W: T ∈ L(V,W) is injective if Tv = Tw implies v = w for all v, w ∈ V.

Injectivity is equivalent to the null space being the trivial subspace {0}. To prove this equivalence, both directions need to be shown: injectivity implies null T = {0}, and null T = {0} implies injectivity.

Range

The range of a linear map is the set of outputs of the map: range T = {w ∈ W : Tv = w for some v ∈ V} = {Tv : v ∈ V}. The range T is a subspace of W.
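For a matrix map T(x) = Ax, the range is the column space of A, so dim range T is the rank of A, and membership of a vector w in the range amounts to Ax = w having an exact solution. A sketch with an example 3×2 matrix:

```python
import numpy as np

# range T is the column space of A; dim range T = rank A.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.matrix_rank(A))        # 2: dim range T

# w = (1, 2, 3) lies in range T, since A x = w has the exact solution x = (1, 2):
w = np.array([1.0, 2.0, 3.0])
x, res, *_ = np.linalg.lstsq(A, w, rcond=None)
print(np.allclose(A @ x, w))           # True: w is in the range
```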

Surjective or onto

A linear map is said to be surjective if range T = W, i.e., every element of W is mapped to by an element in V.

Fundamental theorem of linear maps

Suppose V is finite-dimensional and T ∈ L(V,W). Then dim V = dim null T + dim range T.
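For a matrix map this is the rank-nullity theorem: with n columns, dim range T = rank A and dim null T = n − rank A. A quick numerical check with an example matrix whose third row is the sum of the first two:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 4.0, 5.0]])   # row 3 = row 1 + row 2, so rank 2

n = A.shape[1]                         # dim V = number of columns
rank = np.linalg.matrix_rank(A)        # dim range T
nullity = n - rank                     # dim null T

print(rank, nullity)                   # 2 2
print(rank + nullity == n)             # True: dim V = dim null T + dim range T
```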

Suppose V and W are finite-dimensional vector spaces with dim V > dim W. Then there is no injective linear map from V to W. This follows from the fundamental theorem of linear maps: dim null T = dim V − dim range T ≥ dim V − dim W > 0.

Further, if dim V < dim W, then no linear map from V to W is surjective. This follows from the fundamental theorem of linear maps: dim range T = dim V − dim null T ≤ dim V < dim W.

Change of Bases

The following is the fundamental relation for a change of basis. For two bases u = u_1, …, u_n and v = v_1, …, v_m, the matrix S_{uv} defining the transformation from u-coordinates to v-coordinates satisfies S_{uv}[w]_u = [w]_v, with u_j = s_{1j}v_1 + s_{2j}v_2 + ⋯ + s_{mj}v_m, where [w]_u and [w]_v denote the representations of the same vector w in the two bases, u_j is the jth vector of basis u, and s_{ij} is the element in the ith row and jth column of the m×n matrix S_{uv}.

Then, the columns of S_{uv} are the vectors [u_j]_v, i.e., the elements of basis u expressed in terms of v.

In particular, suppose we want to change from any basis to the standard basis; then v represents the standard basis in the above notation. In this case, S_{uv} is the representation of the original basis in terms of the standard basis, which implies that the columns of S_{uv} are the original basis vectors themselves.

The inverse of S_{uv} is S_{vu}, i.e., the matrix for the change of basis from v back to u.

Consider the change of basis from (1,1), (1,0) to the standard basis (1,0), (0,1). The transformation matrix is

  [ 1  1 ]
  [ 1  0 ]
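A quick numerical check of this example (the coordinate vector (2, 3) is an arbitrary choice):

```python
import numpy as np

# Change of basis from u = {(1,1), (1,0)} to the standard basis:
# the columns of S are the u-basis vectors themselves.
S = np.array([[1.0, 1.0],
              [1.0, 0.0]])

w_u = np.array([2.0, 3.0])       # coordinates of w in basis u
w_std = S @ w_u                  # coordinates in the standard basis
print(w_std)                     # [5. 2.]: w = 2*(1,1) + 3*(1,0)

# The reverse change of basis is the inverse matrix:
print(np.allclose(np.linalg.inv(S) @ w_std, w_u))   # True
```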

The basis change is easier to understand in the case of polynomials. Consider the change from the basis x+1, x−1, 2x^2 (basis 1) to the standard basis 1, x, x^2 (basis 2):

  S_{12} = [ 1  -1   0 ]
           [ 1   1   0 ]
           [ 0   0   2 ]

The inverse of this is

  S_{21} = S_{12}^{-1} = [  1/2   1/2    0  ]
                         [ -1/2   1/2    0  ]
                         [   0     0    1/2 ]

To transform the polynomial a + bx + cx^2 to basis 1, we can use the above inverse mapping:

  [  1/2   1/2    0  ] [ a ]   [ (a+b)/2 ]
  [ -1/2   1/2    0  ] [ b ] = [ (b−a)/2 ]
  [   0     0    1/2 ] [ c ]   [   c/2   ]

or, the polynomial can be represented in the new basis as ((a+b)/2)(x+1) + ((b−a)/2)(x−1) + (c/2)(2x^2). When simplified, this is a + bx + cx^2 itself.
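The worked polynomial example can be verified numerically (a, b, c below are arbitrary sample coefficients):

```python
import numpy as np

# Columns of S12 are x+1, x-1, 2x^2 written in the standard basis {1, x, x^2}.
S12 = np.array([[1.0, -1.0, 0.0],
                [1.0,  1.0, 0.0],
                [0.0,  0.0, 2.0]])
S21 = np.linalg.inv(S12)

a, b, c = 1.0, 2.0, 3.0                  # the polynomial a + b x + c x^2
coords = S21 @ np.array([a, b, c])       # coordinates in basis 1
print(coords)                            # [(a+b)/2, (b-a)/2, c/2] = [1.5 0.5 1.5]

# Mapping back recovers the standard coordinates:
print(np.allclose(S12 @ coords, [a, b, c]))   # True
```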