Doing algebra with vectors
We can carry out calculations with vectors. In this lecture, we will see how to perform addition, subtraction, scalar multiplication, and the dot product of vectors.
For the zero vector, we follow the convention that $-\mathbf{0} = \mathbf{0}$, so that every vector, including $\mathbf{0}$, has an additive inverse.
As expected, in coordinates, \[ - \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -x \\ -y \end{bmatrix}. \]
In the context of linear algebra, $-\mathbf{v}$ is called the additive inverse of $\mathbf{v}$.
Recall that in the context of this course, a scalar is simply a real number. We can multiply a vector by a scalar; this operation is called "scalar multiplication".
For a positive scalar $k$, $k \, \mathbf{v}$ is simply a stretched or squeezed version of $\mathbf{v}$. E.g., $3 \mathbf{v}$ is exactly 3 times as long as $\mathbf{v}$ itself. By extension, $0 \mathbf{v} = \mathbf{0}$.
As expected, \[ (-1) \mathbf{v} = -\mathbf{v}, \] and for $k < 0$, \[ k \, \mathbf{v} = |k|(-\mathbf{v}). \]
It should come as no surprise that, in coordinates, \[ k \, \begin{bmatrix} x \\ y \end{bmatrix} \;=\; \begin{bmatrix} k \, x \\ k \, y \end{bmatrix} \] i.e., we simply scale each component by the same scalar.
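As a quick sketch in Python (not part of the lecture; the function name `scale` is illustrative), scalar multiplication in coordinates is just component-wise scaling:

```python
# Scalar multiplication in coordinates: scale each component by k.
def scale(k, v):
    return [k * x for x in v]

v = [3.0, 4.0]
print(scale(3, v))    # stretches v: [9.0, 12.0]
print(scale(-1, v))   # the additive inverse -v: [-3.0, -4.0]
print(scale(0, v))    # the zero vector: [0.0, 0.0]
```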
One important operation defined in terms of scalar multiplication is "normalization", an operation that creates a unit vector.
This operation takes in the nonzero vector $\mathbf{v}$ and produces $\frac{1}{\|\mathbf{v}\|} \, \mathbf{v}$. The resulting vector has the same direction as $\mathbf{v}$, but has a magnitude of 1.
Clearly, it does not make sense to normalize the zero vector.
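A minimal Python sketch of normalization (helper names `norm` and `normalize` are illustrative), including the guard for the zero vector:

```python
import math

def norm(v):
    # Euclidean magnitude of v
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    n = norm(v)
    if n == 0:
        # The zero vector has no direction, so it cannot be normalized.
        raise ValueError("cannot normalize the zero vector")
    return [x / n for x in v]

u = normalize([3.0, 4.0])  # [0.6, 0.8], a unit vector in the same direction
```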
We can also add two vectors together, and the sum is also a vector.
Equivalently, we can also place the initial point of $\mathbf{v}$ at the terminal point of $\mathbf{u}$. Then $\mathbf{u} + \mathbf{v}$ is the vector represented by the directed line segment that goes from the initial point of $\mathbf{u}$ to the terminal point of $\mathbf{v}$.
As expected, we can define the difference of two vectors in terms of the vector sum and the additive inverse: \[ \mathbf{u} - \mathbf{v} = \mathbf{u} + (-\mathbf{v}). \]
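This definition translates directly into a Python sketch (helper names are illustrative):

```python
def add(u, v):
    return [a + b for a, b in zip(u, v)]

def neg(v):
    # additive inverse: negate each component
    return [-x for x in v]

def sub(u, v):
    # u - v is defined as u + (-v)
    return add(u, neg(v))

print(sub([5.0, 2.0], [1.0, 3.0]))  # [4.0, -1.0]
```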
Vectors form a new kind of mathematical object, different from numbers. Fortunately, some familiar algebraic properties remain true.
From the geometric interpretation, we can see that \begin{align*} \mathbf{v} + \mathbf{0} &= \mathbf{v} \\ \mathbf{v} - \mathbf{0} &= \mathbf{v} \\ \mathbf{0} - \mathbf{v} &= -\mathbf{v} \end{align*}
It is also easy to verify the commutative property \[ \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \] still holds.
Once we understand the geometric interpretation of vector sums, it is easy to see how it works algebraically in coordinates: \[ \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \;=\; \begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \end{bmatrix}. \] In other words, in the Cartesian coordinate system, vector addition just reduces to component-wise addition.
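A quick sketch in Python (the function name `add` is illustrative) of component-wise addition, which also makes the commutative property obvious:

```python
def add(u, v):
    # Vector addition in coordinates: add component by component.
    return [a + b for a, b in zip(u, v)]

u, v = [1.0, 2.0], [3.0, -1.0]
print(add(u, v))              # [4.0, 1.0]
assert add(u, v) == add(v, u) # commutativity holds component-wise
```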
From the graphical interpretation of vector sums, it is easy to come up with an upper bound for the magnitude of the sum of two vectors in terms of the magnitudes of the two individual vectors, the triangle inequality: \[ \| \mathbf{u} + \mathbf{v} \| \;\le\; \| \mathbf{u} \| + \| \mathbf{v} \|. \]
The above inequality becomes an equality if and only if $\mathbf{u}$ and $\mathbf{v}$ have the same direction (including the cases where $\mathbf{u}$ or $\mathbf{v}$ is the zero vector).
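We can check the triangle inequality numerically in Python (a sketch with illustrative helper names, using a strict case and an equality case):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def add(u, v):
    return [a + b for a, b in zip(u, v)]

# Perpendicular vectors: the bound is strict (5 < 7).
u, v = [3.0, 0.0], [0.0, 4.0]
assert norm(add(u, v)) <= norm(u) + norm(v)

# Vectors pointing the same way: the bound is attained (9 = 9).
w = [6.0, 0.0]
assert math.isclose(norm(add(u, w)), norm(u) + norm(w))
```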
From the formula for vector addition in coordinates, it is not hard to verify the following algebraic properties \begin{align*} k (\mathbf{u} + \mathbf{v}) &= (k \, \mathbf{u}) + (k \, \mathbf{v}) \\ (k_1 \mathbf{v}) + (k_2 \mathbf{v}) &= (k_1 + k_2) \, \mathbf{v} \\ k_1 (k_2 \mathbf{v}) &= (k_1 k_2) \, \mathbf{v} \\ 0 \, \mathbf{v} &= \mathbf{0} \end{align*} for scalars $k,k_1,k_2$ and vectors $\mathbf{u},\mathbf{v}$.
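These identities can be spot-checked in Python (a sketch; helper names are illustrative, and the chosen scalars keep the floating-point arithmetic exact):

```python
def scale(k, v):
    return [k * x for x in v]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

u, v = [1.0, 2.0], [3.0, -1.0]
k, k1, k2 = 2.0, 3.0, 4.0
assert scale(k, add(u, v)) == add(scale(k, u), scale(k, v))  # distributivity over vectors
assert add(scale(k1, v), scale(k2, v)) == scale(k1 + k2, v)  # distributivity over scalars
assert scale(k1, scale(k2, v)) == scale(k1 * k2, v)          # associativity of scaling
assert scale(0, v) == [0.0, 0.0]                             # 0 v = zero vector
```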
So far, we have learned how to add and subtract vectors as well as how to multiply vectors by scalars. It is then natural to ask if we can multiply vectors. It turns out there are many types of multiplication, and here we focus on one of them: the dot product, defined in coordinates as \[ \mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2. \]
One very important observation is that the dot product of two vectors is a scalar (not a vector). This is why it is also known as the scalar product (not to be confused with scalar multiplication). In the context of linear algebra, this is also one example of an inner product.
From the definition, we can establish that for any vector $\mathbf{v}$ with components $v_1$ and $v_2$, \[ \mathbf{v} \cdot \mathbf{v} = v_1^2 + v_2^2 = \| \mathbf{v} \|^2. \]
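A quick numerical check of this identity in Python (the function name `dot` is illustrative):

```python
import math

def dot(u, v):
    # Dot product in coordinates: sum of component-wise products.
    return sum(a * b for a, b in zip(u, v))

v = [3.0, 4.0]
assert dot(v, v) == 3.0**2 + 4.0**2 == 25.0
# The magnitude is recovered as the square root of v . v:
assert math.sqrt(dot(v, v)) == 5.0
```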
For vectors $\mathbf{u},\mathbf{v},\mathbf{w}$, and scalar $k$, \begin{align*} \mathbf{u} \cdot \mathbf{v} &= \mathbf{v} \cdot \mathbf{u} \\ \mathbf{u} \cdot (\mathbf{v} + \mathbf{w}) &= \mathbf{u} \cdot \mathbf{v} + \mathbf{u} \cdot \mathbf{w} \\ k (\mathbf{u} \cdot \mathbf{v}) &= (k \, \mathbf{u}) \cdot \mathbf{v} = \mathbf{u} \cdot (k \, \mathbf{v}) \end{align*}
For any vector $\mathbf{v}$, \[ \mathbf{0} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{0} = 0. \]
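The dot product properties above can be spot-checked in Python (a sketch; helper names are illustrative, and the inputs keep the arithmetic exact):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(k, v):
    return [k * x for x in v]

u, v, w, k = [1.0, 2.0], [3.0, -1.0], [0.5, 4.0], 2.0
assert dot(u, v) == dot(v, u)                        # commutativity
assert dot(u, add(v, w)) == dot(u, v) + dot(u, w)    # distributivity
assert k * dot(u, v) == dot(scale(k, u), v) == dot(u, scale(k, v))
assert dot([0.0, 0.0], v) == 0.0                     # dot with the zero vector
```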
The dot product of two vectors has a nice geometric interpretation: \[ \mathbf{u} \cdot \mathbf{v} = \| \mathbf{u} \| \, \| \mathbf{v} \| \cos\theta, \] where $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$.
The special case in which $\mathbf{u}$ is a unit vector will be used very often in this course. In this case, the dot product $\mathbf{u} \cdot \mathbf{v}$ measures the signed length of the projection of $\mathbf{v}$ in the direction of $\mathbf{u}$.
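As a Python sketch of this special case (illustrative unit vectors, not from the lecture): dotting $\mathbf{v}$ with a unit vector reads off the signed length of its projection along that direction.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = [3.0, 4.0]

# Unit vector along the x-axis: the projection picks out the x-component.
assert dot([1.0, 0.0], v) == 3.0

# Unit vector at 45 degrees: the projection length is (3 + 4) / sqrt(2).
c = math.sqrt(2) / 2
assert math.isclose(dot([c, c], v), 7.0 * c)
```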
Another important observation is that two nonzero vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if and only if \[ \mathbf{u} \cdot \mathbf{v} = 0. \]
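This gives a simple computational test for orthogonality, sketched in Python (helper and variable names are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = [2.0, 1.0]
v = [-1.0, 2.0]   # u rotated by 90 degrees
assert dot(u, v) == 0.0   # orthogonal

w = [1.0, 1.0]
assert dot(u, w) != 0.0   # not orthogonal
```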