Linear Algebra 23 | Linear Independence (Examples) [dark version]
Based on The Bright Side of Mathematics' video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their content.
A family of vectors is linearly independent exactly when the only solution to a linear combination equaling the zero vector uses all zero coefficients.
Briefing
Linear independence hinges on one test: a family of vectors is linearly independent exactly when the only way to combine them to get the zero vector uses all zero coefficients. That definition immediately yields a sharp rule of thumb: if the family includes the zero vector, independence is impossible. For a single vector $v \in \mathbb{R}^n$, the family $(v)$ is linearly independent as long as $v \neq 0$; the equation $\lambda v = 0$ forces $\lambda = 0$. But if $v = 0$, then $\lambda \cdot 0 = 0$ holds for any $\lambda$, producing a non-trivial linear combination and proving linear dependence.
The examples then scale this idea up. In $\mathbb{R}^2$, three vectors with two components cannot all be independent. With vectors such as $(1,0)$, $(0,1)$, and $(1,1)$, there is a non-trivial combination that lands on the zero vector: $1 \cdot (1,0) + 1 \cdot (0,1) - 1 \cdot (1,1) = (0,0)$. The takeaway is structural rather than numerical: in $\mathbb{R}^n$, any set larger than the dimension $n$ must be linearly dependent.
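The dimension-count rule can be checked mechanically. Below is a minimal sketch (my own illustration, not from the video) that decides independence by computing the rank of the family with Gaussian elimination over exact rationals; a family is independent exactly when its rank equals the number of vectors:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of the matrix whose rows are the given vectors,
    computed by Gaussian elimination over exact rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # next pivot row
    for col in range(len(rows[0]) if rows else 0):
        # find a row at or below r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):  # clear this column in every other row
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def is_independent(vectors):
    # independent exactly when no vector is redundant, i.e. rank == count
    return rank(vectors) == len(vectors)

print(is_independent([(1, 0), (0, 1)]))          # two vectors in R^2: True
print(is_independent([(1, 0), (0, 1), (1, 1)]))  # three vectors in R^2: False
print(is_independent([(2, 3), (0, 0)]))          # contains the zero vector: False
```

The last call also confirms the rule of thumb from the briefing: any family containing the zero vector fails the test, regardless of the other vectors.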
A contrasting “canonical” example comes from the standard basis vectors $e_1, \dots, e_n$ in $\mathbb{R}^n$. Consider an arbitrary linear combination $\lambda_1 e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n$. Because each $e_i$ has a single 1 in the $i$-th position and zeros elsewhere, the sum becomes the vector $(\lambda_1, \lambda_2, \dots, \lambda_n)$. Setting this equal to the zero vector forces every $\lambda_i$ to be zero, matching the definition of linear independence. This standard basis is therefore a foundational example of a linearly independent family.
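The "sum becomes the coefficient vector" step can be made concrete. The sketch below is illustrative (the helper names `e` and `combine` are my own, not from the video): it builds the standard basis of $\mathbb{R}^4$ and checks that a combination $\lambda_1 e_1 + \dots + \lambda_n e_n$ is literally the tuple of coefficients:

```python
def e(i, n):
    # i-th standard basis vector of R^n (1-indexed): a single 1, zeros elsewhere
    return tuple(1 if j == i - 1 else 0 for j in range(n))

def combine(coeffs, vectors):
    # componentwise sum of coefficient * vector over the whole family
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors))
                 for j in range(len(vectors[0])))

basis = [e(i, 4) for i in range(1, 5)]
print(combine([3, -1, 0, 5], basis))  # (3, -1, 0, 5): the coefficients themselves
```

Since the result equals the coefficient tuple, it is the zero vector only when every coefficient is zero, which is exactly the independence test.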
The transcript also highlights a common way to generate dependence: if a family that already spans the whole space (for example, the standard basis of $\mathbb{R}^n$) is extended by one more vector, the enlarged family becomes linearly dependent. Intuitively, once the original family already spans its “maximal efficient” structure, any new vector is a linear combination of the old ones, so moving it to the other side of that equation yields coefficients, not all zero, that produce the zero vector.
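As a tiny worked check (my own illustration, assuming the spanning family is the standard basis of $\mathbb{R}^2$ and the added vector is $(1,1)$): the new vector equals $1 \cdot (1,0) + 1 \cdot (0,1)$, and moving it to the other side exhibits the non-trivial coefficients $(1, 1, -1)$:

```python
def combine(coeffs, vectors):
    # componentwise sum of coefficient * vector
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors))
                 for j in range(len(vectors[0])))

# standard basis of R^2, extended by one extra vector
family = [(1, 0), (0, 1), (1, 1)]
print(combine([1, 1, -1], family))  # (0, 0): a non-trivial combination hits zero
```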
Finally, linear dependence is characterized in a more geometric way. A family of vectors in $\mathbb{R}^n$ is linearly dependent if and only if at least one vector in the family can be removed without changing the span of the remaining vectors. In other words, dependence means some vector is unnecessary for generating the same subspace; independence means no vector can be omitted while preserving the span. This equivalence explains why linear independence is so tightly linked to the efficiency of describing subspaces, an idea that becomes central when constructing bases.
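The span-based characterization can also be sanity-checked numerically. In this sketch (my own, reusing a rank-via-elimination helper), the dimension of the span equals the rank, so a vector is removable exactly when dropping it leaves the rank unchanged:

```python
from fractions import Fraction

def rank(vectors):
    # Gaussian elimination over exact rationals; rows are the vectors
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # next pivot row
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

dependent = [(1, 0), (0, 1), (1, 1)]
print(rank(dependent), rank(dependent[:2]))      # 2 2: (1, 1) is removable

independent = [(1, 0), (0, 1)]
print(rank(independent), rank(independent[:1]))  # 2 1: nothing can be dropped
```

Dropping $(1,1)$ leaves the rank at 2, so the span is unchanged and the family was dependent; dropping a vector from the independent pair shrinks the rank, so no omission preserves the span.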
Cornell Notes
Linear independence is defined by a zero-combination test: a family of vectors is linearly independent if the only way to form the zero vector is to use all zero coefficients. The presence of the zero vector in a family guarantees dependence, since any coefficient $\lambda$ works in $\lambda \cdot 0 = 0$. In $\mathbb{R}^2$, any three vectors are automatically dependent because a non-trivial combination can be found that sums to $(0,0)$. The standard basis vectors $e_1, \dots, e_n$ form a key independent example: $\lambda_1 e_1 + \dots + \lambda_n e_n = (\lambda_1, \dots, \lambda_n)$, so equality to the zero vector forces every $\lambda_i = 0$. Dependence can also be recognized by span: a family is dependent exactly when some vector can be omitted without changing the span.
Why does including the zero vector automatically destroy linear independence?
What guarantees that three vectors in $\mathbb{R}^2$ are linearly dependent?
How do canonical unit vectors demonstrate linear independence?
Why does adding an extra vector to a family that already spans the space lead to linear dependence?
What is the span-based characterization of linear dependence?
Review Questions
- Given a family that includes the zero vector, what specific coefficient-based argument shows it must be linearly dependent?
- In $\mathbb{R}^2$, construct (or reason about) a non-trivial linear combination of three vectors that equals the zero vector.
- Explain the equivalence: how does “can omit a vector without changing the span” translate into linear dependence?
Key Points
- 1
A family of vectors is linearly independent exactly when the only solution to a linear combination equaling the zero vector uses all zero coefficients.
- 2
Any family containing the zero vector is automatically linearly dependent because $\lambda \cdot 0 = 0$ for any $\lambda$.
- 3
In $\mathbb{R}^2$, any set of three vectors is linearly dependent; a non-trivial combination can be found that sums to $(0,0)$.
- 4
The standard basis vectors $e_1, \dots, e_n$ are linearly independent because $\lambda_1 e_1 + \dots + \lambda_n e_n = (\lambda_1, \dots, \lambda_n)$ forces all $\lambda_i = 0$ when the result is zero.
- 5
Adding any further vector to a spanning family such as the canonical basis creates linear dependence by introducing redundancy.
- 6
Linear dependence is equivalent to the ability to remove at least one vector while keeping the same span; independence means no such omission is possible.