Linear Algebra 23 | Linear Independence (Examples)

4 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

A family is linearly independent exactly when the only coefficients that produce the zero vector are all zero.

Briefing

Linear independence hinges on one test: a set of vectors is linearly independent exactly when the only way to combine them to get the zero vector is to use all zero coefficients. That definition immediately explains why the zero vector is a deal-breaker—any family containing the zero vector is automatically linearly dependent, since multiplying the zero vector by any scalar still produces the zero vector, creating a non-trivial combination.

A simple illustration uses a family with just one vector $v \in \mathbb{R}^n$. With a single vector, linear independence holds as long as the vector is not the zero vector: the equation $\lambda v = 0$ forces $\lambda = 0$ when $v \neq 0$. But once $v = 0$, the equation $\lambda \cdot 0 = 0$ is true for every $\lambda$, so there are infinitely many non-trivial coefficient choices, which proves linear dependence.
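To make the test concrete, here is a minimal NumPy sketch (my own illustration, not from the video): a family is independent exactly when the matrix whose columns are the vectors has rank equal to the number of vectors. The helper name `is_linearly_independent` and the sample values are hypothetical.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Rank test: a family is independent iff the matrix whose
    columns are the vectors has rank equal to the family size."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v = np.array([3.0, 1.0])   # a single nonzero vector in R^2
zero = np.zeros(2)         # the zero vector

print(is_linearly_independent([v]))        # True:  lambda*v = 0 forces lambda = 0
print(is_linearly_independent([zero]))     # False: lambda*0 = 0 for every lambda
print(is_linearly_independent([v, zero]))  # False: any family containing 0 is dependent
```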

The transcript then moves to concrete examples in $\mathbb{R}^2$. Three vectors in $\mathbb{R}^2$ cannot be linearly independent because there are more vectors than the dimension allows. Using three specific vectors, a non-trivial combination summing to the zero vector is constructed explicitly. The key takeaway is structural: in $\mathbb{R}^2$, any set of three vectors must contain redundancy, meaning at least one vector can be expressed using the others.
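As a hedged illustration (the video's exact vectors may differ), here is one standard choice of three vectors in $\mathbb{R}^2$ together with the rank check:

```python
import numpy as np

# An illustrative choice of three vectors in R^2 (the video's exact
# vectors may differ), stacked as columns of a 2x3 matrix.
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
v3 = np.array([1.0, 1.0])
A = np.column_stack([v1, v2, v3])

# The rank of a 2x3 matrix is at most 2, so three columns can never
# be independent in R^2.
print(np.linalg.matrix_rank(A))  # 2, which is less than 3 vectors

# For this choice, a non-trivial combination is v1 + v2 - v3 = 0:
print(v1 + v2 - v3)              # [0. 0.]
```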

Another central example is the canonical unit vectors $e_1, \dots, e_n$ in $\mathbb{R}^n$. These vectors form the textbook model of linear independence. If $\lambda_1 e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n = 0$, then the resulting vector has components $(\lambda_1, \lambda_2, \dots, \lambda_n)$. For this to equal the zero vector, every component must be zero, forcing all coefficients to vanish. This shows the unit vectors are linearly independent.
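A small NumPy check (my own sketch, not from the video) makes the component argument visible: multiplying the identity matrix, whose columns are $e_1, \dots, e_n$, by a coefficient vector just returns the coefficients.

```python
import numpy as np

n = 4
E = np.eye(n)  # columns are the canonical unit vectors e_1, ..., e_n

# sum_i lambda_i * e_i is exactly the vector (lambda_1, ..., lambda_n):
lam = np.array([2.0, -1.0, 0.5, 3.0])
print(E @ lam)                        # [ 2.  -1.   0.5  3. ] -- the coefficients themselves

# Hence E @ lam = 0 forces lam = 0; equivalently, the identity has full rank:
print(np.linalg.matrix_rank(E) == n)  # True
```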

From there, the transcript draws a practical consequence: if any additional vector $v \in \mathbb{R}^n$ is appended to the canonical unit vectors, the enlarged family becomes linearly dependent. Intuitively, the span cannot grow: once $\mathbb{R}^n$ is already fully captured by $e_1, \dots, e_n$, any added vector creates redundancy.

Finally, linear dependence gets a useful characterization. A family of vectors in $\mathbb{R}^n$ is linearly dependent if and only if at least one vector can be removed without changing the span of the remaining vectors. This connects the algebraic definition to geometry: independence means every vector is necessary to generate the same subspace. That “no vector can be omitted” property is exactly what makes independent sets ideal for building bases, where efficiency matters.
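The span criterion can also be tested numerically with ranks. The sketch below is my own illustration (the helper name `same_span` is hypothetical): for a subfamily drawn from a family, the spans agree exactly when the ranks agree.

```python
import numpy as np

def same_span(family, subfamily):
    """For a subfamily drawn from the family, the spans agree exactly
    when the ranks agree (the subfamily's span is always contained
    in the family's span)."""
    r_all = np.linalg.matrix_rank(np.column_stack(family))
    r_sub = np.linalg.matrix_rank(np.column_stack(subfamily))
    return r_all == r_sub

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

dep = [e1, e2, e1 + e2]         # dependent family: the third vector is redundant
print(same_span(dep, dep[:2]))  # True  -- it can be removed, span unchanged

indep = [e1, e2]                # independent family: every vector is needed
print(same_span(indep, [e1]))   # False -- removing e2 shrinks the span
```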

Cornell Notes

Linear independence is defined by the zero-vector test: a family of vectors is linearly independent if the only coefficients that produce the zero vector are all zero. The zero vector itself prevents independence, since any nonzero coefficient times the zero vector still gives zero, creating a non-trivial combination. In $\mathbb{R}^2$, any three vectors are automatically dependent because a non-trivial combination can be found that sums to the zero vector. The canonical unit vectors $e_1, \dots, e_n$ are the standard example of independence: $\lambda_1 e_1 + \dots + \lambda_n e_n = 0$ forces every $\lambda_i$ to be zero. A key equivalence follows: dependence happens exactly when some vector can be removed without changing the span, which links independence to efficient subspace descriptions (bases).

Why does any vector family containing the zero vector become linearly dependent?

If the family includes the zero vector, then $\lambda \cdot 0 = 0$ for any scalar $\lambda$, including $\lambda \neq 0$. That means there exists a non-trivial linear combination (with a nonzero coefficient in front of the zero vector) that still equals the zero vector, violating the “only all-zero coefficients” requirement for independence.

How can three vectors in $\mathbb{R}^2$ be shown to be linearly dependent?

Because $\mathbb{R}^2$ has dimension 2, any three vectors must contain redundancy. The transcript constructs a concrete non-trivial combination of three specific vectors that sums to the zero vector; since the coefficients are not all zero, the family is dependent (see the sketch below for an illustrative computation).
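One way to actually find such a combination (an illustrative method, not necessarily the transcript's) is to take a null-space vector of the $2 \times 3$ matrix whose columns are the three vectors; the SVD gives one directly. The vector values here are my own choice.

```python
import numpy as np

# Columns are three vectors in R^2 (an illustrative choice).
A = np.column_stack([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# The last right-singular vector spans the null space of A, because
# a 2x3 matrix has rank at most 2 and hence a nonzero null space.
_, _, Vt = np.linalg.svd(A)
coeffs = Vt[-1]

print(coeffs)      # nonzero coefficients, ~[0.577, 0.577, -0.577] up to sign
print(A @ coeffs)  # ~[0, 0]: a non-trivial combination of the columns
```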

Why are the canonical unit vectors linearly independent?

Take $\lambda_1 e_1 + \lambda_2 e_2 + \dots + \lambda_n e_n = 0$. Since each $e_i$ has a 1 in the $i$-th position and 0 elsewhere, the left-hand side becomes the vector $(\lambda_1, \lambda_2, \dots, \lambda_n)$. Equality to the zero vector forces every component $\lambda_i$ to be 0, so only the trivial coefficients work.

What changes when a new vector is added to the canonical unit vectors?

Appending any vector $v$ to $e_1, \dots, e_n$ produces a linearly dependent family. Since $e_1, \dots, e_n$ already span $\mathbb{R}^n$, the added vector cannot introduce a genuinely new direction; it becomes expressible using the existing unit vectors, creating a non-trivial zero combination.
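A quick NumPy check of this (the extra vector is my own arbitrary choice): appending one column to the identity leaves the rank at $n$, so $n + 1$ vectors cannot be independent, and the redundancy can be written out explicitly.

```python
import numpy as np

n = 3
E = np.eye(n)                    # e_1, e_2, e_3 as columns
v = np.array([2.0, -1.0, 5.0])   # any extra vector (an arbitrary choice)

A = np.column_stack([E, v])      # n x (n+1) matrix
print(np.linalg.matrix_rank(A))  # 3: the rank cannot exceed n, yet there are n+1 vectors

# The redundancy is explicit: v = 2*e_1 - 1*e_2 + 5*e_3, so
# 2*e_1 - e_2 + 5*e_3 - v = 0 is a non-trivial zero combination.
print(2*E[:, 0] - E[:, 1] + 5*E[:, 2] - v)  # [0. 0. 0.]
```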

What is the span-based characterization of linear dependence?

A family of vectors is linearly dependent iff some vector can be omitted without changing the span of the remaining vectors. Dependence means at least one vector is redundant for generating the same subspace; independence means no single vector can be removed while keeping the span unchanged.

Review Questions

  1. Give an example of a non-trivial linear combination that equals the zero vector when the family contains the zero vector.
  2. Explain why $e_1, \dots, e_n$ must be linearly independent using component-wise reasoning.
  3. State and interpret the equivalence: linear dependence ⇔ a vector can be removed without changing the span. Why does this matter for constructing bases?

Key Points

  1. A family is linearly independent exactly when the only coefficients that produce the zero vector are all zero.

  2. Any family containing the zero vector is automatically linearly dependent because nonzero coefficients can still yield the zero vector.

  3. A single nonzero vector in $\mathbb{R}^n$ forms a linearly independent family; a single zero vector does not.

  4. In $\mathbb{R}^2$, any three vectors are linearly dependent, since a non-trivial combination can be found that sums to the zero vector.

  5. The canonical unit vectors $e_1, \dots, e_n$ are linearly independent because $\lambda_1 e_1 + \dots + \lambda_n e_n = 0$ forces every $\lambda_i = 0$.

  6. Adding any vector to $e_1, \dots, e_n$ makes the family linearly dependent because the span cannot grow without redundancy.

  7. Linear dependence is equivalent to the ability to remove some vector without changing the span; independence means every vector is necessary for the same span.

Highlights

Including the zero vector guarantees linear dependence, since $\lambda \cdot 0 = 0$ for any scalar $\lambda$.
Three vectors in $\mathbb{R}^2$ cannot be independent; a non-trivial combination summing to the zero vector always exists.
The canonical unit vectors $e_1, \dots, e_n$ are the standard independent set because the coefficients become the components of the resulting vector.
A family is dependent exactly when at least one vector can be omitted while preserving the span—capturing redundancy in subspace generation.
