Linear Algebra 22 | Linear Independence (Definition) [dark version]

5 min read

Based on The Bright Side of Mathematics's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Two vectors in R2 are linearly dependent exactly when one is a scalar multiple of the other (collinear).

Briefing

Linear dependence is defined by whether a collection of vectors can “collapse” into the zero vector using coefficients that are not all zero. In plain terms, if there exists a linear combination of the vectors that adds up to the zero vector without every coefficient being zero, the vectors are linearly dependent. This matters because it distinguishes redundant directions from genuinely new ones, an idea that underpins solving linear systems, understanding bases, and determining the dimension of vector spaces.

In two dimensions, the concept is illustrated with vectors in R2. Two vectors are collinear when they lie on the same line, meaning one is a scalar multiple of the other: there is a real number λ such that λ·v = u. That relationship is exactly what “linear dependence” captures in this setting: the pair of vectors does not define more than one direction, because one vector can be obtained from the other by scaling.
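The collinearity test can be sketched numerically. Two vectors in R2 are dependent exactly when the 2×2 determinant u1·v2 − u2·v1 vanishes, since λ·v = u forces the rows to be proportional. This is a minimal Python sketch (not from the video); the function name and tolerance are illustrative choices.

```python
def collinear(u, v, tol=1e-12):
    """Return True if u and v in R^2 are linearly dependent (collinear).

    Uses the 2x2 determinant: u and v are proportional exactly when
    u[0]*v[1] - u[1]*v[0] == 0. `tol` absorbs floating-point noise.
    """
    return abs(u[0] * v[1] - u[1] * v[0]) <= tol

# (2, 4) is 2 * (1, 2), so the pair is dependent:
print(collinear((1, 2), (2, 4)))   # True
print(collinear((1, 0), (0, 1)))   # False
```

The determinant test is equivalent to asking whether some λ with λ·v = u exists, but avoids dividing by components that might be zero.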

The same logic extends to three dimensions. For vectors U, V, and W in R3, linear dependence can occur when all three lie in a single plane. In that case, U can be written as a linear combination of V and W: U = λ·V + μ·W for some real scalars λ and μ. By moving U to the other side, the condition becomes λ·V + μ·W − 1·U = 0, which fits the general “non-trivial combination equals the zero vector” pattern.
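The coplanarity condition in R3 also has a determinant form: U, V, and W lie in one plane exactly when the 3×3 determinant of the matrix with these vectors as rows is zero. A pure-Python sketch (an illustration, not the video's method), with cofactor expansion written out:

```python
def coplanar(u, v, w, tol=1e-9):
    """Return True if u, v, w in R^3 are linearly dependent (coplanar).

    Computes the 3x3 determinant (scalar triple product) by cofactor
    expansion along the first row; it is zero iff the vectors share a plane.
    """
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) <= tol

# U = 1*V + 1*W lies in the plane spanned by V and W:
V, W = (1, 0, 0), (0, 1, 0)
print(coplanar((1, 1, 0), V, W))   # True
print(coplanar((0, 0, 1), V, W))   # False
```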

To generalize beyond R2 and R3, the definition is framed for any vector space R^n and any number K of vectors. Take a family of K vectors (V1, V2, …, VK) in R^n. The family is linearly dependent if there exist real coefficients (λ1, λ2, …, λK), not all zero, such that the sum λ1·V1 + λ2·V2 + … + λK·VK equals the zero vector in R^n. The key point is the “not all zero” requirement: coefficients all equal to zero would always produce the zero vector and would not be informative.
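The general criterion can be checked by a rank computation: stacking V1, …, VK as the columns of an n×K matrix A, the family is dependent exactly when rank(A) < K, i.e. when A·λ = 0 has a non-trivial solution λ. A sketch using NumPy (the helper name is an assumption, not part of the source):

```python
import numpy as np

def linearly_dependent(vectors):
    """vectors: a list of K arrays, each in R^n.

    Dependent iff the n x K matrix with the vectors as columns has
    rank strictly less than K (a non-trivial coefficient vector exists).
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) < len(vectors)

# V3 = V1 + V2, so the family is dependent:
V1 = np.array([1., 0., 0.])
V2 = np.array([0., 1., 0.])
print(linearly_dependent([V1, V2, V1 + V2]))                 # True
print(linearly_dependent([V1, V2, np.array([0., 0., 1.])]))  # False
```

Note that for K > n the rank can be at most n < K, so the test automatically reports that more than n vectors in R^n are always dependent.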

Linear independence is defined as the opposite condition. A family of vectors is linearly independent when no non-trivial linear combination can produce the zero vector. Equivalently, if a linear combination of the vectors equals the zero vector, then every coefficient must be zero: the only combination that reaches the zero vector is the trivial one. This definition sets up a practical test for redundancy: independent vectors cannot be reconstructed from the others, while dependent vectors can.

With the definition in place, the discussion points toward examples in the next segment, where these criteria can be applied concretely.

Cornell Notes

Linear dependence is defined for any family of vectors in R^n: the vectors are dependent if some non-all-zero set of real coefficients makes their linear combination equal the zero vector. In R2, two vectors are dependent exactly when one is a scalar multiple of the other (collinear). In R3, three vectors are dependent when one vector can be written as a linear combination of the other two (coplanar), which can be rearranged into a non-trivial combination equaling zero. Linear independence is the complementary idea: the only way to form the zero vector from the family is with all coefficients equal to zero. This distinction matters because it identifies whether vectors introduce new directions or are redundant.

How does collinearity in R2 translate into the formal definition of linear dependence?

In R2, two vectors u and v are collinear when one is a scaled version of the other: there exists a real number λ such that λ·v = u. Rearranging gives λ·v − 1·u = 0, which is a linear combination of the two vectors equaling the zero vector with coefficients (λ, −1) that are not both zero. That matches the definition of linear dependence: a non-trivial combination produces the zero vector.

Why does the “not all coefficients are zero” condition matter?

If all coefficients were allowed to be zero, then λ1·V1 + … + λK·VK = 0 would always hold, regardless of the vectors. The definition requires that at least one coefficient be non-zero, so the zero vector is reached for a meaningful reason: some vectors can be expressed in terms of others, revealing redundancy.

How does coplanarity of three vectors in R3 become a linear dependence statement?

If U, V, and W lie in the same plane, then U can be written as U = λ·V + μ·W for some real λ and μ. Moving U to the left yields λ·V + μ·W − 1·U = 0. This is a non-trivial linear combination of the three vectors equaling the zero vector, so the family {U, V, W} is linearly dependent.

What is the formal criterion for linear independence for a family of K vectors?

A family (V1, …, VK) is linearly independent if the only solution to λ1·V1 + … + λK·VK = 0 is λ1 = … = λK = 0. In other words, no non-trivial linear combination can produce the zero vector. This directly negates linear dependence.
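This "only the trivial solution" criterion can be tested numerically: with the vectors as the columns of A, the homogeneous system A·λ = 0 has only λ = 0 exactly when all K singular values of A are non-zero. A NumPy sketch (the function name and tolerance are illustrative assumptions):

```python
import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """vectors: a list of K arrays, each in R^n.

    Independent iff A @ lam = 0 has only the trivial solution lam = 0,
    which holds exactly when A has K non-zero singular values.
    """
    A = np.column_stack(vectors)
    s = np.linalg.svd(A, compute_uv=False)   # min(n, K) singular values
    # If K > n there are fewer than K singular values and independence fails.
    return len(s) == len(vectors) and s[-1] > tol

print(linearly_independent([np.array([1., 2.]), np.array([3., 4.])]))  # True
print(linearly_independent([np.array([1., 2.]), np.array([2., 4.])]))  # False
```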

How do the examples in R2 and R3 fit into the general R^n definition?

Both examples follow the same template. In R2, dependence comes from a relation of the form λ·V = U, which rearranges to a non-trivial combination equaling zero. In R3, dependence comes from U = λ·V + μ·W, which similarly rearranges to λ·V + μ·W − 1·U = 0. The general definition simply replaces “two vectors” or “three vectors” with K vectors and uses a sum over all coefficients.

Review Questions

  1. Given vectors V1, V2, V3 in R^n, what equation must hold for them to be linearly dependent, and what restriction applies to the coefficients?
  2. What is the exact logical difference between linear dependence and linear independence in terms of solutions to a linear combination equaling the zero vector?
  3. How would you rewrite an equation like U = λ·V + μ·W into the standard “sum equals zero” form used in the definition?

Key Points

  1. Two vectors in R2 are linearly dependent exactly when one is a scalar multiple of the other (collinear).
  2. Three vectors in R3 are linearly dependent when one vector can be expressed as a linear combination of the other two (coplanar).
  3. A family of K vectors in R^n is linearly dependent if there exist real coefficients, not all zero, whose linear combination equals the zero vector.
  4. Linear independence means the only way to form the zero vector from the family is with all coefficients equal to zero.
  5. The “not all coefficients are zero” condition prevents the trivial zero combination from being counted as evidence of dependence.
  6. Rearranging equations like U = λ·V + μ·W into λ·V + μ·W − 1·U = 0 is the standard step that matches the formal definition.

Highlights

Linear dependence is characterized by a non-trivial linear combination equaling the zero vector.
In R2, dependence matches collinearity: one vector is a scaled copy of the other.
In R3, dependence matches coplanarity: one vector is a linear combination of the other two.
Linear independence is the strict opposite: only the all-zero coefficients can produce the zero vector.