Linear Algebra 22 | Linear Independence (Definition) [dark version]
Based on the YouTube video by The Bright Side of Mathematics. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Briefing
Linear dependence is defined by whether a collection of vectors can “collapse” into the zero vector using coefficients that are not all zero. In plain terms, if there exists a linear combination of the vectors that adds up to the zero vector without requiring every coefficient to be zero, the vectors are linearly dependent. This matters because it distinguishes redundant directions from genuinely new ones: an idea that underpins solving linear systems, understanding bases, and determining the dimension of a vector space.
In two dimensions, the concept is illustrated with vectors in R^2. Two vectors are collinear when they lie on the same line, meaning one is a scalar multiple of the other: there is a real number λ such that λ·v = u. That relationship is exactly what “linear dependence” captures in this setting: the pair of vectors does not define more than one direction, because one vector can be obtained from the other by scaling.
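As a quick illustration (the numbers here are chosen for this note, not taken from the video), take u = (2, 4) and v = (1, 2):

```latex
% Illustrative pair in R^2: u is a scalar multiple of v, so (u, v) is linearly dependent.
u = \begin{pmatrix} 2 \\ 4 \end{pmatrix}, \quad
v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad
2 \cdot v = u
\;\Longrightarrow\;
2 \cdot v + (-1) \cdot u = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
```

The coefficients (2, −1) are not all zero, which is exactly the dependence condition stated in the general definition below.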
The same logic extends to three dimensions. For vectors U, V, and W in R^3, linear dependence occurs precisely when all three lie in a single plane through the origin. In that case, one of them, say U, can be written as a linear combination of the other two: U = λ·V + μ·W for some real scalars λ and μ. Moving U to the other side turns this into λ·V + μ·W − 1·U = 0, which fits the general “non-trivial combination equals the zero vector” pattern.
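A minimal coplanar sketch (vectors invented for illustration, not taken from the video):

```latex
% U lies in the plane spanned by V and W, so the triple (U, V, W) is linearly dependent.
V = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad
W = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad
U = 3 \cdot V + 2 \cdot W = \begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}
\;\Longrightarrow\;
3 \cdot V + 2 \cdot W - 1 \cdot U = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
```

The coefficients (3, 2, −1) are not all zero, so the triple is dependent.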
To generalize beyond R^2 and R^3, the definition is framed for any vector space R^n and any number K of vectors. Take a family of K vectors (V1, V2, …, VK) in R^n. The family is linearly dependent if there exist real coefficients (λ1, λ2, …, λK), not all zero, such that the sum λ1·V1 + λ2·V2 + … + λK·VK equals the zero vector in R^n. The key point is the “not all zero” requirement: coefficients all equal to zero would always produce the zero vector and would not be informative.
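The definition can also be tested numerically. The following Python sketch (my own addition, not part of the original notes) stacks the vectors as columns and uses the matrix rank: the family is dependent exactly when the rank is smaller than the number of vectors.

```python
import numpy as np

def is_linearly_dependent(vectors):
    """Return True if the family of vectors in R^n is linearly dependent.

    A family (V1, ..., VK) is dependent exactly when the matrix whose
    columns are the vectors has rank strictly less than K, i.e. some
    non-trivial combination of the columns gives the zero vector.
    """
    A = np.column_stack(vectors)        # n x K matrix, one vector per column
    k = A.shape[1]                      # number of vectors in the family
    return np.linalg.matrix_rank(A) < k

# Illustrative checks (vectors chosen for this note):
u, v = np.array([2.0, 4.0]), np.array([1.0, 2.0])
print(is_linearly_dependent([u, v]))    # True: u = 2*v (collinear)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(is_linearly_dependent([e1, e2]))  # False: only the all-zero combination works
```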
Linear independence is defined as the opposite condition. A family of vectors is linearly independent when no non-trivial linear combination produces the zero vector. Equivalently, if a linear combination of the vectors equals the zero vector, then every coefficient must be zero: the trivial all-zero combination is the only one that works. This definition sets up a practical test for redundancy: in an independent family no vector can be reconstructed from the others, while in a dependent family at least one can.
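For a standard textbook check of independence (again an added example, not from the video), consider the unit vectors e1 = (1, 0) and e2 = (0, 1) in R^2:

```latex
% Any combination of e1 and e2 that gives the zero vector forces both coefficients to zero.
\lambda_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix}
+ \lambda_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} \lambda_1 \\ \lambda_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\Longrightarrow\;
\lambda_1 = \lambda_2 = 0.
```

Since only the all-zero coefficients work, the pair (e1, e2) is linearly independent.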
With the definition in place, the discussion points toward examples in the next segment, where these criteria can be applied concretely.
Cornell Notes
Linear dependence is defined for any family of vectors in R^n: the vectors are dependent if some set of real coefficients, not all zero, makes their linear combination equal the zero vector. In R^2, two vectors are dependent exactly when one is a scalar multiple of the other (collinear). In R^3, three vectors are dependent when at least one of them can be written as a linear combination of the other two (the three are coplanar), which can be rearranged into a non-trivial combination equaling zero. Linear independence is the complementary idea: the only way to form the zero vector from the family is with all coefficients equal to zero. This distinction matters because it identifies whether vectors introduce new directions or are redundant.
How does collinearity in R^2 translate into the formal definition of linear dependence?
Why does the “not all coefficients are zero” condition matter?
How does coplanarity of three vectors in R^3 become a linear dependence statement?
What is the formal criterion for linear independence for a family of K vectors?
How do the examples in R^2 and R^3 fit into the general R^n definition?
Review Questions
- Given vectors V1, V2, V3 in R^n, what equation must hold for them to be linearly dependent, and what restriction applies to the coefficients?
- What is the exact logical difference between linear dependence and linear independence in terms of solutions to a linear combination equaling the zero vector?
- How would you rewrite an equation like U = λ·V + μ·W into the standard “sum equals zero” form used in the definition?
Key Points
1. Two vectors in R^2 are linearly dependent exactly when one is a scalar multiple of the other (collinear).
2. Three vectors in R^3 are linearly dependent when at least one of them can be expressed as a linear combination of the other two (the three are coplanar).
3. A family of K vectors in R^n is linearly dependent if there exist real coefficients, not all zero, whose linear combination equals the zero vector.
4. Linear independence means the only way to form the zero vector from the family is with all coefficients equal to zero.
5. The “not all coefficients are zero” condition prevents the trivial zero combination from being counted as evidence of dependence.
6. Rearranging equations like U = λ·V + μ·W into λ·V + μ·W − 1·U = 0 is the standard step that matches the formal definition.