"We Ran Out Of Columns" - The Worst Codebase Ever
Based on The PrimeTime's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
SQL Server column limits forced the Merchants table to be split into a second table, "Merchants too," locking in a schema that kept growing instead of being redesigned.
Briefing
A legacy Microsoft SQL Server system hit a hard ceiling on how many columns a single table could hold, then responded by creating a second “Merchants” table instead of redesigning the data model. The immediate trigger was a “ran out of columns” moment on a table storing customer data, where the team learned SQL Server’s practical limits: 1,024 columns for an ordinary table, up to 30,000 only for wide tables built on sparse columns, and much smaller caps once indexes and row size come into play. With the Merchants table already swollen past 1,200 columns and “Merchants too” adding roughly 500 more, the database became a culture machine, shaping how every part of the application was built and how developers reasoned about change.
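As a rough T-SQL sketch of what that split looks like in practice (all table and column names below are hypothetical, not taken from the video), the “fix” amounts to a sibling table joined one-to-one on the merchant key rather than a normalized redesign:

```sql
-- Hypothetical sketch of the pattern described above: one table has
-- accreted hundreds of unrelated attributes over the years.
CREATE TABLE Merchants (
    MerchantId    INT PRIMARY KEY,
    LegalName     NVARCHAR(200),
    -- ... roughly 1,200 more columns accumulated over time ...
    LastAuditFlag BIT
);

-- When no more columns fit, the workaround is a second table joined 1:1
-- on the same key, so the overflow can keep growing.
CREATE TABLE MerchantsToo (
    MerchantId   INT PRIMARY KEY REFERENCES Merchants(MerchantId),
    LoyaltyTier  NVARCHAR(50),
    -- ... several hundred more overflow columns ...
    SomeNewerFlag BIT
);

-- Every read now pays for the split.
SELECT m.LegalName, t.LoyaltyTier
FROM Merchants AS m
JOIN MerchantsToo AS t ON t.MerchantId = m.MerchantId;
```

The join keeps things limping along, but every new attribute now forces a choice between two ever-wider tables instead of prompting a proper entity split.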
From there, the story widens into a portrait of how codebases rot under constraints and incentives. When column limits became unavoidable, the workaround culture spread: storing JSON blobs in columns, using string-matching tricks to query those blobs, and leaning on “fuzzy” SQL patterns rather than fixing the schema. The system’s database-first mindset also produced other operational oddities, including an internal “calendar” table that effectively controlled login availability—so when the calendar ran out, employees couldn’t log in until an intern manually extended it by five years. Even more extreme: the employees table was dropped every morning at 7:15, repopulated from an ADP CSV, and then replicated to headquarters via a daily email-driven process. If the person who pressed the button went on vacation, the whole login pipeline could fail.
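A minimal sketch of the blob-querying habit described above, assuming a made-up MerchantSettings table with a JSON string column (none of these names come from the actual system):

```sql
-- Settings crammed into a JSON string because the table ran out of columns.
-- Querying it with string matching is fragile: a change in key order,
-- whitespace, or escaping silently breaks the filter.
SELECT MerchantId
FROM MerchantSettings
WHERE SettingsJson LIKE '%"autoRenew":true%';

-- On SQL Server 2016+, JSON_VALUE at least parses the blob instead of
-- pattern-matching raw text, though the value still isn't a real column.
SELECT MerchantId
FROM MerchantSettings
WHERE JSON_VALUE(SettingsJson, '$.autoRenew') = 'true';
```

Even the parsed version is a workaround; data queried this often wants to be an actual column or its own table.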
The database problems weren’t isolated; they were mirrored in the application architecture and development workflow. Source control lived in Team Foundation Server, the application stack was roughly half VB and half C# running on IIS, and the front end was a grab bag of JavaScript frameworks (Knockout, Backbone, Marionette) alongside jQuery and its plugins. The team also faced a “bus factor one” reality: a key developer, Gilfoil (Munch), kept critical one-off Windows applications on his own hard drives and sometimes shipped fixes only as artifacts built locally on his machine. That meant customer bug reports could arrive for software no one else knew existed.
One recurring theme is that the system’s complexity wasn’t just technical; it was procedural and human. A sales-related “win” accounting mechanism let a salesperson game timing by moving records into the next month, which triggered years of escalating requests and even a period where interns worked full-time writing SQL statements to keep the accounting sync running. Another example: a shipping queue bug persisted because the cron jobs that cleaned up old orders had been disabled, and the shipping client pulled the entire database history, filtered it by go-live date, and relied on a SOAP service designed as a “pure function” while the side effects lived in the client.
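A hedged sketch of that shipping-queue pull (all names here are invented for illustration): the client effectively fetched every order ever written and filtered in application code, where a bounded query would have kept the work in the database:

```sql
-- Hypothetical names; a sketch of the anti-pattern, not the real schema.
-- What the shipping client effectively ran: the entire order history comes
-- back over the wire, and the go-live filter happens in client code.
SELECT *
FROM Orders;

-- A bounded query that pushes the same filter into the database, so only
-- unshipped orders placed after go-live ever leave the server.
DECLARE @GoLiveDate DATE = '2015-01-01';  -- illustrative go-live date

SELECT OrderId, MerchantId, OrderDate
FROM Orders
WHERE OrderDate >= @GoLiveDate
  AND ShippedAt IS NULL;
```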
Yet amid the chaos, the most memorable improvement came from Justin, a senior developer who couldn’t tolerate the slow Merchant search page. Instead of waiting for a master redesign, he made each UI section load independently, turning a multi-minute load into sub-second performance. That success was possible because the codebase lacked a master plan—no architectural review board, no consistent API design, no overarching system constraints—so developers naturally carved out small “sanity islands” and gradually rewired links to newer micro-app corners. The result was a “beautiful mess”: a system that was nearly impossible to fix globally, but still allowed targeted wins locally when someone refused to accept the status quo.
Cornell Notes
The system’s biggest failure was structural: SQL Server column limits forced a swollen schema to split into “Merchants” and “Merchants too,” and the organization kept patching instead of redesigning. Workarounds multiplied: JSON stored in columns, string-based querying, and even operational hacks like a “calendar” table that could lock employees out of the application when its dates ran out. Development and deployment were equally brittle, with Team Foundation Server, a mixed VB/C# + IIS stack, and bus-factor-one dependencies on a developer who kept one-off apps on personal drives. Still, one senior developer improved the Merchant search page by loading page sections independently, cutting load time from minutes to under a second. The key lesson: when there is no master plan, global cleanup may be impossible, but local refactors can still create real wins.
What concrete technical constraint kicked off the “Merchants” schema explosion?
Why did the system keep accumulating hacks instead of fixing the schema?
How did the “calendar” table become an operational risk?
What does “bus factor one” look like in practice here?
What specific performance change made the Merchant search page dramatically faster?
How did the codebase’s lack of a master plan enable incremental improvements?
Review Questions
- What SQL Server limits were mentioned, and how did they influence the decision to create “Merchants too” instead of redesigning the schema?
- Describe two examples of business logic or operational behavior being encoded in database tables or scheduled processes (e.g., calendar-driven login, daily employee table rebuild).
- Why did Justin’s approach to speeding up Merchant search work better than a full-system redesign in this environment?
Key Points
1. SQL Server column limits forced the Merchants table to be split into a second "Merchants too" table, locking in a schema that kept growing instead of being redesigned.
2. Workarounds proliferated when schema flexibility ran out, including storing JSON in columns and querying it with string/pattern matching rather than a cleaner data model.
3. Database culture shaped the whole system: stored procedures and database constraints dictated how application behavior evolved over time.
4. Operational fragility appeared in surprising places, such as login availability depending on a manually extended calendar table and daily employee-table rebuilds driven by a human "push a button" step.
5. The development workflow and architecture were brittle, including Team Foundation Server, a mixed VB/C# + IIS stack, and bus-factor-one dependencies on locally stored one-off applications.
6. A major performance win came from local refactoring: splitting the Merchant search page into independently loading sections reduced load time from minutes to under a second.
7. Incremental improvement was possible because the codebase lacked a master plan, letting developers create small replacement islands and rewire links over time.