Python Tutorial: Unit Testing Your Code with the unittest Module
Based on Corey Schafer's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
Unit testing in Python with the built-in `unittest` module is presented as a practical way to catch breakages during refactoring and updates—especially when a change in one function can silently damage other parts of a codebase. Instead of relying on manual `print` checks, tests provide a repeatable, automatable safety net that reports exactly which assertion failed, how many tests ran, and whether failures came from logic errors or edge cases.
The walkthrough starts with a simple calculator module and shows how to structure tests so `unittest` can discover them. A dedicated test file named with the `test_` prefix (for example, `test_calc.py`) imports both `unittest` and the module under test. Test cases live in a class that inherits from `unittest.TestCase`, and each test method must begin with `test_` or it will be skipped—an easy mistake that can lead to a false sense of coverage. Running tests is demonstrated both via the standard command `python -m unittest test_calc` and by adding an `if __name__ == "__main__": unittest.main()` block so the tests can be executed directly from the terminal and inside an editor.
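A minimal sketch of such a test file, assuming a `calc.py` module that defines `add(x, y)` as in the tutorial:

```python
# test_calc.py -- named with the test_ prefix so unittest discovery finds it
import unittest

import calc  # module under test, assumed to define add(x, y)


class TestCalc(unittest.TestCase):
    def test_add(self):  # must begin with test_ or unittest skips it
        self.assertEqual(calc.add(10, 5), 15)


if __name__ == "__main__":
    unittest.main()  # lets the file run directly: python test_calc.py
```

Either invocation works: `python -m unittest test_calc` relies on discovery, while `python test_calc.py` uses the `unittest.main()` entry point.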
Assertions form the core of verification. The tutorial uses `self.assertEqual` to validate expected outputs for `add`, then expands the same test method with multiple edge-case checks (negative numbers, mixed signs) to strengthen correctness without necessarily increasing the number of discovered test methods. It then scales to separate test methods for subtraction, multiplication, and division, showing how a single code typo (like turning multiplication into exponentiation) produces a clear failure report while other tests still pass. A second example highlights a subtle bug: switching from true division to floor division may not break existing tests if the chosen inputs happen to yield whole numbers. Adding a targeted test such as `5 / 2 == 2.5` forces the suite to detect the behavioral difference.
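As an illustration, the methods below could be added to the `TestCalc` sketch above (assuming `calc` also defines a `divide` function); the `5 / 2` case is exactly the kind of input that separates true division from floor division:

```python
    def test_add_edge_cases(self):
        # Several assertions in one method: broader coverage, but still
        # counted as a single discovered test
        self.assertEqual(calc.add(-1, 1), 0)
        self.assertEqual(calc.add(-1, -1), -2)

    def test_divide(self):
        self.assertEqual(calc.divide(10, 5), 2)
        # Floor division would return 2 here, so this assertion catches
        # an accidental switch from / to //
        self.assertEqual(calc.divide(5, 2), 2.5)
```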
Exception handling is treated as first-class testing. For division by zero, the tutorial demonstrates two approaches: `assertRaises` called with the callable and its arguments, and a preferred context-manager form (`with self.assertRaises(...)`) that calls the function normally inside the block. Both confirm that a `ValueError` is raised when the divisor is zero.
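Both styles, sketched inside the same test class and assuming `calc.divide` raises a `ValueError` on a zero divisor:

```python
    def test_divide_by_zero(self):
        # Form 1: pass the callable and its arguments separately --
        # do NOT call the function yourself here
        self.assertRaises(ValueError, calc.divide, 10, 0)

        # Form 2 (preferred): context manager, with a normal function call
        with self.assertRaises(ValueError):
            calc.divide(10, 0)
```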
The lesson then moves to a more realistic `Employee` class scenario, where properties like email and full name update automatically when the first or last name changes, and where `apply_raise` adjusts pay by a default percentage. To avoid repeating setup code across tests, it introduces the `setUp` and `tearDown` instance methods: `setUp` runs before every test to create fresh `Employee` objects, while `tearDown` can clean up resources afterward. For expensive one-time initialization, it adds `setUpClass` and `tearDownClass`, which run once per test class.
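A condensed sketch of those hooks, assuming an `Employee(first, last, pay)` class along the lines of the tutorial's (the email format shown is an assumption):

```python
import unittest

from employee import Employee  # assumed: Employee(first, last, pay)


class TestEmployee(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once, before any test in this class -- expensive setup
        print("setUpClass")

    @classmethod
    def tearDownClass(cls):
        # Runs once, after every test in this class has finished
        print("tearDownClass")

    def setUp(self):
        # Runs before each test, so every test gets fresh objects
        self.emp_1 = Employee("Corey", "Schafer", 50000)
        self.emp_2 = Employee("Sue", "Smith", 60000)

    def tearDown(self):
        # Runs after each test -- release files, connections, etc.
        pass

    def test_email(self):
        self.emp_1.first = "John"
        # Assumed email format: first.last@email.com
        self.assertEqual(self.emp_1.email, "John.Schafer@email.com")
```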
Finally, the tutorial tackles dependencies outside the code’s control using mocking. A method that fetches an employee schedule from a website via `requests.get` is tested without making real network calls by patching `requests.get` in the employee module. The mocked response is configured to simulate both success (`response.ok = True`, returning specific text) and failure (`response.ok = False`, returning a bad-response message). The mock also records call arguments, allowing the test to assert the exact URL built from the employee’s last name and the requested month.
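A sketch of the pattern with `unittest.mock.patch`; the `monthly_schedule` method, URL layout, and response strings are assumptions modeled on the tutorial:

```python
import unittest
from unittest.mock import patch

from employee import Employee  # assumed to call requests.get internally


class TestEmployeeSchedule(unittest.TestCase):
    def test_monthly_schedule(self):
        emp = Employee("Sue", "Smith", 60000)

        # Patch requests.get where it is *used* (the employee module),
        # not where it is defined
        with patch("employee.requests.get") as mocked_get:
            # Simulate a successful response
            mocked_get.return_value.ok = True
            mocked_get.return_value.text = "Success"

            schedule = emp.monthly_schedule("May")
            # The mock records its call arguments, so the exact URL
            # can be asserted (format assumed)
            mocked_get.assert_called_with("http://company.com/Smith/May")
            self.assertEqual(schedule, "Success")

            # Simulate a failed response
            mocked_get.return_value.ok = False
            self.assertEqual(emp.monthly_schedule("June"), "Bad Response!")


if __name__ == "__main__":
    unittest.main()
```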
Best practices close the loop: tests should be isolated so they can run independently, and while full test-driven development isn’t required, writing tests first can guide correct implementation. The overall message is straightforward: start with basic assertions, and gradually adopt setup/teardown and mocking as complexity grows—because any testing is better than none when changes start accumulating.
Cornell Notes
The tutorial shows how to use Python’s built-in `unittest` framework to verify behavior with repeatable, automatable checks. Tests are discovered only when test methods start with `test_`, and running them typically uses `python -m unittest test_module` or `unittest.main()` under `if __name__ == "__main__"`. Assertions like `assertEqual` validate outputs across normal cases and edge cases, while `assertRaises` (often via a context manager) verifies exceptions such as `ValueError` for division by zero. For larger codebases, `setUp`/`tearDown` reduce repeated initialization, `setUpClass`/`tearDownClass` handle expensive one-time setup, and mocking with `patch` isolates tests from external systems like websites. This combination makes refactoring safer by pinpointing exactly what broke.
- Why do test methods need to start with `test_`, and what happens if they don’t?
- How can a single failing assertion help locate a bug during refactoring?
- What’s the difference between testing “happy paths” and testing edge cases in this framework?
- How do you test that a function raises an exception like `ValueError`?
- When should `setUp`/`tearDown` be used versus `setUpClass`/`tearDownClass`?
- How does mocking with `patch` make tests reliable when code depends on external websites?
Review Questions
- What naming conventions determine whether a method is run as a test in `unittest`?
- How would you modify the division tests to catch a switch from true division to floor division?
- In what situations would you prefer a context manager form of `assertRaises` over passing the callable and arguments directly?
Key Points
1. Use `unittest.TestCase` and name test methods with the `test_` prefix so `unittest` actually discovers and runs them.
2. Run tests with `python -m unittest test_module` or add `if __name__ == "__main__": unittest.main()` to allow direct execution.
3. Strengthen correctness by adding edge-case assertions (e.g., negative numbers, mixed signs) within tests, not just “happy path” inputs.
4. Detect subtle logic regressions by choosing inputs that differentiate behaviors (e.g., `5 / 2 == 2.5` to catch floor division).
5. Verify exceptions with `assertRaises`, ideally using a context manager to call the function normally inside the block.
6. Reduce repetition across tests with `setUp`/`tearDown` (fresh per test) and use `setUpClass`/`tearDownClass` for expensive one-time initialization.
7. Isolate external dependencies using `patch` to mock `requests.get`, then assert both the returned result and the URL/arguments used in the call.