Python Tutorial: Write a Script to Monitor a Website, Send Alert Emails, and Reboot Servers
Based on Corey Schafer's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A practical Python watchdog can keep a personal website online by checking for failures, emailing an alert, and automatically rebooting the hosting server when the site stops responding. The core workflow is simple: send an HTTP GET request to the site with a short timeout, treat anything other than an HTTP 200 response (or any request exception) as a problem, then notify the owner and trigger a server restart via the hosting provider’s API.
The script starts by setting up a dedicated project folder and a Python virtual environment (created using the standard-library `venv` module). It installs two key dependencies: `requests` for performing the website health check and `linode_api4` for controlling the server through the hosting provider's API. For email delivery, it uses Python's built-in `smtplib` to connect to Gmail's SMTP server (`smtp.gmail.com` on port 587), upgrades the connection with `starttls`, and authenticates using credentials stored in environment variables rather than hard-coded into the script.
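A minimal sketch of that email setup is below. The environment-variable names (`EMAIL_USER`, `EMAIL_PASS`) and the helper names are assumptions for illustration, not the video's exact code:

```python
import os
import smtplib

# Assumed env var names -- set these in your shell, never in the script:
#   export EMAIL_USER='you@gmail.com'
#   export EMAIL_PASS='your-app-password'
EMAIL_USER = os.environ.get('EMAIL_USER')
EMAIL_PASS = os.environ.get('EMAIL_PASS')

def format_alert(subject, body):
    """Build the raw message string smtplib will transmit."""
    return f'Subject: {subject}\n\n{body}'

def send_alert(subject, body):
    """Send a plain-text alert email through Gmail's SMTP server."""
    with smtplib.SMTP('smtp.gmail.com', 587) as smtp:
        smtp.ehlo()       # identify ourselves to the server
        smtp.starttls()   # upgrade the connection to TLS
        smtp.ehlo()       # re-identify over the encrypted channel
        smtp.login(EMAIL_USER, EMAIL_PASS)
        smtp.sendmail(EMAIL_USER, EMAIL_USER, format_alert(subject, body))
```

For Gmail specifically, the password should be an app password, since Google blocks plain account passwords for SMTP logins on most accounts.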
The monitoring logic uses a 5-second timeout on the HTTP request to avoid hanging indefinitely if the site is unreachable. After the request returns, the script checks the response status code specifically for `200`. If the status code is not 200, it sends an email with a clear subject (“YOUR SITE IS DOWN!”) and a short message telling the recipient to confirm the server restart. Immediately after the email is sent, the script connects to the hosting account using a personal access token (also stored in environment variables), loads the target instance by ID, and calls `reboot` on that instance.
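The health check itself can be sketched as follows; the function names are placeholders, and only the `requests.get()` call with `timeout=5` and the 200 check come from the tutorial:

```python
import requests  # third-party package installed in the project's venv

def needs_recovery(status_code):
    """Treat anything other than HTTP 200 as downtime."""
    return status_code != 200

def check_once(url):
    """One monitoring pass: GET with a short timeout, then decide."""
    r = requests.get(url, timeout=5)  # fail fast if the site hangs
    return needs_recovery(r.status_code)
```

Note that `requests` follows redirects by default, so a healthy site behind a 301 redirect still ends up reporting the final 200.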
To make the system reliable, the script wraps the request in a `try/except` block so that network failures and timeouts (situations where no HTTP status code exists) still trigger the same recovery path. A key implementation detail is that the code which reads the response must also live inside the `try` block: if the request raises an exception, the response variable is never assigned, so a later reference to it outside the block would fail with a `NameError` on top of the original error. The tutorial demonstrates this by stopping Apache on the server, confirming that the script catches the resulting exception, sends the alert email, and reboots the machine. After the reboot, Apache comes back up and the website loads again.
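A sketch of that structure is below. The `send_alert` and `reboot_server` stubs stand in for the SMTP and hosting-API calls described above:

```python
import requests

def send_alert():
    pass  # placeholder for the Gmail SMTP alert

def reboot_server():
    pass  # placeholder for the hosting-API reboot call

def monitor(url):
    """Return True if the site is healthy; otherwise alert and reboot."""
    try:
        # Everything that touches the response stays inside the try:
        # if the request raises, `r` is never assigned, and code out
        # here that referenced it would fail with a NameError.
        r = requests.get(url, timeout=5)
        if r.status_code == 200:
            return True
    except requests.exceptions.RequestException:
        # Timeout, connection refused, DNS failure: no status code
        # at all, but we take the same recovery path below.
        pass
    send_alert()
    reboot_server()
    return False
```

Catching `requests.exceptions.RequestException` covers the whole family of request failures (`Timeout`, `ConnectionError`, and so on) in one clause.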
Finally, the monitoring becomes automatic by scheduling the script with `cron` on macOS/Linux. The cron entry runs every 10 minutes and uses the full path to the virtual environment’s Python executable plus the full path to `monitor.py`, ensuring the scheduled job has access to the installed packages. The setup also includes basic guidance for verifying schedules (`crontab -l`) and suggests adding logging later so failures can be diagnosed without waiting for an email alert.
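The cron entry might look like the following, where the paths are placeholders for your own project's locations:

```
# Every 10 minutes, run monitor.py with the venv's own interpreter
*/10 * * * * /home/user/monitor/venv/bin/python /home/user/monitor/monitor.py
```

Install it with `crontab -e` and confirm it with `crontab -l`; using the venv's `python` binary directly means no `activate` step is needed in the job.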
Overall, the result is a self-healing loop: detect downtime quickly, alert the owner, and reboot the server through API control—then repeat on a fixed schedule so manual intervention isn’t required every time something breaks.
Cornell Notes
The script monitors a website by sending an HTTP GET request with a 5-second timeout and treating anything other than an HTTP 200 response as downtime. When downtime is detected—or when the request fails entirely due to an exception—the script sends an email alert via Gmail SMTP and then reboots the hosting server using the provider's `linode_api4` library with a personal access token. Email and API credentials are stored in environment variables to avoid hard-coding secrets. Reliability comes from wrapping the request in `try/except` so timeouts and "no response" situations still trigger the same alert-and-reboot behavior. Automation is added by scheduling the script with `cron` to run every 10 minutes using the virtual environment's Python interpreter.
How does the script decide that the website is “down” and needs action?
Why is a timeout important in this monitoring approach?
How does the email alert work, and what security practice is used?
How does the script reboot a server automatically after detecting downtime?
What changes make the script robust when the website returns no HTTP status code?
How is the monitoring automated every 10 minutes on macOS/Linux?
Review Questions
- What specific conditions trigger both the email and the server reboot in this design?
- Why does the cron job need to call the virtual environment’s Python executable rather than a system Python?
- What failure mode is addressed by the `try/except` block, and how would the behavior differ without it?
Key Points
1. Use `requests.get()` with a short timeout (e.g., 5 seconds) so monitoring fails fast instead of hanging.
2. Treat only HTTP 200 as healthy; any other status code triggers an alert and a reboot.
3. Wrap the HTTP check in `try/except` so timeouts and connection failures (no status code) still lead to recovery.
4. Send email alerts through Gmail SMTP using `smtplib`, and store credentials in environment variables.
5. Reboot the server through the hosting provider's API (`linode_api4`) using a personal access token stored in environment variables.
6. Schedule the script with `cron` every 10 minutes, calling the virtual environment's Python and the script's absolute path.
7. Add logging later if you want to verify runs and diagnose issues without relying solely on email alerts.