
Python FastAPI Tutorial (Part 13): Pagination - Loading More Data with Query Parameters

Corey Schafer · 5 min read

Based on Corey Schafer's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Add skip and limit query parameters to list endpoints, with validation (skip ≥ 0; limit between 1 and 100, default 10) to prevent inefficient or invalid requests.

Briefing

Pagination moves the app from “send everything” to “send only what the client needs,” and the payoff is immediate: faster loads, less wasted bandwidth, and a cleaner user experience when post lists grow large. The core change is an API contract that returns not just posts, but also pagination metadata—total count, current offset (skip), requested batch size (limit), and a has-more flag—so the front end can reliably decide whether to show a “Load more” button.

The tutorial starts by generating realistic test data: a populate_db.py script clears the database, creates multiple users with profile pictures, and generates 44 sample posts spread across months. With enough content to stress list performance, the app’s current behavior becomes clear—every post is loaded at once on the homepage, which would become slow and wasteful at hundreds or thousands of entries.
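A minimal sketch of what such a seeding helper might look like — the function name, field names, and date spread are illustrative assumptions, not the tutorial's exact populate_db.py code:

```python
# Hypothetical sketch of a populate_db.py-style data generator; names and
# fields are assumptions, not the tutorial's exact code.
import random
from datetime import datetime, timedelta


def make_sample_posts(n: int = 44) -> list[dict]:
    """Generate n sample posts with dates spread across recent months."""
    now = datetime.now()
    posts = []
    for i in range(n):
        posts.append({
            "title": f"Sample Post {i + 1}",
            "content": f"Body of sample post {i + 1}.",
            # Spread dates over roughly the last six months.
            "date_posted": now - timedelta(days=random.randint(0, 180)),
        })
    return posts
```

With 44 posts in place, the "load everything" behavior of the unpaginated homepage becomes easy to observe.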

On the API side, the get post endpoint is upgraded to accept skip and limit query parameters. Validation is enforced using FastAPI’s query constraints: skip must be ≥ 0 (default 0), and limit must be between 1 and 100 (default 10). The database query uses SQLAlchemy’s offset(skip) and limit(limit) to fetch only the requested slice. Consistency depends on ordering: posts are sorted by date_posted in descending order so the same skip/limit values always correspond to the same “page” of results. Before fetching the slice, a separate count query computes the total number of posts so the API can set has_more based on whether skip + returned_count is still less than total.

To make the response easy for clients, a new PaginatedPostResponse schema wraps the actual list of PostResponse objects in a post field and adds total, skip, limit, and has_more. Because the endpoint constructs the response object manually, it explicitly validates nested post data (including related author fields) so FastAPI serializes everything correctly.
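A minimal Pydantic sketch of such a wrapper schema — the nested author fields here are assumptions, while the top-level field names (post, total, skip, limit, has_more) follow the summary:

```python
# Hedged sketch of the paginated response schema; the nested author
# fields are assumptions, the top-level fields follow the summary.
from datetime import datetime
from pydantic import BaseModel


class AuthorResponse(BaseModel):  # assumed author sub-schema
    id: int
    username: str


class PostResponse(BaseModel):
    id: int
    title: str
    content: str
    date_posted: datetime
    user: AuthorResponse  # related author fields, validated with the post


class PaginatedPostResponse(BaseModel):
    post: list[PostResponse]  # the current batch
    total: int
    skip: int
    limit: int
    has_more: bool
```

Validating nested dicts through the schema (rather than returning raw ORM objects) is what guarantees the author fields serialize consistently.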

The front end then shifts to a hybrid approach. The homepage route in main.py is changed to render only the first batch server-side (for fast initial load and better search-engine visibility), using a centralized post-per-page setting in config.py. The template adds a post container div plus a “Load more” button that appears only when has_more is true. JavaScript handles subsequent requests by calling the paginated API with the current offset and appending returned posts into the container. Since server-side rendering escapes user content automatically, the JavaScript path adds utilities to escape HTML and format ISO date strings.

Finally, the same pagination pattern is applied to user-specific post listings. The users router adds a paginated get user post endpoint that filters both the count and the query by user_id, and the user post template mirrors the homepage logic while pointing JavaScript at the user-specific API URL.
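The key detail in the user-specific variant is that the count and the slice must share the same filter; a pure-Python sketch (the function name and dict shape are illustrative):

```python
# Pure-Python sketch of the user-specific variant: the count and the
# slice must apply the same user_id filter, or has_more drifts out of
# sync with the data actually being paged.
def get_user_posts(
    posts: list[dict], user_id: int, skip: int = 0, limit: int = 10
) -> dict:
    mine = [p for p in posts if p["user_id"] == user_id]  # shared filter
    batch = mine[skip : skip + limit]
    return {
        "post": batch,
        "total": len(mine),  # filtered count, not the global total
        "skip": skip,
        "limit": limit,
        "has_more": skip + len(batch) < len(mine),
    }
```

In the SQL version, this corresponds to applying the same `where(Post.user_id == user_id)` clause to both the count query and the offset/limit query.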

The result is a production-style pagination system: skip/limit-driven API slicing with validated parameters and metadata, paired with a front-end “load more” flow that consumes those guarantees. A brief note closes the loop by mentioning a fastapi-pagination library as an alternative, but the tutorial keeps the manual implementation to make the mechanics clear.

Cornell Notes

Pagination is implemented as an API-first contract: list endpoints accept skip and limit, return only that slice from SQLAlchemy using offset/limit, and include metadata (total, skip, limit, has_more). Validation prevents bad requests by requiring skip ≥ 0 and limit between 1 and 100 (default 10). The homepage and user-post pages render the first batch server-side for fast initial load, then use JavaScript to fetch additional batches from the API when a “Load more” button is clicked. JavaScript escapes injected HTML and formats ISO date strings to match the server-rendered output. The same pagination logic is reused for both all posts and posts filtered by user_id.

Why does the API need to return has_more when it already returns total?

has_more simplifies the client. The front end can show or hide the “Load more” button using a single boolean instead of computing whether skip + limit has reached total. The API sets has_more by comparing skip + number_of_posts_returned against total, so the UI logic stays minimal and consistent.
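That comparison is a one-liner; the helper name here is illustrative:

```python
# The comparison described above, as a tiny helper (name is illustrative).
def compute_has_more(skip: int, returned: int, total: int) -> bool:
    """True when rows remain beyond the batch just returned."""
    return skip + returned < total
```

Using the number of posts actually returned (rather than limit) handles the short final batch correctly: skip=40 with 4 posts returned out of 44 yields has_more=False.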

What guarantees that skip/limit pagination returns stable results across requests?

The database query orders results before applying offset and limit. Posts are ordered by date_posted in descending order, so the “first 10,” “next 10,” and subsequent batches refer to the same chronological sequence every time. Without ordering, the database could return rows in a different order, making pagination inconsistent.

How does SQLAlchemy pagination work in this implementation?

The endpoint uses offset(skip) to skip a number of rows and limit(limit) to cap how many rows are returned. These are chained onto a select query for models.Post, after a count query computes total. The count query does not need ordering because it only returns the total number of posts.

Why is the response model wrapped in a PaginatedPostResponse schema?

The schema defines the API contract between backend and frontend. It includes post (the list of PostResponse objects) plus total, skip, limit, and has_more. This structure lets the client both render the current batch and decide whether more requests are needed.

What changes were required on the homepage route even after the API became paginated?

The homepage still loaded all posts because main.py’s home route performed its own database query and rendered everything server-side. The fix was to update the home route to fetch only the first batch using the same pagination logic (count + first slice) and pass has_more into the template so JavaScript can load additional pages.
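A pure-Python sketch of that first-batch logic — the context keys and the POSTS_PER_PAGE constant are assumptions standing in for the config.py setting and the template variables:

```python
# Hypothetical home-route sketch: render only the first batch
# server-side; POSTS_PER_PAGE stands in for the config.py setting.
POSTS_PER_PAGE = 10


def home_context(all_posts: list[dict]) -> dict:
    """Build the template context for the server-rendered first batch."""
    total = len(all_posts)
    first_batch = all_posts[:POSTS_PER_PAGE]  # stands in for the DB slice
    return {
        "posts": first_batch,
        "has_more": len(first_batch) < total,  # drives the Load more button
    }
```

The template then shows the "Load more" button only when has_more is true, and client-side JavaScript resumes from skip=POSTS_PER_PAGE.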

How does the tutorial prevent XSS when posts are appended via JavaScript?

When content is injected client-side, it must be escaped manually. The added escapeHTML utility sets text using textContent (treating it as plain text) and then reads back an escaped HTML string. This prevents malicious JavaScript in post titles or content from executing in the browser.
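The tutorial's escapeHTML utility is client-side JavaScript (set via textContent, read back as escaped markup); this Python standard-library analog shows the same transformation:

```python
# Python analog of HTML escaping; the tutorial's version is a JavaScript
# utility using textContent, but the output is equivalent.
import html


def escape_html(text: str) -> str:
    """Escape &, <, > and quotes so user content renders as plain text."""
    return html.escape(text)


print(escape_html('<script>alert("xss")</script>'))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Escaped this way, a malicious post title is displayed literally instead of being parsed as markup by the browser.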

Review Questions

  1. How would you compute has_more from skip, limit, and total, and why does the implementation use the number of posts actually returned rather than assuming limit always equals the returned count?
  2. What role does ordering by date_posted play in making pagination deterministic, and what bug could appear if ordering were removed?
  3. Why does the tutorial render the first batch server-side but fetch subsequent batches with JavaScript, and how does has_more control the UI?

Key Points

  1. Add skip and limit query parameters to list endpoints, with validation (skip ≥ 0; limit between 1 and 100, default 10) to prevent inefficient or invalid requests.

  2. Return a paginated response schema that includes both the post list and pagination metadata: total, skip, limit, and has_more.

  3. Use SQLAlchemy offset(skip) and limit(limit) to fetch only the requested slice from the database, and compute total with a separate count query.

  4. Order results (by date_posted descending) before applying offset/limit so pagination remains consistent across repeated requests.

  5. Update server-rendered pages to fetch only the first batch, then rely on JavaScript “Load more” calls to request subsequent batches from the paginated API.

  6. Centralize the batch size in config.py (posts per page) so changing pagination size requires editing one setting.

  7. When appending HTML via JavaScript, escape user content and format API date strings to match server-rendered output.

Highlights

Pagination is implemented as an API contract: the backend returns posts plus total, skip, limit, and has_more so the front end can act without extra calculations.
Stable pagination depends on ordering—date_posted descending must be applied before offset/limit, or batches can shift between requests.
The homepage becomes a hybrid: server-render the first 10 posts for speed and SEO, then fetch the next batches on demand via JavaScript.
JavaScript injection requires explicit HTML escaping to prevent XSS, since template auto-escaping doesn’t apply to client-side DOM updates.
