Introduction
This article is about how to efficiently fetch large datasets (for example, all members) from the Perfect Gym API. You will learn why deep pagination with $skip becomes slow, and how to replace it with keyset paging, which keeps performance stable even with very large tables. This method supports data synchronization processes and helps ensure fast, scalable integrations.
In this article, you will learn how deep pagination affects performance, why keyset paging is recommended, and how to implement it step-by-step using the Perfect Gym API.
This article will be helpful for Technical Partners, Integration Developers, and PG Champions.
Before you start
Make sure you already have:
Access to the Perfect Gym API.
Basic understanding of OData queries ($filter, $orderby, $top).
Authorization set up according to your environment (test/live).
Fast lane
This is a brief overview. Detailed instructions follow in the sections below.
Do not use $skip for large datasets — it causes slow SQL performance.
Start with a query ordered by id asc and take the first batch using $top.
Check the highest ID returned.
Request the next page using id gt <lastId>.
Repeat until no results are returned.
Instruction
Why deep pagination with $skip becomes inefficient
Example of inefficient paging:
GET /Api/v2.2/odata/Members?$filter=isDeleted eq false&$orderby=id asc&$skip=9500&$top=50
When using deep offsets (for example, $skip=9500), the database still needs to find, process, and sort all 9,500 rows that come before the requested page. In the generated SQL query, the system must number every row in the result set and only then return the small portion you asked for. This makes each request slower as the skip value increases.
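To see the difference between the two strategies side by side in plain SQL, here is a small self-contained sketch using an in-memory SQLite table as a stand-in for the Members table (the table shape — an integer primary key `id` — is an assumption matching the article's examples, not the actual Perfect Gym schema):

```python
import sqlite3

# In-memory table standing in for Members (assumed shape: integer primary key `id`).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO members (id, name) VALUES (?, ?)",
    [(i, f"member{i}") for i in range(1, 10001)],
)

# Offset paging ($skip): the engine must walk past the first 9,500 rows
# before it can return the 50 rows we actually asked for.
offset_page = conn.execute(
    "SELECT id FROM members ORDER BY id LIMIT 50 OFFSET 9500"
).fetchall()

# Keyset paging (id gt <lastId>): an index seek jumps straight to id > 9500;
# no preceding rows are read or numbered.
keyset_page = conn.execute(
    "SELECT id FROM members WHERE id > ? ORDER BY id LIMIT 50", (9500,)
).fetchall()

print(offset_page == keyset_page)  # → True: both return rows 9501–9550
```

Both queries return the identical page; the difference is purely in how much work the database does to get there.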
This causes:
High I/O reads
Increased CPU time
Slow response times
Example from real performance measurements on a database with 10,000 members:
| Skip value | IO Reads | CPU Time |
|---|---|---|
| 0 | ~24,000 | fast |
| 5,000 | ~480,000 | slower |
| 9,500 | ~900,000 | ~11 s |
Solution: Keyset Paging (Id gt <lastId>)
When your goal is to iterate through all records in a table, instead of using $skip, use a filter based on the last received record ID.
Step 1 - First request:
GET /Api/v2.2/odata/Members
  ?$expand=familyParents,familyChildren,customAttributes,consultant,paymentSources,memberBalance,marketingSources,agreementAnswers
  &$filter=isDeleted eq false and memberType eq 'Member'
  &$orderby=id asc
  &$top=50
Step 2 – Next request
Take the maximum id from the previous batch and use it in the next call:
GET /Api/v2.2/odata/Members
  ?$expand=familyParents,familyChildren,customAttributes,consultant,paymentSources,memberBalance,marketingSources,agreementAnswers
  &$filter=isDeleted eq false and memberType eq 'Member' and id gt <maximum_id>
  &$orderby=id asc
  &$top=50
Repeat this process until no more results are returned.
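The whole loop can be sketched as follows. `fetch_page` is a placeholder for your HTTP call to the Members endpoint (the actual base URL, authorization header, and `$expand` list come from your environment and are not shown here); the sketch assumes ids are positive integers returned in ascending order:

```python
def fetch_all_members(fetch_page, page_size=50):
    """Iterate the full Members table using keyset paging.

    `fetch_page(last_id, top)` is expected to return the next batch of
    records (dicts with an integer 'id'), ordered by id asc — e.g. by
    calling GET /Api/v2.2/odata/Members with
    $filter=... and id gt <last_id>&$orderby=id asc&$top=<top>.
    """
    last_id = 0  # assumes positive ids, so the first call is effectively unfiltered
    while True:
        batch = fetch_page(last_id, page_size)
        if not batch:
            break  # no more results returned: iteration is complete
        yield from batch
        last_id = batch[-1]["id"]  # highest id in the batch (ordered asc)

# Quick demonstration against an in-memory stand-in for the API:
data = [{"id": i} for i in range(1, 121)]

def fake_fetch(last_id, top):
    return [r for r in data if r["id"] > last_id][:top]

members = list(fetch_all_members(fake_fetch, page_size=50))
print(len(members))  # → 120, fetched in three pages of 50/50/20
```

In a real integration, `fetch_page` would issue the HTTP request from Step 2 and parse the response body; the paging logic itself stays exactly this simple.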
Why this is faster
Key idea: Keyset paging avoids sorting and numbering all prior rows, so performance remains stable as you move forward.
SQL doesn’t need to sort or number the entire preceding range.
Each query filters only forward (or backward) from the last known record.
CPU and I/O remain consistent, even near the end of large tables.
Example: Jumping near the end of the dataset (100 users from the end) yields:
IO reads: ~6,200
CPU time: ~250 ms
Total query time: ~500 ms
Practical notes
Each page depends on the last ID, so calls must be sequential.
However, since each request is much faster, overall throughput improves significantly.
If you run multiple API queries (e.g., visits, payments, memberships), you can still execute those in parallel.
Example safe pattern:
Query 1 (Members): one sequential stream
Queries 2–4 (related data): a few concurrent calls each
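One way to realize this pattern is with a small thread pool: the keyset-paged Members stream runs as a single sequential task, while the independent related-data queries run alongside it. The `sync_*` function names below are illustrative stand-ins, not real API calls:

```python
from concurrent.futures import ThreadPoolExecutor

def sync_members():
    # Keyset-paged Members stream: each page depends on the last id,
    # so this function runs its pages as one sequential loop internally.
    return "members synced"

def sync_visits():
    return "visits synced"

def sync_payments():
    return "payments synced"

def sync_memberships():
    return "memberships synced"

# Each sync function is one task; the pool lets the independent
# related-data streams run concurrently with the Members stream.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fn) for fn in
               (sync_members, sync_visits, sync_payments, sync_memberships)]
    results = [f.result() for f in futures]

print(results)
```

Sequencing lives *inside* each stream, so no cross-stream coordination is needed beyond waiting for the futures to complete.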
Key takeaway
Avoid using $skip for large datasets; use keyset paging with id gt <lastId> for fast, scalable synchronization.