How to handle API Pagination with PowerShell

In this blog post, I will go through how I handle API pagination with PowerShell when working with different APIs. I will use the Rick and Morty API to showcase what pagination is and how you can work with it in PowerShell.

Why do some APIs even have pagination?

Pagination is just like when you are shopping on the web. Let's say you are looking for a new laptop. When you search for the product on the webshop, the results will usually be split across multiple pages, showing maybe 20-50 products per page. The reason for this is that a request that only returns 50 products is much faster and more efficient than one that returns hundreds.

The same thing goes for an API. Let's say you are making a request to an API that has millions of search results. If the user only needs the first 50 results, there is absolutely no reason to return millions of them. Pagination gives the user the option to fetch only the first 50 or 100 results. If the user does need all the search results, you have to take a few extra steps to pull everything, and that is what I will showcase in this blog post.

How to see if an API has pagination

The first thing I always do when interacting with an API is to read through the documentation. Usually, if there is any pagination going on it will be explained. It will most likely also be explained how you can handle the pagination.

Let's take a look at the Rick and Morty API documentation.

First, the documentation will walk through the REST API, that is, the endpoints you can collect resources from.

Secondly, the documentation will explain how the pagination works.

Here you can see that there are four pieces of information you can use to navigate all the pages.

The documentation explains that there are 20 documents ('search results') per page. All the information in the documentation is self-explanatory.

Usually, the way you can see that an API has pagination is that it will output an "info" dictionary containing information on how many pages there are in total.
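A quick way to check for that shape from PowerShell is to look for the "info" property on the response. Below is a minimal sketch using a locally constructed object shaped like the API response described in this post; the helper name Test-IsPaginated is just something I made up for illustration:

```powershell
# Hypothetical helper: returns $true if a response looks paginated,
# i.e. it exposes an 'info' property with a 'pages' count
function Test-IsPaginated {
    param($Response)
    return ($null -ne $Response.info -and $null -ne $Response.info.pages)
}

# A locally constructed object shaped like the Rick and Morty API response
$sample = [pscustomobject]@{
    info    = [pscustomobject]@{
        count = 671
        pages = 34
        next  = 'https://rickandmortyapi.com/api/character?page=2'
        prev  = $null
    }
    results = @()
}

Test-IsPaginated -Response $sample   # True
```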

How to go back and forth in the API pagination with PowerShell

Let's start by querying the API with PowerShell:

$url = "https://rickandmortyapi.com/api/character"
$Method = "GET"

$request = Invoke-RestMethod -Uri $url -Method $Method

The output will give us two properties: 'info' and 'results'. The info property gives us information on the pagination: which page we are on, and how we get to the next page. The results property gives us the data for the search results.

If we take a look at the info property we will see the following:

PS C:\> $request.info | fl

count : 671
pages : 34
next  : https://rickandmortyapi.com/api/character?page=2
prev  :

From this info we can see that there are 671 total results spread over 34 pages, and that we can call the URL in 'next' to access the next page.
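Since 'next' holds a complete URL, you can feed it straight back into Invoke-RestMethod. A small sketch (self-contained; the id check relies on the 20-results-per-page layout described above, and assumes a live connection to the API):

```powershell
# First request against the character endpoint
$request = Invoke-RestMethod -Uri 'https://rickandmortyapi.com/api/character' -Method GET

# 'next' holds the full URL of the second page, ready to use as-is
$nextPage = Invoke-RestMethod -Uri $request.info.next -Method GET
$nextPage.results[0].id   # 21, the first result on page 2
```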

Now if we take a look at the URL for the second page, we can see that we have the base URL:

https://rickandmortyapi.com/api

We have the resource:

/character

and then we have the page selector:

?page=2

This means that if we made the exact same API request as before but just added '?page=2' to the end of the URL, we would get the results from page two.

We can test this by checking the results from page 1 and page 2 and selecting the last result from each call.

$Method = "GET"
$url1 = "https://rickandmortyapi.com/api/character?page=1"
$url2 = "https://rickandmortyapi.com/api/character?page=2"

$page1 = (Invoke-RestMethod -Uri $url1 -Method $Method).results | Select-Object -Last 1
$page2 = (Invoke-RestMethod -Uri $url2 -Method $Method).results | Select-Object -Last 1

Write-Output "Last result from page 1"
Write-Output -InputObject $page1

Write-Output "Last result from page 2"
Write-Output -InputObject $page2

If we look at the output we can see that the result from page 1 has the id ’20’, and the result from page 2 has the id ’40’, which matches that we have 20 results per page.

Last result from page 1
id       : 20
name     : Ants in my Eyes Johnson
status   : unknown
species  : Human
type     : Human with ants in his eyes
gender   : Male
origin   : @{name=unknown; url=}
location : @{name=Interdimensional Cable; url=}
image    :
episode  : {}
url      :
created  : 04/11/2017 22.34.53

Last result from page 2
id       : 40
name     : Beth's Mytholog
status   : Dead
species  : Mythological Creature
type     : Mytholog
gender   : Female
origin   : @{name=Nuptia 4; url=}
location : @{name=Nuptia 4; url=}
image    :
episode  : {}
url      :
created  : 05/11/2017 10.02.26

How to retrieve all the API data

Now of course there will always be a time when we need all the data provided by the API. To do this we need to make a request per page, collect the data from that page, and append it to an object. This can be done fairly easily by looping through all the pages, incrementing the page number at the end of the URL: ?page=1, ?page=2, ?page=3, and so on. I will need to do this up to and including page 34, which was the last page according to the 'info' from the API we called earlier.
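As a side note, you don't have to hard-code the 34: the first call already tells you there are 671 results at 20 per page, so a ceiling division (or simply reading $request.info.pages) gives you the page count. A minimal sketch with the values from earlier:

```powershell
# Values taken from the 'info' property of the first API call
$count    = 671   # info.count
$pageSize = 20    # results per page, per the documentation

# Ceiling division: 671 / 20 = 33.55, rounded up to 34
$totalPages = [math]::Ceiling($count / $pageSize)
$totalPages   # 34
```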

The basic functionality for looping through the pages would look similar to the below. Here I am using the variable $y to define the page number, which starts at 1 and is increased by 1 using $y++ in every loop.

$Method = "GET"
$y = 1
while($y -le 34){
    $url = "https://rickandmortyapi.com/api/character?page=$y"
    $request = Invoke-RestMethod -Uri $url -Method $Method
    $y++
}

Now, for creating the object, I would start by creating a new empty ArrayList before the loop, and then add the results from each API call into it. One thing to remember is that if you add each page's result object to the list as a whole, you end up with a list containing 34 items, one per page.

Usually, you would rather have a list containing all 671 results. Therefore I would add an inner foreach loop that runs through each page and adds every single search result to the array.

The logic would look similar to the below:

# Defining the HTTP method as GET request
$Method = "GET"

# Setting the first page, to page 1
$y = 1

# Creating an empty ArrayList
$allResults = New-Object -TypeName System.Collections.ArrayList

# While the page number is less than or equal to 34 (Last page number)
while($y -le 34){
    # Generating the URL and making the API request
    $url = "https://rickandmortyapi.com/api/character?page=$y"
    $request = Invoke-RestMethod -Uri $url -Method $Method
    # Looping through all the search results
    foreach($r in $request.results){
        # Adding each search result to the ArrayList
        $allResults.Add($r) | Out-Null
    }
    # Increasing the page number by 1
    $y++
}
And of course, if you wanted to retrieve the data as JSON you could add the following line after the loop:

$jsonOutput = $allResults | ConvertTo-Json -Depth 4
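One thing worth noting here is the -Depth parameter: ConvertTo-Json defaults to a depth of 2, so deeper nested properties such as 'origin' can otherwise end up flattened to strings. A small local sketch of the round trip (locally constructed object, no API call needed):

```powershell
# A locally constructed object shaped like one API result
$character = [pscustomobject]@{
    id     = 1
    name   = 'Rick Sanchez'
    origin = [pscustomobject]@{ name = 'Earth (C-137)'; url = '' }
}

# -Depth 4 keeps nested objects like 'origin' intact in the JSON
$json      = $character | ConvertTo-Json -Depth 4
$roundTrip = $json | ConvertFrom-Json
$roundTrip.origin.name   # Earth (C-137)
```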

JSON output:

[
  {
    "id": 1,
    "name": "Rick Sanchez",
    "status": "Alive",
    "species": "Human",
    "type": "",
    "gender": "Male",
    "origin": {
      "name": "Earth (C-137)",
      "url": ""
    },
    "location": {
      "name": "Earth (Replacement Dimension)",
      "url": ""
    },
    "image": "",
    "episode": [
    ...


In this blog post, I went over how to handle API pagination with PowerShell, and how you can easily retrieve all the information you need.

Working with APIs can be tricky sometimes, and if you are not aware of how pagination works it can be even harder to get the information you need. Unfortunately, some APIs do not have great documentation, and in some cases might not even mention pagination, so if you have some experience with it, it will be easier to recognize what it is and how it works.

One of the key aspects is to find out how many pages there are in total, then create a loop that alters the URL for the API call in each iteration, and add the results from each API call to an ArrayList.

An important thing to remember is that if you loop through each page and add the whole page to the ArrayList, you will get an array containing all the pages. This might be what you want, but in most cases you would rather have an ArrayList containing all the results.
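The difference is easy to reproduce with a couple of fake pages (a sketch using locally constructed objects, no API call needed):

```powershell
# Two fake page objects, each carrying 20 results
$fakePage1 = [pscustomobject]@{ results = 1..20 }
$fakePage2 = [pscustomobject]@{ results = 21..40 }

# Adding each page object itself:
$pages = New-Object -TypeName System.Collections.ArrayList
foreach ($p in @($fakePage1, $fakePage2)) { $pages.Add($p) | Out-Null }
$pages.Count    # 2 (one entry per page)

# Adding every result inside each page:
$results = New-Object -TypeName System.Collections.ArrayList
foreach ($p in @($fakePage1, $fakePage2)) {
    foreach ($r in $p.results) { $results.Add($r) | Out-Null }
}
$results.Count  # 40 (one entry per result)
```

The first list has one entry per page object; the second has one entry per individual result, which is usually what you want.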

You can check if you created an ArrayList of all the pages or an ArrayList of all the results by doing the following:

# ArrayList containing all the pages:
PS C:\> $allResults.count
34

# ArrayList containing all the results:
PS C:\> $allResults.count
671
