How to Get Started Developing a PowerShell Module

For some time now I have been wanting to write a PowerShell module for administrating Cisco Meraki networks through their dashboard APIv1, and I thought it would be a great use case for writing about my process, learnings and ideas around writing PowerShell modules. I am going to use some tools I have created myself (available on the PowerShell Gallery), and then I will use some awesome modules also found on the PowerShell Gallery. In this first post I will go in-depth on how I start a new module development process and how I structure the project, and then I will get the Meraki authentication cmdlet done.
What is a PowerShell module?
So what is a module? Well, the best way to learn what a module is, is to read the Microsoft Docs Microsoft_Docs/about_Modules. I will try and explain it in short terms: a PowerShell module is a package that primarily contains PowerShell cmdlets, functions and variables. The module package makes it easy to distribute the functionality into other PowerShell sessions. So let's say you write some PowerShell functions and you want to distribute them to your colleagues: you can package them as a PowerShell module, share it with your co-workers, and then they can easily import the functionality into their PowerShell session with the Import-Module command. The good thing about putting your functions into a module is that you can develop functionality specific to your organisation, and integrate the module documentation directly into your PowerShell code. This way the user can just run Get-Help on the module, or even Get-Help on the specific cmdlets you put into your module.
Generally a PowerShell module can more or less just consist of a single .psm1 file containing all your functions and cmdlets. Let's take a look at how that would work.
So I have created a new file called MyModule.psm1.
In the file I will put my first cmdlet, which takes a Name parameter and outputs “Hello ” followed by the provided name.
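A minimal version of such a function, assuming the greeting is simply written to the pipeline, could look like this:

```powershell
function Get-Hello {
    param (
        [String]$Name
    )

    # Output the greeting followed by the provided name
    Write-Output "Hello $Name"
}
```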
Now if I run the function I would get the following output:
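Assuming I call it with the name “Christian” as the parameter value:

```powershell
PS> Get-Hello -Name "Christian"
Hello Christian
```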
Now, to actually tell the module that I want this function exported as a cmdlet of my module, I can enter Export-ModuleMember -Function Get-Hello below the function in the .psm1 file. I can even give the cmdlet an alias, for ease of use in the terminal, by adding -Alias “gh”.
Note! I have entered the following two lines:
- [CmdletBinding()] – Used to define your function as an advanced function; this will also give you the default set of parameters such as -Verbose
- [Alias('gh')] – Used to create an alias for your cmdlet.
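Put together, the .psm1 file could now look roughly like this:

```powershell
function Get-Hello {
    [CmdletBinding()]
    [Alias('gh')]
    param (
        [String]$Name
    )

    Write-Output "Hello $Name"
}

# Export the function as a module member, together with its alias
Export-ModuleMember -Function Get-Hello -Alias "gh"
```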
You can now actually import the module and use the cmdlet Get-Hello, and even use the alias ‘gh’. Let's check it out.
I can import the module and check what commands are imported:
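Assuming the .psm1 file is in the current directory, that could look like this:

```powershell
Import-Module ./MyModule.psm1
Get-Module -Name MyModule
```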
output:
Name ExportedCommands
---- ----------------
MyModule {[Get-Hello, Get-Hello], [gh, gh]}
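Then calling the cmdlet (again using the name “Christian” as an example):

```powershell
Get-Hello -Name "Christian"
```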
output:
Hello Christian
And I can utilize the alias I exported as well:
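Using the exported alias instead of the full cmdlet name:

```powershell
gh -Name "Christian"
```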
output:
Hello Christian
So can I just create a single file and then develop the module in this file? Well, you can, but it might not be the best idea. Imagine you are developing a module with 15-20 cmdlets. If you have 15-20 cmdlets in a single file, and let's say every cmdlet uses about 50 lines of code, then in no time you would have a .psm1 file with approximately 1,000 lines of code. Maintaining a file with 1,000 lines of code can be a bit cumbersome. So to make it easy, I can create a folder structure which allows us to keep every single function and cmdlet in its own .ps1 file, and then automate the process of combining all the functions and cmdlets into a single .psm1 file to import. This way, if you ever need to make a change in a cmdlet, you can just open that single file with about 50 lines of code, make the change, and build a new .psm1 file.
How to get started with a new project
So when I start a new project I have a certain folder structure I use for development and easy maintenance of the module. I think this way of structuring the module is more or less a standard now. The folder structure is displayed below:
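Based on the folders referenced later in this post (a Source folder with Public and Private subfolders and the module manifest, an Output folder for builds, and the build.ps1 script in the project root), the layout looks roughly like this:

```
MyModule/
├── Source/
│   ├── Private/
│   ├── Public/
│   └── MyModule.psd1
├── Output/
└── build.ps1
```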
The project is structured to make it easy to develop in single files for each PowerShell function, and then when you want to release the module you can run an automated build process which combines all functions into a single .psm1 file.
Now, you could create all these folders yourself by manually creating them, or even create a script that does it for you. But you don't have to. There are many great modules available on the PowerShell Gallery which automate this for you.
I have tried some of the modules, and in most cases either the folder structure didn't fit my needs or I would have some additions to create or copy in a build script, and if I'm on a new system I would have to install the different modules I use to develop with. So to automate my process and quickly get started with a new project, I have created a script called New-ModuleProject.ps1. If you want to check out the script you can see all the code on my GitHub.
To install the script from the PowerShell Gallery, run the following:
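Presumably via Install-Script; this assumes the script is published under the name New-ModuleProject:

```powershell
Install-Script -Name New-ModuleProject
```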
If you are using Windows PowerShell you can just run the script New-ModuleProject.ps1 directly. If you are on PowerShell 7 on a Unix system you might have to run the script from your script folder. Since I am on a Mac, I can call the script from /Users/hoejsagerc/.local/share/powershell/Scripts/New-ModuleProject.ps1.
So to start a new project I will call the script like so:
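Something along these lines; the -ModuleName parameter name is an assumption, while the three switches are described just below:

```powershell
/Users/hoejsagerc/.local/share/powershell/Scripts/New-ModuleProject.ps1 `
    -ModuleName "SCPSMeraki" `
    -Prerequisites `
    -Initialize `
    -Scripts
```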
This command will create the module named SCPSMeraki in my current directory. It will install all the modules I need, by defining the switch parameter -Prerequisites. It will create the entire folder structure, by defining the switch parameter -Initialize. And it will download the build script, by defining the parameter -Scripts.
Once you have run the command you should see a folder named after the ModuleName you provided.
Providing some information on your new module
So the first thing I like to do when starting a new project is to update the module manifest. The module manifest is a clear definition of what your module is about, its version, which commands it contains, and other useful information for the users of your module. The way I have created the build.ps1 script, you will only need to provide a few of the many fields available in the .psd1 file (the module manifest).
First of all, once you ran the New-ModuleProject script, it created a module manifest for you to start using. The module manifest can be found in the /Source/ folder.
The things I like to edit before starting my development are ‘Author’, ‘CompanyName’, ‘Copyright’ and ‘Description’:
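The relevant part of the generated manifest could look something like this; the values shown are only placeholders:

```powershell
# Part of the module manifest (.psd1) - placeholder values
@{
    Author      = 'Christian'
    CompanyName = 'ScriptingChris'
    Copyright   = '(c) Christian. All rights reserved.'
    Description = 'PowerShell module for administrating Cisco Meraki networks through the Dashboard API v1'
    # ...plus the many other fields generated for the manifest
}
```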
Now for the version of the PowerShell module: I always use semantic versioning, which follows the order of Major.Minor.Build. So the way I think about it is that the first time you release a fully functional build of the module, it would have version 1.0.0.
Versioning the module
Major
- A major change is whenever you create a release which completely changes the functionality of the module. For example with Cisco Meraki's API: if I developed the module against their v0 API and called that version 1.0.0, and Meraki then released their v1 API, and I then changed my module to interact with the v1 API, I would define that as version 2.0.0.
Minor
- The way I use minor versions is defined by how many functions I have developed for the module, so if I have 10 cmdlets and 5 functions I would give it version 1.15.0.
Build
- The way I think of build versions is that whenever I create a build for release, I increase the build number by 1.
Now, if you use my build.ps1 script you do not have to think about version control in your module. The script will automatically calculate the correct version for you and write it to your module manifest. It does this by counting the number of cmdlets and functions, and incrementing the build number by 1. The only thing you would have to manually control is the major version. So if you had a new major version change, you would have to open the module manifest located in /Source/ and change the major version number.
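This is not the actual build.ps1 code, just a rough sketch of how such a version calculation could be done, assuming one .ps1 file per function and a manifest at Source/SCPSMeraki.psd1:

```powershell
# Count every exported and private function (one .ps1 file per function)
$minor = (Get-ChildItem -Path "./Source/Public", "./Source/Private" -Filter "*.ps1").Count

# Read the current version from the manifest and bump the build number by one
$manifest   = Import-PowerShellDataFile -Path "./Source/SCPSMeraki.psd1"
$current    = [Version]$manifest.ModuleVersion
$newVersion = [Version]::new($current.Major, $minor, $current.Build + 1)

Update-ModuleManifest -Path "./Source/SCPSMeraki.psd1" -ModuleVersion $newVersion
```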
Let's make our first cmdlet
Now, you develop your functions and cmdlets inside the Source folder. But inside the Source folder you also have a Public and a Private folder. What's that about?
The Public folder is where you place all the cmdlets, i.e. all the functions that you want to export so that the user can call them.
The Private folder is where you place all the functions you don't want the user to be able to import or use. These are the helper functions for your cmdlets. When you write your cmdlet you might need some functionality that doesn't really have anything to do with the cmdlet you are actually writing. Then you can put that functionality inside a private function and call that function from your cmdlet script.
Why not just develop the entire functionality in the public function? Well, to keep your functions and cmdlets easy to maintain, and easy for other people to use, it is a good standard to have each function do only one thing.
Creating a new Private function
Now, the first cmdlet I want to create should be used to authenticate the user to the Meraki Dashboard API.
To do this I will need functionality which can handle the API call for me, and I will need functionality to retrieve the user's Meraki organisations. So this is a perfect example of the use case for a Private function (which will handle the API call) and a Public function (which provides the specific API parameters, defined by the user, to the Private function).
I will start by creating the private function for handling the API call.
Start by creating a .ps1 file in the Private folder. Now the name of the file should be exactly the same as what you name your function!
So to create the basic functionality for connecting to the Meraki API I have created the following:
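A sketch of that basic function; the function name matches the full listing at the end of this post, the base URL matches the verbose output shown later, and the X-Cisco-Meraki-API-Key header is an assumption based on Meraki's public API documentation:

```powershell
function Invoke-PRMerakiApiCall {
    param (
        $Method,
        $Resource,
        $ApiKey
    )

    # Store the API key as a script scoped variable, so later calls can reuse it
    $Script:ApiKey = $ApiKey

    # Meraki's base url, the same for every API call
    $baseUrl = "https://api.meraki.com/api/v1"

    # The headers are also the same for every single API call
    $headers = @{
        "X-Cisco-Meraki-API-Key" = $Script:ApiKey
        "Content-Type"           = "application/json"
    }

    Invoke-RestMethod -Method $Method -Uri ($baseUrl + $Resource) -Headers $headers
}
```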
I have created a new function which I have named with a Verb-Noun naming convention. I put the prefix “PR” on the noun of my private function, so I can easily tell that the function is a Private function.
The function takes three parameters: a Method, a Resource and an API key.
The Method
- Is used for basic HTTP methods so GET, POST, PUT and so on.
The Resource
- Is used for the specific API resource, so instead of defining the full URL every time, I can just specify the specific resource for an API call.
The Api Key
- Is the user's API key which, when passed as a parameter, is then set as a script variable. The reason for this is that the first time a user uses the module they should authenticate to Meraki, and then, to save the user from having to provide an API key for every call, it is stored as a script variable in the module. Once the user closes their PowerShell session, they will have to re-authenticate with an API key.
The $baseUrl is Meraki's base URL, which is the same for every API call.
The $headers are also the same for every single API call.
And at the end I will just call:
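Which, in the sketch above, is the Invoke-RestMethod call:

```powershell
Invoke-RestMethod -Method $Method -Uri ($baseUrl + $Resource) -Headers $headers
```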
Making your function advanced
Now there are a few things I can change in my function to make it an advanced function.
Handling parameters
The first thing I will do is to define my parameters a bit better. To learn in depth how to define your parameters you can read the Microsoft_Docs/about_Functions.
My parameters before:
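That is, the plain param block from the sketch above:

```powershell
param (
    $Method,
    $Resource,
    $ApiKey
)
```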
My parameters now:
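A version matching the attributes described below could look like this:

```powershell
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [ValidateSet("POST", "GET", "PUT", "DELETE")]
    [String]$Method,

    [Parameter(ValueFromPipeline=$true, Mandatory=$true)]
    [String]$Resource,

    [Parameter(Mandatory=$false)]
    [String]$ApiKey
)
```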
So first of all I have set [CmdletBinding()], which will make the function advanced and automatically provide the default parameters such as -Verbose.
I have then set [ValidateSet("POST", "GET", "PUT", "DELETE")], which will make sure that the only values that can be provided for this parameter are POST, GET, PUT and DELETE, and that they should be provided as strings, defined by [String].
Then for the $Resource parameter I have set [Parameter(ValueFromPipeline=$true, Mandatory=$true)].
The reason I have set ValueFromPipeline is that this way I could potentially take an array of resources, pipe them into the function, and get multiple outputs of the function from different API calls. For example, if I want to get both the data from a network /networks/{networkId} and all the devices from that network /networks/{networkId}/devices, I could do the following:
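Something along these lines, with {networkId} replaced by a real network id:

```powershell
"/networks/{networkId}", "/networks/{networkId}/devices" |
    Invoke-PRMerakiApiCall -Method "GET" -ApiKey $apiKey
```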
Handling the PowerShell pipeline in the function
Now, since this is a helper function and it handles the actual API call, I think it could be very useful for the function to have pipeline support. This means that in my cmdlet I can use the function in a pipeline to pipe the data into another function.
To do this I will use the begin{}, process{} and end{} blocks. You can read in depth about the subject in Microsoft_Docs/about_Functions_Advanced_Methods.
Now I have divided my function up with:
Begin:
- Setting up all the variables
Process:
- Calling the Invoke-RestMethod command to process the data
End:
- Finishes up the function with a verbose message containing the status code of the call
The last thing I have done is to add different verbose messages, to provide some output for the user in case they want verbose output on what's happening.
The final function, with all of these pieces put together, is listed in full at the bottom of this post under “Full code for Function: Invoke-PRMerakiApiCall”.
Creating a new Public function (cmdlet)
Now I want to create a cmdlet for the user to authenticate to the Meraki Dashboard. My idea here is that a user would call the cmdlet Set-SCMrkAuth -ApiKey, which would make an API call, set the first organisation it retrieves as a script variable, and then save the API key as a script variable as well. This way it will be easy for the user to get authenticated and start managing the Meraki network. My guess is that most users will primarily have access to a single organisation. But in case a user has multiple organisations, I will create a parameter letting the user set a specific organisation id for the organisation they want to connect to.
So I have created a new .ps1 file in the Public folder, and named it Set-SCMrkAuth. I will prefix all my public functions' nouns with ‘SC’, for Scripting Chris, to avoid users having any clashes with cmdlets from other modules.
Now the function will take two parameters: ApiKey and OrgId
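A param block matching that description could look like this; the HelpMessage text is just an example:

```powershell
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true, ValueFromPipeline=$true,
        HelpMessage="Please provide your Meraki Dashboard API key")]
    [String]$ApiKey,

    [Parameter(Mandatory=$false)]
    [String]$OrgId
)
```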
I have set the HelpMessage parameter attribute because the parameter is mandatory. So if the user calls the cmdlet but doesn't provide the API key, the terminal will prompt the user, with the help text, to enter an API key.
Now I am going to set the cmdlet up for pipeline support, so I will create Begin, Process and End blocks again.
In the Begin block I will only set a verbose message to let the user know what's about to happen.
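Matching the verbose output shown later in this post:

```powershell
begin {
    Write-Verbose -Message "Initiating Meraki Dashboard Authentication"
}
```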
In the Process block I want to, if the OrgId parameter was not set, query the Meraki API and retrieve the first organisation available; and if it was set, validate that the OrgId matches an organisation the user has access to.
The first if statement tries to retrieve the first organisation id the user has access to, and if the API call fails it will provide the $statusCode and $statusDescription used to write out an error in the End block:
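A sketch of that first branch of the Process block, assuming the /organizations resource and the error properties available in PowerShell 7 (continued by the elseif branch below):

```powershell
if (!$OrgId) {
    try {
        Write-Verbose -Message "Authenticating to Meraki Dashboard API"
        $orgData = Invoke-PRMerakiApiCall -Method "GET" -Resource "/organizations" -ApiKey $ApiKey

        Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
        $Script:OrgId = $orgData[0].id
    }
    catch {
        # Capture the failure details for the error written in the End block
        $statusCode = $_.Exception.Response.StatusCode.value__
        $statusDescription = $_.Exception.Message
    }
}
```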
The second if statement checks whether the OrgId provided by the user exists in the data returned from the /organizations API call. If it does, it sets the $OrgId variable in the Script scope; if it doesn't, it provides a status code and a message telling the user the OrgId could not be found:
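And a sketch of the second branch:

```powershell
elseif ($OrgId) {
    Write-Verbose -Message "Authenticating to Meraki Dashboard API"
    $orgData = Invoke-PRMerakiApiCall -Method "GET" -Resource "/organizations" -ApiKey $ApiKey

    if ($orgData.id -contains $OrgId) {
        Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
        $Script:OrgId = $OrgId
    }
    else {
        # Let the End block report that the organisation id was not found
        $statusCode = "404"
        $statusDescription = "OrgId $OrgId was not found on your Meraki account"
    }
}
```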
The End block will output the status of the API call:
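For example something like this, where a captured status code results in an error and a successful call simply outputs the organisation id that was set:

```powershell
end {
    if ($statusCode) {
        Write-Error -Message "Authentication failed. Status code: $statusCode - $statusDescription"
    }
    else {
        Write-Output $Script:OrgId
    }
}
```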
Now I can test the functionality by loading both of the functions into my PowerShell session, and then running the cmdlet Set-SCMrkAuth.
First without the -OrgId parameter:
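Assuming the API key is stored in a variable called $apiKey:

```powershell
Set-SCMrkAuth -ApiKey $apiKey
```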
output:
"Organization Id"
Second with the -OrgId parameter:
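The organisation id here is just an example value:

```powershell
Set-SCMrkAuth -ApiKey $apiKey -OrgId "123456"
```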
output:
"Organization Id"
Now that I get output from both commands, I know that both the Private and the Public function work.
Building the module
Now, to test whether the compiled module works, with both public and private functions combined into a single .psm1 file, I can utilise the build.ps1 script.
Debug Build
I always run a “debug” build to make sure everything works before I run a release build. The reason for this is that the debug build creates a temp folder and places the module into this folder. It also doesn't execute any cleaning or publishing processes, and therefore no versioning is applied to the module.
Now, to execute a debug build I will navigate to the root of my module folder and run the following command:
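Presumably along these lines; Invoke-Build comes from the InvokeBuild module and picks up the build.ps1 script in the current folder, and the -Configuration parameter name is taken from the release build described later:

```powershell
Invoke-Build -Configuration "Debug"
```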
I should now see that a temp folder has been created inside the Output folder, and I can now test if the module actually works. First I will import the module:
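The exact path inside the Output/temp folder is an assumption here:

```powershell
Import-Module ./Output/temp/SCPSMeraki/SCPSMeraki.psm1 -Force
```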
If I then run Get-Command, I should see that the public function is ready to use as a cmdlet while the private function is hidden.
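For example by filtering on the module name:

```powershell
Get-Command -Module SCPSMeraki
```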
output:
CommandType Name
----------- ----
Function Set-SCMrkAuth
I can now try and run the command Set-SCMrkAuth to see if it works
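Running it with the -Verbose switch, since the output below is verbose output (the API key value is redacted):

```powershell
Set-SCMrkAuth -ApiKey $apiKey -Verbose
```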
output:
VERBOSE: Initiating Meraki Dashboard Authentication
VERBOSE: Authenticating to Meraki Dashboard API
VERBOSE: Setting the API Key {api_key} as a Script Varaible
VERBOSE: Setting the base url:
VERBOSE: Setting the API Call headers
VERBOSE: Invoking the API call with uri: https://api.meraki.com/api/v1///organizations and the Method: GET
VERBOSE: GET https://api.meraki.com/api/v1///organizations with 0-byte payload
VERBOSE: received -byte response of content type application/json
VERBOSE: Content encoding: utf-8
VERBOSE: Setting the OrgId Variable as Script Scope
And since I don't get any errors, I know it worked!
Release Build
Now that I know it works, I can actually create a Release build to get the module version numbers updated. To do this I will utilise the Invoke-Build command again, but this time I will set the -Configuration parameter to “Release”:
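Again assuming the -Configuration parameter exposed by the build script:

```powershell
Invoke-Build -Configuration "Release"
```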
I should now see my module built in the Output folder, and in this case it will have created a folder named 0.2.2, after the current version of the module. If I check the module manifest inside the Source folder, it should also show the new version.
Round-up
In the coming posts I will go in-depth on how I use GitHub for source control and CI/CD to automate my releases to the PowerShell Gallery.
Full code for Function: Invoke-PRMerakiApiCall
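A sketch of the full private function, assembled from the pieces described above; the X-Cisco-Meraki-API-Key header and the use of PowerShell 7's -StatusCodeVariable parameter are assumptions:

```powershell
function Invoke-PRMerakiApiCall {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [ValidateSet("POST", "GET", "PUT", "DELETE")]
        [String]$Method,

        [Parameter(ValueFromPipeline=$true, Mandatory=$true)]
        [String]$Resource,

        [Parameter(Mandatory=$false)]
        [String]$ApiKey
    )

    begin {
        # Store the API key as a script variable the first time it is provided
        if ($ApiKey) {
            Write-Verbose -Message "Setting the API Key $ApiKey as a Script Variable"
            $Script:ApiKey = $ApiKey
        }

        Write-Verbose -Message "Setting the base url: https://api.meraki.com/api/v1"
        $baseUrl = "https://api.meraki.com/api/v1"

        Write-Verbose -Message "Setting the API Call headers"
        $headers = @{
            "X-Cisco-Meraki-API-Key" = $Script:ApiKey
            "Content-Type"           = "application/json"
        }
    }

    process {
        Write-Verbose -Message "Invoking the API call with uri: $baseUrl$Resource and the Method: $Method"
        Invoke-RestMethod -Method $Method -Uri ($baseUrl + $Resource) -Headers $headers `
            -StatusCodeVariable "statusCode"
    }

    end {
        Write-Verbose -Message "API call finished with status code: $statusCode"
    }
}
```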
Full code for Function: Set-SCMrkAuth
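Likewise, a sketch of the full public function assembled from the fragments above:

```powershell
function Set-SCMrkAuth {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true,
            HelpMessage="Please provide your Meraki Dashboard API key")]
        [String]$ApiKey,

        [Parameter(Mandatory=$false)]
        [String]$OrgId
    )

    begin {
        Write-Verbose -Message "Initiating Meraki Dashboard Authentication"
    }

    process {
        if (!$OrgId) {
            try {
                Write-Verbose -Message "Authenticating to Meraki Dashboard API"
                $orgData = Invoke-PRMerakiApiCall -Method "GET" -Resource "/organizations" -ApiKey $ApiKey

                Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
                $Script:OrgId = $orgData[0].id
            }
            catch {
                # Capture the failure details for the error written in the End block
                $statusCode = $_.Exception.Response.StatusCode.value__
                $statusDescription = $_.Exception.Message
            }
        }
        elseif ($OrgId) {
            Write-Verbose -Message "Authenticating to Meraki Dashboard API"
            $orgData = Invoke-PRMerakiApiCall -Method "GET" -Resource "/organizations" -ApiKey $ApiKey

            if ($orgData.id -contains $OrgId) {
                Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
                $Script:OrgId = $OrgId
            }
            else {
                $statusCode = "404"
                $statusDescription = "OrgId $OrgId was not found on your Meraki account"
            }
        }
    }

    end {
        if ($statusCode) {
            Write-Error -Message "Authentication failed. Status code: $statusCode - $statusDescription"
        }
        else {
            Write-Output $Script:OrgId
        }
    }
}
```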