How to get started developing a PowerShell module

PowerShell module development

For some time now I have been wanting to write a PowerShell module for administrating Cisco Meraki networks through their dashboard API v1, and I thought it would be a great use case for writing about my process, learnings and ideas around writing PowerShell modules.
I am going to use some tools I have created myself (available on the PowerShell Gallery), and I will also use some awesome modules found on the PowerShell Gallery.
In this first post I will go in depth on how I start a new module development process and how I structure the project, and then I will get the Meraki authentication cmdlet done.

What is a PowerShell module?

So what is a module? Well, the best way to learn what a module is, is to read the Microsoft Docs Microsoft_Docs/about_Modules. I will try to explain it in short terms:
A PowerShell module is a package that primarily contains PowerShell cmdlets, functions and variables. The module package makes it easy to distribute the functionality into other PowerShell sessions. So let's say you write some PowerShell functions and you want to distribute them to your colleagues. You can package them as a PowerShell module, share it with your co-workers, and they can easily import the functionality into their PowerShell session with the Import-Module command.
The good thing about putting your functions into a module is that you can develop specifically for your organisation and integrate the module documentation directly into your PowerShell code. This way the user can just run Get-Help <ModuleName>, or even Get-Help on the specific cmdlets you put into your module.
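
As a quick illustration of that last point, comment-based help is one way to embed documentation directly in your code. The function below is only a hypothetical sketch, but Get-Help will render the .SYNOPSIS and .EXAMPLE sections of any function documented this way:

function Get-Greeting {
	<#
	.SYNOPSIS
		Returns a greeting for the given name.
	.EXAMPLE
		Get-Greeting -Name "Christian"
	#>
	param (
		[String]$Name
	)
	"Hello $($Name)"
}

# Once the function is loaded, the documentation is available with:
# Get-Help Get-Greeting -Full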

Generally, a PowerShell module can more or less just be a single .psm1 file containing all your functions and cmdlets. Let's take a look at how that would work.

So I have created a new file called MyModule.psm1

In the file I will put my first function, which takes a Name parameter and outputs "Hello <Name>".

function Get-Hello {

	param (
		[String]$Name
	)

	Write-Output -InputObject "Hello $($Name)"
}

Now if I run the function I would get the following output:

Get-Hello -Name "Christian"
Hello Christian

Now, to actually tell the module that I want this function exported as a cmdlet of my module, I can enter Export-ModuleMember -Function Get-Hello below the function in the .psm1 file. I can even give the cmdlet an alias for ease of use in the terminal by adding -Alias "gh".

Note! I have added the following two lines:

  • [CmdletBinding()] – Used to define your function as an advanced function; this also gives you the default set of common parameters such as -Verbose
  • [Alias('gh')] – Used to create an alias for your cmdlet.

function Get-Hello {
	[CmdletBinding()]
	[Alias('gh')]
	param (
		[String]$Name
	)
	Write-Output -InputObject "Hello $($Name)"
}

Export-ModuleMember -Function Get-Hello -Alias "gh"

You can now actually import the module and use the cmdlet Get-Hello, and even use the alias 'gh'. Let's check it out.

I can import the module and check what commands are imported:

Import-Module .\MyModule.psm1

Get-Module -Name MyModule | Select Name, ExportedCommands
Name     ExportedCommands
----     ----------------
MyModule {[Get-Hello, Get-Hello], [gh, gh]}

I can call the cmdlet from the module:

Get-Hello -Name "Christian"
Hello Christian

And I can utilize the alias I exported as well:

gh -name "Christian"
Hello Christian

So can I just create a single file and develop the whole module in this file? Well, you can, but it might not be the best idea. Imagine you are developing a module with 15-20 cmdlets. If you have 15-20 cmdlets in a single file, and let's say every cmdlet uses about 50 lines of code, then in no time you would have a .psm1 file with approximately 1,000 lines of code. Maintaining a file with 1,000 lines of code can be a bit cumbersome.
So to make it easy I can create a folder structure which lets me keep every single function and cmdlet in its own .ps1 file, and then automate the process of combining all the functions and cmdlets into a single .psm1 file to import. This way, if you ever need to make a change in a cmdlet, you can just open that single file with about 50 lines of code, make the change, and build a new .psm1 file.
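
The build step itself is essentially just concatenation. A minimal sketch of the idea (not the actual build.ps1 used later in this post; it assumes a folder layout like the one shown in the next section, with one function per file named after the function) could look like this:

# Combine every function file into a single .psm1 file
$moduleFile = "./Output/MyModule.psm1"
Get-ChildItem -Path "./Source/Private", "./Source/Public" -Filter "*.ps1" |
	ForEach-Object { Get-Content -Path $_.FullName -Raw } |
	Set-Content -Path $moduleFile

# Export only the public functions (file names match function names)
$publicFunctions = (Get-ChildItem -Path "./Source/Public" -Filter "*.ps1").BaseName
Add-Content -Path $moduleFile -Value "Export-ModuleMember -Function $($publicFunctions -join ', ')"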

How to get started with a new project

So when I start a new project I have a certain folder structure I use for development and easy maintenance of the module. I think this way of structuring a module is more or less a standard by now. The folder structure is displayed below:

ModuleName
|	build.ps1
|
|__Docs
|
|__Output
|
|__Source
|	|	ModuleName.psd1
|	|
|	|__Private
|	|
|	|__Public
|
|__Test

The project is structured so that you can easily develop on single files for each PowerShell function, and when you want to release the module you can run an automated build process which combines all the functions into a single .psm1 file.

Now, you could create all these folders yourself by hand, or even create a script that does it for you. But you don't have to. There are many great modules available on the PowerShell Gallery which automate this for you.

I have tried some of these modules, and in most cases either the folder structure didn't fit my needs or I had some additions to create or copy in a build script, and if I'm on a new system I would have to install the different modules I use to develop with. So to automate my process and quickly get started with a new project I have created a script called New-ModuleProject.ps1. If you want to check out the script you can see all the code on my GitHub, or you can read through the documentation on my website.

To install the script from the Powershell Gallery run the following:

Install-Script New-ModuleProject

If you are using Windows PowerShell you can just run New-ModuleProject.ps1 to use the script. If you are on PowerShell 7 on a Unix system you might have to run the script from your scripts folder. Since I am on a Mac, I can call the script from /Users/hoejsagerc/.local/share/powershell/Scripts/New-ModuleProject.ps1.

So to start a new project I will call the script like so:

New-ModuleProject.ps1 -Path ./ -ModuleName SCPSMeraki -Prerequisites -Initialize -Scripts

This command will create the module named SCPSMeraki in my current directory. It will install all the modules I need (the -Prerequisites switch parameter), create the entire folder structure (the -Initialize switch parameter), and download the build script (the -Scripts parameter).

Once you have run the command you should see a folder named after the ModuleName you provided.

Providing some information on your new module

So the first thing I like to do when starting a new project is to update the module manifest. The module manifest is a clear definition of what your module is about, its version, which commands it contains, and other useful information for the users of your module.
The way I have created the build.ps1 script, you only need to provide a few of the many fields available in the .psd1 file (the module manifest).

First of all, when you ran the New-ModuleProject script it created a module manifest for you to start using. The module manifest can be found at <ModuleName>/Source/<ModuleName>.psd1

The fields I like to edit before starting my development are 'Author', 'CompanyName', 'Copyright' and 'Description':

ModuleVersion = '0.0.1'

# Supported PSEditions
# CompatiblePSEditions = @()

# ID used to uniquely identify this module
GUID = '1be9db23-a239-4ed6-8b53-a7efe394cbb2'

# Author of this module
Author = 'Christian Hojesager'	<--

# Company or vendor of this module
CompanyName = 'ScriptingChris'	<--

# Copyright statement for this module
Copyright = '(c) ScriptingChris. All rights reserved.' <--

# Description of the functionality provided by this module
Description = 'Module for administrating Cisco Meraki Network equipment' <--

# Minimum version of the PowerShell engine required by this module
PowerShellVersion = '5.1'

Now for the version of the PowerShell module. I always use semantic versioning.
Semantic versioning follows the order of <Major>.<Minor>.<Build>.
The way I think about it is that the first time you release a fully functional build of the module, it would have version 1.0.0.

Version changes:

Major

  • A major change is whenever you create a release which completely changes the functionality of the module. Take Cisco Meraki's API as an example: if I developed the module against their v0 API and called that version 1.0.0, and Meraki then released their v1 API, and I changed my module to interact with the v1 API, I would define that as version 2.0.0.

Minor

  • The minor version I derive from how many functions I have developed for the module. So if I have 10 cmdlets and 5 functions, I would give it version 1.15.0.

Build

  • For the build version, whenever I create a build for release I increase the build number by 1.

Now, if you use my build.ps1 script you do not have to think about versioning your module. The script will automatically calculate the correct version for you and write it to your module manifest. It does this by counting the number of cmdlets and functions, and by incrementing the build number by 1. The only thing you have to control manually is the major version. So if you have a major version change, you would open the module manifest located at <ModuleName>/Source/<ModuleName>.psd1 and change the major version number.
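
To give a rough idea of that calculation (this is only an illustration of the concept, not the actual code inside build.ps1), the minor number could come from counting the function files and the build number from incrementing the previous manifest version:

# Illustrative only: derive a semantic version from the project contents
$manifestPath = "./Source/SCPSMeraki.psd1"
$oldVersion   = [Version](Import-PowerShellDataFile -Path $manifestPath).ModuleVersion

# One function per .ps1 file, so the file count equals the function count
$functionCount = (Get-ChildItem -Path "./Source/Public", "./Source/Private" -Filter "*.ps1").Count

$newVersion = [Version]::new($oldVersion.Major, $functionCount, $oldVersion.Build + 1)
Update-ModuleManifest -Path $manifestPath -ModuleVersion $newVersion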

Let's make our first cmdlet

Now, you develop your functions and cmdlets inside the Source folder. But inside the Source folder you also have a Public and a Private folder. What's that about?

The Public folder
Is where you would place all the cmdlets – so all the functions that you want to export so that the user can call those functions.

The Private folder
Is where you would place all the functions you don’t want the user to be able to import or use. So these are the helper functions for your cmdlets. When you write your cmdlet you might need some functionality but it might not really have anything to do with the cmdlet you are actually writing. Then you can put that functionality inside a private function and call that function from your cmdlet script.

Why not just develop the entire functionality in the public function? Well, to keep your functions and cmdlets easy to maintain, and easy for other people to use, it is a good standard to have each function do only one thing.

Creating a new Private function

Now, the first cmdlet I want to create should be used to authenticate the user to the Meraki Dashboard API.

So to do this I will need functionality which can actually handle the API call for me, and I will need functionality to retrieve the user's Meraki organisations. This is a perfect example of the use case for a Private function (which will handle the API call) and a Public function (which provides the specific API parameters, defined by the user, to the Private function).

I will start by creating the private function for handling the API call.

Start by creating a .ps1 file in the Private folder. Now the name of the file should be exactly the same as what you name your function!

So to create the basic functionality for connecting to the Meraki API I have created the following:

function Invoke-PRMerakiApiCall {

	param (
		[String]$method,
		[String]$resource,
		[String]$apiKey
	)

	Set-Variable -Name "apiKey" -Value $apiKey -Scope Script

	$baseUrl = "https://api.meraki.com/api/v1/"

	$headers = @{
		"Content-Type" = "application/json"
		"Accept" = "application/json"
		"X-Cisco-Meraki-API-Key" = $apiKey
	}
	
	Invoke-RestMethod -Uri $baseUrl/$resource -Method $method -Headers $headers
}

I have created a new function which I have named with the Verb-Noun naming convention. I put a "PR" prefix on the noun of my private function so I can easily tell that the function is a private function.


The function takes three parameters: a Method, a Resource and an API Key.

The Method:

  • Is used for basic HTTP methods so GET, POST, PUT and so on.

The Resource:

  • Is used for specific API resources, so instead of defining the URL every time, I can just specify the specific resource for an API call.

The API Key:

  • Is the user's API key which, when passed as a parameter, is then set as a script variable. The reason for this is that the first time a user uses the module they should authenticate to Meraki, and then, to save the user from having to provide an API key for every call, the key is stored as a script-scoped variable in the module. Once the user closes their PowerShell session, they will have to re-authenticate with an API key (a small standalone illustration of script scope follows this list).
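
To illustrate the script-scope idea on its own, the following pair of hypothetical functions (not part of the Meraki module) shows how a variable set with -Scope Script inside a .psm1 file stays available to the module's other functions for the rest of the session:

# Hypothetical example, placed inside a .psm1 file
function Set-DemoApiKey {
	param ([String]$Key)
	# Store the key once, at script (module) scope
	Set-Variable -Name "apiKey" -Value $Key -Scope Script
}

function Get-DemoApiKey {
	# Any later call in the same session can read the stored key
	$script:apiKey
}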

$baseUrl is Meraki's base URL, which is the same for every API call.

$headers contains the headers, which are also the same for every single API call.

And at the end I just call:

Invoke-RestMethod -Uri $baseUrl/$resource -Method $method -Headers $headers

Making your function advanced

Now there are a few things I can change in my function to make it an advanced function.

Handling parameters

The first thing I will do is define my parameters a bit better. To learn in depth how to define your parameters, you can read Microsoft_Docs/about_Functions.

My parameters before:

param (
	[String]$method,
	[String]$resource,
	[String]$apiKey
)

My parameters now:

[CmdletBinding()]
param (
	[Parameter(Mandatory=$true)]
	[ValidateSet("POST", "GET", "PUT", "DELETE")]
	[String]$method,
	[Parameter(Mandatory=$true)]
	[String]$resource,
	[Parameter(Mandatory=$true)]
	[String]$apiKey
)

So first of all I have set [CmdletBinding()], which will make the function advanced and automatically provide the common parameters such as -Verbose.

I have then set [ValidateSet("POST", "GET", "PUT", "DELETE")], which makes sure that the only values that can be provided for this parameter are POST, GET, PUT and DELETE, and that they are provided as strings, defined by [String].

Then for the $resource parameter I have set [Parameter(ValueFromPipeline=$true, Mandatory=$true)].

The reason I have set ValueFromPipeline is that this way I could potentially define an array of resources, pipe them into the function, and get multiple outputs from different API calls.
For example, if I want to get both the data for a network (/networks/{networkId}) and all the devices in that network (/networks/{networkId}/devices), I could do the following:

$myArray = '/networks/{networkId}', '/networks/{networkId}/devices'

$myArray | Invoke-PRMerakiApiCall -method GET -apiKey {api_key}

Handling Pipeline

Now, since this is a helper function and it handles the actual API call, I think it could be very useful for the function to have pipeline support. This means that in my cmdlet I can use the function in a pipeline to pipe the data into another function.

To do this I will use the begin{}, process{} and end{} blocks. You can read in depth about the subject in Microsoft_Docs/about_Functions_Advanced_Methods.

Now I have divided my function up as follows:

Begin:

  • Setting up all the variables

Process:

  • Calling the Invoke-RestMethod command to process the data

End:

  • Finishes up the function; if the call failed, it writes an error with the status code of the call

The last thing I have done is to add different verbose messages to provide some output for the user, in case they want verbose output on what's happening.

And the final function looks like this:

function Invoke-PRMerakiApiCall {

	[CmdletBinding()]
	param (
		[Parameter(Mandatory=$true)]
		[ValidateSet("POST", "GET", "PUT", "DELETE")]
		[String]$method,
		[Parameter(ValueFromPipeline=$true, Mandatory=$true)]
		[String]$Resource,
		[Parameter(Mandatory=$true)]
		[String]$apiKey
	)
	
	begin {
		Write-Verbose -Message "Setting the API Key $($apiKey) as a Script Variable"
		Set-Variable -Name "apiKey" -Value $apiKey -Scope Script

		$baseUrl = "https://api.meraki.com/api/v1/"
		Write-Verbose -Message "Setting the base url: "

		$headers = @{
			"Content-Type" = "application/json"
			"Accept" = "application/json"
			"X-Cisco-Meraki-API-Key" = $apiKey
		}
		Write-Verbose -Message "Setting the API Call headers"
	}
	
	process {
		Write-Verbose -Message "Invoking the API call with uri: $($baseUrl)/$($Resource) and the Method: $($Method)"

		try {
			$result = Invoke-RestMethod -Uri $baseUrl/$Resource -Method $Method -Headers $headers
			return $result
		}
		catch {
			$statusCode = $_.Exception.Response.StatusCode.value__
			$statusDescription = $_.Exception.Response.StatusDescription
		}
	}

	end {
		if(!($result)){
			Write-Error -Message " HTTP Status Code: $($statusCode) - Error Description: $statusDescription"
		}
	}
}

Creating a new Public function (cmdlet)

Now I want to create a cmdlet for the user to authenticate to the Meraki Dashboard. My idea here is that a user would call the cmdlet Set-SCMrkAuth -ApiKey, which would make an API call, set the first organisation it retrieves as a script variable, and then save the API key as a script variable as well. This way it will be easy for the user to get authenticated and start managing the Meraki network. My guess is that most users will primarily have access to a single organisation. But in case a user has multiple organisations, I will create a parameter that lets the user set a specific organisation id for the organisation they want to connect to.

So I have created a new .ps1 file in the Public folder and named it Set-SCMrkAuth. I will prefix the nouns of all my public functions with 'SC', for Scripting Chris, to avoid users running into name clashes with cmdlets from other modules.

Now the function will take two parameters: ApiKey and OrgId

[CmdletBinding()]
param (
	[Parameter(Mandatory=$true, HelpMessage="Please provide an API Key:")]
	[String]$ApiKey,
	[Parameter(Mandatory=$false)]
	[String]$OrgId
)

I have set the HelpMessage parameter attribute because the parameter is mandatory. So if the user calls the cmdlet without providing the API key, the terminal will prompt the user to enter one, and typing !? at the prompt shows the help text.
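
For reference, a missing mandatory parameter produces an interactive prompt along these lines (abbreviated here):

Set-SCMrkAuth

cmdlet Set-SCMrkAuth at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
ApiKey: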

Now I am going to set the cmdlet up for pipeline support, so I will create Begin, Process and End blocks again.

In the Begin block I will only set a verbose message to let the user know what's about to happen.

Write-Verbose -Message "Initiating Meraki Dashboard Authentication"

In the Process block I want to query the Meraki API and retrieve the first available organisation if the OrgId parameter was not set, and if it was set, validate that the OrgId matches an organisation the user has access to.

The first branch tries to retrieve the first organisation id the user has access to, and if the API call fails it will provide the $statusCode and $statusDescription used to write out an error in the End block:

if(!($OrgId)){
	Write-Verbose -Message "Authenticating to Meraki Dashboard API"
	try {
		$OrgId = Invoke-PRMerakiApiCall -Method GET -Resource "/organizations" -ApiKey $ApiKey | Select-Object -ExpandProperty Id -First 1
		Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
		Set-Variable -Name OrgId -Value $OrgId -Scope Script
	}
	catch {
		$statusCode = $_.Exception.Response.StatusCode.value__
		$statusDescription = $_.Exception.Response.StatusDescription
	}
}

The second branch checks whether the OrgId provided by the user exists in the data returned from the call to /organizations. If it does, it sets the $OrgId variable in script scope; if it doesn't, it provides a status code and a message to tell the user the OrgId could not be found:

elseif($OrgId){
	Write-Verbose -Message "Validating Organisation Id provided in parameter against Meraki Dashboard API"
	try {
		$OrgIds = Invoke-PRMerakiApiCall -Method GET -Resource "/organizations" -ApiKey $ApiKey | Select-Object -ExpandProperty Id
		if($OrgIds -contains $OrgId){
			Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
			Set-Variable -Name OrgId -Value $OrgId -Scope Script
		}
		else {
			$statusCode = 404
			$statusDescription = "Organisation Id provided in Parameter, was not found in your Meraki Dashboard"
		}
	}
	catch {
		$statusCode = $_.Exception.Response.StatusCode.value__
		$statusDescription = $_.Exception.Response.StatusDescription
	}
}

The End block will output the status of the API call if it failed:

if($statusCode){
	Write-Error -Message "Status code: $($statusCode), Error Description: $($statusDescription)"
}

Now I can test the functionality by loading both functions into my PowerShell session and then running the cmdlet Set-SCMrkAuth.
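
One simple way to load them is to dot-source the two .ps1 files (the paths assume the file names match the function names, as described earlier):

. ./Source/Private/Invoke-PRMerakiApiCall.ps1
. ./Source/Public/Set-SCMrkAuth.ps1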

First without the -OrgId Parameter

Set-SCMrkAuth -ApiKey {api_key}

$OrgId
{Organization Id}

Second with the -OrgId Parameter

Set-SCMrkAuth -ApiKey {api_key} -OrgId {org_id}

$OrgId
{Organization Id}

Now that I get output with both commands, I know that both our Private and Public functions work.

Building the module

Now, to test whether the compiled module works with both public and private functions combined in a single .psm1 file, I can utilise the build.ps1 script.

Debug Build

I always run a "debug" build to make sure everything works before I run a release build. The reason for this is that the debug build creates a temp folder and places the module into this folder. It also doesn't execute any cleaning or publishing processes, and therefore no versioning is applied to the module.

Now, to execute a debug build I will navigate to the root of my module folder and run the following command:

Invoke-Build -File ./build.ps1

I should now see that a temp folder has been created inside the Output folder, and I can now test if the module actually works.
First I will import the module

Import-Module ./Output/temp/SCPSMeraki/0.0.1/SCPSMeraki.psm1

If I then run Get-Command I should see that the public function is ready to use as a cmdlet and the private function is hidden.

Get-Command -Module SCPSMeraki | Select CommandType, Name
CommandType Name
----------- ----
   Function Set-SCMrkAuth

I can now try and run the command Set-SCMrkAuth to see if it works

Set-SCMrkAuth -ApiKey {api_key} -Verbose
VERBOSE: Initiating Meraki Dashboard Authentication
VERBOSE: Authenticating to Meraki Dashboard API
VERBOSE: Setting the API Key {api_key} as a Script Variable
VERBOSE: Setting the base url:
VERBOSE: Setting the API Call headers
VERBOSE: Invoking the API call with uri: https://api.meraki.com/api/v1///organizations and the Method: GET
VERBOSE: GET https://api.meraki.com/api/v1///organizations with 0-byte payload
VERBOSE: received -byte response of content type application/json
VERBOSE: Content encoding: utf-8
VERBOSE: Setting the OrgId Variable as Script Scope

And since I don't get any errors I know it worked!

Release Build

Now that I know it works I can actually create a Release build to get the module version numbers updated.
To do this I will utilise the Invoke-Build command again, but this time I will set the -Configuration parameter to "Release":

Invoke-Build -File ./build.ps1 -Configuration "Release"

I should now see my module built in the Output folder, and in this case it will have created a folder named 0.2.2, after the current version of the module. If I check the module manifest inside the Source folder it should also show the new version.
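
As a quick sanity check you can import the freshly built module and confirm the version PowerShell reports. The exact output path below is an assumption based on the debug build layout shown above:

# Path and manifest location assumed from the debug-build folder layout
Import-Module ./Output/SCPSMeraki/0.2.2/SCPSMeraki.psd1 -Force

Get-Module -Name SCPSMeraki | Select Name, Version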

Round-up

In the coming posts I will go in depth on how I use GitHub for source control and CI/CD to automate my releases to the PowerShell Gallery.

Full code for Function: Invoke-PRMerakiApiCall

function Invoke-PRMerakiApiCall {

    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [ValidateSet("POST", "GET", "PUT", "DELETE")]
        [String]$Method,
        [Parameter(ValueFromPipeline=$true, Mandatory=$true)]
        [String]$Resource,
        [Parameter(Mandatory=$true)]
        [String]$ApiKey    
    )

    begin {
        Write-Verbose -Message "Setting the API Key $($ApiKey) as a Script Variable"
        Set-Variable -Name "apiKey" -Value $ApiKey -Scope Script
        
        $baseUrl = "https://api.meraki.com/api/v1/"
        Write-Verbose -Message "Setting the base url: "

        $headers = @{
            "Content-Type" = "application/json"
            "Accept" = "application/json"
            "X-Cisco-Meraki-API-Key" = $ApiKey
        }
        Write-Verbose -Message "Setting the API Call headers"
    }

    process {
        Write-Verbose -Message "Invoking the API call with uri: $($baseUrl)/$($Resource) and the Method: $($Method)"
        try {
            $result = Invoke-RestMethod -Uri $baseUrl/$Resource -Method $Method -Headers $headers
            return $result
        }
        catch {
            $statusCode = $_.Exception.Response.StatusCode.value__
            $statusDescription = $_.Exception.Response.StatusDescription
        }
    }

    end {
        if(!($result)){
            Write-Error -Message " HTTP Status Code: $($statusCode) - Error Description: $statusDescription"
        }
    }
}

Full code for Function: Set-SCMrkAuth

function Set-SCMrkAuth {

    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true, HelpMessage="Please provide an API Key:")]
        [String]$ApiKey,
        [Parameter(Mandatory=$false)]
        [String]$OrgId
    )

    Begin {
        Write-Verbose -Message "Initiating Meraki Dashboard Authentication"
    }

    Process {
        if(!($OrgId)){
            Write-Verbose -Message "Authenticating to Meraki Dashboard API"
            try {
                $OrgId = Invoke-PRMerakiApiCall -Method GET -Resource "/organizations" -ApiKey $ApiKey | Select-Object -ExpandProperty Id -First 1
                Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
                Set-Variable -Name OrgId -Value $OrgId -Scope Script
            }
            catch {
                $statusCode = $_.Exception.Response.StatusCode.value__
                $statusDescription = $_.Exception.Response.StatusDescription
            }
        }
        elseif($OrgId){
            Write-Verbose -Message "Validating Organisation Id provided in parameter against Meraki Dashboard API"
            try {
                $OrgIds = Invoke-PRMerakiApiCall -Method GET -Resource "/organizations" -ApiKey $ApiKey | Select-Object -ExpandProperty Id
                if($OrgIds -contains $OrgId){
                    Write-Verbose -Message "Setting the OrgId Variable as Script Scope"
                    Set-Variable -Name OrgId -Value $OrgId.ToString() -Scope Script
                }
                else {
                    $statusCode = 404
                    $statusDescription = "Organisation Id provided in Parameter, was not found in your Meraki Dashboard"
                }
            }
            catch {
                $statusCode = $_.Exception.Response.StatusCode.value__
                $statusDescription = $_.Exception.Response.StatusDescription
            }
        }
    }

    End {
        if($statusCode){
            Write-Error -Message "Status code: $($statusCode), Error Description: $($statusDescription)"
        }
    }
}


2 thoughts on "How to get started developing a PowerShell module"

  1. Hi Chris,
    First of all, thank you so much for this work.
    I found your post on Reddit about this PowerShell module which helps you to create Modules.
    I’m using the latest version that I found on the PowerShell Gallery (1.1.5).
    I found a little “issue”.
    I have functions with parameters using the “[Alias]” “decorator (not sure which is the right name)”.
    When I run the Invoke-Build command, the build process takes the first [Alias] “decorator” on my parameters and it’s exporting the functions and assigning that alias.
    Example:

    function Hello-Chris {
        [CmdletBinding()]
        param (
            [Parameter(HelpMessage = 'Chris Last Name')]
            [Alias('LastName')]
            [string]$ChrisLastName
        )
        # to do
    }

    The function gets exported like: Export-ModuleMember -Function "Hello-Chris" -Alias "LastName"

    I found the code where you are adding that content to the module.
    File: build.ps1
    Lines: 121 to 122

    I have to remove the part where you are adding the alias.

    1. Hi Jose,

      Thank you very much for the nice response, and I am glad you found it useful.
      Thank you for pointing out the issue; I always welcome criticism, it helps me a lot in becoming a better developer.

      I have created a new release on the Gallery: https://www.powershellgallery.com/packages/New-ModuleProject/1.2.8

      Version 1.2.8

      The latest version takes into consideration whether you are using aliases for your parameters.

      So if you now use the -ExportAlias parameter when you call Invoke-Build, it will look at whether you have assigned an alias to your function (not to the parameters).
      So if you have the following:
      [CmdletBinding()]
      [Alias('hc')]
      param (
          [Parameter(HelpMessage = 'Chris Last Name')]
          [Alias('LastName')]
          [string]$ChrisLastName
      )

      It will only export the function alias 'hc', even if you use parameter aliases like 'LastName'.

      If you don't set the -ExportAlias parameter in the Invoke-Build command, it will not export any aliases at all.

      Hope this solved the issue 🙂
