My Toolbox App

This post is intended to serve as an introduction to my Toolbox app, giving a brief history of its development over time and the current features available in this public release.

Before getting into the details, the project files for Toolbox (created with PowerShell Studio 5.4.139) can be found on GitHub. I’ve never used GitHub before, so I apologize in advance if I’ve set anything up wrong. In addition to the project files on GitHub, you can find ready-to-use options below:

  • x64 EXE (Just a simple executable you can run from anywhere on your system.)
  • x86 EXE (Just a simple executable you can run from anywhere on your system.)
  • x64 MSI Installer (A basic MSI installer that will install Toolbox to C:\Program Files and create a start menu shortcut.)
  • x86 MSI Installer (A basic MSI installer that will install Toolbox to C:\Program Files and create a start menu shortcut.)
  • PS1 Export (A PS1 file you can run, view, edit.)

I apologize in advance for the lack of documentation around this tool. If there’s a lot of interest in this tool I’ll certainly circle back around and develop more documentation; for now, the below will have to do. Hopefully you find the Toolbox app as useful as my co-workers and I have. If you have any suggestions or feedback, feel free to leave them.

Toolbox History
As for the history of the Toolbox app, it initially started out as a very simple static list of other PowerShell forms-based apps I’ve created over the years. Over time I configured all my PS forms apps to use a central repository containing an INI file with variable values and a PS module with shared functions. Shortly after that I added the ability for the PS forms apps to check their own version, compare it to a source version in the central repository, and auto-update if necessary. With the source versions of all my PS apps in the repository, the Toolbox app then gained the ability to auto-install PS forms apps from the repository.

After polishing up the interface and functionality and adding many more features (like using a ListView with icons instead of a listbox), I soon had the majority of my co-workers (in the IT Infrastructure/Support Desk areas) at my workplace using the Toolbox as a means of accessing the PowerShell forms apps I had built. To make the Toolbox more useful I added the ability to add custom apps in addition to my custom-built PowerShell forms apps. This allowed my co-workers to add custom entries for things like Command Prompt, ADUC, GPMC, the SCCM Console, etc.

Mix that with launching the Toolbox as an administrative account (at least in environments where you log in with a standard account and then elevate with an admin account for administrative tasks) and you have a scenario where you launch the Toolbox as your administrative account in the morning and can then launch your other items from it without having to elevate every time.

Toolbox Public Release
For some time now I’ve debated publicly releasing some of the PowerShell forms apps I’ve made. Unfortunately, most of the time they’re fairly customized to my workplace, or the app requires more than just running a single script/executable to have a functional solution. The Toolbox app, though, was something I felt I could easily enough generalize for a public release, and in a Reddit comment thread at least two people were interested in seeing it, so here it is.

For this initial public release of the Toolbox app I copied the latest version of my internal Toolbox app and went through removing any reference to my workplace and any features that were really geared towards my workplace. Because of this, there might be some things in there that I left that don’t actually do anything with the form, or some things that look out of place or cryptic because they interface with a shared function that’s been modified for this public release.

What it is
The Toolbox in its public release form is a basic window that you’re able to add shortcuts to. The shortcuts can point to any shortcut or executable on your system. Ideally you log into your workstation as a standard account with no administrative permissions, and then you launch the Toolbox as a separate admin account. As you then launch the items from the Toolbox, they launch as your admin account.

To add custom apps (shortcuts) to the Toolbox, either right click anywhere in the white listview area and click ‘New Custom Tool’ or go to File -> New Custom Tool.

To edit an existing custom app, right click the app shortcut in the Toolbox and click Edit.

To remove an existing custom app, right click the app shortcut in the Toolbox and click Remove.

Behind the scenes
Behind the scenes, the Toolbox works by recording settings to the registry. The default key the settings are stored under is HKCU:\Software\intrntpirate\Toolbox. Under the Toolbox key there’s a key called ‘CustomAdd’, which in turn contains a key for each custom item added to the Toolbox. Each item’s key holds a handful of values that define the custom item’s name, path to execute, any arguments, and icon information.

When you launch an item from the Toolbox, it’s really just running a simple Start-Process command using the path to execute from above and any arguments defined. Pretty basic…
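For illustration, a minimal sketch of that pattern is below. The value names (Name, Path, Arguments) under each item’s key are assumptions for the sake of the example; the actual code on GitHub is authoritative.

```powershell
# Assumed layout: one subkey per custom item under ...\CustomAdd,
# each holding values such as Name, Path, and Arguments (names are assumptions).
$base = 'HKCU:\Software\intrntpirate\Toolbox\CustomAdd'
$items = Get-ChildItem $base | ForEach-Object {
    $props = Get-ItemProperty $_.PSPath
    [PSCustomObject]@{
        Name      = $props.Name
        Path      = $props.Path
        Arguments = $props.Arguments
    }
}

# Launching an item is just Start-Process with the stored path and any arguments
$item = $items | Select-Object -First 1
if ($item.Arguments) {
    Start-Process -FilePath $item.Path -ArgumentList $item.Arguments
}
else {
    Start-Process -FilePath $item.Path
}
```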

The icons displayed in the Toolbox are pulled from whatever item is in the path to execute. If you’re pointing to a shortcut, the shortcut’s icon will be used. If you’re pointing to an executable, the executable’s icon will be used. Alternatively, you can define a custom icon if you’d like.
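If you’re curious how icon extraction can be done in PowerShell, one common approach (not necessarily the exact code Toolbox uses internally) is the ExtractAssociatedIcon method from System.Drawing; the notepad.exe path here is just an example:

```powershell
Add-Type -AssemblyName System.Drawing
Add-Type -AssemblyName System.Windows.Forms

# Extract the icon associated with a file and add it to an ImageList
# that a ListView can then reference by key or index.
$icon = [System.Drawing.Icon]::ExtractAssociatedIcon("$env:WINDIR\System32\notepad.exe")
$imageList = New-Object System.Windows.Forms.ImageList
$imageList.Images.Add('notepad', $icon)
```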

Logging is set to basic logging by default, with the log file going to C:\Temp\intrntpirate\Logs. You can enable debug mode under the Help menu which will result in verbose logging. The log file is in a CMTrace format.
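For reference, a CMTrace-compatible log line can be written like the sketch below. The function name and log file name are made up for the example; only the line format (which the CMTrace viewer parses) matters.

```powershell
# Write a single CMTrace-formatted line to a log file.
# Type: 1 = informational, 2 = warning, 3 = error.
function Write-CMTraceLog {
    param (
        [string]$Message,
        [string]$LogFile = 'C:\Temp\intrntpirate\Logs\Toolbox.log',
        [int]$Type = 1
    )
    $time = Get-Date -Format 'HH:mm:ss.fff+000'
    $date = Get-Date -Format 'MM-dd-yyyy'
    $line = "<![LOG[$Message]LOG]!><time=`"$time`" date=`"$date`" component=`"Toolbox`" context=`"`" type=`"$Type`" thread=`"$PID`" file=`"`">"
    Add-Content -Path $LogFile -Value $line
}

Write-CMTraceLog -Message 'Toolbox started.'
```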

Below are some screenshots for those that would prefer to see the Toolbox before attempting to run it.

Main window when launched:



Window for adding a custom item:




Main window with custom item:

Building a responsive TreeView that displays your AD Domain

Oftentimes you might find yourself wanting to build a TreeView in your form that represents your AD domain structure. The following post will show you how to do that. (Note that I pieced together this solution with the assistance of many Google searches and other folks’ code samples; however, this was done over time and I don’t have any sources available.)

First you’ll want to set up your form with a TreeView object. You’ll then need to devise a way to initially populate the TreeView. I personally have a function for initially building the TreeView, which you can then call with a button click or the form loading. (In addition to having the TreeView built when the form loads, you’ll also need to create a script-scoped variable called “DNsLoaded” that’s an array object. You’ll see why later on.) The function that builds the TreeView contains the following:


if ($script:treeNodes)
{
	$treeView.Nodes.Clear()
}
$script:treeNodes = New-Object System.Windows.Forms.TreeNode
$Script:ADDomain = Get-ADDomain
$script:treeNodes.Text = $ADDomain.DNSRoot
$script:treeNodes.Name = $ADDomain.DNSRoot
$script:treeNodes.Tag = $ADDomain.DistinguishedName
$script:treeNodes.StateImageIndex = 0
$script:treeNodes.SelectedImageIndex = 1
$treeView.Nodes.Add($script:treeNodes) | Out-Null
Get-NextLevel -selectedNode $script:treeNodes -dn $ADDomain.DistinguishedName
$script:treeNodes.Expand()

For a bit of explaining of what’s going on…

  • Lines 1-4 we look to see if the $treeNodes variable already exists. If it does, it’s assumed the TreeView object is already populated and needs to be cleared.
  • Line 5 we create a base TreeNode object.
  • Line 6 we run the Get-ADDomain cmdlet.
  • Line 7-9 we populate the Text, Name, and Tag values of the base TreeNode object. The Text and Name values are populated with the DNS root (domain.local), while the Tag value is populated with the DN (DC=domain,DC=local).
  • Lines 10 and 11 are only necessary if you have images that you want to use within the TreeView. The images are placed next to each node, with each node in this example being a container or OU.
  • Line 12 we add the TreeNode object to the TreeView.
  • Line 13 we run a function that we’ll cover next.
  • Line 14 causes the base node to expand in the TreeView.

The next function you’ll want to create is the one referenced above in line 13. This function is called anytime you want to load the next layer of objects in the TreeView.


function Get-NextLevel
{
	param (
		$selectedNode,
		$dn
	)
	$form.Cursor = 'WaitCursor'
	ForEach ($OU in (Get-ADObject -Filter 'ObjectClass -eq "organizationalUnit" -or ObjectClass -eq "container"' -SearchScope OneLevel -SearchBase $dn))
	{
		Add-Node -selectedNode $selectedNode -dn $OU.DistinguishedName -name $OU.Name -preload $true
	}
	$form.Cursor = 'Default'
}

To explain the Get-NextLevel function…

  • Lines 3-6 we collect two parameters, the node object that we’ll be adding new nodes to, and the distinguished name of the OU that’s currently selected.
  • Line 7 sets the cursor to the busy cursor while the function processes.
  • Line 8 is where we run Get-ADObject, specifically searching for OUs and containers, with the search scope set to one level (so that we don’t get objects more than one level deep) and the search base set to the distinguished name that was passed as a parameter to the function. For each object returned, we run line 10.
  • Line 10 we run the function Add-Node. This function will be covered next, however it simply adds a new node object to the node object received as a parameter. In addition, we pass the parameter ‘preload’ as true. This will be covered next.

The last function necessary to create is Add-Node.


function Add-Node
{
	param (
		$selectedNode,
		$dn,
		$name,
		$preload
	)
	If (!($Script:DNsLoaded -contains $dn))
	{
		$newNode = New-Object System.Windows.Forms.TreeNode
		$newNode.Name = $dn
		$newNode.Text = $name
		$newNode.StateImageIndex = 0
		$newNode.SelectedImageIndex = 1
		If ($preload -eq $true)
		{
			ForEach ($OU in (Get-ADObject -Filter 'ObjectClass -eq "organizationalUnit" -or ObjectClass -eq "container"' -SearchScope OneLevel -SearchBase $dn))
			{
				Add-Node -selectedNode $newNode -dn $OU.DistinguishedName -name $OU.Name -preload $false
			}
			$Script:DNsLoaded += $dn
		}
		$selectedNode.Nodes.Add($newNode) | Out-Null
		return $newNode
	}
}

Now to explain the Add-Node function…

  • Lines 3-8 we collect 4 parameters. SelectedNode (the node in the treeview that we’re adding a node to), DN (the Distinguished Name of the OU we’re creating a node for), Name (the Name of the OU we’re creating a node for), and PreLoad (More details shortly…).
  • Line 9 we check to see if the DNsLoaded array contains the DN passed to the function. More details on why this is done later; however, if the DN is not present in the array, lines 11 through 25 are processed.
  • Lines 11-15 we create a new node object and set the name (to the DN), text (to the name), and image states.
  • Lines 16-23 are where the PreLoad parameter comes into play. The PreLoad parameter is used to populate the new node being created with the next level of OUs. This is done so that in the TreeView you’ll see a plus indicator next to the new node created on line 11 if the OU passed on line 5 has sub-OUs. On line 20 you’ll see that the Add-Node function is called again, except this time with PreLoad set to false. This is so that we don’t try to load all the OUs in our domain at once, which is what makes this a responsive solution to populating the TreeView with a domain structure.
  • Line 24 we add the new node to the selected node that was passed to the Add-Node function.
  • Line 25 we return the NewNode object; however, this isn’t strictly necessary.

The last thing that needs to be done before this will work properly is to create a BeforeExpand event for the TreeView. In the BeforeExpand event, put the following:


$treeview.SelectedNode = $_.Node
foreach ($node in $treeview.SelectedNode.Nodes)
{
	Get-NextLevel -selectedNode $node -dn $node.Name
}

  • Line 1 we’re ensuring that the node being expanded gets set as the SelectedNode. Then, because we’ve already pre-populated the sub-nodes (the OUs within the OU being expanded), we do a ForEach on each sub-node and run the Get-NextLevel function.

Throw this all together and you’ll have yourself a very responsive TreeView of your AD Domain.


Providing a filter for your data results in a PowerShell form

I’ve often found myself wanting to provide filtering functionality in my forms for DataGridViews that contain a lot of results. The following is the approach I’ve taken.

  1. Have a datagridview that contains one or more columns that you want to be able to filter text in.
  2. Create a timer in your form with an interval of around 500ms.  Create a ‘Tick’ event for the timer with the following:


    $timer.Stop()  # stop the timer so the filter only runs once per pause
    $datagridview.CurrentCell = $null
    If ($textbox.Text.Length -eq 0) {
    	foreach ($row in ($datagridview.Rows)) {
    		$datagridview.Rows[$row.Index].Visible = $true
    	}
    }
    Else {
    	foreach ($row in ($datagridview.Rows)) {
    		If ($row.Cells[$columnIndex].Value -like "$($textbox.Text)*") {  # column index or name
    			$datagridview.Rows[$row.Index].Visible = $true
    		}
    		Else {
    			$datagridview.Rows[$row.Index].Visible = $false
    		}
    	}
    }
  3. Create a textbox within the form, then add a ‘TextChanged’ event with the following, which simply restarts the timer each time the text changes:

    $timer.Stop()
    $timer.Start()

With the above in place if someone starts typing into the textbox and pauses for 500ms, the timer will execute and filter the datagridview for text that’s in the textbox. In the above example I use ‘-like’ when evaluating the cell content so that the use of a wildcard character is controlled by the textbox.

In the event that your datagridview (or data source in general) contains a ton of records, you could improve the experience by moving the actual filtering process to a Job, and then changing the cursor of the form to ‘WaitCursor’ until the job finishes.

Working with Outlook using PowerShell

There was recently a request by an end user to have email items in a particular folder automatically deleted after they were older than X number of days. Right off I knew this could be done with the Outlook.Application com object, however I didn’t know exactly how. After a bit of digging and messing around I found that it could be done, and fairly easily at that.

The first step is establishing a variable and connecting it to the Outlook.Application com object like so.

$outlook = New-Object -ComObject "Outlook.Application"

You then need to establish a variable for the MAPI namespace.

$mapi = $outlook.GetNamespace("MAPI")

At this point you’re essentially connected to the mailbox for Outlook. You can navigate through the objects in the mailbox by using GetDefaultFolder(), putting the appropriate number in the parentheses. I’ve mapped out the default objects, listed below. Depending on your situation you might have more.

3 – Deleted Items
4 – Outbox
5 – Sent Items
6 – Inbox
9 – Calendar
10 – Contacts
11 – Journal
12 – Notes
13 – Tasks
16 – Drafts
19 – Conflicts
20 – Sync Issues
21 – Local Failures
23 – Junk E-Mail
25 – RSS Feeds
28 – To-Do List
30 – Suggested Contacts

To connect to the Inbox folder, you would do:

$inbox = $mapi.GetDefaultFolder(6)

Now, connecting to a sub-folder doesn’t appear to be as straightforward as I thought it would be. So far the only way I’ve been able to connect to a sub-folder is by doing the following.

$subfolder = $inbox.Folders | Where-Object {$_.Name -eq "SubFolderName"}

You can then work with the sub-folder using the $subfolder variable. If you want to list all the items in the sub-folder, you can do:

$subfolder.Items

If you’re only interested in items older than 1 day, you can do:

$date = (Get-Date).AddDays(-1)
$subfolder.items | Where-Object {$_.SentOn -lt $date}

Back to what I was originally trying to figure out, which is deleting items in a folder that are X number of days old. You can delete items like so:

$date = (Get-Date).AddDays(-1)
$subfolder.items | Where-Object {$_.SentOn -lt $date} | ForEach-Object {$_.Delete()}
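Putting the pieces together, the whole task can be sketched as a single script. Note that “SubFolderName” and the 30-day cutoff here are just placeholders:

```powershell
# Connect to Outlook and its MAPI namespace
$outlook = New-Object -ComObject "Outlook.Application"
$mapi = $outlook.GetNamespace("MAPI")

# 6 = Inbox; find the target sub-folder by name (placeholder name)
$inbox = $mapi.GetDefaultFolder(6)
$subfolder = $inbox.Folders | Where-Object { $_.Name -eq "SubFolderName" }

# Delete anything sent before the cutoff (placeholder: 30 days)
$date = (Get-Date).AddDays(-30)
$subfolder.Items | Where-Object { $_.SentOn -lt $date } | ForEach-Object { $_.Delete() }
```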

As I find more things worth mentioning that you can do with the Outlook com object I’ll be sure to add them.

Working with ACLs in PowerShell

I recently had the need to make some permission changes to a folder and decided to take the time to figure out how to do it with PowerShell without using Cacls. The sources I used will be at the bottom of this post.

The first thing you need is a folder. For this we’ll use a folder called “Test” on C:, so “C:\Test”. You then need to use Get-Acl to acquire the security descriptor for the folder.

$acl = Get-Acl "C:\Test"

At this point you can call the $acl variable and you’ll get back a table that specifies the Path, Owner, and Access for the folder “Test”. To get a more friendly view of the Access column, call $acl.Access. You’ll then get a nice breakdown of each rule.

Now that we have information on the folder we can start making changes. The first thing I’ll cover is working with configuring inheritable permissions. For this we’ll use the ‘ObjectSecurity.SetAccessRuleProtection’ method. There are two parameters to this method, ‘isProtected’ and ‘preserveInheritance’. ‘isProtected’ is what determines if a folder inherits permissions. A value of true will break inheritance, while a value of false will allow inheritance. ‘preserveInheritance’ is only used if ‘isProtected’ equals true. When ‘isProtected’ equals false, the parameter ‘preserveInheritance’ still needs to be set, however it will be ignored. If you’re breaking inheritance on the folder, this parameter determines if the existing permissions are retained or removed. A value of true will preserve the existing access rules, while a value of false will remove existing access rules. An example of using this method to allow inheritance is below.

$acl.SetAccessRuleProtection($false, $false)

Here’s an example of using the method to remove inheritance but copy existing access rules.

$acl.SetAccessRuleProtection($true, $true)

And here’s an example of using the method to remove inheritance and also remove existing access rules.

$acl.SetAccessRuleProtection($true, $false)

Now onto configuring access rules. Let’s assume that for this folder we’ve gone with leaving inheritance in place. By default we have the following existing access rules.

FileSystemRights  : ReadAndExecute, Synchronize
AccessControlType : Allow
IdentityReference : BUILTIN\Users
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : NT AUTHORITY\SYSTEM
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : BUILTIN\Administrators
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : ReadAndExecute, Synchronize
AccessControlType : Allow
IdentityReference : BUILTIN\Users
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

Let’s say we want to change the access rules so that the built-in Users group has modify access to the folder. The first thing we need to do is create a variable that will store the new access rule. For this we’re going to use the ‘FileSystemAccessRule’ class. This class has 7 properties; however, we’re only going to need 5 of them to give the Users group modify access. The properties we’ll use are:

  • IdentityReference: The identity that the access rule will apply to. In this case, it will be “Users”.
  • FileSystemRights: The access being given. In this case it will be “Modify”.
  • InheritanceFlags: Determines how this rule is inherited by child objects (folders and files). In this case it will be “ContainerInherit, ObjectInherit” (Container meaning folder, Object meaning file).
  • PropagationFlags: Determines how inheritance of this rule is propagated to child objects. In this case it will be “None”.
  • AccessControlType: Specifies whether an access rule is used to allow or deny access. In this case it will be “Allow”.

Putting this all together you’ll get this:

$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Users", "Modify", "ContainerInherit, ObjectInherit", "None", "Allow")

You then need to add the rule to the $acl variable we already created by doing this:

$acl.AddAccessRule($rule)

That’s all there is to creating new access rules and applying them to the $acl variable; however, that’s not all you have to do to apply the access rule and the inheritance settings to the folder. I’ll get to the step that applies the changes right after we go over making changes to the ownership.

When working with ownership you have to use the ‘ObjectSecurity.SetOwner’ method.  To set the owner to the local Administrators group, we would run:

$acl.SetOwner([System.Security.Principal.NTAccount] "Administrators")

In the end, once you’ve finished with making your modifications, you need to apply your changes to the folder. To do this we use the ‘Set-Acl’ cmdlet like so:

Set-Acl "C:\Test" $acl

After you do that, you can relist your access rules and you’ll see that the Users group now has modify access in addition to the existing inherited rules. Just to point it out, you’ll notice that the access rule for granting the modify access will have the ‘IsInherited’ flag set to False.

FileSystemRights  : Modify, Synchronize
AccessControlType : Allow
IdentityReference : BUILTIN\Users
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : NT AUTHORITY\SYSTEM
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : BUILTIN\Administrators
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : ReadAndExecute, Synchronize
AccessControlType : Allow
IdentityReference : BUILTIN\Users
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

And that’s all there is to it.

My sources for figuring this all out are below:

Blog post from Jose Barreto on TechNet that pointed me in the right direction (while this article does describe how to do the above as well, I feel my post is much more clear and to the point of how to do this)

MSDN article on the FileSystemAccessRule class

MSDN article on the SetAccessRuleProtection Method

Working with data tables in PowerShell

I’m creating this post for my own reference for the most part, however hopefully it will assist someone else as well.

The following can be run in PowerShell to create a basic data table:

$Table = New-Object System.Data.DataTable

For each column you want in the table, you’ll want to run something like the following:

$column = New-Object System.Data.DataColumn 'NameofColumn', ([COLUMNTYPE])

Or to simplify things:

$Table.Columns.Add((New-Object System.Data.DataColumn 'NameofColumn', ([COLUMNTYPE])))

In the above, ‘NameofColumn’ is whatever you want the column title to be. It can contain spaces and symbols. COLUMNTYPE should be replaced with the type of column you want it to be. I most commonly use ‘string’ and ‘datetime’. You can reference the System.Data.DataColumn documentation for additional column types.

After you’ve created your table with the columns you want, you’ll need to start populating the table. The code below will show you how to add a row to the table, and assumes there are 3 columns, Name, Department, and Email.

$row = $Table.NewRow()
$row.'Name' = "John"
$row.'Department' = "IT"
$row.'Email' = ""
$Table.Rows.Add($row)

If you find yourself needing to edit a row, you have two ways IMO that you can do this. The first way is to explicitly call a particular row. You would do this by calling:

$Table.Rows[0]

In the above, the [0] references the first row. Using [1] would reference the second row, and so on. If you were to run:

$Table.Rows[0].Name

You would get the value for the Name column of row 1 ([0]) returned. You can do the following to change the value.

$Table.Rows[0].Name = “NewName”

To delete the row you would run:

$Table.Rows[0].Delete()

The second way you can change a value is a bit more useful. You can essentially search the table for the row you want, and then change the value of a column.

To do this you use Where-Object. The following example will change the department column for the row where the name is equal to “Joe”.

($Table.Rows | Where-Object {$_.Name -eq "Joe"}).Department = "NewDepartment"

Finally, if you find yourself needing to delete a row you can do so using the same two methods we used for modifying a row/column. You can delete a row by running:

$Table.Rows[0].Delete()

and also by doing:

($Table.Rows | Where-Object {$_.Name -eq "Joe"}).Delete()
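Pulling the pieces above together into one runnable sketch (the column names and sample values are arbitrary):

```powershell
# Build a table with two string columns
$Table = New-Object System.Data.DataTable
$Table.Columns.Add((New-Object System.Data.DataColumn 'Name', ([string])))
$Table.Columns.Add((New-Object System.Data.DataColumn 'Department', ([string])))

# Add a row
$row = $Table.NewRow()
$row.'Name' = 'Joe'
$row.'Department' = 'IT'
$Table.Rows.Add($row)

# Search-and-update, then search-and-delete, as described above
($Table.Rows | Where-Object { $_.Name -eq 'Joe' }).Department = 'HR'
($Table.Rows | Where-Object { $_.Name -eq 'Joe' }).Delete()
$Table.AcceptChanges()   # commit the pending deletion
```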

Managing NPS using PowerShell and netsh

The following post will detail how you can use a combination of PowerShell and netsh to manage one or more Microsoft NPS servers (Server 2008 R2). In the example I’ll detail how you can update the shared secret on all the RADIUS clients across all your NPS servers.

The first thing you need to do in such a PS script is to have a way of identifying your NPS servers. If you’re running a small environment where all the NPS servers are going to need to be setup the same way, then this is easy and I’ll explain what I’ve done below. If you’re running a larger more complex environment where you have groups of NPS servers setup differently, then you’re likely going to want to create some config files for your script to pull server names from.

Assuming you have a small environment, make sure you have your NPS servers registered. To register an NPS server, launch the ‘Network Policy Server’ console, right click the top node ‘NPS (Local)’ and click ‘Register server in Active Directory’. This will cause the server’s computer account in Active Directory to be added to the security group ‘RAS and IAS Servers’. You can then have your PS script pull a list of NPS servers from this group using a line like:

$NPSServers = Get-ADGroupMember “RAS and IAS Servers”

The next step is to build an array of all the RADIUS clients you have on your NPS servers. Ideally you would already have the same RADIUS clients on each NPS server; however, if you happen to have a RADIUS client set up on one NPS server and not on another, it’s not a big deal as far as having the script update the shared secret goes.

To build the array of the RADIUS clients, the first thing you’ll want to do is create the array object.

$ClientArray = @()

You then will want to do a ForEach on your list of NPS servers and collect an export of the RADIUS clients using netsh. The below can be used to do this.

$NPSServers | ForEach-Object {
	$Server = $_.Name
	$Result = Invoke-Command -ComputerName $Server -ScriptBlock {
		netsh nps show client
	}
}

This will populate the variable Result with an export of all the clients. You’ll then have to parse through the result variable and pull out the client information. The below shows a way (maybe not the best way) to pull out the name of each client on the NPS server, along with the other information on the client.

$currentline = 0
$Result | ForEach-Object {
	$currenttext = $_
	If ($currenttext -like "*Client configuration*") {
		# The client's properties sit at fixed offsets below the
		# "Client configuration" header in the netsh output; the
		# '= value' split assumes 'Name = value' style lines.
		$nameline = ($currentline + 2)
		$Name = ($Result[$nameline] -split '=', 2)[1].Trim()
		$IPline = ($currentline + 3)
		$IP = ($Result[$IPline] -split '=', 2)[1].Trim()
		$stateline = ($currentline + 4)
		$State = ($Result[$stateline] -split '=', 2)[1].Trim()
		$secretline = ($currentline + 5)
		$Secret = ($Result[$secretline] -split '=', 2)[1].Trim()
		$String = $Name
		If ($Global:ClientArray -contains $String) {
			# Already collected from another NPS server; skip the duplicate
		}
		Else {
			$Global:ClientArray += $String
		}
	}
	$currentline++
}

Just for the record, I’m well aware that the above code could be improved upon. I just copied and pasted that from the first version of a utility I wrote and have never gone back to review and clean it up. It works, so… yeah.

After the above has been run against each NPS server, you’ll have an array that contains a list of RADIUS clients without any duplicate names. You can then run the following to go through each client in the array on each server and update the shared secret.

$ServerLoop = 1
$ServerArray | ForEach-Object {
	$CurrentServer = $_
	Log ("Currently processing NPS server $CurrentServer.")
	$ClientLoop = 1
	$Global:ClientArray | ForEach-Object {
		Status ("Client $ClientLoop of $ClientCount. Server $ServerLoop of $ServerCount.")
		$Name = $_
		Log ("Updating secret on RADIUS client $Name on $CurrentServer.")
		$Result = Invoke-Command -ComputerName $CurrentServer -ScriptBlock {
			param ($Name, $Secret)
			netsh nps set client name = $Name sharedsecret = $Secret
		} -ArgumentList $Name, $NewSecret
		If ($Result -like "*Ok.*") {
			Log ("The client $Name successfully received the new secret on $CurrentServer.")
		}
		Else {
			Log ("ERROR: A problem occurred while updating the secret for $Name on $CurrentServer.")
		}
		$ClientLoop++
	}
	$ServerLoop++
}

As you can see, the main thing to note is the use of netsh to specify the name of a RADIUS client and then specify the new shared secret. Below are some other netsh commands you can use to manage your RADIUS clients using PS scripting like what you saw above.

Add a client

netsh nps add client name = $ClientName address = $ClientIP state = enable sharedsecret = $secret

Edit a client name

netsh nps rename client name = $Name newname = $NewName

Edit other details of a client

netsh nps set client name = $name address = $IP state = $status sharedsecret = $secret

Delete a client

netsh nps delete client name = $Name


As an alternative to having a bunch of PowerShell scripts that do this stuff for you, you can create a PS forms based app that provides central management of your NPS servers. Below are some screenshots of what I’ve created for the network team at my current employer to use for day-to-day tasks with the NPS servers we have.

The screenshot below is of the main page of the app. From here you can add/edit/delete clients, as well as view the RADIUS logs combined from all the NPS servers, update the shared secret across the board, and view information on a specific NPS server.


The screenshot below is of the interface to create a new RADIUS client. When the create button is pressed, PS goes out and creates the new client on all the NPS servers.


The screenshot below is of the interface to edit a RADIUS client. Like when creating a new client, clicking the update button causes PS to go out across all NPS servers and update information for the client on each server.


The screenshot below is of the interface for syncing an NPS server with another. This could be useful if a single NPS server seems to be out of sync from the others, or if you stand up a new NPS server. (I’m well aware of the ability to export your NPS config and then import it on new NPS servers.)


The screenshot below is the first of several interfaces for viewing the NPS logs. The screenshot below just shows a list of all the dates there are logs for on all the NPS servers. Upon clicking a date, the screen would update with a list of all the systems that have connected to the NPS servers. Clicking a system would result in a cleaned up view of the NPS logs for that system. Useful when troubleshooting connectivity issues.


Profile Picture Manager

Last week I threw together a quick form in PrimalForms that allows users to do the following:

  • View their current AD user profile picture
  • Upload a new picture from a saved JPEG image
  • Use an attached webcam to take a new picture

It can be downloaded HERE.

The code I used for capturing the webcam content was found somewhere online. I didn’t initially plan to post the finished product online, so I didn’t think to save the source of the code. If I happen to run across the source I’ll post it. UPDATED: SOURCE

The icons used in the form were just ones I found on a Google image search. I most certainly didn’t make them, but seeing as how I’m not selling this or making any money off the hits to this site, I don’t see how anyone should have a problem with it.

If you find any problems with the form, or have suggestions for improvements, post a comment.



Convert string number to numerical number

Assume that you have a number contained in a variable as a string.

$var = "2"

Now assume you want to use that variable to specify an item in an array, like so:

$array[$var]

In PowerShell v3, you’ll get the expected outcome. However, when using the above in PrimalForms (which is based on PowerShell v2), it won’t work. The string-based number has to be converted to a numerical number. You can do this by running the following line:

$var = [int]$var

You’ll now have a numerical number stored in your variable.

Query against 389 Directory Services (Open Source LDAP) with PowerShell

Recently I had the need to query an LDAP server running 389 Directory Services with PowerShell. I found my way to mikemstech’s site, which covers the basics of connecting to a non-Microsoft LDAP directory using PowerShell; however, in my case the steps provided didn’t work. Below are the steps I took in order to successfully query information on user accounts on the 389 Directory Services server in my environment.

I start off in my script with the following lines of code.

$c = New-Object System.DirectoryServices.Protocols.LdapConnection "LDAP-SERVER-NAME-HERE"
$c.SessionOptions.SecureSocketLayer = $false
$c.AuthType = [System.DirectoryServices.Protocols.AuthType]::Anonymous
$credentials = New-Object "System.Net.NetworkCredential"
$basedn = ""
$scope = [System.DirectoryServices.Protocols.SearchScope]::Subtree
$attrlist = "*"

The differences between what I did and what mikemstech did were that I disabled SSL and set the AuthType to Anonymous. My 389 DS refused connections if I had those two items set any other way. It’s hard to tell if this was due to it being a 389 DS server, or if it was a unique customization of the environment here.

The next thing I did in my script was create a function for querying user information, however you wouldn’t have to do this.

function Unix-LDAP {
	Log ("Beginning Unix-LDAP")
	Clear-Variable LDAPuidNumber, filter, r, re
	Clear-Variable TermEmp, ADsamAccountName, FoundUnix -Scope Global
	$LDAPuidNumber = $args[0]
	Log ("The UID number being queried in Unix LDAP is $LDAPuidNumber.")
	Log ("The ADsamAccountName variable is currently $Global:ADsamAccountName.")
	$filter = "(uidNumber=$LDAPuidNumber)"
	$r = New-Object System.DirectoryServices.Protocols.SearchRequest -ArgumentList $basedn, $filter, $scope, $attrlist
	$Global:TermEmp = 0
	$Global:FoundUnix = 0
	$re = $c.SendRequest($r)
	$Global:ADsamAccountName = $re.Entries.Attributes.uid[0]
	If ($Error[0] -like "Cannot index into a null array.*") {
		Log ("ERROR: The user no longer has a Unix ID. Likely a terminated employee.")
		$Global:TermEmp = 1
	}
	Else {
		Log ("The Unix-LDAP query returned a value of $Global:ADsamAccountName.")
		$Global:FoundUnix = 1
	}
	Log ("Finished Unix-LDAP")
}

So the important lines from above are lines 8, 9, 12, and 13. In my case I was querying users based upon their uidNumber; however, you could replace uidNumber on line 8 with any other property you wanted to query on.

On line 13 you’ll see that at the end of $re.Entries.Attributes.uid I added [0]. This was something not mentioned on the site I linked to above that I ended up figuring out. Once again, I’m not sure if this is just a difference in how 389 DS operates, or a special configuration in my environment.

You’ll also see that I work with the $Error variable. I do this so that I can determine if the query succeeded or not.

Hopefully the above will save someone else out there a bunch of time.