Yet another Using statement

If I have seen further it is by standing on ye sholders of Giants.
— Isaac Newton

I haven’t been blogging for a while and to get back on track I’ve decided to start with something simple.

Have you ever found a code sample online which shows you how to do something useful? In my case, it usually ends up being improved in some way over the original concept. I call this “standing on the shoulders of giants”, hence the quote at the beginning.

So with this post, I’m opening a new series of notes where I’ll be writing about something that I’ve found online and (hopefully) ended up improving on.

What exactly is a Using?

Using is a C# statement that helps you ensure that an object is disposed as soon as it goes out of scope, without requiring explicit code to make that happen. Please note that there is also a using directive, which has had a PowerShell counterpart since v5.0. It allows you to indicate which namespaces are used in the session. This is not the using we’re looking for.
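
PowerShell doesn’t have a built-in equivalent of that statement, so the usual community approach (and the starting point for this post) is a helper that wraps your code in try/finally and calls Dispose for you. Here is a minimal sketch of the idea; the Use-Object name and its parameters are just for illustration, not the exact implementation from the full post:

function Use-Object
{
    [CmdletBinding()]
    Param
    (
        # Disposable object to manage
        [Parameter(Mandatory = $true)]
        [System.IDisposable]$InputObject,

        # Code to run while the object is alive
        [Parameter(Mandatory = $true)]
        [scriptblock]$ScriptBlock
    )

    try
    {
        & $ScriptBlock
    }
    finally
    {
        # Dispose even if the script block throws
        if ($null -ne $InputObject)
        {
            $InputObject.Dispose()
        }
    }
}

You’d call it like Use-Object ($reader = New-Object System.IO.StreamReader('C:\Windows\win.ini')) { $reader.ReadLine() }, and the reader gets disposed no matter what happens inside the script block.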

Continue reading


Building PowerShell modules with Swagger Codegen

A word of warning: this was written some time ago and I didn’t have time to actually publish it till now. It’s probably rendered obsolete by the release of PSSwagger, but I decided to post it anyway.
APIs. APIs EVERYWHERE

Web APIs are everywhere. They provide a cross-platform interface for applications to communicate with each other, enabling dev/ops people to create highly automated, interconnected systems. But there is a catch: to use an API you need to write an API client in the language of your choice.

For PowerShell this means that you need to read the API spec, write code that sends correct POST/GET requests and transforms raw XML/JSON responses into .NET/PowerShell-friendly objects. And don’t forget tests!
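
Written by hand, every API operation ends up as its own little wrapper around Invoke-RestMethod, roughly like this sketch (the endpoint URL and fields are made up for illustration):

function Get-Widget
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory = $true)]
        [string]$Id
    )

    # One hand-written wrapper per API operation...
    $Response = Invoke-RestMethod -Method Get -Uri "https://api.example.com/v1/widgets/$Id" -Headers @{ Accept = 'application/json' }

    # ...plus manual shaping of the raw response into a PowerShell-friendly object
    [pscustomobject]@{
        Id   = $Response.id
        Name = $Response.name
    }
}

Multiply that by dozens of endpoints and the appeal of generating the client becomes obvious.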

As it happens, this problem was solved long ago for other languages by Swagger:

Swagger is the world’s largest framework of API developer tools for the OpenAPI Specification (OAS), enabling development across the entire API lifecycle, from design and documentation, to test and deployment.

Basically, Swagger allows you to design and document a web API and share it with the world using the OpenAPI Specification:

The OpenAPI Specification creates the RESTful contract for your API, detailing all of its resources and operations in a human and machine readable format for easy development, discovery, and integration.

Moreover, once you get your hands on someone’s API spec, you can automatically build a fully featured client for it in your programming language, using Swagger Codegen:

Build APIs quicker and improve consumption of your Swagger-defined APIs in every popular language with Swagger Codegen. Swagger Codegen can simplify your build process by generating server stubs and client SDKs from your Swagger specification, so your team can focus better on your API’s implementation and adoption.
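
In practice, generating a client boils down to a single command. A sketch, using the public Petstore example spec and the C# generator as stand-ins (the jar name, spec URL and output path are placeholders for whatever you actually have):

PS C:\Users\beatcracker> java -jar swagger-codegen-cli.jar generate -i https://petstore.swagger.io/v2/swagger.json -l csharp -o .\petstore-client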

Sounds good, huh?

Continue reading

Visualizing PowerShell pipeline

A picture is worth a thousand words.

Occasionally, I see people having issues while trying to understand how the PowerShell pipeline is executed. Most of them have no problems when the Begin/Process/End blocks are in a single function. And if in doubt, I can always point them to Don Jones’ The Advanced Function Lifecycle article. But when multiple cmdlets are chained into a single pipeline, things become a little less clear.

Consider this example.

function Use-Begin {
    Begin {
        Write-Host 'Begin'
    }
}

function Use-End {
    End {
        Write-Host 'End'
    }
}

Let’s try to pipe one function into another:

PS C:\Users\beatcracker> Use-Begin | Use-End

Begin
End

So far, so good, nothing unexpected. The Begin block of the Use-Begin function executes first, and the End block of the Use-End function executes last.

But what happens if we swap the functions in our pipeline?
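
You can try it yourself with the two functions defined above:

PS C:\Users\beatcracker> Use-End | Use-Begin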

Continue reading

Using Group Managed Service Accounts without Active Directory module

Hello and, again, welcome to the Aperture Science computer-aided enrichment center.
We hope your brief detention in the relaxation vault has been a pleasant one.
— GLaDOS

Managed Service Accounts (MSA) first appeared in Windows Server 2008 R2 and received a major overhaul (gMSA) in Windows Server 2012. Those accounts have automatically managed passwords and are tied to a specific computer (WS 2008 R2) or group of computers (WS 2012). They cannot be locked out and cannot perform interactive logons, which makes them ideal for running services. Under the hood, MSAs are user accounts that inherit from a parent object class of “Computer”, and the only supported way to manage them is PowerShell.

To do so, you have to use cmdlets from the Active Directory module. This is a two-step process: first, you create the gMSA in AD and then “install” this account on the target computer(s).

Here are the cmdlets used for the AD part of the process: New-ADServiceAccount, Get-ADServiceAccount, Set-ADServiceAccount and Remove-ADServiceAccount.

And here are the ones used to manage gMSA on the target computers: Install-ADServiceAccount, Test-ADServiceAccount and Uninstall-ADServiceAccount.
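
A minimal sketch of the whole flow, assuming the ActiveDirectory module is available where you run each step (the account and group names are placeholders):

# Step 1: create the gMSA in AD, allowing members of the 'WebServers' group to use it
New-ADServiceAccount -Name 'gmsa-web' -DNSHostName 'gmsa-web.contoso.com' -PrincipalsAllowedToRetrieveManagedPassword 'WebServers'

# Step 2: "install" the account on each target computer from that group
Install-ADServiceAccount -Identity 'gmsa-web'

# Verify that this computer can retrieve the managed password
Test-ADServiceAccount -Identity 'gmsa-web'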

When life gives you lemons, don’t make lemonade. Make life take the lemons back!
Get mad! I don’t want your damn lemons, what the hell am I supposed to do with these?
Demand to see life’s manager! Make life rue the day it thought it could give Cave Johnson lemons!
— Cave Johnson

While I don’t often create gMSAs in AD, I do need to be able to install them en masse on servers, preferably via a remote PowerShell session. And here comes the pain.

Continue reading

Try PowerShell on Linux in your browser

Try the latest release of PowerShell 6.0 on Ubuntu 16.04 on a free cloud server from Dply:

How-to

Dply

  1. Click on the button, log in with your GitHub account and start your server.
  2. When the server is up (~3 minutes), navigate to the server’s IP address in your browser.
  3. Log in with root as the username and the hostname you’ve set in the server configuration as the password.
  4. Type powershell, hit Enter and start hacking around (a quick sanity check is shown below)!
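
Once you’re in, a couple of lines confirm what you’re running (nothing Dply-specific here):

# Run these inside PowerShell after step 4
$PSVersionTable.PSVersion
$PSVersionTable.PSEdition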


Get/set XML Schema and Content Types for SharePoint list directly in the content database

There is a charm about the forbidden that makes it unspeakably desirable.
— Mark Twain

Why would you do it?

Sometimes, despite all the warnings, you need to modify the XML Schema and/or Content Types for a SharePoint list directly in the content database. This could be caused by moving stuff around, a failed upgrade or a removed SharePoint feature that resulted in broken lists.

In SharePoint 2007 and earlier this was fairly easy: you could just fire up SQL Management Studio, dig into the content database and fix it there. A list’s Fields, which are part of the list’s XML Schema, are stored in the tp_Fields column, and Content Types are stored in the tp_ContentTypes column of the AllLists table.

So, what’s changed?

Starting with SharePoint 2010, most of the columns which contained plain XML (content type definitions, views, etc.) or BLOBs are compressed in the database.

With luck and some googling around I’ve found that the compressed object format is documented in [MS-WSSFO3]: Windows SharePoint Services (WSS): File Operations Database Communications Version 3 Protocol. Those objects are called WSS Compressed Structures and consist of a simple header followed by a zlib-compressed string.
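
In practical terms that means: read the raw column bytes, skip the header, and inflate the rest with DeflateStream. Below is a rough sketch with a hypothetical Expand-WssCompressedStructure helper; the header length is left as a parameter because its exact layout is defined in [MS-WSSFO3], and the two extra bytes skipped are the standard zlib stream header that DeflateStream doesn’t understand:

function Expand-WssCompressedStructure
{
    [CmdletBinding()]
    Param
    (
        # Raw bytes of the tp_Fields/tp_ContentTypes column value
        [Parameter(Mandatory = $true)]
        [byte[]]$Bytes,

        # Size of the WSS Compressed Structure header, per [MS-WSSFO3]
        [Parameter(Mandatory = $true)]
        [int]$HeaderLength
    )

    # Skip the WSS header plus the 2-byte zlib header, leaving a raw deflate stream
    $Offset = $HeaderLength + 2

    $MemoryStream = New-Object System.IO.MemoryStream
    $MemoryStream.Write($Bytes, $Offset, $Bytes.Length - $Offset)
    $MemoryStream.Position = 0

    $DeflateStream = New-Object System.IO.Compression.DeflateStream($MemoryStream, [System.IO.Compression.CompressionMode]::Decompress)
    $Output = New-Object System.IO.MemoryStream

    try
    {
        # Return the decompressed bytes; decode them with the appropriate encoding to get the XML string
        $DeflateStream.CopyTo($Output)
        $Output.ToArray()
    }
    finally
    {
        $DeflateStream.Dispose()
        $Output.Dispose()
    }
}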

Continue reading

Migrating a SVN repo to Git, part deux: SubGit to the rescue

  1. Migrating a SVN repo to Git: a tale of hacking my way through
  2. ➤ Migrating a SVN repo to Git, part deux: SubGit to the rescue

To improve is to change; to be perfect is to change often.
— Winston Churchill

In my previous post about SVN→Git conversion I’ve described the steps to convert a nested SVN repo to Git using svn2git, svndumpfilterIN, SVN::DumpReloc and some manual editing of SVN dump files.

This process worked fine for smaller repos, but after a certain threshold I hit the wall: the final conversion with svn2git for one of the larger repos was taking 5 days and never quite finished, because the Windows version of Git kept crashing in the middle of the process. Those crashes were related to Cygwin’s implementation of fork, which requires some address space to be reserved for the Cygwin heap, and a 5-day-long run was exhausting those addresses.

After a couple of attempts to convert the repo (which took about 2 weeks!), I realized that I needed a more robust and preferably faster solution. And that’s when I finally found SubGit!

SubGit is a tool for a smooth, stress-free SVN to Git migration. Create writable Git mirror of a local or remote Subversion repository and use both Subversion and Git as long as you like. You may also do a fast one-time import from Subversion to Git or use SubGit within Atlassian Bitbucket Server.

SubGit is a commercial closed-source Java application. Fortunately, it’s free for one-time conversions and for mirroring repos with up to 10 Git and SVN users. It also has a time-trial version that will mirror a repo with any number of users for one month. If you’re daring enough, you can also use EAP or interim builds. Note that it seems that interim builds don’t have any time/user limits whatsoever.

With SubGit, I was able to convert the abovementioned SVN repo to Git overnight without any extra steps, using this simple command:

subgit import --svn-url http://server/svn/my/nested/repo --authors-file .\authors.txt .\repo.git

Continue reading

Writing stealth code in PowerShell

What happens in module, stays in module.

Most of my scripts use the Import-Component function to bulk-import dependencies (PS1 files with functions, modules, source code, .NET assemblies).

To import PS1 files with functions, they have to be dot-sourced, and that presented me with a challenge: if a PS1 file is dot-sourced inside a function, its contents will be available only in that function’s scope. To overcome this, I could scope each contained function, alias, and variable as global (nasty!) or call the Import-Component function itself using dot-sourcing (yes, you can dot-source more than just files).

For a while, dot-sourcing Import-Component seemed to work fine, until one day I realized that this effectively pollutes the caller’s scope with all of Import-Component’s internal variables. Consider this example:

function DotSource-Me
{
    $MyString = 'Internal variable'
}

$MyString = 'External variable'

# Calling function as usual
DotSource-Me
Write-Host "Function was called, 'MyString' contains: $MyString"

# Dot-sourcing function
. DotSource-Me
Write-Host "Function was dot-sourced, 'MyString' contains: $MyString"

If we run this script, the output will be:

Function was called, 'MyString' contains: External variable
Function was dot-sourced, 'MyString' contains: Internal variable

As you can see, when the DotSource-Me function is called as usual, its internal variable is restricted to the function’s scope and doesn’t affect the caller’s scope. But when it’s dot-sourced, the variable in the caller’s scope is overwritten.

Continue reading

Dynamic parameters, ValidateSet and Enums

Good intentions often get muddled with very complex execution. The last time the government tried to make taxes easier, it created a 1040 EZ form with a 52-page help booklet.
— Brad D. Smith

I suppose that many of you have heard about Dynamic Parameters, but thought of them as too complicated to implement in real-life scenarios. Just look at the amount of code you have to write to add one simple parameter with a dynamic ValidateSet argument.

Recently I had to write a fair number of functions which use enum values as parameters (Special Folders, Access Rights, etc.). Naturally, I’d like to have these parameters validated with ValidateSet and get tab-completion as a bonus. But this means hardcoding every enum member’s name in the ValidateSet argument. Today’s example is a function that returns special folder paths. It accepts one parameter, Name, validates its values against all known folder names and returns filesystem paths. Here is how it looks with a hardcoded ValidateSet:

function Get-SpecialFolderPath
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true, Position = 0)]
        [ValidateNotNullOrEmpty()]
        [ValidateSet(
            'Desktop', 'Programs', 'MyDocuments', 'Personal', 'Favorites', 'Startup', 'Recent', 'SendTo',
            'StartMenu', 'MyMusic', 'MyVideos', 'DesktopDirectory', 'MyComputer', 'NetworkShortcuts', 'Fonts',
            'Templates', 'CommonStartMenu', 'CommonPrograms', 'CommonStartup', 'CommonDesktopDirectory',
            'ApplicationData', 'PrinterShortcuts', 'LocalApplicationData', 'InternetCache', 'Cookies', 'History',
            'CommonApplicationData', 'Windows', 'System', 'ProgramFiles', 'MyPictures', 'UserProfile', 'SystemX86',
            'ProgramFilesX86', 'CommonProgramFiles', 'CommonProgramFilesX86', 'CommonTemplates', 'CommonDocuments',
            'CommonAdminTools', 'AdminTools', 'CommonMusic', 'CommonPictures', 'CommonVideos', 'Resources',
            'LocalizedResources', 'CommonOemLinks', 'CDBurning'
        )]
        [array]$Name
    )

    Process
    {
        $Name | ForEach-Object { [Environment]::GetFolderPath($_) }
    }
}

Not fancy, to say the least.


Sidenote: if you wonder whether I typed out this whole ValidateSet argument by hand, the answer is no. Here is the trick that I’ve used to get all of the enum’s member names enclosed in single quotes and comma-separated. Just copy and paste this snippet into the PowerShell console and get a formatted enum list in your clipboard:

PS C:\Users\beatcracker> "'$([Enum]::GetNames('System.Environment+SpecialFolder') -join "', '")'" | clip

As you see, the ValidateSet above is as bad as it gets: it’s large, it’s easy to make a typo, and it’s hardcoded. Whenever a new special folder is added to Windows, or one doesn’t exist in a previous version of the OS, this code will fail.
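
This is what dynamic parameters can fix: build the ValidateSet from the enum at runtime in a DynamicParam block, so the list of valid names always matches the OS you’re running on. A rough sketch of the idea (not the exact code from the rest of this post):

function Get-SpecialFolderPath
{
    [CmdletBinding()]
    Param()

    DynamicParam
    {
        # Enumerate the folder names available on this OS
        $Names = [Enum]::GetNames('System.Environment+SpecialFolder')

        $Attributes = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
        $ParameterAttribute = New-Object System.Management.Automation.ParameterAttribute
        $ParameterAttribute.Mandatory = $true
        $ParameterAttribute.Position = 0
        $Attributes.Add($ParameterAttribute)
        $Attributes.Add((New-Object System.Management.Automation.ValidateSetAttribute($Names)))

        # Expose a 'Name' parameter with the dynamically built ValidateSet
        $Parameter = New-Object System.Management.Automation.RuntimeDefinedParameter('Name', [string[]], $Attributes)
        $Dictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
        $Dictionary.Add('Name', $Parameter)
        $Dictionary
    }

    Process
    {
        $PSBoundParameters['Name'] | ForEach-Object { [Environment]::GetFolderPath($_) }
    }
}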

Continue reading

Parameter validation gotchas

I didn’t fail the test, I just found 100 ways to do it wrong.
— Benjamin Franklin

PowerShell’s parameter validation is a blessing. Validate parameters properly and you’ll never have to write code that deals with erroneous user input. But sometimes dealing with validation attributes requires a bit more knowledge than the built-in help can provide. Here is what I’ve learned so far and want to share with you.

  • You can have more than one Validation Attribute

This may seem trivial, but PowerShell’s help and various online tutorials do not mention this fact (they just imply it). You can have as many validation attributes as you like for your parameter. For example, this function requires the parameter Number to be even and to fall in the range from 1 to 256:

function Test-MultipleValidationAttributes
{
    [CmdLetBinding()]
    Param
    (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [ValidateScript({
                if($_ % 2)
                {
                    throw 'Supply an even number!'
                }
                $true
        })]
        [ValidateRange(1,256)]
        [int]$Number
    )

    Process
    {
        Write-Host "Congratulations, $Number is an even and it's between 1 and 256!"
    }
}
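
A quick way to see both attributes fire (hypothetical calls; the exact error text depends on your PowerShell version):

# Passes both checks
Test-MultipleValidationAttributes -Number 42

# Fails ValidateScript: 3 is odd, so the script block throws 'Supply an even number!'
Test-MultipleValidationAttributes -Number 3

# Fails ValidateRange: 300 is even, but outside 1..256
Test-MultipleValidationAttributes -Number 300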