Pseudo Singleton Pattern in PowerShell

1 Oct

I have had a few modules that I have developed where I wanted to use the Singleton pattern, as I didn't want multiple instances of a particular object running about across scripts. Building full-fledged objects in PowerShell is a bit verbose and not the most straightforward process, and I'm not certain you can build an honest Singleton class natively in PowerShell. An alternative to building objects natively in PowerShell is to write the class in inline C# and compile it with Add-Type. Although that is an acceptable solution and benefits from fully enforcing that object creation go through the Instance method, I wanted to stay native to PowerShell. I've adopted the following pattern to accomplish this:

function Get-ObjectInstance{

    if(Get-Variable -Scope Global -Name 'tmpObject' -EA SilentlyContinue){
        return (Get-Variable -Scope Global -Name 'tmpObject').Value
    }

    $Global:tmpObject = New-Object PSObject

    return $Global:tmpObject
}

Using a global variable is required if you want the object available across scripts, which is helpful if you want to do things like logging. This concept was initially prompted by an issue I was having with a Logging module I had written. I was invoking a Global logging object in each script, which was causing problems on multiple runs. I would call:

# Instantiate Log #
if($GLOBAL:Log -eq $null){
    $GLOBAL:Log = New-LogObject "Log Description"
}
On subsequent runs I would get "Can't Add-Member…" errors of some sort and the logging would fail. Fixing this required closing the object properly, which was difficult to do cleanly and consistently when creating the Global object in each script, as the name could change from script to script if you weren't careful. Instead of making the variable global in each script, I moved that into a module using the "singleton" pattern described above in the function Get-ObjectInstance. I initially moved to this method to make it easier to close out the global variable so I wouldn't run into issues on subsequent runs. A side benefit of this change was that I no longer needed to keep the variable name consistent from script to script. Using the pseudo singleton pattern, I could now call:

# Instantiate Log #
$Log = Get-LogInstance "Log Description"

This removed the need to check whether the variable had been created in each script, and I could call scripts in any order knowing I would get a valid, and the correct, instance of the object I wanted.
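Applied to the logging module, the same pattern might look something like the sketch below. This is an assumption about the module's internals: New-LogObject is the existing factory function from the post above, and the internal variable name tmpLogObject is invented for illustration.

```powershell
function Get-LogInstance{
    param([string]$Description)

    # Return the existing instance if one has already been created
    $tmpVar = Get-Variable -Scope Global -Name 'tmpLogObject' -ErrorAction SilentlyContinue
    if($tmpVar){
        return $tmpVar.Value
    }

    # Otherwise create the object once and stash it in the global scope
    $Global:tmpLogObject = New-LogObject $Description
    return $Global:tmpLogObject
}
```

Because the global variable lives inside the module, callers only ever see `$Log = Get-LogInstance "Log Description"` and never need to agree on the global name.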

Removing a Global Variable – Revisited

14 Nov

I had previously written about an issue that I had with closing out a custom PSObject that caused scripts to fail on subsequent runs. After watching a PluralSight video that covered the concept of passing variables by reference and by value, I re-visited my issue. Initially, I was trying to solve the issue by capturing the variable name and using that name to call:

Remove-Variable -Name 'VariableName'

Although Remove-Variable requires a name to remove the variable, I was going about identifying the name of the variable the wrong way. Since the $this variable in the custom PSObject references the same object as the variable I assigned it to in my scripts, I could leverage the System.Object.ReferenceEquals() method to identify the variables. My new Close() method for my PSLogging module looks like this:

$tmpVars = Get-Variable -Scope Global | Where-Object{
    [System.Object]::ReferenceEquals($this, $_.Value)}
if($tmpVars.GetType().FullName -eq 'System.Management.Automation.PSVariable'){
    Remove-Variable -Scope Global -Name $tmpVars.Name
}
else{
    for($i=0;$i -lt $tmpVars.Count;$i++){
        Remove-Variable -Scope Global -Name $tmpVars[$i].Name
    }
}
I can now reliably identify all references to the log variable and remove them successfully. The concepts of By Value and By Reference weren’t new to me, but I didn’t fully grasp them until watching that video.
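For what it's worth, wrapping the result of Get-Variable in `@()` would sidestep the single-variable-versus-array type check entirely, since a lone PSVariable and an array can then be walked the same way. A hedged sketch, with $tmpObject standing in for the module's custom PSObject:

```powershell
$tmpObject | Add-Member -MemberType ScriptMethod -Name Close -Value {
    # Find every global variable that references this exact object
    $tmpVars = Get-Variable -Scope Global | Where-Object{
        [System.Object]::ReferenceEquals($this, $_.Value)}

    # @() guarantees an enumerable, whether zero, one, or many matches came back
    foreach($tmpVar in @($tmpVars)){
        Remove-Variable -Scope Global -Name $tmpVar.Name
    }
}
```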

Fixing QueryFeatures Error When Looking for Features to Upgrade

26 Jun

UPDATE 11/9/13: Added SourceCode Formatting


I was running QueryFeatures on the SPWebApplication to find a SPFeature object to upgrade, but was receiving an error about a deleted site.

An error occurred while enumerating through a collection: The site with the id 8e79d05f-3f0c-4627-8311-7d29b2a2c4a3 could not be found.
At line:1 char:1
+  <<<< (Get-SPWebApplication 'afda6a62-4389-402f-af19-11bd9ee2b99a') | ?{$_.Parent.Url -like ""}
    + CategoryInfo          : InvalidOperation: (Microsoft.Share…esultCollection:SPFeatureQueryResultCollection) [], RuntimeException
    + FullyQualifiedErrorId : BadEnumeration

Simon Doy does a good job of explaining the issue in more detail. I had previously created and deleted a site collection for testing purposes and that is when my problems started.


To solve the issue, I started by running the Get-SPDeletedSite command to find the problem sites. I then removed the deleted sites with Remove-SPDeletedSite. Next, I ran a refresh on the content database to make sure the list of sites was updated in the configuration database. The whole PowerShell process looked a little something like this:

PS C:\Users\beavel> Get-SPDeletedSite

WebApplicationId   : 00000000-0000-0000-0000-000000000023 
DatabaseId         : 00000000-0000-0000-0000-000000000024 
SiteSubscriptionId : 00000000-0000-0000-0000-000000000000 
SiteId             : 8e79d05f-3f0c-4627-8311-7d29b2a2c4a3 
Path               : /team/site1 
DeletionTime       : 6/25/2012 8:32:31 PM

WebApplicationId   : 00000000-0000-0000-0000-000000000023 
DatabaseId         : 00000000-0000-0000-0000-000000000024 
SiteSubscriptionId : 00000000-0000-0000-0000-000000000000 
SiteId             : 5fcf2a3b-ee9e-4d27-8e7b-4c7185396dce 
Path               : /team/site2 
DeletionTime       : 6/25/2012 8:18:48 PM

PS C:\Users\beavel> Remove-SPDeletedSite -Identity 5fcf2a3b-ee9e-4d27-8e7b-4c7185396dce

Are you sure you want to perform this action? 
Performing operation "Remove-SPDeletedSite" on Target "/team/site2". 
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help 
(default is "Y"):y 
PS C:\Users\beavel> Get-SPDeletedSite

WebApplicationId   : 00000000-0000-0000-0000-000000000023 
DatabaseId         : 00000000-0000-0000-0000-000000000024 
SiteSubscriptionId : 00000000-0000-0000-0000-000000000000 
SiteId             : 8e79d05f-3f0c-4627-8311-7d29b2a2c4a3 
Path               : /team/site1 
DeletionTime       : 6/25/2012 8:32:31 PM

PS C:\Users\beavel> Remove-SPDeletedSite -Identity 8e79d05f-3f0c-4627-8311-7d29b2a2c4a3

Are you sure you want to perform this action? 
Performing operation "Remove-SPDeletedSite" on Target "/team/site2". 
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help 
(default is "Y"): 
PS C:\Users\beavel> $db = Get-SPContentDatabase -Identity 5a83c168-d8e8-44c7-b5d3-46b8d87d9a06 
PS C:\Users\beavel> $db.RefreshSitesInConfigurationDatabase()

I was still running into issues with my original command. To get things working, the "Gradual Site Delete" timer job needed to be run for the Web Application that contained the two sites. This was run from Central Admin instead of PowerShell. Once it had completed, I was again able to run my original command. I'm not sure the DB refresh was needed in this process. You may be able to get by without it, as I have not tested.
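If there are many orphaned deleted sites, the interactive steps above can be collapsed into a pipeline. A sketch, reusing the content database ID from the transcript; as noted, the refresh step may not strictly be required:

```powershell
# Remove every deleted site in one pass, suppressing the per-site prompt
Get-SPDeletedSite | Remove-SPDeletedSite -Confirm:$false

# Then refresh the content database's site list in the configuration database
$db = Get-SPContentDatabase -Identity 5a83c168-d8e8-44c7-b5d3-46b8d87d9a06
$db.RefreshSitesInConfigurationDatabase()
```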

Regex to the Rescue

29 Feb


At work, we had a requirement to transform text URLs into clickable links. Ideally, these links would have been entered as well-formatted hyperlinks, but that wasn't the case and we didn't have the option to implement that fix. Initially, the idea was to have end users include the http:// protocol on links so that only text containing http:// would be transformed. This was rather optimistic, as end users didn't follow this training. Users were entering links in all shapes and forms; some links had no protocol or host. This led to many links being missed.


To solve the problem, I wrote a little regex to capture the links and then reformatted them. The regex is below:


This will match a link with or without a protocol or host specified, an alphanumeric domain with dashes, and any path and query string following it. It only matches the TLDs listed in the middle section, which can be problematic or useful depending on what you want to match. Overall, this was a major improvement that allowed end users to continue with their behavior and still get the result we were looking for. This solution has been rock solid so far in capturing the links entered, but it still has the opportunity to miss certain URLs. YMMV.
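The original regex is not reproduced above, but something in the same spirit can be sketched. The pattern below, its TLD list, and the sample text are all assumptions for illustration, not the regex actually used:

```powershell
# Illustrative only: optional protocol, dashed alphanumeric domain,
# a fixed TLD list in the middle, optional path/query at the end
$tmpRegex = '(?:https?://)?(?:[\w-]+\.)+(?:com|net|org|edu|gov)(?:/[\w./?&=%-]*)?'

$text = 'Visit example.com/page?id=1 or https://my-site.org for details.'

# Wrap each match in an anchor tag, prepending http:// when no protocol was given
$linked = [regex]::Replace($text, $tmpRegex, {
    param($m)
    $url = $m.Value
    if($url -notmatch '^https?://'){ $href = "http://$url" } else { $href = $url }
    "<a href=`"$href`">$url</a>"
})
$linked
```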

GetJar is logging more than you think

26 Feb

I was doing some network analysis on my phone related to another matter and noticed that GetJar was logging some of my activity. This isn't that surprising for an app store that provides free apps; nothing comes for free. I would expect that they would log some information related to apps provided through their store. However, what surprised me was that logging occurred as I was uninstalling apps that I had not purchased or installed through GetJar. After seeing this behavior, GetJar got an immediate uninstall. I don't know what other data GetJar might have been logging, as I didn't leave it on long enough to find out more.


Here is what was logged:

GET /backchannel/metadata/
?gjClientInstallationID=<24char string>
&androidID=<44 char string>


I have reformatted this GET request for easier reading. The character count is based on the decoded URL. There is nothing super personal in there, but they are definitely collecting what apps you are using.


I took a quick look at GetJar’s privacy policy to see if this was disclosed. As many privacy policies, the sections on personal information collection are a bit vague and open-ended. Even so, I didn’t get the sense that they would be collecting information on what apps I was uninstalling or using. Here’s the relevant excerpt from their privacy policy:

Personal Information Collected via Technology

As you use the GetJar Site or any GetJar Service, some information may also be collected passively, including your Internet protocol address, browser type, access time, mobile phone model, and telecom carrier. We may also store a small text file called a "Cookie" on your computer or phone to store certain information about your use of the GetJar Site or GetJar Services. We may use both session Cookies (which expire once you close your browser) and persistent Cookies (which stay on your computer or phone until you delete them).

Personal Information from Other Sources

We may receive Personal Information about you from other sources, including other users. We may associate this information with the other Personal Information we have collected about you.



I went on to take a quick look at their logging server. It discloses some configuration information, though I am not sure how accurate it is. If the information disclosed is to be trusted, the jetty.config.contextMap seems to give an indication of what else is collected or sent to GetJar.


* Reformatted for easier reading.

It appears that messaging, usage, and event details might be logged as well. What those all entail I’m not sure as uninstalling an app fell under Metadata.

Disclaimer: By writing this, I’m not claiming that GetJar is engaging in malicious activities. If anything, I want others to be aware of this and make an informed decision. No one is being forced to use this app so choose to do what you will.

Removing a Global Variable from the PowerShell environment

26 Jan

UPDATE: It appears that this fix only works when called from the commandline. InvocationPoint.MyCommand.Definition is populated with the script that called it initially. Back to the drawing board for another solution.

Update 11/9/13: Added source code highlighting; typo fixes


I had created a custom PowerShell module to log activity in other scripts I was writing. The module exposes a function that outputs a custom PSObject to hold the messages. In my scripts, I assigned this PSObject to a global variable so it was available to other scripts I might call. At the end of the script, I wanted a method to close or remove the variable that the PSObject was assigned to so that it wouldn't cause issues on subsequent runs.


The problem I ran into is that there isn't an easy way to remove the variable holding the PSObject. Setting $this = $null in the module doesn't null out the containing object, only the $this variable itself. Attempting to remove the variable with Remove-Variable and $this also failed. My temporary solution was to iterate through all of the NoteProperties that were set and $null them out individually. This worked for a while, but required me to close and reopen the ISE between running scripts, as the next attempt to run a script caused PowerShell to complain that it couldn't add to the log variable.


I finally tired of having to reopen the ISE and decided to fix my problem properly. The following is my solution:

function Get-VariableName{
    $tmpRegex = '(?<=\$[A-Z,a-z,0-9]*:).*(?=(\s=|=))'
    $this.InvocationPoint.MyCommand.Definition -match $tmpRegex | Out-Null
    return $Matches[0].Trim()
}

# Close #
$tmpObject | Add-Member -MemberType ScriptMethod -Name Close `
    -Value {
        Remove-Item -Path "Variable:$(Get-VariableName)"
    }
The Get-VariableName function uses a regex to parse the variable name from the InvocationPoint of $this. The property $this.InvocationPoint.MyCommand.Definition exposes the full command used to create the variable; this was the only place I could find this information exposed in the object itself without adding a property. Now that I had the variable name, I used Remove-Item to remove the variable. That is probably the long way of removing it instead of just using:

Remove-Variable –Name $(Get-VariableName) –Scope Global

Somehow I overlooked that command when I implemented the fix.

This works for now, but I may need to update the regex at some point to support both global and non-global variables, as I believe it will fail to capture non-global variables. Fortunately, I only use this with globally defined variables.
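One possible variation would make the scope qualifier optional, so both `$Global:Log = ...` and `$Log = ...` are captured. This is an untested sketch, not the module's actual regex; .NET's regex engine allows the variable-length lookbehind used here:

```powershell
# Optional "Scope:" qualifier inside the lookbehind, so only the bare
# variable name (what Remove-Variable wants) is matched
$tmpRegex = '(?<=\$([A-Za-z0-9]+:)?)[A-Za-z0-9_]+(?=\s*=)'

'$Global:Log = New-LogObject "x"' -match $tmpRegex | Out-Null
$Matches[0]   # Log

'$Log = New-LogObject "x"' -match $tmpRegex | Out-Null
$Matches[0]   # Log
```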

SharePoint 2010, compat.browser, & the Saga of Safari and Chrome

23 Sep


While working with a public anonymous site hosted on SharePoint 2010, we hit an issue with the mobile view on certain devices. Anybody who has worked with SharePoint knows that by default it will do browser detection for mobile browsers and render a mobile version of the site OOTB. Although this can be helpful with some browsers, most mobile touch-screen devices make a specific mobile version unnecessary.

The specific issues we were facing are listed below:


  1. The files required for the mobile view are located in a directory which was not accessible anonymously causing a 401.
  2. Files that were in the Style Library were not accessible for certain browsers, but worked just fine on other browsers.
  3. Changing the compat.browser file located in inetpub\wwwroot\wss\VirtualDirectories\{WebAppFolder}\App_Browsers allowed all browsers to see the full site, but the changes caused problems for Safari and Chrome on the desktop as well as on mobile devices.


For issue #1, we did not want to display the mobile version of the site, so the solution was to make sure browser detection stopped rendering the alternative version. This led to first testing the site by appending ?mobile=0 to the end of the URL so that SharePoint would skip the browser detection. This revealed issue #2.

Although the mobile view was no longer rendered, the styling on the page was severely off. Windows Phone 7 and desktop browsers rendered the site just fine, but Android and iPhone devices rendered without proper styling. Looking at the IIS logs, I noticed that Android and iPhone devices were getting 401s when trying to connect to the Style Library. This is where a number of key .js and .css files were stored, so it made sense that the site did not render properly. This next part is purely conjecture on my part, but I believe the discrepancy between devices came down to NTLM support. Although the site had anonymous access turned on, the Style Library on a Publishing site does not inherit permissions, so when anonymous access gets turned on it does not receive the anonymous permissions unless they are directly specified. This can go unnoticed for browsers/OS's that support NTLM, as the Style Library has the Style Resource Readers group on it, which contains NT AUTHORITY\authenticated users. My assumption is that browsers that support NTLM connect as the special user NT AUTHORITY\iusr, which is considered authenticated, and therefore the site rendered properly for most OS's and browsers. The IIS logs seem to support this, as NT AUTHORITY\iusr shows up when accessing the Style Library. The solution here was simply to add anonymous access to the Style Library. With this done, appending ?mobile=0 rendered the site as expected. This left the final issue.

The next step was to turn off browser detection so that no devices were being sent to the mobile experience. From reading various sources, the compat.browser file needed to be edited to set isMobileDevice to false. I wrote a PowerShell script to quickly replace all instances. Find and replace probably would have worked as well, but I like trying new things in PowerShell. With my modified compat.browser file ready, I replaced the one in our test environment and everything worked as expected. I then moved it to our Production environment, and this is where the issues started.

In Production, the menu system was not being rendered properly in Chrome and Safari. Thinking the issue was related to my change, I reverted to the original file, expecting this would fix the problem. It did not. A number of other solutions, such as file permissions, editing other compat files, and adding .browser files, were attempted, but they were unsuccessful at resolving the problem. It wasn't until we were able to successfully break another system that we stumbled on the answer. Deleting the compat.browser file and then browsing to the site successfully replicated the issue on a dev server. Restoring the file would no longer fix the issue. Server reboots also failed.

The final solution turned out to relate to the second file in the App_Browsers directory, compat.moss.browser. When this file was deleted from the directory and the site was visited, the problem went away. The compat.moss.browser file could then be restored without ill effect. My recommendation: DON'T DELETE compat.browser when editing. The "stickiness" of this issue seems like a bug, as one would assume restoring an original file would fix the problem, but it did not. Hope this saves someone else the same headache.
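The replacement script itself isn't shown in the post; a hedged sketch of what that kind of edit might look like follows. The path (with its {WebAppFolder} placeholder) and the backup name are assumptions:

```powershell
$tmpPath = 'C:\inetpub\wwwroot\wss\VirtualDirectories\{WebAppFolder}\App_Browsers\compat.browser'

# Back up the original first -- and per the warning above, never delete it
Copy-Item -Path $tmpPath -Destination "$tmpPath.bak"

# Flip every isMobileDevice capability to false and write the file back
(Get-Content -Path $tmpPath) |
    ForEach-Object{ $_ -replace 'isMobileDevice"\s+value="true"', 'isMobileDevice" value="false"' } |
    Set-Content -Path $tmpPath
```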

Extracting and Parsing the MFT of a logical disk from a Live Windows Machine

17 May

Over the weekend I had a family Windows XP box come to me with fake AV malware. The user indicated they didn't know how or when it happened. Out of curiosity, I began to investigate the cause and decided to see if I could replicate Tim Mugherini's work, as presented on Pauldotcom, on this system to determine when the infection took place and any possibly related files.

My first step was to grab the latest release of The Sleuth Kit (TSK). Once the files were obtained and extracted, the next trick was how to extract the MFT. Most of the reference material I was able to locate on using TSK involved passing an image file to the utility in use, and I had no desire to image the whole 160GB drive just for the MFT. Looking at further reference sites, I did see some passing a hard drive directly, but those were Linux systems, and I didn't have the option of booting from a Live Linux distro at the time. I finally found a post by Rob Lee on the SANS Computer Forensics blog that referenced a logical drive in Windows in a way The Sleuth Kit utilities would recognize. The commands below have been translated from there for Windows:

fsstat \\.\C:

ifind -d number_address(from above) \\.\C:

istat \\.\C: | more

icat \\.\C: 0-128-1 > MFTextracted

This left me with a raw MFT file, which isn't very useful unless you parse it. I first tried to use Mark Menz's MFT Ripper, but the free version is limited to 50,000 lines. Next I turned to David Kovar's analyzeMFT. I didn't have a Python environment on the system I wanted to analyze from, so I needed to quickly set up Python. I decided to use ActivePython and grabbed the 3.2 version of Python. I made sure Python was in my Path and tried to run the script:

python .\analyzeMFT -f MFTextracted -o T:\Temp\MFTextracted.csv

Everything seemed fine, but I kept receiving syntax errors. I couldn't find anything obvious on this error, but I did see at least one post reference using Python 2.7. To troubleshoot, I grabbed ActivePython 2.7, installed it, and then changed my Path to point to 2.7 instead of 3.2. After reloading my command prompt, I kicked off the same command and the parsing began successfully.

Using VMware Converter to Convert Virtual PC VM’s

8 May

The easiest method to convert a Virtual PC virtual machine is to use the VMware vCenter Converter provided by VMware. It can be found here; free registration is required. Once you have downloaded and installed the converter, the following instructions should get you through the rest of the conversion process.

After installation is complete, launch the converter. Once the converter is open select “Convert Machine” as shown below.


A new window will open, and you start by specifying the source details. In order to convert a Virtual PC VM, select “Backup image or third-party virtual machine”. Then click “Browse…” and locate the VM you need to convert.


Hit “Next” and you can make decisions on how the converted VM will be configured. Start with “Select destination type:”. Home users will choose “VMware Workstation or other VMware virtual machine”, as the resulting virtual machine (VM) will work with either VMware Player or Server. The other option is “VMware Infrastructure virtual machine”, which is used with ESX or vCenter Server products, mainly found in corporate setups.


You can now name the virtual machine as you desire so it is easily identified when you want to use it. When specifying the location, make sure the directory exists, because the converter will not handle this for you. If you hit “Next,” the Converter will nag you that it doesn’t exist and won’t let you proceed.


Hit “Next” and review the settings for the virtual machine. Here you can adjust hard drive, RAM, and network settings for the final VM.

I am converting a Windows 2000 machine which had initially been assigned a small amount of RAM. I boosted mine to 512MB to improve the performance of the VM. Make a decision based on the host computer’s actual RAM and the requirements of the guest OS you will be running as to what is best for your setup. The yellow exclamation points mark items which should be reviewed. In my case, “Processors” is set to one. Although your computer might have a dual core, don’t increase the number of processors, especially for older OS’s, as this will require manually updating components after the conversion. The other item requiring my attention referred to needing sysprep files. (After some investigation, I decided to try the conversion without them, and everything went fine.) You can ignore this and hit “Next” to move the process along. On the next screen, take a second to review that all the settings are as you want, and hit “Finish” when you are ready to proceed.

The conversion process can vary depending on the systems being converted. The VMs for Windows 98 and Windows 2000 took about a minute each, but a Windows Vista VM took about 10 minutes. Times will obviously depend on your computer’s specs as well; I ran this on an AMD Athlon 64 X2 5600+ with 2GB of RAM.

If everything went as planned, you should have a new VM which works in your preferred VMware product. Hope that helps.

Windows System Control Center

24 Apr

Windows System Control Center (WSCC) is an organizational front end for the excellent utilities from NirSoft and Sysinternals. The applications offered by these two sites range from monitoring process activity and secure disk cleanup to viewing a web browser’s cache and password recovery. Although WSCC was previously a nice front end for users unfamiliar with all the utilities and their functions, it wasn’t necessarily needed by an advanced user who knew what each of the utilities did. Yesterday, WSCC released version 1.6 of the front end and added an update process which, with one click, updates all of the utilities from both sites. Since WSCC is a portable app, this makes keeping your thumb drive updated with the latest utilities a simple process. Manually updating on a regular basis can be time-consuming, and it’s something that just doesn’t get done. The update feature alone makes WSCC an app that even advanced users might check out at this point, as it now provides time savings. And isn’t that what technology is supposed to provide, after all? I had been using Ketarin previously to maintain updated copies, but it required extracting and transferring files after downloading. WSCC’s new update process definitely streamlines things.

Go check it out here. It is free for personal and commercial use.