Monthly Archives: December 2014

DCM Script – Detect if a Mac is a Member of the Domain and Join If Not

As I’ve said before, Macs can be a pain in the backside when you’re trying to manage a lot of them. One of the particular bugbears that I’ve found is that they have a habit of unbinding themselves from your Active Directory domain for no apparent reason. Usually this would mean a helpdesk call because someone can’t log on and disruption and annoyance and well, you get the idea.

This script is a bit of a kludge. My Bash isn’t the best by any stretch of the imagination and I’ve put detection and remediation into the same snippet as for some reason, I couldn’t get a separate remediation script to work. No matter. It’s not ideal but it still works. Anyway, the script looks like this:

DOMAIN_STATUS="$(dsconfigad -show | awk '/Active Directory Forest/' | cut -d '=' -f 2 | tr -d ' ')_Member"
# tr strips the padding dsconfigad prints around the equals sign

if [[ ${DOMAIN_STATUS} == "{{domain.fqdn}}_Member" ]]; then

    echo "OK"

    exit 0 # already a domain member, exit script

fi

dsconfigad -add {{domain.fqdn}} -user {{user}} -password {{password}} -force
EXIT_CODE=$?

if [[ ${EXIT_CODE} -ne 0 ]]; then

    echo "There was an error. Code is ${EXIT_CODE}"
    exit ${EXIT_CODE}

fi

echo "OK"

Change anything in double curly braces to reflect your environment.

The script runs a command called dsconfigad which gets information about the Active Directory domain that the Mac belongs to. It trims out the FQDN of the domain, appends _Member onto the end of it and adds it to a variable. I’m adding _Member to the end of the string because if the Mac isn’t a member of a domain, dsconfigad returns a null value and the variable doesn’t get created.
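If you want to see what that pipeline is doing, you can feed it a captured sample of dsconfigad -show output instead of the live command (the domain and computer names below are placeholders). One thing to watch: cut leaves the space that dsconfigad pads around the equals sign, so it's worth stripping whitespace before the comparison.

```shell
# Captured sample of `dsconfigad -show` output (values are placeholders)
sample='Active Directory Forest          = example.com
Active Directory Domain          = example.com
Computer Account                 = mac-lab-01$'

# The same pipeline as the detection script, with a whitespace trim on the end
DOMAIN_STATUS="$(printf '%s\n' "$sample" | awk '/Active Directory Forest/' | cut -d '=' -f 2 | tr -d ' ')_Member"
echo "$DOMAIN_STATUS"
# example.com_Member
```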

The script compares the output with what it should be. If it matches, it returns “OK” to ConfigMgr and exits. If not, it joins the Mac to the domain and returns “OK” to ConfigMgr. If for some reason the domain join fails, the script sends the error code back to ConfigMgr.

As always, you set the detection rule to look for a string called “OK”, add the rule to a new or pre-existing baseline and deploy the baseline to a collection. After you do, any Mac which is managed by ConfigMgr but which is not a member of your domain will find itself joined.

As I say, I know that my Bash scripting skills are fairly minimal so if you see a better way for this script to work, please feel free to contact me. The usual “I’m not responsible if this script hoses your Mac and network” disclaimers apply.

Merry belated Christmas

Well, Merry Christmas I guess.

This has been a bit of a strange one for me. For starters, I’ve been sick with the flu for a good chunk of it and really have only started to feel myself again now.

The other strange thing about it is that I think this is the first time I have started and finished the day completely on my own. In previous Christmases, I've either still been living at home, stayed with my parents for the duration of the holiday or stayed with my girlfriend's family. This time around, with my parents moving to a smaller house and my girlfriend spending it up at her parents' in Scotland, I've started and ended the day in my own flat.

I spent the day with my family of course but it was a bit weird all the same. Oh well, time marches on! I hope that anyone who reads this had a good day on Thursday and enjoys the rest of the festive period! Best wishes for 2015.

Controlling Dual Monitor Modes via the Command Line

This one is absurdly simple but pretty useful nevertheless.

At work, we have been getting a lot of calls recently where a teacher complains that their interactive whiteboard isn't working properly and all they can see on the projected surface is their wallpaper. I'm sure that anyone who has experience with these things will immediately see that, of course, the whiteboards are fine and the PCs are set to extend the desktop onto a secondary display rather than clone it.

There are some big advantages to extending the desktop and I think that there are a few more IT literate teachers who have figured this out and decided to extend their desktop. However, what they’re also doing is forgetting to set it back when they’re finished and therefore upsetting the next teacher who goes to use the room. This of course generates a call to us and wastes everybody’s time.

I wanted to see if there was a way to control extending or cloning displays using a script or a PowerShell command. I googled for a while and found a few third party programs which claimed they could do it, but I found that they didn't work that well. I eventually came across this page, which informed me about a program built into Windows from Windows 7 onwards called displayswitch.exe. It even has some command line switches!

displayswitch.exe /clone
displayswitch.exe /extend
displayswitch.exe /internal
displayswitch.exe /external

Those are pretty self explanatory, I think! I then created a couple of GPOs with WMI filters which detect interactive whiteboards. Inside those GPOs are startup and logoff scripts with the following command:

displayswitch.exe /clone

So each time a PC with an interactive whiteboard attached to it is started or logged out, it puts itself back into clone mode. Easy!
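The WMI filter is the part you'll need to adapt, as interactive whiteboards identify themselves differently depending on the vendor. As a sketch, a filter matching a board by its Plug and Play device name might look something like this (the 'SMART Board' string is an assumption — check Device Manager on one of your own machines for the name your boards actually report):

```sql
SELECT * FROM Win32_PnPEntity WHERE Name LIKE '%SMART Board%'
```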

DCM Script – Detect and disable Intel Graphics Card Service

As I imagine the majority of corporate PCs do these days, all of the computers at my workplace have integrated Intel graphics chipsets. And why not, for a business PC they’re perfectly adequate; their 3D acceleration is good enough for Aero on Windows 7 and for anything else the vast majority of users need.

However, there is a rather… annoying feature of the drivers which I like to suppress. The driver puts an application into the System Notification Area which makes it easy for people to mess around with graphical settings and which lets them change the orientation of the screen by pressing certain key combinations. I'm sure that in a lot of corporate settings this isn't too much of a problem, but in a school or college it generates a lot of helpdesk calls because the little darlings like hitting those keys and turning the screens upside down.

Anyway, this DCM script detects whether the service is running and kills and disables it if so:

$IntelGFXService = Get-Service | Where-Object {$_.Name -like 'igfx*'}

if ($IntelGFXService -ne $null) {

    $IntelGFXServiceName = $IntelGFXService.Name
    $IntelGFXStartupMode = Get-CimInstance Win32_Service -Filter "Name='$IntelGFXServiceName'"

    if ($IntelGFXService.Status -eq "Running" -and $IntelGFXStartupMode.StartMode -eq "Auto") {
        echo "Service Started, Startmode Automatic"
    }
    elseif ($IntelGFXService.Status -eq "Stopped" -and $IntelGFXStartupMode.StartMode -eq "Auto") {
        echo "Service Stopped, Startmode Automatic"
    }
    elseif ($IntelGFXService.Status -eq "Running" -and $IntelGFXStartupMode.StartMode -eq "Disabled") {
        echo "Service Started, Startmode Disabled"
    }
    else {
        echo "all disabled"
    }
}
else {
    echo "all disabled"
}

That checks the status of the service and reports the status back to ConfigMgr. The remediation script looks like this:

$IntelGFXService = Get-Service | Where-Object {$_.Name -like 'igfx*'}

Set-Service -Name $IntelGFXService.Name -StartupType Disabled
Stop-Service -Name $IntelGFXService.Name
get-process igfx* | stop-process

That stops the service, disables it and kills any relevant processes running alongside the service.

Set the compliance rule to look for a string called “all disabled” and apply the rule to either a new or existing baseline. That’s it for today!

Mac Servers – What I’m Doing

Hopefully this should be the last post about Macs for the time being! I don’t know why I’ve written so many recently, I guess it’s just because the bloody things take up so much of my time at work. Don’t get me wrong, I like Macs a lot for my own purposes. I own two, my girlfriend has one, we both have iPhones, I have an Apple TV and an AirPort Extreme router. I even subscribed to MobileMe when you had to pay for it. It’s just that professionally, they’re a pain in the arse and if you really want I’ll put that on a certificate for you to put on your wall.

Anyway, my last post was originally supposed to be about the Apple server solution that we’re using at work. However, it kind of turned into a rant about how Apple has abandoned the server market. I stand by the post entirely but I thought I’d try to write the post I was originally intending to write!

A few months ago I got a call from the technician in the Media Studies department who looks after their Macs. As far as he could see, one of the Macs had stopped working entirely and he was getting into a bit of a panic about it (the Mac was actually fine; it had just buggered up its own disk permissions, preventing anything from launching). The reason he was panicking was that he thought all of the student work stored on it had been lost. These Macs are used for editing video using Final Cut Pro. Because of the demands that video editing puts on a computer's storage subsystem, it is totally impractical to edit video over a network on one computer, let alone 40 at once, so the students have to store any work they do on the Macs themselves. If a Mac fails, the user kisses goodbye to all of the work done that year. That wasn't acceptable, but he and the college had put up with it until that point. He wanted to know if there was a way to back the Macs up so that if one does go kaboom, we have a way of recovering the work.

This turned into a fairly interesting project as I had to investigate viable solutions to achieve this. I came up with four:

  1. Backing the Macs up using an external hard drive and Time Machine
  2. Backing the Macs up using a server running OS X Server and Time Machine
  3. Backing the Macs up using a server from another manufacturer using some other piece of backup software
  4. Backing the Macs up to a cloud provider

First of all, I investigated and completely discounted the cloud solution. We would need in the order of 20TB of space and a huge amount of bandwidth from the cloud provider for the backups to work. The Macs would need to be left on at night, as there would be no way we'd be able to back them up during the day and maintain normal internet operations. It all ended up costing too much money; if we had gone for a cloud solution, we would have spent as much in six months as a fairly meaty server would have cost.

There was also quite some thought put into using USB disks and Time Machine to back them up. This certainly would have worked, and it would have been nice and simple and relatively cheap. Unfortunately there were some big downsides too. Securing the disks would have been almost impossible; it would be far too easy for someone to disconnect one and walk off with it. If we wanted any decent kind of speed and capacity, we probably would have had to use enclosures with 3.5″ drives in them, which would have meant another power supply to plug in. Finally, I couldn't see a way to stop students just using them as additional space to store their stuff on, completely negating the point of them in the first place.

So that left network storage for the Macs to store their backups on. First of all, I looked at Dell (other server vendors are available) to see how much a suitable server would cost. Dell's range of servers with lots of 3.5″ drive bays is frustratingly small, but eventually I managed to spec a T620 with 2x1TB and 8x4TB drives, a quad core CPU, 16GB RAM, a half decent RAID controller and a three year warranty for about £7500 retail. Add the cost of a suitable backup daemon on top and it would come to somewhere in the region of £9000-£9500. Truth be told, that probably would have been my preferred option, but a couple of things kept me from recommending it. First of all, £9500 is a lot of money! Secondly, although Apple have been deprecating AFP and promoting SMB v2/v3 with Mavericks and Yosemite, Apple's implementations of SMB have not been the best since they started migrating away from Samba. AFP is generally still the fastest and most reliable networking protocol that a Mac can use. With this in mind, I decided to have a look at what we could get from Apple.

The only Server in Apple’s range at the time was the Mac Mini Server. This had 8GB of RAM installed, a quad core i7 CPU at roughly 2.6GHz and 2x1TB hard disks. Obviously this would be pretty useless as a backup server on its own but that Thunderbolt port made for some interesting possibilities.

At first, I looked at a Sonnet enclosure which put the Mac into a 1U mount and had a Thunderbolt to PCI Express bridge. I thought that I could put a PCIe RAID controller in there, attach that to some kind of external SAS JBOD array and then daisy-chain a Thunderbolt 10GbE Ethernet adapter from that. I'm sure it would have worked, but it would have been incredibly messy.

Nevertheless, it was going to be my suggested Apple solution until I saw an advert on a website somewhere which led me to HighPoint's website. I remembered HighPoint from days of old when they sold dodgy software RAID controllers on Abit and Gigabyte motherboards. A Thunderbolt storage enclosure that they promote on their site intrigued me. It was a 24 disk enclosure with a Thunderbolt to PCIe bridge with three PCIe slots in it and, crucially, a bay in the back for a Mac Mini to be mounted. Perfect! One box, no daisy chaining, no mess.

Googling the part code led me to a local reseller called Span, where I found out that what HighPoint were selling was in fact a rebadged NetStor box. This made me happy, as I didn't want to be dealing with esoteric HighPoint driver issues should I buy one. NetStor in turn recommended an Areca ARC-1882ix-24 RAID controller to drive the disks in the array and an ATTO FastFrame NS12 10GbE SFP+ network card to connect it to the core network. Both of these brands are reasonably well known and well supported on the Mac. We also put 8x4TB Western Digital Reds into the enclosure. I knew that the Reds probably weren't the best choice for this, but considering how much more per drive the Blues and the Blacks cost, we decided to take the risk. The cost of this bundle including a Mac Mini Server was quoted at less than £5000. Since OS X Server has a facility to allow Time Machine on client Macs to back up to it, there would be no additional cost for a backup agent.

Considering how squeezed for money the public sector is at the moment, the Mac Mini Server plus Thunderbolt array was the chosen solution. We placed the order with Span for the storage and networking components and an order for a Mac Mini Server from our favourite Apple resellers, Toucan.

Putting it all together was trivial; it was just like assembling a PC. Screw the hard drives into their sledges, put the cards in the slots, put the Mac Mini into the mount at the back and that's it. After I connected the array to the Mac, I had to install the kexts for the RAID controller and the NIC, and in no time at all we had a working backup server sitting on our network. In terms of CPU and memory performance it knocked the spots off our 2010 Xserve, which surprised me a bit. We created a RAID 6 array giving us 24TB of usable storage, or about 21.8TiB if you want to use the modern terminology. The RAID array benchmarked at about 800MB/sec read and 680MB/sec write, which is more than adequate for backups. Network performance was also considerably better than using the on-board NIC. Not as good as having a 10GbE card in its own PCIe slot, but you can't have everything. The array is quite intelligent in that it powers on and off with the Mac Mini; you don't have to remember to power the array up before the Mac like you do with external SAS arrays.
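For anyone checking the arithmetic: RAID 6 gives up two drives' worth of capacity to parity, so usable space is (number of drives − 2) × drive size. Note that 24 decimal terabytes works out at roughly 21.8 binary tebibytes; the exact figure your OS reports will depend on how it counts and rounds.

```shell
# RAID 6 usable capacity: two of the eight drives are consumed by parity
drives=8
size_tb=4
usable_tb=$(( (drives - 2) * size_tb ))
echo "${usable_tb} TB usable"
# 24 TB usable

# Decimal TB to binary TiB (1 TiB = 1024^4 bytes)
awk -v tb="$usable_tb" 'BEGIN { printf "%.1f TiB\n", tb * 1e12 / (1024 ^ 4) }'
# 21.8 TiB
```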

I know that it’s not an ideal solution. There is a part of me which finds the whole idea of using a Mac Mini as a server rather abhorrent. At the same time, it was the best value solution and the hardware works very well. The only thing that I have reservations about is Time Machine. It seems to work OK-ish so far but I’m not sure how reliable it will be in the long term. However I’m going to see how it goes.

Mac Servers in a Post Xserve World

About three years ago, Apple discontinued the Xserve line of servers. This presented a problem. While the Xserve never was top tier hardware, it was at least designed to go into a server room; you could rack mount it, it had proper ILO and it had redundant power supplies. You would never run an Apache farm on the things but along with the Xserve RAID and similar units from Promise and Active, it made a pretty good storage server for your Macs and it was commonly used to act as an Open Directory Server and a Workgroup Manager server to manage them too.

Discontinuing it was a blow for the enterprise sector, who had come to rely on the things, as Apple didn't really produce a suitable replacement. The only "servers" left in the line were the Mac Pro Server and the Mac Mini Server. The only real difference between the Server models and their peers was that the Servers came with an additional hard drive and a copy of OS X Server preinstalled. The Mac Mini Server was underpowered, didn't have redundant PSUs, had only one network interface, didn't have ILO and couldn't be racked without a third party adapter. The Mac Pro was a bit better in terms of spec; it at least had two network ports and internally it was pretty much identical to its contemporary Xserve, so it could at least do everything an Xserve could do. However, it couldn't be laid down in a cabinet as it was too tall, so Apple suggested you stood two side by side on a shelf. That effectively meant using 10U to house four CPU sockets and eight hard drives. Not a very efficient use of space, and the things still didn't come with ILO or redundant power supplies and were hideously expensive, even more so than the Xserve. It also didn't help that Apple didn't update the Mac Pro for a very long time, so they were rapidly outclassed by contemporary hardware from other manufacturers, both in performance and in price.

Things improved somewhat when Thunderbolt enabled Mac Mini Servers came onto the scene. They came with more RAM which could be expanded further, an extra hard drive and another two CPU cores. Thunderbolt is essentially an externally presented pair of PCI Express lanes; it gives you a bi-directional interface providing 10Gbps of bandwidth to external peripherals. Companies like Sonnet and NetStor started manufacturing rack mountable enclosures into which you could put one or more Mac Minis. A lot of them included Thunderbolt to PCI Express bridges with actual PCIe slots, which meant you could connect RAID cards, additional or faster network cards, Fibre Channel cards and all sorts of exciting serverish things. It meant that for a while, a Mac Mini Server attached to one of these could actually act as a semi-respectable server. They still didn't have ILO or redundant PSUs, but Mac servers could at least be reasonably easily expanded and their performance wasn't too bad.

Of course, Apple being Apple, this state of affairs couldn’t continue. First of all they released the updated Mac Pro. On paper, it sounds wonderful; up to twelve CPU cores, up to 64GB RAM, fast solid state storage, fast GPUs, two NICs and six(!) Thunderbolt 2 ports. It makes an excellent workstation. Unfortunately it doesn’t make such a good server; it’s a cylinder which makes it even more of a challenge to rack. It only has one CPU socket, four memory slots, one storage device and there is no internal expansion. There is still no ILO or redundant power supply. The ultra powerful GPUs are no use for most server applications and it’s even more expensive than the old Mac Pro was. The Mac Pro Server got discontinued.

Apple then announced the long awaited update for the Mac Mini. It was overdue by a year and much anticipated in some circles. When Apple finally announced it in their keynote speech, it sounded brilliant. They said it was going to come with an updated CPU, a PCI Express SSD and an additional Thunderbolt port. Sounds good! People's enthusiasm for the line was somewhat dampened when they appeared on the store, though. While the hybrid SSD/hard drive was still an option, Apple discontinued the option for two hard drives. They soldered the RAM to the logic board. The Mac Mini Server was killed off entirely, so you now have a dual core CPU or nothing. It also means no memory expansion, no RAIDed boot drive and half the CPU resources. Not so good if you're managing a lot of iPads and Macs using Profile Manager or if you have a busy file server. On the plus side, they did put in an extra Thunderbolt port and upgraded to Thunderbolt 2, which helps if you're using more external peripherals.

Despite all of this, Apple still continue to maintain and develop OS X Server. It got a visual overhaul similar to Yosemite and it even got one or two new features, so it clearly matters to somebody at Apple. Bearing this in mind, I really don't understand why Apple have discontinued the Mac Mini Server. Fair enough them getting rid of the Mac Pro Server; the new hardware isn't suitable for the server room under any guise and it's too expensive. You wouldn't want to put an iMac or a MacBook into a server room either. But considering what you'd want to use OS X Server for (Profile Manager, NetRestore, Open Directory, Xcode), the current Mac Mini is really too underpowered and unexpandable. OS X Server needs some complementary hardware to go with it and there isn't any now. There is literally no Apple product being sold at this point that I'd want to put into a server room, and that's a real shame.

At this point, I hope that Apple do one of two things. Either:

  • Reintroduce a quad or hex core Mac Mini with expandable memory in the next Mac Mini refresh, or

  • Start selling a version of OS X Server which can be installed on hypervisors running on hardware from other manufacturers. OS X can already be run on VMware ESXi; the only restriction that stops people doing this already is licensing. This would solve so many problems: people would be able to run OS X on server class hardware with whatever they want attached to it again. It wouldn't cause any additional work for Apple, as VMware and others already support OS X in their consumer and enterprise products. And it'd make Apple even more money. Not much, perhaps, but some.

So Tim Cook, if you’re reading this (unlikely I know), give your licensing people a slap and tell them to get on it. kthxbye

Ian’s Pasta Sauce Recipe

Right, I'm going to try to write about something other than bloody computers for once and share a recipe for a pasta sauce that I think I'm quite good at. I think it's classed as a ragù; I'm hesitant to call it a Bolognese as it has whole tomatoes in it rather than tomato puree and beef stock, and if I did, I'd probably offend the whole city of Bologna and have a Nonna from there chase me down the road with a pickaxe. Anyway, it's a good recipe and it generally goes down very well with my family when I make it. It's a lot better than buying a jar of Dolmio or similar; it tastes better and you have a better idea of what's actually in it!

Serves 6-8. It freezes well so if you don’t have 6-8 people to give this to, just put the leftover sauce in the freezer.


  • 500g Minced beef, 15-20% fat content
  • 2 tins of chopped tomatoes
  • 1 red bell pepper, coarsely chopped
  • 1 large chilli pepper, finely chopped (Optional)
  • 1 large red onion, finely chopped
  • 1 large carrot, grated
  • 4 cloves of garlic, crushed or finely sliced
  • 1 tbsp tomato puree
  • 50g hard garlic sausage such as salami, finely chopped. If you’re in the UK, Pepperami is a good one to use
  • 100ml red wine
  • Some herbs such as basil and oregano. Fresh preferably but dried will do too.
  • Salt and pepper
  • Mild olive oil


Put a large pan on a medium to high heat. Heat around 2 tbsp of the olive oil until it’s very hot. Add the onion, pepper, chilli if you’re using it and the garlic sausage and fry it all for around five minutes or until the onion is soft and translucent, stirring occasionally. Add the garlic and fry for another minute. Make sure that you stir the mixture constantly so that the garlic doesn’t burn. Add the tomato puree and fry for another couple of minutes.

Add the beef mince and fry it until it’s browned. Add the grated carrot and cook for a minute. Add the red wine and let it simmer for about two minutes. Finally, add the chopped tomatoes and give the whole thing a good mix. Add a splash of water if you think it’s necessary. Bring the mixture to the boil then reduce the heat. Simmer for about 90 minutes to two hours or until most of the liquid has boiled off and the mixture thickens. Add the herbs and season the mixture with the salt and pepper towards the end.

Serve with your favourite pasta (I suggest tagliatelle) or with gnocchi. It’s quite good with an Italian hard cheese such as Grana Padano or Pecorino Romano grated over it.

I'm aware that this recipe is about as Italian as I am, so I'll just end with "Buon appetito!". Enjoy!


Sometimes I substitute the beef mince for sausages squeezed out of their skins. It’s very good with some of the flavoured sausages that you get in the more upmarket supermarket brands such as Taste the Difference or Finest (Other supermarket brands are available).

You can use this sauce for lasagnes and for pastitsio. To make a pastitsio, make up a quantity of the sauce and some cheese sauce. Cook some tubular pasta such as macaroni, penne or rigatoni. Mix the pasta in with the meat sauce and put the mixture into an ovenproof dish. Top it with the cheese sauce and bake for 30-40 minutes or until the top is browned. Take it out of the oven, let it stand for five minutes, then serve.

Managing Macs using System Center Configuration Manager – Part Two

In my previous article, I described the agent that Microsoft have put into System Center Configuration Manager to manage Macs with. Overall, while I was happy to have some kind of management facility for our Macs, I found it somewhat inadequate for our needs and wished it were better. I also mentioned that Parallels, the company behind the famous Parallels Desktop virtualisation package for the Mac, contacted us and invited us to try out their plugin for the Mac. This article will explain what the Parallels agent is capable of, how well it works and how stable it has proven to be since we installed it.

Parallels Mac Management Agent for ConfigMgr

The Parallels agent (PMA) is an ISV proxy for ConfigMgr. It acts as a bridge between your Macs and the management point in your ConfigMgr infrastructure. The agent doesn't need HTTPS to be enabled on your ConfigMgr infrastructure, and ConfigMgr sees the Macs as full clients. The Parallels agent fills in a lot of the gaps in the native Microsoft agent, such as:

  1. A graphical and scriptable installer for the agent
  2. A Software Center-like application which lists available and installed software. Users can also elect to install published software if desired.
  3. Support for optional and required software installations
  4. Operating System Deployment
  5. The ability to deploy .mobileconfig files through DCM
  6. A VNC client launchable through the ConfigMgr console so you can control your Macs remotely
  7. Auto-enrollment of Macs in your enterprise
  8. Support for FileVault and storage of FileVault keys

It supports almost everything else that the native ConfigMgr client does. In addition, if you use Parallels Desktop for Mac Enterprise Edition, you can use the plugin to manage your VMs.


The PMA requires a Windows server to run on. In theory, you can have the PMA installed on the server hosting your MP or it can live on a separate server. There are separate versions of the PMA available for ConfigMgr 2007 and 2012/2012 SP1/2012 R2.

Earlier versions of the PMA didn't support HTTPS infrastructures properly, so you needed at least one MP and one DP running in HTTP mode. The latest version supports HTTPS across the board, although you do still need at least one DP running in anonymous mode for the Macs to download from.

IIS needs to be installed on the server along with WebDAV and BITS. Full and concise instructions are included so I won’t go over the process here. Anybody who has installed a ConfigMgr infrastructure will find it easy enough.

If you are using the OSD component, it needs to be installed on a server with a PXE enabled DP. If you have multiple subnets and/or VLANs, you will need an IP helper pointing at the server for the Macs to find it.

Software Deployment

The PMA supports two methods of deploying software. You can use either Packages or Applications.

Generally speaking, there are three ways to install a piece of software on a Mac, not counting the App Store:

  1. You can have an archive file (usually a DMG) with a .app bundle inside to be copied to your /Applications or ~/Applications folder
  2. You can have an archive file with a PKG or MPKG installer which installs your application
  3. You can install directly from a bare PKG or MPKG.

Installing using Packages

Unlike with the Microsoft agent, you don't need to repackage your software to deploy it with the PMA. Instead, you can deploy it using legacy style Packages. To deploy a piece of software using ConfigMgr Packages, you create a Package in the same way as you would for Windows software and copy the source files to a Windows file share. You then need to create a Program inside the Package with a command line to install the software. For the three scenarios above, the command lines would look like this:

  1. :Firefox 19.0.2.dmg/
  2. :iTunes11.0.2.dmg/Install iTunes.pkg::
  3. install.pkg

The first command tells the PMA to open the DMG and copy the bundle inside it to the /Applications folder. The second tells the PMA to open the DMG and execute the .pkg file with a switch to install it silently. The third runs an arbitrary command.

Once the Package and the Program inside it have been created, you distribute it to a DP and deploy it to a collection as standard.

Deploying software using this method is functional and it works nicely. The disadvantage is that there is no way to tell whether a deployment has worked without either delving through the logs or looking in the Applications folder. If you deploy the Package to a collection, the PMA will try to install it on the Mac whether it's already there or not.

Installing using Applications

As of version 3.0 of the PMA, Parallels have started supporting ConfigMgr Applications as well as Packages, using Microsoft's cmmac file format. This means that you need to repackage applications and add them to the ConfigMgr console using the same method as for the native ConfigMgr client. This is a bit of a pain, but the benefits make it worthwhile. As with the Microsoft client, detection rules are built into the Application, meaning that the Parallels client can check whether a piece of software is already on the Mac before it attempts to deploy it. If it is already there, it gets marked as installed and the installation is skipped.

It also brings this to the table:


That is the Parallels Application Portal. Much like Software Center on Windows, this gives you a list of software that has been allocated to the Mac. It allows for optional and required installations. If a deployment is marked as optional, the software is listed with a nice Install button next to it.

As with the Microsoft agent, you need to be careful with the detection rules that you put into your Applications. The PMA runs a scan of the /Applications and /Library folders looking for info.plist files. It extracts the application name and version from those PLISTs and adds them to an index. It then compares the detection rules in the Application to the index that it has built and, if there’s a match, marks the application as installed. If you don’t get the detection rules right, the PMA won’t detect the application even if it has been installed properly, and it will eventually try to reinstall it. I spent a very interesting afternoon chasing that one down. There are also some applications which don’t have info.plist files or which get installed in strange places. The latest Java update, for example, doesn’t have an info.plist; it has an alias to another PLIST file called info.plist instead. The PMA didn’t pick that one up.
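
To see what the index will contain for a given bundle, you can pull the version string out of an Info.plist yourself. This is a hedged sketch: on a real Mac you would normally use defaults read, but the sed fallback below works on any XML-format plist and makes the exact-match nature of detection rules obvious:

```shell
#!/bin/sh
# Extract CFBundleShortVersionString from an XML-format Info.plist.
plist_version() {
    sed -n '/CFBundleShortVersionString/{n;s/.*<string>\(.*\)<\/string>.*/\1/p;}' "$1"
}

# A detection rule is effectively an exact comparison against that index.
rule_matches() {
    [ "$(plist_version "$1")" = "$2" ]
}
```

If rule_matches says no — because the rule names 19.0 but the plist says 19.0.2, say — the agent will treat the application as absent and try to install it again.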

Operating System Deployment

Quite impressively, Parallels have managed to get an operating system deployment facility built into the PMA. It’s basic but it works.

First of all, you need to create a base image for your Macs and take a snapshot of it using the System Image Utility found in OS X at /System/Library/CoreServices/. You can create custom workflows in this utility to help you blank the hard disk before deployment and to automate the process. Make sure you tell it to make a NetRestore image, not a NetBoot image like I first did. Once you’ve done that, you tell it where you want to save your image and set it on its way. The end result is an NBI file, which is a bundle containing your system image and a bootstrap.

You then copy the resulting NBI file onto a PC or server with the ConfigMgr console and Parallels console addon installed. Once it’s on there, open the console and go to the Software Library workspace. Go to Operating Systems, right click on Operating System Images and choose Add Mac OS X Operating System Image. A wizard appears which asks you to point at the NBI file you generated and then to a network share where it creates a WIM file for ConfigMgr.

add image

Once the WIM has been generated, it adds itself to the console but one quirk I have noticed is that if you try to create it in a folder, it doesn’t go in there. It gets placed in the root instead. You can move it manually afterwards so it’s not a huge issue.

The next step is to create a task sequence. There is a custom wizard for this too: right click on Task Sequences under Operating System Deployment in the Software Library workspace, then choose Create Task Sequence for Macs.

task sequence

You give the task sequence a name, choose the image that you want to use and press the Finish button. Again, if you’re using folders to organise your task sequences and you try to create the task sequence in a folder, it will get placed in the root of the structure rather than in the folder that you tried to create it in. You can move it if you wish.

From there, it’s pretty much standard ConfigMgr. You need to distribute the image to a DP and publish the task sequence to a collection. The task sequence then appears to the Macs in that collection as a standard Netboot image with the title that you gave to it. You can access it the usual way, either through the Startup Disk pane in System Preferences or by holding down the option key on startup.

boot disk

Unfortunately, what it doesn’t do is allow for any kind of automatic, post-image deployment sequence. Although in theory the System Image Utility is supposed to support custom workflows which allow software installations and the running of scripts, I haven’t managed to get it to deploy the agent automatically. I have therefore created a script which the admin deploying the Mac needs to run and which (amongst other things) names the Mac and installs the PMA. From what Parallels tell me, this is being worked on.
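
For what it’s worth, that post-image script boils down to something like the sketch below. It is an illustration rather than my production script: the package path is made up, and the second argument lets you dry-run it by prefixing every command with echo.

```shell
#!/bin/sh
# Post-image setup: name the Mac, then install the management agent.
# Pass "echo" as the second argument for a dry run.
post_image_setup() {
    name="$1"
    run="${2:-}"

    # Set all three OS X name records so the Mac is consistently named.
    $run scutil --set ComputerName "$name"
    $run scutil --set LocalHostName "$name"
    $run scutil --set HostName "$name"

    # Install the agent from a mounted deployment share (illustrative path).
    $run installer -pkg "/Volumes/Deploy/pma_agent.pkg" -target /
}
```

Until Parallels wire this into the imaging process, something along these lines has to be run by hand on each freshly imaged Mac.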

DCM – Scripts and Profiles

The PMA supports the usage of DCM Bash scripts in the same way as the Microsoft agent does. There isn’t much to say about this: it works and it’s a useful thing to have. The other way of getting settings onto Macs with the PMA is via mobileconfig files, generated either by Profile Manager in OS X Server or by a generator built into the ConfigMgr console add-on. The generator looks like this:


Look familiar? Unfortunately, not all of the options from Profile Manager are available here, so if you want to do something more advanced than what’s on offer, you still need a copy of OS X Server and Profile Manager to generate the profile.

To deploy a mobileconfig file using the PMA, you need to go to the Assets and Compliance workspace, go to Compliance Settings and right click on Configuration Items. Go to Create Parallels Configuration Item then to Mac OS X Configuration Profile.


You then give the configuration item a name, decide whether it’s a User or System profile and point the wizard at the mobileconfig file generated by Profile Manager. Once you’ve done that, there is a new configuration item in your console which can be added to a DCM baseline and deployed to a collection.

I have used this facility for various purposes: for configuring Airport, for joining the Mac to the AD domain, for setting up users’ desktop settings and wallpaper, for setting up Time Machine and for more besides. It’s a great facility to have and rather more user friendly than delving through PLISTs to find settings.
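
For reference, a mobileconfig file is just an XML plist with a standard envelope around one or more payload dictionaries. This is a hedged, minimal skeleton rather than Profile Manager output — the identifiers and UUIDs are placeholders you’d generate yourself:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>PayloadType</key>
	<string>Configuration</string>
	<key>PayloadDisplayName</key>
	<string>Example Settings</string>
	<key>PayloadIdentifier</key>
	<string>com.example.profile</string>
	<key>PayloadUUID</key>
	<string>00000000-0000-0000-0000-000000000001</string>
	<key>PayloadVersion</key>
	<integer>1</integer>
	<key>PayloadContent</key>
	<array>
		<!-- One dictionary per payload; this one requires a
		     password after the screensaver kicks in. -->
		<dict>
			<key>PayloadType</key>
			<string>com.apple.screensaver</string>
			<key>PayloadIdentifier</key>
			<string>com.example.profile.screensaver</string>
			<key>PayloadUUID</key>
			<string>00000000-0000-0000-0000-000000000002</string>
			<key>PayloadVersion</key>
			<integer>1</integer>
			<key>askForPassword</key>
			<true/>
		</dict>
	</array>
</dict>
</plist>
```

Real profiles generated by Profile Manager follow the same structure, just with more payloads and properly generated UUIDs.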

Other features – VNC Client, Auto-enrolment and Inventory

Well, these do what they say on the tin. We have a VNC client:


It takes an inventory:

mac inventory

It has the ability to detect Macs on your network and automatically install the agent on them. They all work. There isn’t really much else to be said.

How well does it work?

So clearly, the PMA has far more features than the Microsoft agent does, but a piece of software can have all the features in the world and still be useless if it isn’t stable. In this regard, I am delighted to say that the Parallels agent has been rock solid for us. It has been continually improved and has had feature after feature added. It doesn’t quite make a Mac a first class citizen on a Windows network but it comes very close, and going by the way that Parallels have improved the product over the last two years, I’m confident that the gap will close. Parallels have been a lot quicker in getting new versions of OS X supported too; the plugin already has support for Yosemite.

It hasn’t been entirely problem free but when I’ve contacted Parallels Support, they’ve been quick and efficient and they’ve got the problem resolved. Most problems that I’ve come across I’ve managed to solve myself with a bit of thought.

Although Parallels claim that the PMA can be installed on the same server as your management point, I couldn’t get that to work. I ended up putting it on its own hardware. That was with v1.0 of the product, though; we’re now on v3.1 so they may have improved matters since then.

Having the PMA has also meant that I no longer need a Magic Triangle set up to support and configure my OS X clients. I don’t need Profile Manager or Workgroup Manager to deploy settings, and I don’t need OS X Server or DeployStudio to deploy images. The only things I need OS X Server for are Profile Manager to generate the profiles and (with the arrival of Yosemite) the System Image Utility.

The only real downside to the PMA is the cost: it’s expensive and you have to pay full price for it annually. That may be hard to justify if you’ve already spent a fortune on a System Center license.


So let’s do a quick advantage/disadvantage comparison:

Microsoft client advantages:

  • Native client so no additional cost
  • Support from Microsoft

Microsoft client disadvantages:

  • Sparse feature set
  • Required installs only
  • Complicated DCM
  • Takes a long time to install multiple applications
  • Requires HTTPS
  • Slow to support new versions of OS X
  • No visible status indicators, making it next to impossible to see what’s going on

Parallels client advantages

  • Includes lots of additional features, bringing the level of management of Macs to near parity with Windows machines
  • Optional and user initiated installs supported
  • Software Center equivalent and a System Preferences pane to check the status of the agent. Very thorough logs to check on what the agent is doing are available too.
  • OSD
  • Doesn’t require HTTPS
  • Supports SCCM 2007
  • Much easier to deploy settings by using mobileconfig files

Parallels client disadvantages

  • Expensive
  • Requires an additional DP
  • Probably requires an additional server to install the proxy

They’re as good as each other when it comes to running DCM scripts and taking inventories. So the answer is simple: If you can afford it, get the Parallels Management Agent. If I were the head of the System Center division at Microsoft, I’d be going cap in hand to Satya Nadella and telling him to drive all the money to Parallels HQ to acquire their product from them. Frankly, it puts Microsoft’s own efforts to shame.

Automating SCVMM Patching

Patching. What a pain in the backside it is. It’s bad enough when you just have to look after one or two machines at home. When you have thousands of the things to look after, it can be horrific.

A recent brace of high-level, out-of-band Microsoft security updates forced me to take a close look at our patching infrastructure at work and make some improvements. One of those improvements was to sort out the patching of our Hyper-V hosts. I’m a little ashamed to admit it, but I hadn’t really looked at patching our virtual hosts since they were installed back in August 2013. This meant that when I went to install those updates, there were another 130 to install with them. Not so good.

We are using System Center Virtual Machine Manager to manage our virtualisation farms. Part of its functionality is keeping your hosts patched. Doing so through the UI is a pretty long-winded process: you have to add a WSUS server to the SCVMM infrastructure, synchronise SCVMM with WSUS, add any new updates to a baseline, scan the hosts against that baseline to see if they’re compliant and remediate them if they’re not. It’s simple enough but it’s tedious, and unfortunately there is no way to automate the process through the GUI.

However, thanks to PowerShell, it can be scripted! I am using this script to mostly automate the process:

Import-Module virtualmachinemanager
Import-Module virtualmachinemanagercore

$anonUsername = "anonymous"
$anonPassword = ConvertTo-SecureString -String "anonymous" -AsPlainText -Force
$anonCredentials = New-Object System.Management.Automation.PSCredential($anonUsername,$anonPassword)
$PSEmailServer = "smtp.server.domain"

Get-VMMServer -ComputerName ""
Start-SCUpdateServerSynchronization -UpdateServer wsus.server.domain

$2012Updates = $(Get-SCUpdate | Where-Object -FilterScript { `
        $_.Products -like "Windows Server 2012" -and `
        $_.IsSuperseded -eq $false -and `
        $_.CreationDate -gt $((Get-Date).AddMonths(-1)) })

$Baseline = New-SCBaseline -Name "$(Get-Date -format y) Updates"
$AddedUpdateList = @()
$2012Updates | foreach {
        $AddedUpdateList += Get-SCUpdate -ID $_.ID
}

$scope = Get-SCVMHostGroup -Name "Hyper-V" -ID "8db6c432-7326-429d-af6d-8c93d201ca9f"
Set-SCBaseline -Baseline $Baseline -AddAssignmentScope $scope -JobGroup "a0bcf812-b866-474b-a69a-13db8f8ec360" -RunAsynchronously
Set-SCBaseline -Baseline $Baseline -RunAsynchronously -AddUpdates $AddedUpdateList -JobGroup "a0bcf812-b866-474b-a69a-13db8f8ec360" -StartNow

Start-Sleep -Seconds 10

Get-SCVMHostCluster -Name "" | Start-SCComplianceScan
Get-SCVMHostCluster -Name "" | Start-SCComplianceScan

Send-MailMessage -to "" `
       -from "" `
       -subject "Time to patch the Hyper-V Farms" `
       -Body "Dear IT Support,

Virtual Machine Manager has downloaded the $(Get-Date -format y) updates for the Windows hosts in the Hyper-V clusters. Please ask a member of the team to check the compliance status of the hosts in SCVMM and remediate any problems that are found." `
       -priority High `
       -credential $anonCredentials

That script connects to SCVMM, looks for any updates released for Windows Server 2012 in the last month and adds them to a new baseline named after the month. It assigns the baseline to a host group called Hyper-V and starts a compliance scan on two clusters. Finally, it emails the helpdesk to let them know that there are new updates to be installed. That’s what the anonymous credentials are there for: the account that I use to run the script doesn’t have a mailbox, so our Exchange server rejects the message when it tries to authenticate with it.

If you’re feeling particularly brave, you can add this to the script:

Get-SCVMHostCluster -Name ClusterName | Start-SCUpdateRemediation -RemediateAllClusterNodes

That will, in theory, cycle through each of the hosts in your cluster putting them into maintenance mode, migrate the VMs on that host onto another in the cluster, install the updates and reboot them. I say “In theory” because I haven’t managed to get this to work 100% reliably yet.

I then created a scheduled task which runs this script on the second Wednesday of every month.

Anyway, there you have it. Feel free to steal this if you want it but run it at your own risk, I’m not responsible if it does something unfortunate.

DCM Script – Disable On-Board Sound Card if USB Device is Attached

The first script in my new library is one that I am quite proud of, as it was the first that I created to solve a relatively complex problem. It came to be because of the Music department at the college that I work at. The PCs in their department have external USB sound cards for students to plug MIDI instruments into and other such things (Hell, I’m not a musician, I don’t understand what they’re for exactly!). The problem was that Sonar, the music sequencing software that they use, was giving them trouble when both the on-board audio and the USB device were enabled. They therefore wanted me to disable the on-board sound card in the machines so that it wouldn’t happen again.

I could have gone to each of their PCs and just disabled the on-board sound in the BIOS or in Windows, but that would be a short-term fix: if the PC got replaced or rebuilt, the person doing it would have to remember to disable it again. I therefore wrote this:

$SoundDevices = Get-CimInstance Win32_SoundDevice

if ($SoundDevices.DeviceID -like "USB*")
{
         #USB Sound Card detected, will now check to see if on-board HDAUDIO is still active
         $HDAudio = Get-CimInstance Win32_SoundDevice -Filter 'DeviceID LIKE "HDAUDIO%"'
         $AudioStatus = $HDAudio.StatusInfo
         If ($AudioStatus -eq '3')
         {
                 #On-board still active, need to disable
                 echo "USB Audio detected, on-board audio needs to be disabled"
         }
         else
         {
                 #USB detected, on-board disabled
                 echo "OK"
         }
}
else
{
        #No USB, onboard sound to be left alone
        echo "OK"
}

This script queries WMI to find out what sound devices are installed in the machine. If it detects one with USB in the device string, it goes on to see if the on-board HDAUDIO device is enabled. If it is, it sends a string back to ConfigMgr saying that remediation needs to happen. If the on-board device is disabled, it sends “OK” back to ConfigMgr. If there is no USB audio device installed at all, it also sends “OK” back to ConfigMgr.

The remediation script is quite simple. Since these are Windows 7 machines, there is no PowerShell way of managing hardware, so I had to use the old DEVCON command. The remediation script therefore looks like this:

%Path_to_file%\devcon.exe disable HDAUDIO\*

That disables any device with HDAUDIO at the beginning of its device ID.

Set the compliance rule on the DCM script to look for a string that says “OK” and that’s it. Then add the rule to an existing baseline or create a new one, and deploy it to a collection.