Category Archives: Macs

Parallels Mac Management – What’s been happening?

Yet another post about the Parallels Mac Management plugin for Configuration Manager. Anyone would think they pay me to write this crap! Well, unfortunately for me, they don’t. I have to admit, this post has been an embarrassingly long time in the making; in between work, parenting and being absolutely cream crackered all the time, I just haven’t had the time to work on it.

Despite not having used ConfigMgr or the PMM in almost two years, I’m still very enthusiastic about both subjects. They combine my favourite kind of computer with my favourite professional application and by God, I’ve just realised what a complete spod I am. Oh well. Anyway, I was very curious about the progress Parallels have made with their product. The latest release of the PMM was almost unrecognisable (in a good way) compared to the version that I last used at my old college (v3.5) and I knew that Parallels were working on some pretty cool stuff for version 4. In the end, my curiosity got the better of me, so I reached out to one of my contacts at Parallels and asked if I could take a look at the latest version. He said “Yes” and the seeds for this article were sown; I downloaded the latest version of the product (v5 at the time) and started writing.

Quite a lot of time later, the Multi Academy Trust (MAT) of schools that I work for decided to change the management system that it uses for its computers. The schools all run RM Community Connect 4; anyone who has ever worked in a school and had the misfortune of working with a version of RM Community Connect will most likely understand why. One day I may write an article about what we’ve done and I’m sure a lot of people will call us insane for doing so, but never mind. We chose System Center Configuration Manager to replace the computer management aspect of CC4 because it’s part of our Microsoft OVS-ES license and because of my experience of installing and using it.

As well as using Configuration Manager to manage our Windows machines, we also needed something to manage our Macs. There are about 200 of them dotted around the organisation and at the moment, they’re just about completely unmanaged. Yes, some are connected to one of three instances of Profile Manager and there are a couple of Munki instances in place, but it’s (ugh) organic, unplanned and barely functional. The Macs are taking up a disproportionate amount of our IT Support team’s time compared to the Windows machines and we needed something to manage them properly. There seems to be a relative lack of decent management tools out there for Macs but in the end, we looked at JAMF (formerly Casper and probably the market leader for Mac management), the PMM and KACE, formerly a Dell product but now spun out of Dell as part of Quest Software. KACE was the cheapest, the PMM was in the middle and JAMF was the most expensive. I got trials for all three but the PMM won the business because it integrates into ConfigMgr seamlessly, which in turn means that we will be able to properly manage our entire environment through a single pane of glass. Admittedly, my experience with the product also helped, but I will say that the other two are top notch products and under different circumstances, I would be happy to use either of them. However, using the PMM along with Intune and Configuration Manager means that everything we need to manage is managed using one console, which hopefully will keep things relatively simple for our IT teams. Time will tell.

So what’s new? We are now on v6.1 of the PMM so there are a fair few new features to look at.

V4’s headline feature was task sequencing for OSD. Instead of having to follow that pain in the arse procedure I developed for OSD, there is a better one built into the PMM now. You no longer need to create operating system images using Apple’s tools and put in a myriad of scripts to automate the rest of the process. Parallels have put in a proper task sequence wizard similar to the one built into ConfigMgr for Windows machines. They’ve also given you the tools you need to build and capture a master Mac image. You no longer need a copy of OS X Server to capture your image or to create a workflow.

V4.5’s new feature was update management and deployment. While you could always deploy updates to Macs by downloading the update and deploying it with a Package, it wasn’t an especially elegant way of going about it. There was a lot of manual processing involved and personally, I found it fairly awkward to deploy large OS X updates this way. Well, it’s no longer required. Parallels have done something I never would have thought to do in a million years: they’ve leveraged WSUS to deploy updates to Macs. I’ll just wait for a moment to let you recover from that one.

V5 introduced support for Apple’s Device Enrolment Program (DEP) so that new Macs can be automatically enrolled into your environment. They also put a license manager for the PMM into this release.

In V6, they started to support Software Metering, they’ve put in the ability to lock and wipe a Mac remotely, they’ve started using Maintenance Windows and they added a few options to OSD task sequences, the main one being the ability to deploy ConfigMgr Applications during a task sequence as well as Packages.

Finally, V6.1 added support for macOS High Sierra. This has always been a strength of the PMM; High Sierra was released on the 25th September 2017 and Parallels put in official support for the OS on October 10th. The version of KACE that I was evaluating (v8.0) wouldn’t deploy High Sierra to a Mac and Microsoft’s native Configuration Manager client didn’t get support for it until the end of November 2017. Parallels have always been very fast to officially support new versions of macOS and, generally speaking, I’ve found that the PMM has worked even before official support has arrived.

Operating System Deployment

When I last used it, Operating System Deployment in the PMM was something of a weak point. It was technically very impressive that they managed to get it into V3 and it was much better than nothing but if you wanted a fully automated workflow, you couldn’t do it without a lot of hacking about. Essentially, you had to create a Netboot image (NBI) with third-party tools then upload that NBI to ConfigMgr. You then published that image to a collection and any Mac in that collection was able to boot from it and do a deployment. If you wanted to deploy more than one image, you had to repeat the process again. For each image, there was a separate Netboot image published and an extra entry for the Mac to boot from. There’s a fairly long article about it on this site that got linked by Parallels. It’s here if you want to read it.

The situation is a lot better now. Instead of having to mess about as described in the linked article, Parallels have implemented proper task sequencing for your Macs. It is very much like the task sequence wizard that Windows machines use. To use it, you need to create a boot image for your Macs to boot from then an operating system image for them to download and deploy.

To create a boot image, you need to download a DMG file from the server that your PMM is running on. Inside that DMG file is a command line utility that generates your boot image. The utility uses the version of macOS that’s installed on the Mac it’s run from to create an NBI file. The NBI contains a cut down image of macOS and a single app which contacts the PMM and requests and runs task sequences. Once the NBI file is generated, you need to copy it to the server hosting the PMM and add it to the Configuration Manager console using a wizard. This is good; instead of having multiple NBIs published to your Macs, you can have just one. The app in the NBI contacts the PMM and decides whether it has a task sequence it needs to run.

You can then start creating task sequences. Again, like Windows, task sequences created by the PMM can either capture or deploy an operating system image. I expect that the first task sequence that you will create is one to capture an operating system.

Capture a macOS Image

To do so, you open the Configuration Manager console, go to the Software Library workspace, then Operating Systems, then Task Sequences. Right click on Task Sequences and select “Create OS X Task Sequence”.

This brings up a window which asks you to give the task sequence a name:

The “Steps” tab is well named; it contains all of the steps of the task sequence:

In this case, we are only going to have one step in the sequence: capturing an image. Unlike Windows, macOS doesn’t require any kind of preparation such as Sysprep before an image is created.

The options here are pretty simple; it wants to know where it should save the captured image to and the credentials that it can use to connect to the server. Don’t do what I did here in my development environment and use the domain admin account; that would be bad. Use a proper network access account instead.

Once the form is filled out, press the OK button and the sequence will be saved. You can then deploy the task sequence to a collection just like you do with a Windows one.

Deploy a macOS Image

The other side of this coin is to deploy an image to a Mac. You use the same wizard to create a Deployment task sequence, so let’s look at that screenshot with the possible steps again:

As you can see, there are nowhere near as many possible steps here as there are for Windows machines but that said, there are probably enough to cover most people. You don’t need to install drivers on a Mac as they’re all included in the OS image. There isn’t an equivalent of the USMT for macOS that I’m aware of, so there are no options for that. Really, the only thing that I’d say is missing is the ability to deploy any available updates. Let’s look at an example task sequence:

The sequence that I’m showing is as basic as you can get but it works. It partitions the disk (only HFS+ is supported at the moment; I don’t know how Parallels are going to handle APFS. I’ll ask the question when this article is published and update it later on), downloads and applies the OS, sets the Mac’s hostname, joins it to a domain and installs a mobileconfig file. You can also install software during the task sequence; when I first looked at this in the V4 beta, you could only install software using legacy Packages. I found this disappointing and told Parallels so. However, since V6 you can use modern style Applications as well. The only slight snag with deploying Applications in a task sequence is the detection routine that Parallels uses to work out whether an installation was successful. It frequently doesn’t update itself quickly enough before the sequence moves on to the next Application, so more often than not, it thinks that the installation of the software hasn’t been successful. Parallels therefore recommend that you tick the “Continue on Error” option when deploying Applications, otherwise the task sequence will fail. The downside to this is that if the deployment of an Application genuinely does fail, it won’t be immediately obvious that it has, and you may find it a little harder to troubleshoot. Parallels acknowledge this problem in their documentation but I suspect that it’s probably not something that’s going to be solved without rearchitecting their solution.
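If you suspect an Application deployment silently failed mid-sequence, one way to check on the Mac itself is to look for the installer package’s receipt. A minimal sketch of that check follows; the package identifier is made up and the receipt list is canned sample data so the logic can be demonstrated anywhere (on a real Mac you would feed it from `pkgutil --pkgs` instead):

```shell
# Canned sample of what `pkgutil --pkgs` might print on a real Mac
receipts="com.apple.pkg.CoreTypes
com.example.someapp"
pkg_id="com.example.someapp"   # hypothetical package identifier

# Does the receipt list contain an exact match for our package id?
state="missing"
if printf '%s\n' "$receipts" | grep -qx "$pkg_id"; then
  state="installed"
fi
echo "$state"
```

If this prints “missing” for an Application the task sequence claimed to install, you know the “Continue on Error” option has papered over a genuine failure.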

Once the task sequence is finished, you press the OK button to save it and you can deploy it to a collection in the usual manner. An “Unknown Macs” collection is now created inside Configuration Manager when the PMM is installed; this means that a Mac doesn’t have to be known to Configuration Manager before it can run a task sequence.

The PMM’s Task Sequence engine also supports task sequence variables. This means that settings such as the Host Name can be automatically assigned and if you wish, you can assign blank variables to a collection as well. If the wizard detects a blank variable, it will let you fill it in.
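As an illustration of the kind of value you might assign, here is a sketch of deriving a Host Name from the Mac’s serial number. The naming scheme and the serial are made up, and the serial is hard-coded so the logic runs anywhere; on a real Mac you would read it from `system_profiler SPHardwareDataType` or `ioreg`:

```shell
# Hypothetical naming scheme: a fixed prefix plus the last six
# characters of the serial number
serial="C02ABC123XYZ"
hostname="MAC-$(printf '%s' "$serial" | tail -c 6)"
echo "$hostname"
```

You could run something like this in a script step and feed the result into the Host Name variable, rather than typing a name in by hand for every Mac.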

 

macOS Operating System Updates

V4.5 of the PMM brought the ability to manage updates. The way that they’ve achieved this is quite interesting. They have leveraged the Local Updates Publishing facility of WSUS to catalog updates from Apple. Because the updates from Apple are cataloged in WSUS, they are entered into the Configuration Manager console in the Updates section.

Because the updates are now in the console, you can create Software Update Groups and deploy them to the collections that your Macs are members of. However, the software update point just catalogues the updates; it doesn’t send them to distribution points and your Macs don’t download and install them from your Configuration Manager servers. Instead, the Parallels update point tells the Mac which updates to download and the Mac then fetches them either directly from Apple’s servers (the default option) or from a local macOS update server.

The process to set this up is very involved and because of this, I haven’t yet had the opportunity to evaluate how well the process works. It’s on my to-do list and I will update this article when I get the opportunity to try it. If you want a more detailed overview of how this works, the Parallels documentation is very thorough and clear. You can find it here: Parallels Administrator Guide from page 153 to 166.

Device Enrolment Program (DEP)

I’m afraid I don’t have much to say about this. In theory, I expect that it would be very useful, as it gives you the ability to automatically enrol any new Mac that you buy into your Configuration Manager instance. However, all of the Macs that we own are relatively old and none were bought through DEP, so I can’t evaluate how well this facility works. I wish I could say more but I can’t.

License Manager

This one is for Parallels’ benefit more than anything else but I understand why they’ve implemented it. The PMM has always been licensed per Mac and it’s always been a timed license. However, older versions of the PMM had no way of tracking usage or expiry dates. You could buy a license for a certain number of Macs and use it for as many as you wanted with no recourse. I imagine that a lot of people did just that, so Parallels have put in this facility to limit you to managing the number of Macs that you’ve bought licenses for.

There isn’t much to say about it really. It keeps track of the number of Macs that are enrolled in Configuration Manager and I would imagine that it does a good job of stopping the PMM from working once your licenses expire. As I say, I understand why they’ve done this. Anyway, this is probably the most important part of the license manager that shows in the console:

Seamless…

Software Metering

It works. There isn’t much more to say about it. It works in exactly the same way as it does for Windows machines, you configure it in the same way and you use the same reports as you do for Windows software to view the data. This is a good thing.

Remote Lock and Wipe

This does what it says on the tin: it allows you to remotely lock and wipe a Mac which is lost or stolen. It works by enrolling your Mac into an MDM which is part of the PMM; enrolment is achieved either through DEP or through the PMM’s own MDM component.

Again, I haven’t had the opportunity to try this facility but considering that a large proportion of our laptops are MacBooks used by a lot of the senior people in our organisation, and that GDPR is going to be a thing, this is something I’m going to take a closer look at very soon. According to the documentation, you can tell the PMM to automatically enrol any Mac in a particular collection into its MDM. Once you’ve done that, you can send a remote wipe or lock command to the Mac as soon as it connects itself to the Internet. Once I’ve implemented this, I’ll post an update to this article.

Maintenance Windows

As with Software Metering, this just works. You can now use the same method of assigning maintenance windows to your Macs as you do for your Windows machines and your Macs will respect them. Again, a good thing.

Room for improvement?

It’s safe to say that the PMM, already a very good product, has improved considerably in the last two years. Parallels have added new features and support for more and more ConfigMgr capabilities that were previously Windows-only. The product still seems to be stable and it works well. I’m pleased with what I’ve seen and have been happy to spend a not inconsiderable amount of money on the PMM once again. However, that doesn’t mean there isn’t room for improvement in some regards.

Multi-site scenarios

My MAT has three school clusters. Two of the school clusters have three geographical sites, the other one has two. All of these sites are connected by WAN links, either leased lines or point-to-point wireless connections. The point is, although a computer can access any other computer or server at any location in our organisation, it might not have the fastest or lowest latency link to that other computer. Therefore, all of the separate sites that are in the organisation have at least some local resources to use such as a domain controller, a file server and Configuration Manager infrastructure.

My original plan when implementing Configuration Manager was to have a single primary site covering our whole organisation. I would have at least one Management Point and Distribution Point per geographical site and clients would choose which MP and DP to use via their boundary groups. This made sense as I’m not managing anything like 150,000 clients, so a CAS and multiple primary sites were deemed unnecessary. The hope was that I would be able to install an instance of the PMM at each of the secondary schools and have those PMMs integrate into the site.

Unfortunately, this wasn’t to be. Each Configuration Manager site can only support a single instance of the PMM. It doesn’t matter if you have multiple management points in your site; if you try to install more than one PMM in a single SCCM primary site, the PMM configuration program detects that there is already a PMM in that site and refuses to configure the PMM. So this left me with three choices:

  1. Have a single PMM across the entire organisation and have at least two thirds of our Macs download their policies over a WAN connection
  2. Have a single primary site in one school cluster and deploy secondary sites to the other school clusters
  3. Deploy a Central Administration Site (CAS) and a primary site in each of the school clusters.

The IT teams discussed this and we came to the following conclusions:

  1. We didn’t want management traffic of any kind going over the WAN links. They have limited bandwidth and relatively high latencies, and if one of them stopped working for some reason, Mac management would stop working for at least some of the Macs. We decided that this was unacceptable.
  2. Secondary sites are designed for very slow WAN connections and they don’t support all of the available roles for Configuration Manager. While our WAN connections are relatively slow, they’re not that slow and we wanted to be able to install all possible roles locally.
  3. To allow us to have a PMM per cluster and for the reasons above, we decided to have a CAS and a primary site per school cluster. This generated quite a lot of debate but this configuration won out eventually. It does mean that we have a lot more servers set aside for Configuration Manager than we originally intended to have but everything is in place now.

It would be nice, however, if all of this wasn’t necessary and if Parallels would allow you to have multiple PMMs per primary site. Configuration Manager has no issue with multiple management points per site so it can’t be that hard.

Mac App Store

Love it or loathe it, the App Store is here and it isn’t going anywhere. I can’t buy Final Cut Pro, Logic, Pages, Numbers, Keynote or any other application that’s only available through the App Store by any other means. The PMM has an MDM built into it for Remote Lock and Wipe. I’d like to leverage it to deploy App Store apps to my Macs. I don’t want to use Profile Manager to do this because Profile Manager sucks and macOS Server is being severely deprecated in its next release. I also don’t want to use Intune or another MDM because I don’t want my Macs to be managed in two places. Pretty please, Parallels?

Operating System Deployment

So, OSD has been much improved with some nice wizards but there are a couple of areas where it still falls a bit short. The first thing that I’d like to see improved is the “Execute Script” action inside the task sequence wizard. Let me illustrate what I mean:

Don’t get me wrong, it’s adequate, but personally I don’t like the fact that it’s just a box to copy and paste a script into. I would much rather it mirrored what Microsoft does: keep the script in a Package which the task sequence downloads from a DP and executes. You might think that I’m being rather petty, but consider this: you might have 10, 15 or 20 task sequences and you might want them all to execute the same script. If you needed to change that script, you would have to go through each task sequence and edit each one in turn. You don’t have to do that with a Windows task sequence; all you do is edit the script stored in your definitive software library and update your distribution points. I think it’s obvious which method scales better. The same gripe applies to the “Apply Configuration Profile” step as well; the mobileconfig file is imported into the task sequence and stored in there instead of being retrieved from a Package.

The second issue I have is that operating system deployment still requires more action from a technician than I would like. To build or rebuild a Mac, a tech needs to visit the Mac, set it to boot from the network by holding down the Option key or by using the Startup Disk preference pane, wait for the NBI to boot, authenticate, select a task sequence and then set it going. When you have an entire room of Macs to rebuild, this is time consuming. It would be so much better if task sequences appeared in the Parallels Application Portal as well as in the boot media, and better still if you could schedule when the task sequence ran, just like what happens with Windows and Software Center. That way a tech could set up a job in the Configuration Manager console and leave it at that. Even if you couldn’t schedule it, having a single-click action inside the Parallels Application Portal which automatically selected your task sequence and authenticated you against the PMM would be a vast improvement. Maybe in V7?

The last thing I’d like to see is a way of creating bootable preinstallation media. It’s not always possible to netboot, so the ability to boot into the preinstallation environment from a USB stick could be useful.

The End. For Now.

This article has been coming for a downright embarrassing amount of time for which I can only apologise. I’ve wanted to get this completed for a very long time and I’ve only just recently found the time to do so. I hope it’s been informative and useful for you. When Parallels updates the PMM with some new functionality, assuming the functionality merits it, I will post a new article talking about it. However, I don’t want this blog to be a Parallels mouthpiece so I am going to try to post more frequently about other things but they’re probably going to be computer related. Hey, I’m a geek, this is what interests me.

I don’t consider any of the shortcomings above to be show-stoppers. The PMM is a very capable product that’s come a very, very long way in the last four years, I have no hesitation in recommending it for use. I invite Parallels to contact me about this article and inform me of any inaccuracies or mistakes that I’ve made. I’m happy to post corrections to the article if need be. With their permission, I will also publish any comments that they have to make about it as well.

If you’ve read this and have any questions that you’d like to ask, please post a comment in the section below and I will do my best to answer.

DCM Script – Detect if a Mac is a Member of the Domain and Join If Not

As I’ve said before, Macs can be a pain in the backside when you’re trying to manage a lot of them. One of the particular bugbears that I’ve found is that they have a habit of unbinding themselves from your Active Directory domain for no apparent reason. Usually this would mean a helpdesk call because someone can’t log on and disruption and annoyance and well, you get the idea.

This script is a bit of a kludge. My Bash isn’t the best by any stretch of the imagination and I’ve put detection and remediation into the same snippet because, for some reason, I couldn’t get a separate remediation script to work. No matter; it’s not ideal but it still works. Anyway, the script looks like this:


#!/bin/bash
# Detection and remediation in one ConfigMgr DCM script.

# dsconfigad -show prints "Active Directory Forest = <fqdn>" when the Mac
# is bound. Appending "_Member" guards against the null output you get
# from an unbound Mac.
DOMAIN_STATUS="$(dsconfigad -show | awk -F'= ' '/Active Directory Forest/ {print $2}')_Member"

if [[ "${DOMAIN_STATUS}" == "{{domain.fqdn}}_Member" ]]; then

echo "OK"

exit 0 # already a domain member, nothing more to do

fi

# Not bound, so join the domain
dsconfigad -add {{domain.fqdn}} -user {{user}} -password {{password}} -force
EXIT_CODE=$?

if [[ ${EXIT_CODE} -ne 0 ]]; then

echo "There was an error. Code is ${EXIT_CODE}"
exit ${EXIT_CODE}

fi

echo "OK"

Change anything in double braces to reflect your environment.

The script runs a command called dsconfigad which gets information about the Active Directory domain that the Mac belongs to. It trims out the FQDN of the domain, appends _Member onto the end of it and adds it to a variable. I’m adding _Member to the end of the string because if the Mac isn’t a member of a domain, dsconfigad returns a null value and the variable doesn’t get created.
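You can see what the detection string ends up as by running the same trim-and-append logic against a canned line of `dsconfigad -show` output (canned so it can be demonstrated off a Mac; here I use awk’s field separator to strip the padding around the equals sign):

```shell
# A canned line in the shape dsconfigad -show produces on a bound Mac
sample="Active Directory Forest          = example.com"

# Extract the FQDN and append _Member, as the detection script does
forest=$(printf '%s\n' "$sample" | awk -F'= ' '/Active Directory Forest/ {print $2}')
status="${forest}_Member"
echo "$status"
```

On an unbound Mac the forest line is absent, so the variable would contain just “_Member” and the comparison against the expected value fails, triggering remediation.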

The script compares the output with what it should be. If it matches, it returns “OK” to ConfigMgr and exits. If not, it joins the Mac to the domain and returns “OK” to ConfigMgr. If for some reason the domain join fails, the script sends the error code back to ConfigMgr.

As always, you set the detection rule to look for a string called “OK”, add the rule to a new or pre-existing baseline and deploy the baseline to a collection. After you do, any Mac which is managed by ConfigMgr but which is not a member of your domain will find itself joined.

As I say, I know that my Bash scripting skills are fairly minimal so if you see a better way for this script to work, please feel free to contact me. The usual “I’m not responsible if this script hoses your Mac and network” disclaimers apply.

Mac Servers – What I’m Doing

Hopefully this should be the last post about Macs for the time being! I don’t know why I’ve written so many recently, I guess it’s just because the bloody things take up so much of my time at work. Don’t get me wrong, I like Macs a lot for my own purposes. I own two, my girlfriend has one, we both have iPhones, I have an Apple TV and an AirPort Extreme router. I even subscribed to MobileMe when you had to pay for it. It’s just that professionally, they’re a pain in the arse and if you really want I’ll put that on a certificate for you to put on your wall.

Anyway, my last post was originally supposed to be about the Apple server solution that we’re using at work. However, it kind of turned into a rant about how Apple has abandoned the server market. I stand by the post entirely but I thought I’d try to write the post I was originally intending to write!

A few months ago I got a call from the technician in the Media Studies department who looks after their Macs. As far as he could see, one of the Macs had stopped working entirely and he was getting into a bit of a panic about it (the Mac was actually fine; it had just buggered up its own disk permissions, preventing anything from launching). The reason he was panicking was that he thought all of the student work stored on it had been lost. These Macs are used for editing video with Final Cut Pro; because of the demands that video editing puts on a computer’s storage subsystem, it is totally impractical to edit video over a network on one computer, let alone 40 at once, so the students have to store any work that they do on the Mac itself. If that Mac fails, the user kisses goodbye to all of the work done that year. That wasn’t acceptable, but he and the college had put up with it up until that point. He wanted to know if there was a way to back the Macs up so that if one does go kaboom, we have a way of recovering the work.

This turned into a fairly interesting project as I had to investigate viable solutions to achieve this. I came up with four:

  1. Backing the Macs up using an external hard drive and Time Machine
  2. Backing the Macs up using a server running OS X Server and Time Machine
  3. Backing the Macs up using a server from another manufacturer using some other piece of backup software
  4. Backing the Macs up to a cloud provider

First of all, I investigated and completely discounted the cloud solution. We would need in the order of 20TB of space and a huge amount of bandwidth from the cloud provider for the backups to work. The Macs would need to be left on at night as there would be no way we’d be able to back them up during the day and maintain normal internet operations. It all ended up costing too much money; if we had gone for a cloud solution, we would have spent as much money in six months as a fairly meaty server would have cost.
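To put rough numbers on the bandwidth problem, here is a back-of-envelope sketch. The 10-hour overnight window is my assumption, not a measured figure; the 20TB comes from the estimate above:

```shell
# Seeding ~20 TB of backups in an assumed 10-hour overnight window
total_bytes=20000000000000   # 20 TB, decimal
window_secs=$((10 * 3600))

# Sustained throughput required, in MB/s and Gbit/s
mb_per_sec=$(( total_bytes / window_secs / 1000000 ))
gbit_per_sec=$(awk -v b="$total_bytes" -v s="$window_secs" \
  'BEGIN { printf "%.1f", b * 8 / s / 1e9 }')
echo "Sustained throughput needed: ${mb_per_sec} MB/s (~${gbit_per_sec} Gbit/s)"
```

That works out at over 4 Gbit/s sustained just for the initial seed, which is well beyond what a school’s internet connection could absorb, even before anyone else wants to use it.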

We also put quite some thought into using USB disks and Time Machine to back them up. This certainly would have worked and it would have been nice and simple and relatively cheap. Unfortunately, there were some big downsides too. Securing the disks would have been almost impossible; it would be far too easy for someone to disconnect one and walk off with it. If we wanted any decent kind of speed and capacity, we probably would have had to use enclosures with 3.5″ drives in them, which would have meant another power supply to plug in. Finally, I couldn’t see a way to stop students just using them as additional space to store their stuff on, completely negating the point of them in the first place.

So that left network storage for the Macs to store their backups on. First of all, I looked at Dell (other server vendors are available) to see how much a suitable server would cost. Dell’s range of servers with lots of 3.5″ drive bays is frustratingly small but eventually I managed to spec a T620 with 2x1TB and 8x4TB drives, a quad core CPU, 16GB RAM, a half decent RAID controller and a three year warranty for about £7,500 retail. Add the cost of a suitable backup daemon on top and it would come to somewhere in the region of £9,000-£9,500. Truth be told, that probably would have been my preferred option but a couple of things kept me from recommending it. First of all, £9,500 is a lot of money! Secondly, although Apple have been deprecating AFP and promoting SMB v2/v3 with Mavericks and Yosemite, Apple’s implementations of SMB have not been the best since they started migrating away from Samba. AFP is generally still the fastest and most reliable file sharing protocol that a Mac can use. With this in mind, I decided to have a look at what we could get from Apple.

The only Server in Apple’s range at the time was the Mac Mini Server. This had 8GB of RAM installed, a quad core i7 CPU at roughly 2.6GHz and 2x1TB hard disks. Obviously this would be pretty useless as a backup server on its own but that Thunderbolt port made for some interesting possibilities.

At first, I looked at a Sonnet enclosure which put the Mac into a 1U mount and had a Thunderbolt to PCI Express bridge. I thought that I could put a PCIe RAID controller in there, attach that to some kind of external SAS JBOD array and then daisy-chain a Thunderbolt 10GbE Ethernet adapter from it. I’m sure it would have worked but it would have been incredibly messy.

Nevertheless, it was going to be my suggested Apple solution until I saw an advert on a website somewhere which led me to HighPoint's website. I remembered HighPoint from days of old, when they sold dodgy software RAID controllers on Abit and Gigabyte motherboards. A Thunderbolt storage enclosure that they promote on their site intrigued me: a 24 disk enclosure with a Thunderbolt to PCIe bridge, three PCIe slots and, crucially, a bay in the back for a Mac Mini to be mounted. Perfect! One box, no daisy chaining, no mess.

Googling the part code led me to a local reseller called Span, where I found out that what HighPoint were selling was in fact a rebadged NetStor box. This made me happy as I didn't want to be dealing with esoteric HighPoint driver issues should I buy one. NetStor in turn recommended an Areca ARC-1882ix-24 RAID controller to drive the disks in the array and an ATTO FastFrame NS12 10GbE SFP+ network card to connect it to the core network. Both of these brands are reasonably well known and well supported on the Mac. We also put 8x4TB Western Digital Reds into the enclosure. I knew that the Reds probably weren't the best choice for this, but considering how much more per drive the Blues and the Blacks cost, we decided to take the risk. The cost of this bundle including a Mac Mini Server was quoted at less than £5,000. Since OS X Server has a facility to allow Time Machine on client Macs to back up to it, there would have been no additional cost for a backup agent.

Considering how squeezed for money the public sector is at the moment, the Mac Mini Server plus Thunderbolt array was the chosen solution. We placed the order with Span for the storage and networking components and an order for a Mac Mini Server from our favourite Apple resellers, Toucan.

Putting it all together was trivial; it was just like assembling a PC. Screw the hard drives into their sledges, put the cards in the slots, put the Mac Mini into the mount at the back and that's it. After I connected the array to the Mac, I had to install the kexts for the RAID controller and the NIC, and in no time at all we had a working backup server sitting on our network. In terms of CPU and memory performance it knocked spots off our 2010 Xserve, which surprised me a bit. We created a RAID 6 array giving us 24TB of usable storage, or about 21.8TiB if you want to use the binary units. The array benchmarked at about 800MB/sec read and 680MB/sec write, which is more than adequate for backups. Network performance was also considerably better than using the on-board NIC; not as good as having a 10GbE card in its own PCIe slot, but you can't have everything. The array is quite intelligent in that it powers on and off with the Mac Mini; you don't have to remember to power the array up before the Mac like you do with external SAS arrays.

I know that it's not an ideal solution. There is a part of me which finds the whole idea of using a Mac Mini as a server rather abhorrent. At the same time, it was the best value solution and the hardware works very well. The only thing that I have reservations about is Time Machine; it seems to work OK-ish so far, but I'm not sure how reliable it will be in the long term. However, I'm going to see how it goes.
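For what it's worth, pointing the client Macs at the server's Time Machine service is pretty much a one-liner with `tmutil`. A minimal sketch, assuming a share called Backups on a hypothetical server (swap in your own credentials and hostname):

```shell
# Point this Mac's Time Machine at the server's backup share.
# Server name, account and share name are all placeholders.
sudo tmutil setdestination "afp://tmuser:password@backups.example.sch.uk/Backups"

# Confirm the destination took, then kick off a first backup.
tmutil destinationinfo
sudo tmutil startbackup
```

After that, the Macs back up on Time Machine's normal hourly schedule with no further intervention.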

Mac Servers in a Post Xserve World

About three years ago, Apple discontinued the Xserve line of servers. This presented a problem. While the Xserve was never top-tier hardware, it was at least designed to go into a server room; you could rack mount it, it had proper ILO and it had redundant power supplies. You would never run an Apache farm on the things, but along with the Xserve RAID and similar units from Promise and Active Storage, it made a pretty good storage server for your Macs, and it was commonly used as an Open Directory server and a Workgroup Manager server to manage them too.

Discontinuing it was a blow for the enterprise sector, who had come to rely on the things, as Apple didn't really produce a suitable replacement. The only "servers" left in the line were the Mac Pro Server and the Mac Mini Server. The only real difference between the Server models and their peers was that the Servers came with an additional hard drive and a copy of OS X Server preinstalled. The Mac Mini Server was underpowered, didn't have redundant PSUs, only had one network interface, didn't have ILO and couldn't be racked without a third-party adapter. The Mac Pro was a bit better in terms of spec; it at least had two network ports and was internally pretty much identical to its contemporary Xserve, so it could at least do everything an Xserve could do. However, it couldn't be laid down in a cabinet as it was too tall, so Apple suggested you stood two side by side on a shelf. That effectively meant using 10U to house four CPU sockets and eight hard drives. Not a very efficient use of space, the things still didn't come with ILO or redundant power supplies, and they were hideously expensive, even more so than the Xserve. It also didn't help that Apple didn't update the Mac Pro for a very long time, so it was rapidly being outclassed by contemporary hardware from other manufacturers, both in terms of specification and price.

Things improved somewhat when Thunderbolt-enabled Mac Mini Servers came onto the scene. They came with additional RAM which could be expanded, an extra hard drive and another two CPU cores. Thunderbolt is essentially an externally presented pair of PCI Express lanes; it gives you a bi-directional interface providing 10Gbps of bandwidth to external peripherals. Companies like Sonnet and NetStor started manufacturing rack-mountable enclosures into which you could put one or more Mac Minis. A lot of them included Thunderbolt to PCI Express bridges with actual PCIe slots, which meant you could connect RAID cards, additional or faster network cards, Fibre Channel cards and all sorts of exciting serverish things. It meant that for a while, a Mac Mini Server attached to one of these could actually act as a semi-respectable server. They still didn't have ILO or redundant PSUs, but Mac servers could at least be reasonably easily expanded and their performance wasn't too bad.

Of course, Apple being Apple, this state of affairs couldn't continue. First they released the updated Mac Pro. On paper it sounds wonderful: up to twelve CPU cores, up to 64GB RAM, fast solid state storage, fast GPUs, two NICs and six(!) Thunderbolt 2 ports. It makes an excellent workstation. Unfortunately it doesn't make such a good server; it's a cylinder, which makes it even more of a challenge to rack. It only has one CPU socket, four memory slots and one storage device, and there is no internal expansion. There is still no ILO or redundant power supply. The ultra-powerful GPUs are no use for most server applications and it's even more expensive than the old Mac Pro was. The Mac Pro Server got discontinued.

Apple then announced the long-awaited update to the Mac Mini. It was overdue by a year and much anticipated in some circles. When Apple finally announced it in their keynote speech, it sounded brilliant. They said it was going to come with an updated CPU, a PCI Express SSD and an additional Thunderbolt port. Sounds good! People's enthusiasm for the line was somewhat dampened when they appeared on the store, though. While the hybrid SSD/hard drive was still an option, Apple discontinued the option for two hard drives. They soldered the RAM to the logic board. The Mac Mini Server was killed off entirely, which means you have a dual core CPU or nothing. It also means no memory expansion, no RAIDed boot drive and half the CPU resources. Not so good if you're managing a lot of iPads and Macs using Profile Manager or if you have a busy file server. On the plus side, they did add an extra Thunderbolt port and upgraded to Thunderbolt 2, which would help if you were using more external peripherals.

Despite all of this, Apple still continue to maintain and develop OS X Server. It got a visual overhaul to match Yosemite and it even got one or two new features, so it clearly matters to somebody at Apple. Bearing this in mind, I really don't understand why Apple have discontinued the Mac Mini Server. Fair enough getting rid of the Mac Pro Server; the new hardware isn't suitable for the server room under any guise and it's too expensive. You wouldn't want to put an iMac or a MacBook into a server room either. But considering what you'd want to use OS X Server for (Profile Manager, NetRestore, Open Directory, Xcode), the current Mac Mini is really too underpowered and unexpandable. OS X Server needs some complementary hardware to go with it and there isn't any now. There is literally no Apple product on sale at this point that I'd want to put into a server room, and that's a real shame.

At this point, I hope that Apple do one of two things. Either:

Reintroduce a quad or hex core Mac Mini with expandable memory available in the next Mac Mini refresh

Or

Start selling a version of OS X Server which can be installed on hypervisors running on hardware from other manufacturers. OS X can already run on VMware ESXi; the only restriction that stops people doing this already is licensing. This would solve so many problems: people would be able to run OS X on server-class hardware with whatever they want attached to it again. It wouldn't cause any additional work for Apple, as VMware and others already support OS X in their consumer and enterprise products. And it'd make Apple even more money. Not much, perhaps, but some.

So Tim Cook, if you’re reading this (unlikely I know), give your licensing people a slap and tell them to get on it. kthxbye

Managing Macs using System Center Configuration Manager – Part Two

In my previous article, I described the agent that Microsoft have put into System Center Configuration Manager to manage Macs. Overall, while I was happy to have some kind of management facility for our Macs, I found it somewhat inadequate for our needs and wished it was better. I also mentioned that Parallels, the company behind the famous Parallels Desktop virtualisation package for the Mac, contacted us and invited us to try out their plugin for managing Macs. This article will explain what the Parallels agent is capable of, how well it works and how stable it's proven to be since we've had it installed.

Parallels Mac Management Agent for ConfigMgr

The Parallels agent (PMA) is an ISV proxy for ConfigMgr. It acts as a bridge between your Macs and the management point in your ConfigMgr infrastructure. The agent doesn't need HTTPS to be enabled on your ConfigMgr infrastructure, and ConfigMgr sees Macs as full clients. The Parallels agent fills in a lot of the gaps in the native Microsoft agent, such as:

  1. A graphical and scriptable installer for the agent
  2. A Software Center-like application which lists available and installed software. Users can also elect to install published software if desired.
  3. Support for optional and required software installations
  4. Operating System Deployment
  5. The ability to deploy .mobileconfig files through DCM
  6. A VNC client launchable through the ConfigMgr console so you can control your Macs remotely
  7. Auto-enrollment of Macs in your enterprise
  8. Support for FileVault and storage of FileVault keys

It supports almost everything else that the native ConfigMgr client does. In addition, if you use Parallels Desktop for Mac Enterprise Edition, you can use the plugin to manage VMs.

Installation

The PMA requires a Windows server to run on. In theory, you can have the PMA installed on the server hosting your MP or it can live on a separate server. There are separate versions of the PMA available for ConfigMgr 2007 and 2012/2012 SP1/2012 R2.

Earlier versions of the PMA didn't support HTTPS infrastructures properly, so you needed to have at least one MP and one DP running in HTTP mode. The latest version supports HTTPS across the board, although you do need at least one DP running in anonymous mode for the Macs to download from.

IIS needs to be installed on the server along with WebDAV and BITS. Full and concise instructions are included so I won’t go over the process here. Anybody who has installed a ConfigMgr infrastructure will find it easy enough.

If you are using the OSD component, it needs to be installed on a server with a PXE enabled DP. If you have multiple subnets and/or VLANs, you will need an IP helper pointing at the server for the Macs to find it.

Software Deployment

The PMA supports two methods of deploying software. You can use either Packages or Applications.

Generally speaking, there are three ways to install a piece of software on a Mac, not counting the App Store:

  1. You can have an archive file (usually a DMG) with a .app bundle in it to be copied to your /Applications or ~/Applications folder
  2. You can have an archive file with a PKG or MPKG installer inside to install your application
  3. You can install directly from a bare PKG or MPKG.

Installing using Packages

Unlike with the Microsoft agent, you don't need to repackage your software to deploy it with the PMA; you can deploy it using legacy-style Packages instead. To deploy a piece of software using ConfigMgr Packages, you create a Package in the same way as you would for Windows software and copy the software to a Windows file share. You then create a Program inside the Package with a command line to install the software. Using the three scenarios above, the command lines would look like this:

  1. :Firefox 19.0.2.dmg/Firefox.app:/Applications:
  2. :iTunes11.0.2.dmg/Install iTunes.pkg::
  3. install.pkg

The first command tells the PMA to open the DMG and copy the Firefox.app bundle inside it to the /Applications folder. The second tells the PMA to open the DMG and execute the .pkg file with a switch to install it silently. The third runs an arbitrary command, in this case installing a bare PKG.
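For reference, the manual Terminal equivalents of those three scenarios look roughly like this; the PMA is doing this work for you behind the scenes (file names are the examples above):

```shell
# 1. DMG containing a .app: mount it, copy the bundle out, unmount
hdiutil attach "Firefox 19.0.2.dmg" -mountpoint /Volumes/Firefox
sudo cp -R "/Volumes/Firefox/Firefox.app" /Applications/
hdiutil detach /Volumes/Firefox

# 2. DMG containing a .pkg: mount it and run the installer silently
hdiutil attach "iTunes11.0.2.dmg" -mountpoint /Volumes/iTunes
sudo installer -pkg "/Volumes/iTunes/Install iTunes.pkg" -target /
hdiutil detach /Volumes/iTunes

# 3. A bare .pkg: just run the installer against the boot volume
sudo installer -pkg install.pkg -target /
```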

Once the Package and the Program inside it have been created, you distribute the Package to a DP and deploy it to a collection as standard.

Deploying software using this method is functional and it works nicely. The disadvantage is that there is no way to tell whether a deployment has worked without either delving through the logs or looking in the Applications folder. If you deploy the Package to a collection, the PMA will try to install it on the Mac whether it's already on there or not.

Installing using Applications

As of version 3.0 of the PMA, Parallels support ConfigMgr Applications as well as Packages, using Microsoft's cmmac file format. This means that you need to repackage applications and add them to the ConfigMgr console using the same method as you do for the native ConfigMgr client. This is a bit of a pain, but the benefits it brings make it worthwhile. As with the Microsoft client, there are detection rules built into the Application, meaning that the Parallels client can check whether a piece of software is already on the Mac before it attempts to deploy it. If it is already there, it gets marked as installed and the installation is skipped.

It also brings this to the table:

[Screenshot: the Parallels Application Portal]

That is the Parallels Application Portal. Much like Software Center on Windows, this gives you a list of software that has been allocated to the Mac. It allows for optional and required installations. If a deployment is marked as optional, the software is listed with a nice Install button next to it.

As with the Microsoft agent, you need to be careful with the detection rules that you put into your Applications. The PMA scans the /Applications and /Library folders looking for Info.plist files, extracts the application name and version from those PLISTs and adds them to an index. It then compares the detection rules in the Application against that index; if there's a match, the application is marked as installed. If you don't get the detection rules right, the PMA won't detect the application even if it has been installed properly, and will eventually try to reinstall it. I spent a very interesting afternoon chasing that one down. There are also some applications which don't have Info.plist files or which get installed in strange places. The latest Java update doesn't have an Info.plist; it has an alias named Info.plist pointing at another PLIST file instead, and the PMA didn't pick it up.
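If you want to check what the scanner is likely to see for a given application before writing your detection rules, the name and version live in the bundle's Info.plist and can be read with `defaults`. Firefox here is just an example path:

```shell
# The name and version the PMA's index would be built from
defaults read "/Applications/Firefox.app/Contents/Info" CFBundleName
defaults read "/Applications/Firefox.app/Contents/Info" CFBundleShortVersionString
```

If those commands come back empty or error out, your detection rule probably will too.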

Operating System Deployment

Quite impressively, Parallels have managed to get an operating system deployment facility built into the PMA. It’s basic but it works.

First of all, you need to create a base image for your Macs and take a snapshot of it using the System Image Utility found in OS X at /System/Library/CoreServices/. You can create custom workflows in this utility to blank the hard disk before deployment and to automate the process. Make sure you tell it to make a NetRestore image, not a NetBoot image like I did at first. Once you've done that, you tell it where you want to save your image and set it on its way. The end result is an NBI file, which is a bundle containing your system image and a bootstrap.

You then copy the resulting NBI file onto a PC or server with the ConfigMgr console and Parallels console addon installed. Once it’s on there, open the console and go to the Software Library workspace. Go to Operating Systems, right click on Operating System Images and choose Add Mac OS X Operating System Image. A wizard appears which asks you to point at the NBI file you generated and then to a network share where it creates a WIM file for ConfigMgr.

[Screenshot: the Add Mac OS X Operating System Image wizard]

Once the WIM has been generated, it adds itself to the console but one quirk I have noticed is that if you try to create it in a folder, it doesn’t go in there. It gets placed in the root instead. You can move it manually afterwards so it’s not a huge issue.

The next step is to create a task sequence. There is a custom wizard for this too: right click on Task Sequences under Operating System Deployment in the Software Library workspace, then choose Create Task Sequence for Macs.

[Screenshot: the Create Task Sequence for Macs wizard]

You give the task sequence a name, choose the image that you want to use and press the Finish button. Again, if you’re using folders to organise your task sequences and you try to create the task sequence in a folder, it will get placed in the root of the structure rather than in the folder that you tried to create it in. You can move it if you wish.

From there, it’s pretty much standard ConfigMgr. You need to distribute the image to a DP and publish the task sequence to a collection. The task sequence then appears to the Macs in that collection as a standard Netboot image with the title that you gave to it. You can access it the usual way, either through the Startup Disk pane in System Preferences or by holding down the option key on startup.

[Screenshot: the Startup Disk preference pane showing the deployment image]

Unfortunately, it doesn't allow for any kind of automatic post-image deployment sequence. Although in theory the System Image Utility supports custom workflows which allow software installations and the running of scripts, I haven't managed to get it to deploy the agent automatically. I have therefore created a script which the admin deploying the Mac needs to run, which (amongst other things) names the Mac and installs the PMA. From what Parallels tell me, this is being worked on.
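My script is specific to our environment, but the naming-and-installing part is nothing more exotic than this sketch (the prompt and the PKG path are made up; substitute wherever you keep the agent installer):

```shell
#!/bin/bash
# Hypothetical post-image script: name the Mac, then install the PMA.
read -p "Computer name: " NAME
sudo scutil --set ComputerName "$NAME"
sudo scutil --set LocalHostName "$NAME"
sudo scutil --set HostName "$NAME"

# Install the Parallels agent from wherever your installer lives
sudo installer -pkg "/Volumes/Deploy/pma_agent.pkg" -target /
```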

DCM – Scripts and Profiles

The PMA supports DCM Bash scripts in the same way as the Microsoft agent does. There isn't much to say about this: it works and it's a useful thing to have. The other way of getting settings onto Macs with the PMA is via mobileconfig files, generated either by Profile Manager in OS X Server or by a generator built into the ConfigMgr console addin. The generator looks like this:

[Screenshot: the configuration profile generator in the ConfigMgr console]

Look familiar? Unfortunately, the generator doesn't offer all of the options that Profile Manager does, so if you want to do something more advanced than what's on offer here, you still need a copy of OS X Server and Profile Manager to generate the profile.

To deploy a mobileconfig file using the PMA, go to the Assets and Compliance workspace, then to Compliance Settings, and right click on Configuration Items. Go to Create Parallels Configuration Item, then to Mac OS X Configuration Profile.

[Screenshot: the Create Parallels Configuration Item menu]

You then give the configuration item a name, decide whether it’s a User or System profile and point the wizard at the mobileconfig file generated by Profile Manager. Once you’ve done that, there is a new configuration item in your console which can be added to a DCM baseline and deployed to a collection.
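Incidentally, it's worth sanity-checking a mobileconfig on one Mac by hand before wrapping it in a configuration item. The `profiles` command will install, list and remove profiles (the file name here is an example):

```shell
# Install the profile, list what's installed, then remove it again
sudo profiles -I -F wallpaper.mobileconfig
sudo profiles -P
sudo profiles -R -F wallpaper.mobileconfig
```

If the profile won't install cleanly by hand, it won't install cleanly through DCM either.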

I have used this facility for various purposes: configuring AirPort, joining the Mac to the AD domain, setting up users' desktop settings and wallpaper, setting up Time Machine and more besides. It's a great facility to have and rather more user friendly than delving through PLISTs to find settings.

Other features – VNC Client, Auto-enrolment and Inventory

Well, these do what they say on the tin. We have a VNC client:

[Screenshot: the VNC client]

It takes an inventory:

[Screenshot: a Mac's hardware inventory in the console]

It has the ability to detect Macs on your network and automatically install the agent on them. They all work. There isn’t really much else to be said.

How well does it work?

So clearly, the PMA has far more features than the Microsoft agent, but a piece of software can have all the features in the world and still be useless if it isn't stable. In this regard, I am delighted to say that the Parallels agent has been rock solid for us. It has been continually improved and has had feature after feature added. It doesn't quite make a Mac a first-class citizen on a Windows network, but it comes very close, and going by the way Parallels have improved the product over the last two years, I'm confident that the gap will close. Parallels have also been a lot quicker in getting new versions of OS X supported by their plugin; it already supports Yosemite.

It hasn’t been entirely problem free but when I’ve contacted Parallels Support, they’ve been quick and efficient and they’ve got the problem resolved. Most problems that I’ve come across I’ve managed to solve myself with a bit of thought.

Although Parallels claim that the PMA can be installed on the same server as your management point, I couldn't get it to work when I did this, so I ended up putting it on its own hardware. That was with v1.0 of the product though; we're now on v3.1, so they may have improved that since then.

Having the PMA has also meant that I no longer need a Magic Triangle set up to support and configure my OS X clients. I don't need Profile Manager or Workgroup Manager to deploy settings, and I don't need OS X Server or DeployStudio to deploy images. The only things I need OS X Server for are Profile Manager, to generate the profiles, and (with the arrival of Yosemite) the System Image Utility.

The only real downside to the PMA is the cost: it's expensive and you have to pay the full price annually. That may be hard to justify if you've already spent a fortune on a System Center license.

Conclusion

So let's do a quick advantage/disadvantage comparison:

Microsoft client advantages:

  • Native client so no additional cost
  • Support from Microsoft

Microsoft client disadvantages:

  • Sparse feature set
  • Required installs only
  • Complicated DCM
  • Takes a long time to install multiple applications
  • Requires HTTPS
  • Slow to support new versions of OS X
  • No visible status indicators; next to impossible to see what's going on

Parallels client advantages

  • Includes lots of additional features, brings level of management of Macs to near parity of Windows machines
  • Optional and user initiated installs supported
  • Software Center equivalent and a System Preferences pane to check status of agent. Very thorough logs to check on what the agent is doing are available too.
  • OSD
  • Doesn’t require HTTPS
  • Supports SCCM 2007
  • Much easier to deploy settings by using mobileconfig files

Parallels client disadvantages

  • Expensive
  • Requires an additional DP
  • Probably requires an additional server to install the proxy

They're as good as each other when it comes to running DCM scripts and taking inventories. So the answer is simple: if you can afford it, get the Parallels Management Agent. If I were the head of the System Center division at Microsoft, I'd be going cap in hand to Satya Nadella and telling him to drive all the money to Parallels HQ to acquire the product. Frankly, it puts Microsoft's own efforts to shame.

Managing Macs using System Center Configuration Manager – Part One

This post is about one of my favourite subjects, namely Configuration Manager, referred to hereafter as ConfigMgr. If you don't care about the intricacies of desktop management, I suggest you look away now cos this ain't gonna interest you!

Before I get too far into this post, I should mention that I've written about this subject before on my blog at EduGeek. The content here therefore isn't all that new, but I've rewritten some of it and hopefully improved on it a little too. I will also say that all of this was tried almost two years ago now, so chances are that things have changed a little with ConfigMgr 2012 R2. From what I understand though, most of what I've written here is still accurate.

Anyway, I spend a lot of time at work using ConfigMgr to manage the computers on our Windows network. We use it to almost its full extent: software and update deployment, operating system deployment, auditing software usage, configuring workstations using DCM and a fair bit more besides.

As well as more than 1500 Windows PCs, laptops and servers, I also have around 80 Macs to manage. To put it mildly, they were a nuisance. They were essentially unmanaged; whenever an update or a piece of software came along, we had to go to each individual Mac and install it by hand. The remote administration tools that we were using (Apple Remote Desktop and Profile Manager) were woefully inadequate. ARD requires far too much interaction and hasn't had any significant update since Leopard was released. Profile Manager does an OK job of pushing down settings, but for software management it assumes that the Macs are personal devices getting all of their software from the App Store. That's not really good enough. We were desperate to find something better.

We had been using ConfigMgr to manage our Windows PCs for a couple of years by that point and we had recently upgraded to 2012 SP1 which featured Mac management for the first time. We figured that we may as well give it a go and see what it was like. This is what I found out.

First of all, ConfigMgr treats Mac clients as mobile devices, which means you have to set up an HTTPS infrastructure and install an enrolment point for your Macs to talk to. Your management point needs to talk HTTPS, as do your distribution points. That also means you need to allocate certificates to your PXE points and task sequence boot media if you want them to talk to the rest of your infrastructure.

Once you have all of this set up, you need to enrol your Macs. Bear in mind that I looked at this when ConfigMgr 2012 SP1 was the current version. I understand that the process has changed a little in 2012 R2.

First of all, you need to download the Mac management tools from here for 2012 SP1 and here for 2012 R2. This gets you an MSI file which you need to install on your Windows PC. That MSI contains a DMG file which you need to copy to your Mac. In turn, the DMG contains the installer for the Mac client, the utility for enrolling your Macs in ConfigMgr and an application repackager. You install the client first from an elevated terminal session. Once that's installed, you run another command to enrol your Mac into ConfigMgr. Assuming that you get this right, it will download a certificate and you're good to go. When I was setting up the Macs, I found a very good blog post by James Bannan which goes into a lot more detail.
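To give a flavour of the process, the install-and-enrol step looks something like this, run from the folder you copied out of the DMG (the server and user names are placeholders for your own enrolment proxy point and an account with enrolment rights):

```shell
# Install the ConfigMgr Mac client...
sudo ./ccmsetup

# ...then enrol the Mac against the enrolment proxy point
sudo ./CMEnroll -s enroll.contoso.com -ignorecertchainvalidation -u 'user@contoso.com'
```

You'll be prompted for the account's password, after which the client requests its certificate.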

Once your Mac has been enrolled, you will want to start doing something useful with it. At the moment, the Microsoft client has the following abilities:

  1. You can deploy software
  2. You can install operating system updates using the software deployment mechanism
  3. You can check and change settings by using DCM to modify PLIST files
  4. You can check and change settings by using DCM and Bash scripts to return values and make changes
  5. The agent takes an inventory of the software and hardware installed on your Mac and uploads it to your management point.

Deployment of Software and Updates

Deploying software on the Macs is broadly similar to the same process for Windows computers: you add the software to ConfigMgr as an Application, create a deployment type with some detection rules, distribute the software to a DP and deploy the Application to a collection. The one difference is that you need to repackage the software into a format that ConfigMgr understands. It has a specific format for Mac software called "cmmac", which is essentially a refactored ZIP file containing either a .app, a .pkg or a .mpkg, along with an XML file holding an inventory of the archive, installation instructions and some predefined detection rules. I don't want to make this already long post any longer than it needs to be, so I'll link to Mr. Bannan's blog again, which has a very good rundown of the entire process.
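The repackaging itself is done on a Mac with the CMAppUtil tool that comes in the same DMG as the client. A typical invocation, with example paths, looks like this:

```shell
# Point -c at the source (.app, .pkg, .mpkg or .dmg)
# and -o at an output folder for the .cmmac file
./CMAppUtil -c /Volumes/Applications/Firefox.dmg -o ~/cmmac/

# The resulting .cmmac then goes back to a Windows box to be
# added to the console as an Application
ls ~/cmmac/
```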

Changing settings using PLIST files

This isn’t the simplest of processes but it is quite effective. The first step is to open the ConfigMgr console on your Windows PC and go to the Assets and Compliance workspace. From there, go to Overview then Compliance Settings then Configuration Items. Right click and click Create Configuration Item. This will bring up the following window:

[Screenshot: the Create Configuration Item wizard]

This example is going to set a proxy server against a network interface, so I have named it appropriately and given it a description. Make sure that you set the Configuration Item Type to Mac OS X, then press the Next button.

[Screenshot: the OS version selection page]

The next box lets you target your setting at specific versions of OS X. This screenshot was taken nearly two years ago, before Microsoft had got around to adding Mountain Lion support. The current version supports up to and including Mavericks, but not Yosemite (yet). Choose a specific version or not, depending on what you need, and press Next.

[Screenshot: the setting creation page]

You then need to tell ConfigMgr which PLIST file you're editing and which key you want to change. You also need to tell it whether the key is a string, a number, a boolean value and so on. Once you've done that, change to the Compliance Rules tab.

[Screenshot: the compliance rule editor]

You need to add a rule for each setting that you're changing. The one in the example above sets the hostname of the HTTP proxy server for the Ethernet interface on the Mac. To complete this example, you'd also need to set rules for the HTTPS proxy, the port number and any proxy exceptions. Make sure that Remediate is checked on any rules that you create, then finish the wizard.
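On the client side, you can check whether the remediation has landed with `networksetup`, and the manual equivalent of the change looks like this (the proxy name and port are examples):

```shell
# What the DCM rule should end up enforcing on the Mac
networksetup -getwebproxy "Ethernet"
networksetup -getsecurewebproxy "Ethernet"

# Setting the HTTP proxy by hand for comparison
sudo networksetup -setwebproxy "Ethernet" proxy.example.sch.uk 8080
```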

Once your configuration item is completed, you need to create a DCM baseline, or add the item to an existing baseline, and deploy that baseline to a collection. I'm not going to go through the process here as it's largely identical to doing it for a Windows PC.

Changing settings using Bash Scripts

This is probably the more powerful way of using DCM as you're not relying purely on PLIST files to make your changes. If you can detect and remediate the setting that you want to change using Bash, you can use a script here. The setting could live in an XML file, a config file somewhere, a PLIST and so on; I'm sure you get the idea. The process for creating a compliance rule using a script is largely similar to creating one for a PLIST, and even more similar to creating one for a Windows machine. When you get to the third window, choose Bash Script as the setting type instead of Preference File. You get the opportunity to input two scripts: one to detect the setting and one to change it.
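As a trivial sketch, a discovery/remediation pair for the login window banner might look like this (the message text is an example; whatever the discovery script echoes is what ConfigMgr compares against the compliance rule):

```shell
# Discovery script: echo the current value for ConfigMgr to compare
defaults read /Library/Preferences/com.apple.loginwindow LoginwindowText

# Remediation script: write the value the rule expects
defaults write /Library/Preferences/com.apple.loginwindow LoginwindowText "Authorised users only"
```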

System Inventory

Again, this works in the same manner as it does for Windows machines, albeit not in quite as much detail. At the very least, you get a list of the hardware and software installed on the machine, and the agent keeps track of any changes made. Asset Intelligence and Software Metering aren’t supported, however.

What can’t it do?

  1. OSD
  2. Remote Control
  3. Asset Intelligence
  4. Antivirus monitoring (Although it will deploy SCEP for Mac happily enough)
  5. Software Metering
  6. Power Management (Not easily anyway)

Results

So I’ve covered how it all works. The question that you may be asking now is “How well does it work?”. The answer two years ago was “It works OK… ish. Maybe”. I shall try to explain.

The whole thing feels very much like a v0.1 beta rather than proper release software. It’s functional up to a point but there are some very rough edges, and the functionality is nowhere near as strong on the Mac (and presumably Linux too) as it is on a Windows PC.

For starters, you can only deploy applications to machines, not to users, and you can’t have optional installs. There is no Software Center, so you can’t easily see what software has been deployed and what software is supposed to be deployed. When the agent detects a deployment, it puts up a sixty-minute countdown, the length of which can’t (or couldn’t) be changed. You can tell the Mac to start the deployment when you see the countdown, but if you’re deploying (say) six pieces of software and you leave the Macs unattended, the countdown comes up, expires and installs the software, then the next countdown comes up, expires and installs the software, and so on. It can take hours for multiple deployments to finish if you’re not paying attention.

I also found the detection of deployments rather erratic. Just like with Applications for Windows PCs, there are detection rules which ConfigMgr uses to determine whether a piece of software is installed on the Mac. The client is supposed to use those rules to skip installation of deployed applications that it detects are already present. Unfortunately, the detection process seemed unreliable and our Macs had a habit of repeatedly trying to install software that was already there. The installation then fails because the installer detects that the software is already present and throws an error, and the process restarts because ConfigMgr still thinks it isn’t there. This tended to happen with the more complex Applications which deploy via PKG installers, rather than those which simply copy .app files. I do have a theory as to why this happens, though it only occurred to me about two years later: when you repackage an application using CMAppUtil, it automatically generates the detection rules for you, and with PKG installers it generates a lot of them. I suspect it generates too many, so the client ends up looking for a load of things it can’t detect despite the software being present. Unfortunately I haven’t managed to test the theory, but it makes sense to me.
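If the theory is right, one workaround would be to throw away most of the generated rules and detect on the package receipt alone. A sketch of that single check; the package identifier is hypothetical, and `pkgutil` only exists on OS X, so anywhere else this simply reports “Not installed”:

```shell
#!/bin/sh
# Query the OS X receipt database for one package id instead of
# relying on the long list of rules CMAppUtil generates.
PKG_ID="com.example.app.pkg"   # hypothetical identifier

if pkgutil --pkg-info "$PKG_ID" >/dev/null 2>&1; then
    echo "Installed"
else
    echo "Not installed"
fi
```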

Another gotcha that I’ve found with the repackager is that it sometimes gets the installation path wrong, especially when you run it on a Mac with more than one operating system installed. When that happens, you need to correct the installation command line yourself.
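The usual symptom is a target path anchored to the wrong volume. Stripping the `/Volumes/<name>` prefix puts the path back on the boot volume; the volume name below is made up for illustration:

```shell
# CMAppUtil sometimes bakes a secondary volume into the target path.
# Remove the "/Volumes/<name>" prefix so the package goes to the
# boot volume instead.
wrong="/Volumes/Macintosh HD 2/Applications"
printf '%s\n' "$wrong" | sed 's:^/Volumes/[^/]*::'
```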

DCM works nicely but finding the PLIST files or the setting that you want to change via Bash can be troublesome. That said, it’s no worse than trawling through the registry or finding an obscure PowerShell command to do what you want on a Windows machine.

Rather mysteriously, Microsoft didn’t include a remote control agent with this. Considering that a VNC daemon is baked into every version of OS X, this would have been trivial to implement.
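The built-in Screen Sharing service can even be switched on from the shell via Apple’s ARD kickstart tool, which is what makes the omission so odd. The sketch below just prints the activation command rather than running it (the path is the standard kickstart location on OS X; running it for real needs root):

```shell
#!/bin/sh
# Apple's Remote Desktop kickstart helper, shipped with OS X.
KICKSTART="/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart"

# Print, rather than run, the command so this is safe to execute anywhere.
echo "sudo $KICKSTART -activate -configure -access -on"
```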

The real bugbear that my team and I had with the Microsoft client is that Microsoft were very slow to implement support for new versions of OS X. As I’m sure you know, Apple have been working on a yearly release model for major versions of OS X since they released Lion. Microsoft didn’t support Mountain Lion until a full six months after Apple had released it on the App Store. The delay for Mavericks support wasn’t much better, and Yosemite isn’t supported at all right now. It wouldn’t be so bad if it were a case of “Oh, it’s not supported but it’ll probably work”: unless the client has explicit support for that version of OS X, it won’t work.

So in conclusion, the Microsoft client is better than nothing but it’s not that good either. When my friend and colleague Robert wrote a brief piece about this subject on his blog, he got a message from the lovely people at Parallels telling him about a plugin they were writing for SCCM which happens to manage Macs. Stay tuned for Part Two of this article.

*Update*

Part two of this article is now up. If you want to see how this story ends, please click here.
