Tag Archives: Apple

Building, Deploying and Automatically Configuring a Mac Image using SCCM and Parallels SCCM Agent

I touched briefly on using the Parallels Management Agent to build Macs in my overview article but I thought it might be a good idea to go through the entire process that I use when I have to create an image for a Mac, getting the image deployed and getting the Mac configured once the image is on there. At the moment, it’s not a simple process. It requires the use of several tools and, if you want the process to be completely automated, some Bash scripting as well. The process isn’t as smooth as you would get from solutions like DeployStudio but it works and, in my opinion anyway, it works well enough for you not to have to bother with a separate product for OSD. Parallels are working hard on this part of the product and they tell me that proper task sequencing will be part of V4 of the agent. As much as I’m looking forward to that, it doesn’t change the fact that right now we’re on v3.5 and we have to use the messy process!

First of all, I should say that this is my method of doing it and mine alone. It is not Parallels’ method; it has not been sanctioned or condoned by them. There are some dangerous elements to it, you follow this procedure at your own risk and I will not be held responsible for damage caused by it if you try this out.

Requirements

You will need the following tools:

  • A Mac running OS X Server. The server needs to be set up as a Profile Manager server, an Open Directory server and, optionally, as a Netboot server. On Yosemite, the Server app is also needed for the System Image Utility.
  • A second Mac running the client version of OS X.
  • Both the server and the client need to be running the same version of OS X (Mavericks, Yosemite, whatever) and they need to be patched to the same level. Both Macs need to have either FireWire or Thunderbolt ports.
  • A FireWire or Thunderbolt cable to connect the two Macs together.
  • A SCCM infrastructure with the Parallels SCCM Mac Management Proxy and Netboot server installed.
  • This is optional but I recommend it anyway: a copy of Xcode or another code editor to create your shell scripts in. I know you could just use TextEdit but I prefer something that has proper syntax highlighting and Xcode is at least free.
  • Patience. Lots of patience. You’ll need it. The process is time consuming and can be infuriating when you get something wrong.

At the end of this process, you will have an OS X Image which can be deployed to your Macs. The image will automatically name its target, it will download, install and configure the Parallels SCCM agent, join itself to your Active Directory domain, attach itself to a managed wireless network and it will install any additional software that’s not in your base image. The Mac will do this without any user interaction apart from initiating the build process.

Process Overview

The overview of the process is as follows:

  1. Create an OS X profile to join your Mac to your wireless network.
  2. Create a base installation of OS X with the required software and settings.
  3. Create an Automator workflow to deploy the Parallels agent and to do other minor configuration jobs.
  4. Use the System Image Utility to create the image and a workflow to automatically configure the disk layout and computer name.
  5. (Optional) Use the Mac OS X Netboot server to deploy the image to a Mac. This is to make sure that your workflow works and that you’ve got your post-install configuration scripts right before you add the image to your ConfigMgr server. You don’t have to do this but you may find it saves you a lot of time.
  6. Convert the image to a WIM file and add it to your SCCM OSD image library.
  7. Advertise the image to your Macs.

I’m going to assume that you already have your SCCM infrastructure, Parallels SCCM management proxy, Parallels Netboot server and OS X Server working.

Generate an OS X Profile.

Open a browser and go to the address of your Profile Manager (usually https://{hostname.domain}/profilemanager) and go to the Device Groups section. I prefer to generate a profile for each major setting that I’m pushing down. It makes for a little more work getting it set up but if one of your settings breaks something, it makes it easier to troubleshoot as you can remove a specific setting instead of the whole lot at once.

Your profile manager will look something like this:

[Screenshot: the Profile Manager device groups page]

As you can see, I’ve already set up some profiles but I will walk through the process for creating a profile to join your Mac to a wireless network. First of all, create a new device group by pressing the + button in the middle pane. You will be prompted to give the group a name, do so.

[Screenshot: naming the new device group]

Go to the Settings tab and press the Edit button

[Screenshot: the device group’s Settings tab]

In the General section, change the download type to Manual, put a description in the description field and under the Security section, change the profile removal section to “With Authorisation”. Put a password in the box that appears. Type it in carefully, there is no confirm box.

[Screenshot: the General settings section]

If you are using a wireless network which requires certificates, scroll down to the certificates section and copy your certificates into there by dragging and dropping them. If you have an on-site CA, you may as well put the root trust certificate for that in there as well.

[Screenshot: the Certificates section]

Go to the Networks section and put in the settings for your network.

[Screenshot: the Networks section]

When you’re done, press the OK button. You’ll go back to the main Profile Manager screen. Make sure you press the Save button.

I would strongly suggest that you explore Profile Manager and create profiles for other settings as well. For example, you could create one to control your Mac’s energy saving settings or to set up options for your users’ desktops.

When you’re back on the profile manager window, press the Download button and copy the resulting .mobileconfig file to a suitable network share.

Go to a PC with the SCCM console and the PMA plugin installed. Open the Assets and Compliance workspace. Go to Compliance Settings then Configuration Items. Optionally, if you haven’t already, create a folder for Mac profiles. Right click on your folder or on Configuration Items, go to Create Parallels Configuration Item then Mac OS X Configuration Profile from File.

[Screenshot: the Create Parallels Configuration Item menu]

Give the profile a name and description, change the profile type to System then press the Browse button and browse to the network share where you copied the .mobileconfig file. Double click on the mobileconfig file then press the OK button. You then need to go to the Baselines section and create a baseline with your configuration item in. Deploy the baseline to an appropriate collection.

Create an image

On the Mac which doesn’t have OS X Server installed, install your software. Create any additional local user accounts that you require. Make those little tweaks and changes that you inevitably have to make. If you want to make changes to the default user profile, follow the instructions on this very fine website to do so.

Once you’ve got your software installed and have got your profile set up the way you want it, you may want to boot your Mac into Target Mode and use your Server to create a snapshot using the System Image Utility or Disk Utility. This is optional but recommended as you will need to do a lot of testing which may end up being destructive if you make a mistake. Making an image now will at least allow you to roll back without having to start from scratch.

Creating an Automator workflow to perform post-image deployment tasks

Now here comes the messy bit. When you deploy your image to your Macs, you will undoubtedly want them to automatically configure themselves without any user interaction. The only way that I have found to do this reliably is pretty awful but unfortunately it’s necessary.

First of all, you need to enable the root account. The quickest way to do so is to open a terminal session and type in the following command:

dsenableroot -u {user with admin rights} -p {that user's password} -r {what you want the root password to be}

Log out and log in with the root user.

Go to System Preferences and go to Users and Groups. Change the Automatic Login option to System Administrator and type in the root password when prompted. When you’ve done that, go to the Security and Privacy section and go to General. Turn on the screensaver password option and set the time to Immediately. Check the “Show a Message…” box and set the lock message to something along the lines of “This Mac is being rebuilt, please be patient”. Close System Preferences for now.

You will need to copy a script from your PMA proxy server called InstallAgentUnattended.sh. It is located in your %Programfiles(x86)%\Parallels\PMA\files folder. Copy it to the Documents folder of your Root user.

Open your code editor (Xcode if you like, something else if you don’t) and enter the following script:

#!/bin/sh

# Get the computer's current name
CurrentComputerName=$(scutil --get ComputerName)

# Bring up a dialog box with the computer's name in it and give the user the option to change it. The dialog gives up after 60 secs
ComputerName=$(/usr/bin/osascript <<EOT
tell application "System Events"
activate
set ComputerName to text returned of (display dialog "Please Input New Computer Name" default answer "$CurrentComputerName" with icon 2 giving up after 60)
end tell
EOT
)

# Did the user press cancel? If so, exit the script
ExitCode=$?
echo $ExitCode

if [ "$ExitCode" = 1 ]
then
    exit 0
fi

# Compare the current names with the one entered and change them if they're different
CurrentComputerName=$(scutil --get ComputerName)
CurrentLocalHostName=$(scutil --get LocalHostName)
CurrentHostName=$(scutil --get HostName)

echo "CurrentComputerName = $CurrentComputerName"
echo "CurrentLocalHostName = $CurrentLocalHostName"
echo "CurrentHostName = $CurrentHostName"

if [ "$ComputerName" = "$CurrentComputerName" ]
then
    echo "ComputerName Matches"
else
    echo "ComputerName Doesn't Match"
    scutil --set ComputerName "$ComputerName"
    echo "ComputerName Set"
fi

if [ "$ComputerName" = "$CurrentHostName" ]
then
    echo "HostName Matches"
else
    echo "HostName Doesn't Match"
    scutil --set HostName "$ComputerName"
    echo "HostName Set"
fi

if [ "$ComputerName" = "$CurrentLocalHostName" ]
then
    echo "LocalHostName Matches"
else
    echo "LocalHostName Doesn't Match"
    scutil --set LocalHostName "$ComputerName"
    echo "LocalHostName Set"
fi

# Invoke the screensaver in the background so the rest of the script carries on behind the lock screen
/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine &

# Join the AD domain
dsconfigad -add {FQDN.of.your.AD.domain} -user {User with join privs} -password {password for user} -force

# Disable automatic login and remove the saved Root password
defaults delete /Library/Preferences/com.apple.loginwindow.plist autoLoginUser
rm /etc/kcpassword

# Download and install the Configuration Manager client
chmod 755 /private/var/root/Documents/InstallAgentUnattended.sh
/private/var/root/Documents/InstallAgentUnattended.sh http://FQDN.of.your.PMA.Server:8761/files/pma_agent.dmg {SCCM User} {Password for SCCM User} {FQDN.of.your.AD.Domain}
echo "SCCM Client Installed"

# Repair disk permissions
diskutil repairPermissions /
echo "Disk Permissions Repaired"

# Rename the boot volume to match the new computer name
diskutil rename "Macintosh HD" "$ComputerName"

# Disable the root account
dsenableroot -d -u {User with admin rights on Mac} -p {That user's password}

# Reboot the Mac after 60 minutes
shutdown -r +60

Obviously you will need to change this to suit your environment.

As you can see, this has several parts. It calls a bit of AppleScript which prompts the user to enter the machine name; the default value is the Mac’s current name and the prompt gives up after 60 seconds. The script gets the current names of the machine, compares them to what was entered in the box and renames the Mac if they’re different. It then invokes the Mac’s screensaver, joins the Mac to your AD domain, removes the automatic logon for the Root user and the saved Root password, then downloads the PMA client from the PMA Proxy Server and installs it. After that it runs a Repair Permissions on the Mac’s hard disk, renames the Mac’s hard drive to match the computer name, disables the Root account and sets the Mac to reboot itself after 60 minutes. The Mac is given an hour before it reboots so that the PMA can download and apply its initial policies.

At this point, you will probably want to test the script to make sure that it works. This is why I suggested taking a snapshot of your Mac beforehand. Even if you do get it right, you still need to roll back your Mac to how it was before you ran the script.

Once the script has been tested, you will need to create an Automator workflow. Open the Automator app and create a new application. Go to the Utilities section and drag the Run Shell Script action to the pane on the right hand side.

[Screenshot: the Automator workflow with a Run Shell Script action]

At this point, you have a choice: You can either paste your entire script into there and have it all run as a big block of code or you can drag multiple shell script blocks across and break your code up into sections. I would recommend the latter approach; it makes viewing the progress of your script a lot easier and if you make a mistake in your script blocks, it makes it easier to track where the error is. When you’re finished, save the workflow application in the Documents folder. I have uploaded an anonymised version of my workflow: Login Script.

Finally, open System Preferences again and go to the Users and Groups section. Click on System Administrator and go to Login Items. Put the Automator workflow you created in as a login item. When the Mac logs in for the first time after its image is deployed, it will automatically run your workflow.

I’m sure you’re all thinking that I’m completely insane for suggesting that you do this but as I say, this is the only way I’ve found that reliably works. I tried using loginhooks and a login script set with a profile but those were infuriatingly unreliable. I considered editing the sudoers file to allow the workflow to work as Root without having to enter a password but I decided that was a long term security risk not worth taking. I have tried to minimise the risk of having Root log on automatically as much as possible; the desktop is only interactive for around 45-60 seconds before the screensaver kicks in and locks the machine out for those who don’t have the root password. Even for those who do have the root password, the Root account is only active for around 5-10 minutes until the workflow disables it after the Repair Disk Permissions command has finished.

Anyway, once that’s all done, reboot the Mac into Target mode and connect it to your Mac running OS X Server.

Use the System Image Utility to create a Netboot image of your Mac with a workflow to deploy it.

There is a surprising lack of documentation on the Internet about the System Image Utility. I suppose that’s because it’s so bare bones and that most people use other solutions such as DeployStudio to deploy their Macs. I eventually managed to find some and this is what I’ve cobbled together.

On the Mac running OS X Server, open the Server utility and enter your username and password when prompted. When the OS X Server app finishes loading, go to the Tools menu and click on System Image Utility. This will open another app which will appear in your dock; if you see yourself using this app a lot, you can right click on it and tell it to stay in your dock.

[Screenshot: the System Image Utility]

Anyway, once the System Image Utility loads click on the Customize button. That will bring up a workflow window similar to Automator’s.

[Screenshot: the System Image Utility workflow editor]

The default workflow has two actions in it: Define Image Source and Create Image. Just using these will create a working image but it will not have any kind of automation; the Mac won’t partition its hard drive or name itself automatically. To get this to work, you need to add a few more actions.

There will be a floating window with the possible actions for the System Image Utility open. Find the following three actions and add them to the workflow between the Define Image Source and Create Image actions. Make sure that you add them in the following order:

  1. Partition Disk
  2. Enable Automated Installation
  3. Apply System Configuration Settings

You can now configure the workflow actions themselves.

For the Define Image Source action, change the Source option to the FireWire/Thunderbolt target drive.

For the Partition Disk action, choose the "1 Partition" option and check the "Partition the first disk found" box and, optionally, the "Display confirmation dialog before partitioning" box. Checking the second box will give you a 30 second opportunity to create a custom partition scheme when you start the imaging process on your Mac clients. Choose a suitable name for the boot volume and make sure that the disk format is "Mac OS Extended (Journaled)".

For the Enable Automated Installation action, put the name of the volume that you want the OS to be installed to into the box and check the “Erase before installing” box. Change the main language if you don’t want your Macs to install in English.

The Apply System Configuration Settings action is a little more complicated. This is the section which names your Macs. To do this, you need to provide a properly formatted text file with the Mac’s MAC address and its name. Each field is separated with a tab and there is no header line. Save the file somewhere (I’d suggest in your user’s Documents folder) and put the full path to the file including the file name into the “Apply computer name…” box. There is an option in this action which is also supposed to join your Mac to a directory server but I could never get this to work no matter what I tried so leave that one alone.
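
To give you an idea of the format, a file for a couple of (entirely made up) Macs would look something like this, with a single tab between the MAC address and the name on each line:

00:11:22:33:44:55	EDIT-MAC-01
00:11:22:33:44:66	EDIT-MAC-02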

The last action is Create Image. Make sure that the Type is NetRestore and check the Include Recovery Partition box. You need to put something into the Installed Volume box but it doesn’t appear to matter what. Put a name for the image into the Image Name and Network Disk boxes and choose a destination to save the image to. I would suggest saving it directly to the /{volume}/Library/Netboot/NetbootSP0 folder as it will appear as a bootable image as soon as the image snapshot has been taken without you having to move or copy it to the correct location.

Once you’ve filled out the form, press the Save button to save your workflow then press Run. The System Image Utility will then generate your image ready for you to test. Do your best to make sure that you get all of this right; if you make any mistakes you will have to correct them and run the image creation workflow again, even if it is just a single setting or something in your script that’s wrong. The other problem with this is that if you add any new Macs to your estate, you’ll have to update the text file with the Macs’ names and MAC addresses in and re-create the image. This is why I put the "Name your Mac" section into the script.

Test the image

The next step now is to test your Netboot image. To do so, connect your Client Mac to the same network segment as your Server. Boot it to the desktop and open System Preferences. Go to the Startup Disk pane and you should see the image that you just created as an option.

[Screenshot: the Startup Disk preference pane]

Click on it and press the Restart button. The Mac will boot into the installation environment and run through its workflow. When it’s finished, it will automatically log on as the Root user and run the login script that you created in a previous step.

Convert the image to a WIM and add it to your OSD Image Library

Once you’re satisfied that the image deploys and the login script runs the way you want, you need to add your image to the ConfigMgr image library. Unfortunately, ConfigMgr doesn’t understand what an NBI is so we need to wrap it up into a WIM file.

To convert the image to a WIM file, first of all copy the NBI file to a suitable location on your PMA Proxy Server. Log onto the PMA Proxy using Remote Desktop and open the ConfigMgr client. Go to the Software Library workspace and Operating Systems then Operating System Images. Right click on Operating System Images and click on “Add Mac OS X Operating System Image”.

[Screenshot: the Add Mac OS X Operating System Image wizard]

Click on the first browse button and go to the location where you copied the NBI file to. This must be a local path, not a UNC.

Click on the second browse button and go to the share that you defined when you installed the Netboot agent on your PMA Proxy. This must be a UNC, not a local path. Press the Next button and wait patiently while the NBI image is wrapped up into a WIM file. When the process is finished, the image will be in your Operating System Images library. There is a minor bug here: If you click on a folder underneath the Image library, the image will still be added to the root of the library and not in the folder you selected. There’s nothing stopping you moving it afterwards but this did confuse me a little the first time I came across it. Once the image is added, you should copy it to a distribution point.

Advertise the image to your Macs

Nearly finished!

The final steps are to create a task sequence then deploy the task sequence to a collection. To create the task sequence, open the ConfigMgr console on a PC which has the Parallels console extension installed. Go to the Software Library workspace and Operating Systems. Under there, go to Task Sequences and right click on Task Sequences. Select “Create Task Sequence for Macs” and this will appear:

[Screenshot: the Create Task Sequence for Macs wizard]

Put in a name for the task sequence then press the Browse button. After a small delay, a list of the available OS X images will appear. Choose the one that you want and press the Finish button. The task sequence will then appear in your sequence library but like with the images, it will appear in the root rather than in a specific folder. The only step left is to deploy the task sequence to a collection; the process for this is identical to the one for Windows PCs. I don’t know if it’s necessary but I always deploy the sequence to the Unknown Computers collection as well as the collections that the Macs sit in, just to be sure that new Macs get it as well.

Assuming that you have set up the Netboot server on the PMA Proxy properly, all of the Macs which are in the collection(s) you advertised the image to will have your image as a boot option. Good luck and have fun!

Bootnote

Despite me spending literally weeks writing this almost 4,000 word long blog post when I had the time and inclination to do so, it is worth mentioning again that all of this is going to be obsolete very soon. The next version of the Parallels agent is going to have support for proper task sequencing in it. My contact within Parallels tells me that they are mimicking Microsoft’s task sequence UI so that you can deploy software and settings during the build process and that there will be a task sequence wizard on the Mac side which will allow you to select a task sequence to run. I’m guessing (hoping!) that will be in the existing Parallels Application Portal where you can install optional applications from.

DCM Script – Detect if a Mac is a Member of the Domain and Join If Not

As I’ve said before, Macs can be a pain in the backside when you’re trying to manage a lot of them. One of the particular bugbears that I’ve found is that they have a habit of unbinding themselves from your Active Directory domain for no apparent reason. Usually this means a helpdesk call because someone can’t log on, plus disruption and annoyance and, well, you get the idea.

This script is a bit of a kludge. My Bash isn’t the best by any stretch of the imagination and I’ve put detection and remediation into the same snippet as for some reason, I couldn’t get a separate remediation script to work. No matter. It’s not ideal but it still works. Anyway, the script looks like this:


DOMAIN_STATUS=$(dsconfigad -show | awk "/Active Directory Forest/" | cut -d "=" -f 2)"_Member"
if [[ ${DOMAIN_STATUS} == "{{domain.fqdn}}_Member" ]]; then

echo "OK"

exit 2 # already a domain member, exit script

fi

dsconfigad -add {{domain.fqdn}} -user {{user}} -password {{password}} -force
EXIT_CODE=$?

if [[ ${EXIT_CODE} != 0 ]]; then

echo "There was an error. Code is " $EXIT_CODE
exit ${EXIT_CODE}

fi

echo "OK"

Change anything in dual braces to reflect your environment.

The script runs a command called dsconfigad which gets information about the Active Directory domain that the Mac belongs to. It trims out the FQDN of the domain, appends _Member onto the end of it and adds it to a variable. I’m adding _Member to the end of the string because if the Mac isn’t a member of a domain, dsconfigad returns a null value and the variable doesn’t get created.
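
To make that a little clearer, the relevant part of the dsconfigad -show output on a bound Mac looks roughly like this (the domain and computer names are made up and the exact spacing varies):

Active Directory Forest          = yourdomain.local
Active Directory Domain          = yourdomain.local
Computer Account                 = yourmacname$

The awk picks out the Forest line, the cut grabs everything after the equals sign and the script then sticks _Member onto the end before doing the comparison.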

The script compares the output with what it should be. If it matches, it returns “OK” to ConfigMgr and exits. If not, it joins the Mac to the domain and returns “OK” to ConfigMgr. If for some reason the domain join fails, the script sends the error code back to ConfigMgr.

As always, you set the detection rule to look for a string called “OK”, add the rule to a new or pre-existing baseline and deploy the baseline to a collection. After you do, any Mac which is managed by ConfigMgr but which is not a member of your domain will find itself joined.

As I say, I know that my Bash scripting skills are fairly minimal so if you see a better way for this script to work, please feel free to contact me. The usual “I’m not responsible if this script hoses your Mac and network” disclaimers apply.


Mac Servers – What I’m Doing

Hopefully this should be the last post about Macs for the time being! I don’t know why I’ve written so many recently, I guess it’s just because the bloody things take up so much of my time at work. Don’t get me wrong, I like Macs a lot for my own purposes. I own two, my girlfriend has one, we both have iPhones, I have an Apple TV and an AirPort Extreme router. I even subscribed to MobileMe when you had to pay for it. It’s just that professionally, they’re a pain in the arse and if you really want I’ll put that on a certificate for you to put on your wall.

Anyway, my last post was originally supposed to be about the Apple server solution that we’re using at work. However, it kind of turned into a rant about how Apple has abandoned the server market. I stand by the post entirely but I thought I’d try to write the post I was originally intending to write!

A few months ago I got a call from the technician in the Media Studies department who looks after their Macs. As far as he could see, one of the Macs had stopped working entirely and he was getting into a bit of a panic about it (the Mac was actually fine, it had just buggered up its own disk permissions, preventing anything from launching). He was panicking because he thought that all of the student work stored on it had been lost. These Macs are used for editing video using Final Cut Pro; because of the demands that video editing puts on a computer’s storage subsystem, it is totally impractical to edit video over a network on one computer let alone 40 at once, so the student has to store any work that he or she does on the Mac itself. If that Mac fails then the user kisses goodbye to all of the work done that year. That wasn’t acceptable but he and the college had put up with it up until that point. He wanted to know if there was a way to back the Macs up so that if one does go kaboom, we have a way of recovering their work.

This turned into a fairly interesting project as I had to investigate viable solutions to achieve this. I came up with four:

  1. Backing the Macs up using an external hard drive and Time Machine
  2. Backing the Macs up using a server running OS X Server and Time Machine
  3. Backing the Macs up using a server from another manufacturer using some other piece of backup software
  4. Backing the Macs up to a cloud provider

First of all, I investigated and completely discounted the cloud solution. We would need in the order of 20TB of space and a huge amount of bandwidth from the cloud provider for the backups to work. The Macs would need to be left on at night as there would be no way we’d be able to back them up during the day and maintain normal internet operations. It all ended up costing too much money; if we had gone for a cloud solution we would have spent as much money in six months as it would have cost for a fairly meaty server.

There was also quite some thought put into using USB disks and Time Machine to back them up. This certainly would have worked, it would have been nice and simple and relatively cheap. Unfortunately there were some big downsides too. Securing them would have been almost impossible; it would be far too easy for someone to disconnect one and walk off with it. If we wanted any decent kind of speed and capacity on them, we probably would have had to use enclosures with 3.5″ drives in them which would have meant another power supply to plug in. Finally, I couldn’t see a way to stop students just using them as an additional space to store their stuff on, completely negating the point of them in the first place.

So that left having some network storage for the Macs to store their backups on. First of all, I looked at Dell (other server vendors are available) to see how much a suitable server would cost. Dell’s range of servers with lots of 3.5″ drives is frustratingly small but eventually I managed to spec a T620 with 2x1TB and 8x4TB drives with a quad core CPU and 16GB RAM, a half decent RAID controller and a three year warranty for about £7500 retail. Add on top the cost of a suitable backup daemon and it would cost somewhere in the region of £9000-£9500. Truth be told, that probably would have been my preferred option but a couple of things kept me from recommending it. First of all, £9500 is a lot of money! Secondly, although Apple have been deprecating AFP and promoting SMB v2/v3 with Mavericks and Yosemite, Apple’s implementations of SMB have not been the best since they started migrating away from Samba. AFP generally is still the fastest and most reliable networking protocol that a Mac can use. With this in mind, I decided to have a look at what we could get from Apple.

The only Server in Apple’s range at the time was the Mac Mini Server. This had 8GB of RAM installed, a quad core i7 CPU at roughly 2.6GHz and 2x1TB hard disks. Obviously this would be pretty useless as a backup server on its own but that Thunderbolt port made for some interesting possibilities.

At first, I looked at a Sonnet enclosure which put the Mac into a 1U mount and had a Thunderbolt to PCI Express bridge. I thought that I could put a PCIe RAID controller in there and attach that to some kind of external SAS JBOD array and then daisychain a Thunderbolt 10Gbe ethernet card from that. I’m sure it would have worked but it would have been incredibly messy.

Nevertheless, it was going to be my suggested Apple solution until I saw an advert on a website somewhere which led me to HighPoint’s website. I remembered HighPoint from days of old when they sold dodgy software RAID controllers on Abit and Gigabyte motherboards. A Thunderbolt storage enclosure that they promote on their site intrigued me. It was a 24 disk enclosure with a Thunderbolt to PCIe bridge with three PCIe slots in it and, crucially, a bay in the back for a Mac Mini to be mounted. Perfect! One box, no daisy chaining, no mess.

Googling the part code led me to a local reseller called Span where I found out that what HighPoint were selling was in fact a rebadged NetStor box. This made me happy as I didn’t want to be dealing with esoteric HighPoint driver issues should I buy one. Netstor in turn recommended an Areca ARC-1882ix-24 RAID controller to drive the disks in the array and an ATTO Fastframe NS12 10GbE SFP+ network card to connect it to the core network. Both of these brands are reasonably well known and well supported on the Mac. We also put 8x4TB Western Digital Reds into the enclosure. I knew that the Reds probably weren’t the best solution for this but considering how much more per drive the Blues and the Blacks cost, we decided to take the risk. The cost of this bundle including a Mac Mini Server was quoted at less than £5000. Since OS X Server has a facility to allow Time Machine on client Macs to back up to it, there would have been no additional cost for a backup agent.

Considering how squeezed for money the public sector is at the moment, the Mac Mini Server plus Thunderbolt array was the chosen solution. We placed the order with Span for the storage and networking components and an order for a Mac Mini Server from our favourite Apple resellers, Toucan.

Putting it all together was trivial, it was just like assembling a PC. Screw the hard drives into their sledges, put the cards in the slots, put the Mac Mini into the mount at the back and that’s it. After I connected the array to the Mac, I had to install the kexts for the RAID controller and the NIC and in no time at all, we had a working backup server sitting on our network. In terms of CPU and memory performance it knocked the spots off our 2010 Xserve which surprised me a bit. We created a RAID 6 array giving us 24TB of usable storage or 22.35TiB if you want to use the modern terminology. The RAID array benchmarked at about 800MB/sec read and 680MB/sec write which is more than adequate for backups. Network performance was also considerably better than using the on-board NIC too. Not as good as having a 10Gbe card in its own PCIe slot but you can’t have everything. The array is quite intelligent in that it powers on and off with the Mac Mini; you don’t have to remember to power the array up before the Mac like you have to with external SAS arrays.

I know that it’s not an ideal solution. There is a part of me which finds the whole idea of using a Mac Mini as a server rather abhorrent. At the same time, it was the best value solution and the hardware works very well. The only thing that I have reservations about is Time Machine. It seems to work OK-ish so far but I’m not sure how reliable it will be in the long term. However I’m going to see how it goes.


Mac Servers in a Post Xserve World

About three years ago, Apple discontinued the Xserve line of servers. This presented a problem. While the Xserve never was top tier hardware, it was at least designed to go into a server room; you could rack mount it, it had proper ILO and it had redundant power supplies. You would never run an Apache farm on the things but along with the Xserve RAID and similar units from Promise and Active, it made a pretty good storage server for your Macs and it was commonly used to act as an Open Directory Server and a Workgroup Manager server to manage them too.

Discontinuing it was a blow for the Enterprise sector, which had come to rely on the things, as Apple didn’t really produce a suitable replacement. The only "servers" left in the line were the Mac Pro Server and the Mac Mini server. The only real difference between the Server lines and their peers was that the Servers came with an additional hard drive and a copy of OS X Server preinstalled. The Mac Mini Server was underpowered, didn’t have redundant PSUs, it only had one network interface, it didn’t have ILO and it couldn’t be racked without a third party adapter. The Mac Pro was a bit better in terms of spec, it at least had two network ports and in terms of hardware it was pretty much identical internally to its contemporary Xserve so it could at least do everything an Xserve could do. However, it couldn’t be laid down in a cabinet as it was too tall so Apple suggested you stood two side by side on a shelf. That effectively meant that you had to use 10U to house four CPU sockets and eight hard drives. Not a very efficient use of space, and the things still didn’t come with ILO or redundant power supplies and it was hideously expensive, even more so than the Xserve. It also didn’t help that Apple didn’t update the Mac Pro for a very long time and it was getting rapidly outclassed by contemporary hardware from other manufacturers, both in terms of performance and price.

Things improved somewhat when Thunderbolt-enabled Mac Mini Servers came onto the scene. They came with additional RAM which could be expanded, an extra hard drive and another two CPU cores. Thunderbolt is essentially an externally presented pair of PCI Express lanes. It gives you a bi-directional interface providing 10Gbps of bandwidth to external peripherals. Companies like Sonnet and NetStor started manufacturing rack mountable enclosures into which you could put one or more Mac Minis. A lot of them included Thunderbolt to PCI Express bridges with actual PCIe slots which meant you could connect RAID cards, additional network cards, faster network cards, fibre channel cards and all sorts of exciting serverish type things. It meant that, for a while, a Mac Mini Server attached to one of these could actually act as a semi-respectable server. They still didn’t have ILO or redundant PSUs but Mac Servers could at least be reasonably easily expanded and the performance of them wasn’t too bad.

Of course, Apple being Apple, this state of affairs couldn’t continue. First of all they released the updated Mac Pro. On paper, it sounds wonderful; up to twelve CPU cores, up to 64GB RAM, fast solid state storage, fast GPUs, two NICs and six(!) Thunderbolt 2 ports. It makes an excellent workstation. Unfortunately it doesn’t make such a good server; it’s a cylinder which makes it even more of a challenge to rack. It only has one CPU socket, four memory slots, one storage device and there is no internal expansion. There is still no ILO or redundant power supply. The ultra powerful GPUs are no use for most server applications and it’s even more expensive than the old Mac Pro was. The Mac Pro Server got discontinued.

Apple then announced the long awaited update for the Mac Mini. It was overdue by a year and much anticipated in some circles. When Apple finally announced it in their keynote speech, it sounded brilliant. They said it was going to come with an updated CPU, a PCI Express SSD and an additional Thunderbolt port. Sounds good! People’s enthusiasm for the line was somewhat dampened when they appeared on the store though. While the hybrid SSD/hard drive was still an option, Apple discontinued the option for two hard drives. They soldered the RAM to the logic board. The Mac Mini Server was killed off entirely so that means that you have to have a dual core CPU or nothing. It also means no memory expansion, no RAIDed boot drive and the amount of CPU resources available being cut in half. Not so good if you’re managing a lot of iPads and Macs using Profile Manager or if you have a busy file server. On the plus side, they did put in an extra Thunderbolt port and upgraded to Thunderbolt 2 which would help if you were using more external peripherals.

Despite all of this, Apple still continue to maintain and develop OS X Server. It got a visual overhaul similar to Yosemite’s and it even got one or two new features so it clearly matters to somebody at Apple. So bearing this in mind, I really don’t understand why Apple have discontinued the Mac Mini Server. Fair enough them getting rid of the Mac Pro Server, the new hardware isn’t suitable for the server room under any guise and it’s too expensive. You wouldn’t want to put an iMac or a Macbook into a server room either. But considering what you’d want to use OS X Server for (Profile Manager, NetRestore, Open Directory, Xcode), the current Mac Mini is really too underpowered and unexpandable. OS X Server needs some complementary hardware to go with it and there isn’t any now. There is literally no Apple product being sold at this point that I’d want to put into a server room and that’s a real shame.

At this point, I hope that Apple do one of two things. Either:

Reintroduce a quad or hex core Mac Mini with expandable memory available in the next Mac Mini refresh

Or

Start selling a version of OS X Server which can be installed on hypervisors running on hardware from other manufacturers. OS X can already be run on VMware ESXi, the only restriction that stops people doing this already is licensing. This would solve so many problems, people would be able to run OS X on server class hardware with whatever they want attached to it again. It wouldn’t cause any additional work for Apple as VMware and others already have support for OS X in their consumer and enterprise products. And it’d make Apple even more money. Not much perhaps but some.

So Tim Cook, if you’re reading this (unlikely I know), give your licensing people a slap and tell them to get on it. kthxbye


Managing Macs using System Center Configuration Manager – Part Two

In my previous article, I described the agent that Microsoft have put into System Center Configuration Manager to manage Macs with. Overall, while I was happy to have some kind of management facility for our Macs, I found it to be somewhat inadequate for our needs and I wished it was better. I also mentioned that Parallels, the company behind the famous Parallels Desktop virtualisation package for the Mac, contacted us and invited us to try out their Mac management plugin for ConfigMgr. This article will explain what the Parallels agent is capable of, how well it works and how stable it’s proven to be since we’ve had it installed.

Parallels Mac Management Agent for ConfigMgr

The Parallels agent (PMA) is an ISV proxy for ConfigMgr. It acts as a bridge between your Macs and the management point in your ConfigMgr infrastructure. The agent doesn’t need HTTPS to be enabled on your ConfigMgr infrastructure and ConfigMgr sees Macs as full clients. The Parallels agent fills in a lot of the gaps which the native Microsoft agent has, such as:

  1. A graphical and scriptable installer for the agent
  2. A Software Center-like application which lists available and installed software. Users can also elect to install published software if desired.
  3. Support for optional and required software installations
  4. Operating System Deployment
  5. The ability to deploy .mobileconfig files through DCM
  6. A VNC client launchable through the ConfigMgr console so you can control your Macs remotely
  7. Auto-enrollment of Macs in your enterprise
  8. Support for FileVault and storage of FileVault keys

It supports almost everything else that the native ConfigMgr client does and it doesn’t require HTTPS to be turned on across your infrastructure. In addition, if you use Parallels Desktop for Mac Enterprise Edition, you can use the plugin to manage VMs.

Installation

The PMA requires a Windows server to run on. In theory, you can have the PMA installed on the server hosting your MP or it can live on a separate server. There are separate versions of the PMA available for ConfigMgr 2007 and 2012/2012 SP1/2012 R2.

Earlier versions of the PMA didn’t support HTTPS infrastructures properly so you needed to have at least one MP and one DP running in HTTP mode. The latest version, however, supports HTTPS across the board, although you still need to have at least one DP running in anonymous mode for the Macs to download from.

IIS needs to be installed on the server along with WebDAV and BITS. Full and concise instructions are included so I won’t go over the process here. Anybody who has installed a ConfigMgr infrastructure will find it easy enough.

If you are using the OSD component, it needs to be installed on a server with a PXE enabled DP. If you have multiple subnets and/or VLANs, you will need an IP helper pointing at the server for the Macs to find it.

Software Deployment

The PMA supports two methods of deploying software. You can use either Packages or Applications.

Generally speaking, there are three ways to install a piece of software on a Mac, not counting the App Store:

  1. You can have an archive file (usually a DMG) with a .app bundle in it to be copied to your /Applications or ~/Applications folder
  2. You can have an archive file with a PKG or MPKG installer to install your application
  3. You can install from a PKG or MPKG.

Installing using Packages

Unlike the Microsoft agent, you don’t need to repackage your software to deploy it with the PMA; instead, you can deploy it using legacy-style Packages. To deploy a piece of software using ConfigMgr Packages, you create a Package in the same way as you would for Windows software and copy the source files to a Windows file share. You then need to create a Program inside the package with a command line to install the software. Using the three above scenarios, the command lines would look like this:

  1. :Firefox 19.0.2.dmg/Firefox.app:/Applications:
  2. :iTunes11.0.2.dmg/Install iTunes.pkg::
  3. install.pkg

The first command tells the PMA to open the DMG and copy the Firefox.app bundle inside it to the /Applications folder. The second tells the PMA to open the DMG and execute the .pkg file with a switch to install it silently. The third runs an arbitrary command.

Once the package and the program inside the package have been created, you distribute the package to a DP and deploy it to a collection as standard.

Deploying software using this method is functional and it works nicely. The disadvantage is that there is no way to tell if a deployment has worked without either delving through the logs or looking in the Applications folder. If you deploy the Package to a collection, the PMA will try to install the Package on the Mac whether it’s already on there or not.

Installing using Applications

As of version 3.0 of the PMA, Parallels have started supporting ConfigMgr Applications as well as Packages. Parallels are using Microsoft’s cmmac file format to achieve this. This means that you need to repackage applications and add them to the ConfigMgr console using the same method as you do for the native ConfigMgr client. This is a bit of a pain but the benefits that doing so brings make it worthwhile. As with the Microsoft client, there are detection rules built into the Application meaning that the Parallels client can check to see if a piece of software is deployed on the Mac before it attempts to deploy it. If it is already there, it gets marked as installed and skips the installation.

It also brings this to the table:

[Screenshot: the Parallels Application Portal]

That is the Parallels Application Portal. Much like Software Center on Windows, this gives you a list of software that has been allocated to the Mac. It allows for optional and required installations. If a deployment is marked as optional, the software is listed with a nice Install button next to it.

As with the Microsoft agent, you need to be careful with the detection rules that you put into your Applications. The PMA runs a scan of the /Applications and /Library folders looking for Info.plist files. It extracts the application name and version from those PLISTs and adds them to an index. It then looks at the detection rules in the Application and compares them to the index that it builds. If there’s a match, it marks the application as installed. If you don’t get the detection rules right, the PMA won’t detect the application even if it has been installed properly and then it eventually tries to reinstall it. I spent a very interesting afternoon chasing that one down. There are also some applications which don’t have Info.plist files or get installed in strange places. The latest Java update doesn’t have a real Info.plist; it has an alias named Info.plist which points at another PLIST file instead. The PMA didn’t pick that one up.
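
If you want to see the sort of values the PMA will be matching against, you can read them out of an application’s Info.plist yourself and base your detection rules on that. This isn’t part of the PMA, it’s just a quick check I’d do from a terminal (assuming Firefox is installed in /Applications):

defaults read /Applications/Firefox.app/Contents/Info CFBundleShortVersionString
defaults read /Applications/Firefox.app/Contents/Info CFBundleIdentifier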

Operating System Deployment

Quite impressively, Parallels have managed to get an operating system deployment facility built into the PMA. It’s basic but it works.

First of all, you need to create a base image for your Macs and take a snapshot of it using the System Image Utility found in OS X at /System/Library/CoreServices/. You can create custom workflows in this utility to help you blank the hard disk before deployment and to automate the process. Make sure you tell it to make a NetRestore image, not a NetBoot image like I first did. Once you’ve done that, you tell it where you want to save your image and set it on its way. The end result is an NBI file which is a bundle with your system image and a bootstrap.

You then copy the resulting NBI file onto a PC or server with the ConfigMgr console and Parallels console addon installed. Once it’s on there, open the console and go to the Software Library workspace. Go to Operating Systems, right click on Operating System Images and choose Add Mac OS X Operating System Image. A wizard appears which asks you to point at the NBI file you generated and then to a network share where it creates a WIM file for ConfigMgr.

[Screenshot: the Add Mac OS X Operating System Image wizard]

Once the WIM has been generated, it adds itself to the console but one quirk I have noticed is that if you try to create it in a folder, it doesn’t go in there. It gets placed in the root instead. You can move it manually afterwards so it’s not a huge issue.

The next step is to create a task sequence. There is a custom wizard to this too, you need to right click on Task Sequences under Operating System Deployment in the Software Library workspace then choose Create Task Sequence for Macs.

[Screenshot: the Create Task Sequence for Macs wizard]

You give the task sequence a name, choose the image that you want to use and press the Finish button. Again, if you’re using folders to organise your task sequences and you try to create the task sequence in a folder, it will get placed in the root of the structure rather than in the folder that you tried to create it in. You can move it if you wish.

From there, it’s pretty much standard ConfigMgr. You need to distribute the image to a DP and publish the task sequence to a collection. The task sequence then appears to the Macs in that collection as a standard Netboot image with the title that you gave to it. You can access it the usual way, either through the Startup Disk pane in System Preferences or by holding down the option key on startup.

[Screenshot: the Startup Disk preference pane]

Unfortunately, what it doesn’t do is allow for any kind of automatic, post image deployment sequences. Although in theory the System Image Utility is supposed to support custom workflows which allow software installations and the running of scripts, I haven’t managed to get it to deploy the agent automatically. I have therefore created a script which the admin deploying the Mac needs to run which (amongst other things) names the Mac and installs the PMA. From what Parallels tell me, this is being worked on.

DCM – Scripts and Profiles

The PMA supports the usage of DCM Bash scripts in the same way as the Microsoft agent does. There isn’t much to say about this, it works and it’s a useful thing to have. The other way of getting settings onto Macs with the PMA is via mobileconfig files generated either by Profile Manager in OS X Server or by a generator built into the ConfigMgr console addin. The generator looks like this:

[Screenshot: the profile generator]

Look familiar? Unfortunately, not all of the options from Profile Manager are available here, so if you want to do something more advanced than what’s on offer you still need a copy of OS X Server and Profile Manager to generate the profile.

To deploy a mobileconfig file using the PMA, you need to go to the Assets and Compliance workspace, go to Compliance Settings and right click on Configuration Items. Go to Create Parallels Configuration Item then to Mac OS X Configuration Profile.

[Screenshot: the Create Parallels Configuration Item menu]

You then give the configuration item a name, decide whether it’s a User or System profile and point the wizard at the mobileconfig file generated by Profile Manager. Once you’ve done that, there is a new configuration item in your console which can be added to a DCM baseline and deployed to a collection.

I have used this facility for various purposes; for configuring Airport, for joining the Mac to the AD domain, for setting up the user’s desktop settings and wallpaper, setting up Time Machine and for more besides. It’s a great facility to have and rather more user friendly than delving through PLISTS to find settings.

Other features – VNC Client, Auto-enrolment and Inventory

Well, these do what they say on the tin. We have a VNC client:

[Screenshot: the VNC client]

It takes an inventory:

[Screenshot: a Mac’s inventory in the ConfigMgr console]

It has the ability to detect Macs on your network and automatically install the agent on them. They all work. There isn’t really much else to be said.

How well does it work?

So clearly, the PMA has far more features than the Microsoft agent does but a piece of software can have all the features in the world and still be useless if it isn’t stable. In this regard, I am delighted to say that the Parallels agent has been rock solid for us. It has been continually improved and has had feature after feature added. It doesn’t quite make a Mac a first class citizen on a Windows network but it comes very close and going by the way that Parallels have improved the product over the last two years, I’m confident that the gap will close. Parallels have been a lot quicker in getting new versions of OS X supported with their plugin too, it already has support for Yosemite.

It hasn’t been entirely problem free but when I’ve contacted Parallels Support, they’ve been quick and efficient and they’ve got the problem resolved. Most problems that I’ve come across I’ve managed to solve myself with a bit of thought.

Although Parallels claim that the PMA can be installed on the same server as your management point, I couldn’t get it to work when I did this. I ended up putting it on its own hardware. This was with v1.0 of the product though, we’re now on v3.1 so they may have improved that since then.

Having the PMA has also meant that I no longer need a Magic Triangle set up to support and configure my OS X clients. I don’t need Profile Manager or Workgroup Manager to deploy settings, I don’t need OS X Server or DeployStudio to deploy images. The only thing I need OS X Server for is Profile Manager to generate the profiles and (with the arrival of Yosemite) the System Image Utility.

The only real downside to the PMA is that it’s expensive and that you have to pay full price for it annually. That may be hard to justify if you’ve already spent a fortune on a System Center license.

Conclusion

So let’s do a quick advantage/disadvantage comparison:

Microsoft client advantages:

  • Native client so no additional cost
  • Support from Microsoft

Microsoft client disadvantages:

  • Sparse feature set
  • Required installs only
  • Complicated DCM
  • Takes a long time to install multiple applications
  • Requires HTTPS
  • Slow to support new versions of OS X
  • No visible status indicators, next to impossible to see what’s going on.

Parallels client advantages

  • Includes lots of additional features, bringing the level of management of Macs to near parity with Windows machines
  • Optional and user initiated installs supported
  • Software Center equivalent and a System Preferences pane to check status of agent. Very thorough logs to check on what the agent is doing are available too.
  • OSD
  • Doesn’t require HTTPS
  • Supports SCCM 2007
  • Much easier to deploy settings by using mobileconfig files

Parallels client disadvantages

  • Expensive
  • Requires an additional DP
  • Probably requires an additional server to install the proxy

They’re as good as each other when it comes to running DCM scripts and taking inventories. So the answer is simple: If you can afford it, get the Parallels Management Agent. If I were the head of the System Center division at Microsoft, I’d be going cap in hand to Satya Nadella and telling him to drive all the money to Parallels HQ to acquire their product from them. Frankly, it puts Microsoft’s own efforts to shame.


Managing Macs using System Center Configuration Manager – Part One

This post is about one of my favourite subjects, namely Configuration Manager, referred to hereafter as ConfigMgr. If you don’t care about the intricacies of desktop management, I suggest you look away now cos this ain’t gonna interest you!

Before I get too far into this post, I should mention that I’ve written about this subject before on my blog at EduGeek. The content here therefore isn’t that new but I’ve rewritten some of it and hopefully improved on it a little too. I will also say that all of this was tried almost two years ago now so chances are that things have changed a little with ConfigMgr 2012 R2. From what I understand though, most of what I’ve written here is still accurate.

Anyway, I spend a lot of time at work using ConfigMgr to manage the computers on our Windows network. We use it to almost its full extent; we use it for software and update deployment, operating system deployment, for auditing software usage, for configuration of workstations using DCM and a fair bit more besides.

As well as more than 1500 Windows PCs, laptops and servers, I also have around 80 Macs to manage. To put it mildly, they were a nuisance. They were essentially unmanaged; whenever an update or a piece of software came along, we had to go to each individual Mac and install it by hand. The remote administration tools that we were using (Apple Remote Desktop and Profile Manager) were woefully inadequate. ARD requires far too much interaction and hasn’t had a significant update since Leopard was released. Profile Manager does an OK job of pushing down settings but for software management, it assumes that the Macs are personal devices getting all of their software from the App Store. That’s not really good enough. We were desperate to find something better.

We had been using ConfigMgr to manage our Windows PCs for a couple of years by that point and we had recently upgraded to 2012 SP1 which featured Mac management for the first time. We figured that we may as well give it a go and see what it was like. This is what I found out.

First of all, ConfigMgr treats Mac clients as mobile devices, which means that you have to set up an HTTPS infrastructure and install an enrolment point for your Macs to talk to. Your management point needs to talk HTTPS, as do your distribution points. That also means that you need to allocate certificates to your PXE points and task sequence boot media if you want them to talk to the rest of your infrastructure.

Once you have all of this set up, you need to enrol your Macs. Bear in mind that I looked at this when ConfigMgr 2012 SP1 was the current version. I understand that the process has changed a little in 2012 R2.

To start with, you need to download the Mac Management Tools from here for 2012 SP1 and here for 2012 R2. This gets you an MSI file which you need to install on your Windows PC. That MSI contains a DMG file which you need to copy to your Mac and, in turn, the DMG contains the installer for the Mac client, the utility for enrolling your Macs in ConfigMgr and an application repackager. You have to install the client first from an elevated Terminal session, then run another command to enrol your Mac into ConfigMgr. Assuming that you get this right, it will download a certificate and you’re good to go. When I was setting up the Macs to use this, I found a very good blog post by James Bannan which goes into a lot more detail.
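
For reference, the commands involved look something like the sketch below. Treat it as a rough guide based on my notes and Microsoft’s documentation rather than a copy-and-paste recipe; the volume name, server name and user account are all placeholders and may differ in your environment.

    # Mount the client DMG and copy the Tools folder somewhere local (paths are examples)
    hdiutil attach ~/Downloads/Macclient.dmg
    cp -R /Volumes/Macclient/Tools ~/Tools
    cd ~/Tools

    # Install the ConfigMgr client itself (needs an elevated session)
    sudo ./ccmsetup

    # Enrol the Mac against your enrolment proxy; you'll be prompted for the account's password
    sudo ./CMEnroll -s enrollmentproxy.contoso.com -ignorecertchainvalidation -u 'enroluser@contoso.com'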

Once your Mac has been enrolled, you will want to start doing something useful with it. At the moment, the Microsoft client has the following abilities:

  1. You can deploy software
  2. You can install operating system updates using the software deployment mechanism
  3. You can check and change settings by using DCM to modify PLIST files
  4. You can check and change settings by using DCM and Bash scripts to return values and make changes
  5. The agent takes an inventory of the software and hardware installed on your Mac and uploads it to your management point.

Deployment of Software and Updates

Deploying software on the Macs is broadly similar to the same process on Windows computers; you need to add the software to ConfigMgr as an application, create a deployment type with some detection rules, distribute the content to a DP and deploy the application to a collection. The one difference is that you need to repackage the software into a format that ConfigMgr understands. ConfigMgr has a specific format for Mac software called “cmmac”. This is essentially a repackaged ZIP file containing either a .app, a .pkg or a .mpkg, along with an XML file which holds an inventory of the archive, installation instructions and some predefined detection rules. I don’t want to make this already long post any longer than it needs to be so I’ll link to Mr. Bannan’s blog again which has a very good rundown of the entire process.
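
For what it’s worth, the repackaging itself is a one-liner using the CMAppUtil tool that ships in the same Tools folder as the client installer. The paths below are only examples:

    # Repackage an installer into a .cmmac that ConfigMgr will accept
    # -c is the source (.app, .pkg or .mpkg), -o is the folder to write the .cmmac into
    ./CMAppUtil -c "/Volumes/Firefox/Firefox.app" -o ~/cmmac/

The resulting .cmmac file is what you point ConfigMgr at as the application’s content when you create the deployment type.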

Changing settings using PLIST files

This isn’t the simplest of processes but it is quite effective. The first step is to open the ConfigMgr console on your Windows PC and go to the Assets and Compliance workspace. From there, go to Overview, then Compliance Settings, then Configuration Items. Right-click and select Create Configuration Item. This will bring up the following window:

[Screenshot: Create Configuration Item wizard]

This example is going to set a proxy server against a network interface, so I have named it appropriately and given it a description. Make sure that you set the Configuration Item Type to Mac OS X and press the Next button.

[Screenshot: OS version selection]

The next box lets you target your setting to specific versions of OS X. This screenshot was taken nearly two years ago when Microsoft hadn’t got around to adding Mountain Lion support. The current version supports up to and including Mavericks but not Yosemite (yet). Choose a specific version or not depending on what you need and press Next.

[Screenshot: Create Setting window]

You then need to tell ConfigMgr which PLIST file you’re editing and which key you want to change. You also need to tell it whether the key is a string, a number, a boolean value and so on. Once you’ve done that, change to the Compliance Rules tab.

[Screenshot: Edit Rule window]

You need to add a rule for each setting that you’re changing. The one in the example above sets the hostname of the HTTP proxy server for the Ethernet interface on the Mac. To complete this example, you’d also need rules for the HTTPS proxy, the port number and any proxy exceptions. Make sure that Remediate is checked on any rules that you create and finish the wizard.

Once your compliance rule is completed, you will need to create a DCM baseline or add it to an existing baseline and deploy that baseline to a collection. I’m not going to go through the process here as it’s largely identical to doing it for a Windows PC.
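
As an aside: if you’re not sure which PLIST file or key holds the setting you’re after, the defaults command on the Mac is a quick way to poke around before you fill in the wizard. The domain and key below are just illustrative examples, not the proxy settings from the walkthrough above:

    # Dump every key in a preference domain to see what's available
    defaults read com.apple.dock

    # Check the data type of a specific key so you pick the right type in the wizard
    defaults read-type com.apple.dock orientation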

Changing settings using Bash Scripts

This is probably the more powerful way of using DCM as you’re not relying purely on PLIST files to make your changes. If you can detect and remediate the setting that you want to change using Bash, you can manage it here; that could be a setting in an XML file, a config file somewhere, a PLIST and so on. I’m sure you get the idea. The process for creating a compliance rule using a script is largely similar to creating one for a PLIST, and even more similar to creating one for a Windows machine. When you get to the third window, choose Bash Script as the setting type instead of Preference File. You get the opportunity to input two scripts; one to detect the setting and one to change it.
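
To carry on with the proxy example from the PLIST section, the pair of scripts might look something like the sketch below. This isn’t taken from a production baseline; it assumes your network service really is called “Ethernet”, and the proxy hostname and port are placeholders.

    #!/bin/bash
    # Detection script: print the current HTTP proxy host for the "Ethernet" service.
    # ConfigMgr compares whatever this echoes against the value in the compliance rule.
    networksetup -getwebproxy "Ethernet" | awk '/^Server:/ {print $2}'

    #!/bin/bash
    # Remediation script: point the "Ethernet" service at the proxy.
    networksetup -setwebproxy "Ethernet" proxy.example.com 8080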

System Inventory

Again, this works in the same manner as it does for Windows machines, albeit in not quite as much detail. At the very least, you get a list of the hardware and software installed on the machine and the agent keeps track of any changes made. Asset Intelligence and Software Metering aren’t supported, however.

What can’t it do?

  1. OSD
  2. Remote Control
  3. Asset Intelligence
  4. Antivirus monitoring (Although it will deploy SCEP for Mac happily enough)
  5. Software Metering
  6. Power Management (Not easily anyway)

Results

So I’ve covered how it all works. The question that you may be asking now is “How well does it work?”. The answer two years ago was “It works OK… ish. Maybe”. I shall try to explain.

The whole thing feels very much like a v0.1 beta rather than proper release software. It’s functional up to a point but there are some very rough edges and the functionality is nowhere near as strong on the Mac (and presumably Linux too) as it is on a Windows PC.

For starters, you can only deploy applications to machines and not to users, and you can’t have optional installs. There is no Software Center, so you can’t easily see what software has been deployed or what software is supposed to be deployed. When the agent detects a deployment, it comes up with a sixty-minute countdown, the length of which can’t (or couldn’t) be changed. You can tell the Mac to start the deployment as soon as you see the countdown, but if you’re deploying (say) six pieces of software and you leave the Macs unattended, the countdown comes up, expires and installs the software, then the next countdown comes up, expires and installs the software, and so on. It can take hours for multiple deployments to finish if you’re not paying attention.

I also found that the detection of deployments was rather erratic. Just like with Applications for Windows PCs, there are detection rules which ConfigMgr uses to determine whether a piece of software is installed on the Mac or not, and the client is supposed to skip the installation of a deployed application if it detects that it’s already present. Unfortunately, our Macs had a habit of repeatedly trying to install software that was already there. The installation then fails because the installer sees that the software is already present and throws an error, and then the whole process restarts because ConfigMgr still thinks it’s not there. This tended to happen with more complex Applications which deploy using PKG installers rather than Applications which simply copy .app files. I do have a theory as to why this happens, although I only came up with it about two years later. When you repackage an application using CMAppUtil, it automatically generates the detection rules for you, and with PKG installers it generates a lot of them. I suspect it generates too many, so the client ends up looking for a load of items it can’t detect even though the software is present. Unfortunately I haven’t managed to test the theory but it makes sense to me.

Another gotcha that I’ve found with the repackager is that it sometimes gets the installation command wrong, especially when you run it on a Mac with more than one operating system installed. It can get the path it installs to wrong, which means correcting the installation command line yourself.

DCM works nicely but finding the PLIST files or the setting that you want to change via Bash can be troublesome. That said, it’s no worse than trawling through the registry or finding an obscure PowerShell command to do what you want on a Windows machine.

Rather mysteriously, Microsoft didn’t include a remote control agent with this. Considering that a VNC daemon is baked into all versions of OS X, this would have been relatively trivial to implement.

The real bugbear that my team and I had with the Microsoft client is that Microsoft were very slow to implement support for new versions of OS X. As I’m sure you know, Apple have been on a yearly release cycle for major versions of OS X since they released Lion. Microsoft didn’t support Mountain Lion until six full months after Apple had released it on the App Store. The delay for Mavericks support wasn’t much better and Yosemite isn’t supported at all right now. It wouldn’t be so bad if it were a case of “Oh, it’s not supported but it’ll probably work”. Unless there is explicit support for the OS X version in the client, it simply won’t work.

So in conclusion, the Microsoft client is better than nothing but it’s not that good either. When my friend and colleague Robert wrote a brief piece about this subject on his blog, he got a message from the lovely people at Parallels telling him about a plugin they were writing for SCCM which also happens to manage Macs. Stay tuned for Part Two of this article.

*Update*

Part two of this article is now up. If you want to see how this story ends, please click here
