Dell 13″ Ultrabooks

The first few drafts of my last article touched on this but in the end, I edited it out because it was off topic. Nevertheless, this subject is worth a few words I think.

As I mentioned in that article, not long after I joined my current workplace we standardised on Dell for our mobile hardware. There are plenty of advantages for us in doing so; Dell generally give excellent pricing to the Education sector, they sell direct so you don’t have to go through resellers, and for their Enterprise kit and Business line equipment at least, their support is generally very good. I have never had problems with Dell Business technical support and, without exception, every Dell or Dell-contracted engineer I’ve had out to work on a server, SAN, laptop or desktop has been excellent. But that’s my experience; yours may well be different.

Although we decided to standardise on Dell, we hadn’t decided which specific line until relatively recently. This means that we have a couple of different models of 13″ Ultrabook knocking around so I thought I’d write a quick piece comparing them. Specifically, we have the Dell XPS 13 (9560) and the Dell Latitude 7000 series (7380 and 7390). I’m not going to do any kind of benchmarking with them but I’m going to compare the specifications of the two lines, attempt to look at their build quality and say which one I prefer.

Dimensions and Weight

The XPS 13 is shaped like a wedge of cheese; it’s taller at the back than it is at the front. At its thickest point, it is 15mm/0.6″ thick. It is 304mm/12″ wide and 200mm/7.9″ deep. The weight of the device is dependent on the spec that you choose but it starts at 1.2kg/2.7lbs.

The Latitude 7390 is more traditionally shaped, it’s as thick at the front as it is at the back. It is 16mm/0.64″ high, 304mm/12″ wide and 208mm/8.2″ deep. Again, the weight of the device depends on the spec that you buy but it starts at 1.17kg/2.6lbs.

Winner – XPS. Just.

The two systems weigh about the same but the XPS is slightly smaller. However, it’s only a millimetre thinner, less than a centimetre shallower and the same width, so the difference isn’t really significant.

Screen

Both laptops are available with touch and non-touch options. Both come with 1080P screens as their default option but the XPS can be bought with a 3200×1800 screen.

The XPS 13 has what Dell call an InfinityEdge screen. They boast that they’ve managed to squeeze a 13″ screen into what would otherwise be an 11″ frame. This is undeniably true; the laptop does have very narrow bezels and they are a uniform size on both sides and at the top. The laptop is certainly smaller and sleeker because of that.

The Latitude has the same screen and it also has equally narrow bezels on the sides of the screen but it has a more standard sized bezel on the top of the screen than the XPS does. For this model, Dell claim to have put a 13″ screen into the same sized chassis as a 12″ notebook.

Winner – XPS

The XPS has smaller bezels and has a higher resolution screen available for it.

Webcam and Biometrics

The XPS has its webcam on the bottom edge of the screen. It is a “Standard” webcam. The positioning of it is downright stupid: anyone you talk to with it ends up looking up your nose. It can’t be used with Windows Hello to unlock the laptop. Windows Hello fingerprint scanners are an optional extra.

Since the Latitude has the thicker bezel at the top of the screen, it has room for the webcam in a more sensible position. Infra-red cameras which are compatible with Windows Hello are also an option with this line of laptop, as are fingerprint scanners.

Winner – Latitude

The webcam is in a better place and it has more biometric options. Easy win for the Latitude here, I think.

Connectivity

The XPS has:

  • An SD card slot
  • Two USB 3.0 Type A ports
  • A Thunderbolt/USB Type C port
  • A headphone port
  • A Wireless card manufactured by Killer

The Latitude has:

  • A headphone port
  • A uSIM slot for optional WWAN (mobile broadband)
  • A Micro-SD card slot
  • Two USB 3.0 Type A ports
  • An Ethernet Port
  • A Thunderbolt 3/USB-C port (when bought with a 1.6GHz or faster CPU)
  • An HDMI port
  • A Smartcard slot
  • A Wireless card manufactured by Intel

Winner – Latitude

From the point of view of a consumer device, the ports that the XPS has are probably good enough, although if you want to connect to an external monitor or a wired network you need a dongle. The uniform thickness and that extra 5mm that the Latitude has certainly get you some useful additions. The Latitude also has a slot for an optional WWAN card so you can connect to the internet on the move without having to tether it to a mobile phone. I mention the WiFi cards because Killer don’t exactly have a reputation for high quality drivers, which may be a concern. From an Enterprise point of view, the Latitude is the clear winner.

Specifications

The XPS is available with 8th Gen Core i5 and i7 CPUs. It comes with an SSD in an M.2 slot and the SSDs that they will sell with it use the NVMe bus. It can come with up to 16GB of LPDDR3 RAM which is soldered to the laptop’s motherboard.

The Latitude is available with the same 8th Gen Core i5 and i7 CPUs. It also comes with SSDs in an M.2 slot, which can be either SATA or NVMe. The maximum amount of RAM that it can take is 16GB of DDR4. However, the RAM on a Latitude is a standard DIMM so it can be expanded later on if you so choose. As mentioned in the connectivity part of the article, the Latitude comes with an extra M.2 slot in which a WWAN card can be fitted.

Dell sell both with SSDs up to 512GB but I would be seriously surprised if you couldn’t expand that later on if you wanted to.

Winner – Latitude

Out of the box, the two come with very similar hardware internally. They both use the same CPU lines. However, the fact that you can upgrade the RAM in the Latitude and fit it with a WWAN card later on if you want to means that it wins this category for me. It’s amazing what an extra 5mm of thickness gets you.

Battery

I can’t really do any real world comparisons of battery life with these laptops for two reasons. The first reason is that all of the XPS laptops that we have are assigned to someone so asking for them back to run some battery benchmarks would result in some funny looks. Secondly, the XPS laptops that we have are a couple of years old while the Latitudes are a lot newer. It would be unfair to compare a two year old battery with a brand new one, not to mention that each new generation of Core CPU generally improves on battery life anyway. So I’m just going to quote Dell’s figures here and make a judgement on that.

The XPS has a 60WHr battery built into it. Dell claim you should be able to work for 22 hours from a full charge.

The Latitude has either a 42WHr or 60WHr battery in it. Dell claim up to 19 hours of working life from a full charge. I would imagine that’s with the 60WHr battery fitted.

Winner – XPS

That LPDDR3 memory probably counts for something – the XPS is claimed to have longer battery life.

Touchpad and Keyboard

Both laptops have pretty similar keyboards; they have the same kind of chiclet keys that manufacturers have been using for the past eight to ten years. They both have backlit keyboards. They’re much of a muchness. I’ve used better keyboards but I’ve also used much worse. They’re both on a par with the scissor style keys you got on Unibody MacBooks. The layout of the keyboards is the same, with the CTRL and FN keys in the correct places. The top row of keys on both laptops double as traditional function keys (F1 etc) and as keys to control the brightness of the screen, keyboard backlight, wireless, volume and media playback.

They do have different touchpads however. The XPS has a touchpad similar to one on a Mac where the entire surface is a button whereas the Latitude has two separate hardware buttons. Both touchpads are recognised by Windows as being Precision Touchpads so they support the Windows multi-touch gestures.

Winner – Draw

There is no clear winner here. The keyboards are near enough identical and the touchpads are a matter of personal preference. I prefer the ones on the XPS ever so slightly but there isn’t enough in it to declare an overall winner. That said, both completely suck compared to the touchpad on a Mac using macOS. Seriously PC manufacturers, Apple got the touchpad and touchpad gestures just perfect with the first generation of unibody MacBooks and Snow Leopard. That was coming on for ten years ago. For God’s sake, just copy that already.

Finishes

Both laptops come with a range of different colours and finishes.

The XPS can come in silver, white or rose gold.

The Latitude can come in an aluminium finish, a carbon fibre finish or a dark grey magnesium one.

Winner – Draw

The XPS looks more like a consumer device while the Latitude looks more like a business device. You wouldn’t be ashamed to get either out at a meeting but the XPS would look better at a LAN party!

Support

Out of the box, the XPS comes with a one-year on-site warranty while the Latitude comes with three years. You can buy up to four years’ support on the XPS and up to five on the Latitude.

Winner – Latitude

It comes with a longer warranty and can be warrantied for longer as well.

Durability

This last one is harder to quantify as the XPS laptops that we have are older, so it’s difficult to say which is the more rugged laptop. However, at least one of our XPS laptops appears to be coming apart; there are visible gaps at its seams. I don’t know if we have a dud or if there is a quality control issue with these things. From hands on experience though, I would personally say that the Latitude feels like the more solid laptop. We will see.

Winner – Not enough data

We don’t have as many XPS laptops as we do Latitudes and the XPS laptops that we do have are older. My thoughts about the Latitude feeling like a more solid laptop are fairly preliminary so I’m not going to say one way or the other.

Price

I’m not going to quote the exact pricing that I get from Dell for these laptops as it won’t be the same as what’s on their website or the same as what your account manager will give you. That said, generally speaking, whenever I get a quote for an equivalently specified Latitude and XPS with the same warranty length, the Latitude is around 3/4 to 4/5 of the price of the XPS.

Winner – Latitude

The Latitude line is always the cheaper one when I ask Dell for a quote, and the same is true on the Dell website when you configure both with the same warranty.

Conclusion

Well, this “few words” has turned into more than two thousand words! The easy thing to do here would be to count how many wins each line has and declare that line the winner. For the record, that’s the Latitude with five wins to three and two draws. I don’t think it’s quite as clear cut as that though. My personal preference would undeniably be the Latitude. It’s more expandable. It has better support. It has more ports. Its webcam is in a more sensible place and you don’t need dongles to connect it to a monitor or a wired network. It’s also cheaper. However, that’s what’s important to me. You might have other ideas. But from my point of view, buy a Latitude. If nothing else, you get more laptop for your money.

Dell Business Docking Stations

There are a few posts on this site called “The Grand(ish) Experiment” where I talk about using Dell Venue tablets with their respective docking stations, exploring their potential to replace desktop machines. The idea was that every office desk and classroom would have a docking station, teachers would have their own tablets with the software and files that they needed and that they would be able to use any classroom in the building and not have to book a specific one. It didn’t really pan out; the docks were temperamental, the tablets were either underpowered or too big and heavy, the optional hardware keyboards had issues and the current OS at the time was Windows 8. People didn’t like the hardware that we were trying so there wasn’t really the interest to keep the experiment going and it all got forgotten about, as, sadly, did the article series.

A few years and two jobs later, I’m looking at something similar again, albeit under very different circumstances. I now work in the Central Services department for a Multi Academy Trust of schools. The organisation has nine different sites and while I spend most of my time at one particular site, I spend at least one day a week at two of the other eight and have to visit the other six periodically. The situation is the same for a lot of my colleagues in Central Services, not just the IT people.

The computers in our main office are pretty old. The most modern one is an HP Compaq machine with a second generation Core i5 CPU, 4GB of RAM and a standard magnetic hard drive so it’s safe to say that the hardware is due to be updated. A large proportion of the staff in our Central Services department are like me in that they have to attend the other sites as well so the majority of them also have laptops, again which are pretty old and are due for replacement.

Bearing that in mind, replacing all of the desktops and also issuing these people with new laptops that would only get light use seemed like a pretty major waste of money. It seemed a much better idea to issue everyone with a new laptop and put a docking station on everyone’s desk. That way, everyone still gets a new, faster machine but the offices become a lot more flexible because wherever a person ends up sitting, they get the resources that they need. We’ve recently standardised on Dell hardware, in particular the Dell Latitude 7390 laptop for our Central staff. With a fast SSD, a reasonable amount of RAM and a quad core CPU, there is no reason why these laptops couldn’t function as desktop replacements. Issuing a laptop to everyone would also end the, um, disputes between the people in our Finance department and the people in our other departments as only a couple of the computers in our satellite offices have the Finance software installed. The idea is that if people have a laptop with the software they need already installed, they can hook up to a docking station anywhere and do what they need. Failing that, they can just find a desk somewhere and do their work, even if they can’t dock.

With that in mind, I approached our account manager at Dell and asked her what she would suggest for us. The laptops that we’re using don’t have old fashioned docking ports on the bottom of them so we had to look at their USB docks. Dell suggested three:

First of all, the D3100. This dock is based around a DisplayLink chipset. Because of this, everything on it (video, network, audio) is driven over the USB 3.0 bus on your machine. It can drive up to three displays, one of which can be 4K. It connects to your computer with a USB 3.0 A plug. It won’t charge your laptop.

Next up, the WD15. This is a USB 3.1 Gen 2 dock which can drive up to two screens at up to 1080P. It also has Ethernet and audio ports and you can charge your laptop with it, making it a one cable solution. Unlike the D3100, this dock acts as a DisplayPort MST hub so the displays that it drives are driven from your laptop’s own GPU or APU, rather than from a chip connected to your USB port. This should improve video performance, especially if your laptop has a discrete GPU. It is available with two sizes of power adapter (120W and 180W), the bigger of which is required if you have one of Dell’s larger laptops and want to charge it from the dock.

Lastly, the TB16. This is a Thunderbolt 3 dock, again connected by a USB-C connector. Again, it functions as a DisplayPort MST hub but unlike the WD15, it can drive up to three displays at 2560×1600 at 60Hz, up to two 4K displays at 60Hz or one 5K display at 60Hz. It also has another Thunderbolt 3 port as a pass through and the usual Ethernet, audio and additional USB ports. This dock can also come with one of two power adapters (180W and 240W) and again, the bigger PSU is required if you have one of Dell’s larger laptops and want to charge it from the dock.

I have used two of these docks, the D3100 and the TB16. I’ve not used the WD15 so I will say up front that anything I say about it here is conjecture based on its spec and appearance.

D3100

So first of all, the D3100. As I say, it’s based around a DisplayLink chipset. It has a full sized DisplayPort for your 4K display, two HDMI ports, two USB 2.0 ports on the back and three USB 3.0 ports on the front. Along with that, there is a headphone port on the front and a Line-Out port on the back. The dock is nicely laid out with everything in the place you’d expect it to be. Along with that, it features a PXE boot ROM so you can build workstations from it (rarer than you’d think on a USB networking device) and it supports WOL.

All of the ports are run from your USB 3.0 port, including the displays. That means that the Ethernet port, any other USB peripherals that you connect to it and, in theory, a 4K display and a pair of 1080P displays (around 12.5 million pixels being refreshed 60 times a second) all have to share the 5Gbps of bandwidth that the port provides. Considering that an uncompressed 4K stream at 60Hz would use almost 9Gbps of bandwidth by itself, I was sceptical that this was going to work very well and, well, so it turned out.
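To put some very rough numbers on that scepticism, here’s a back-of-the-envelope calculation. It assumes 24-bit colour and ignores blanking intervals and whatever compression DisplayLink actually applies, so treat it as an illustration of the scale of the problem rather than a precise figure:

```powershell
# Back-of-the-envelope only: raw pixel throughput at 24-bit colour, 60Hz.
$pixels4K    = 3840 * 2160                       # 8,294,400 pixels
$pixels1080p = 1920 * 1080                       # 2,073,600 pixels
$totalPixels = $pixels4K + (2 * $pixels1080p)    # ~12.4 million, as mentioned above

$bitsPerPixel = 24
$refreshHz    = 60
$rawGbps      = $totalPixels * $bitsPerPixel * $refreshHz / 1e9

"{0:N1} million pixels -> roughly {1:N1} Gbps uncompressed" -f ($totalPixels / 1e6), $rawGbps
# Output: 12.4 million pixels -> roughly 17.9 Gbps uncompressed, all fighting
# over the 5 Gbps that a USB 3.0 port provides.
```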

At the time, I was running a mid 2014 MacBook Pro with Windows 10 and a Core i5 4278U CPU. When I first started using it, I was using a pair of 20″ monitors running at a 1600×900 resolution. With these monitors, it worked well. However, the size and resolution of those two monitors was too low for me and I asked for some bigger monitors. I swapped them for a pair of 1080P monitors and that’s when I started having performance issues. As soon as I started using those monitors, the displays started glitching, the refresh rates were variable and using it was just annoying. CPU usage was all over the place with no obvious culprit. The Mac that I was using at the time had a lot of external ports (two Thunderbolt 2 ports and an HDMI port) so I connected the monitors directly to the laptop to see if the performance issues would go away. They did, so that was how I used the laptop until it came time to replace it.

I don’t know if the laptop I was using was underpowered for the compression tricks that DisplayLink must have to use to drive more than 4 million pixels over 5Gbps, if I had a bad dock or if there is an inherent problem with the DisplayLink chipset. As I say, it worked fine with two lower resolution monitors so I expect that the Mac was too old to run this properly. I can quite happily recommend the dock from that perspective, i.e. if you’re running a single monitor or two lower resolution displays from it on reasonably modern hardware, but for our purposes we decided that it was inadequate.

WD15 and TB16

So, instead, the two USB C docks running as DisplayPort MST hubs. When I originally looked at the spec of these devices, I thought it was pretty cut and dried. The TB16 wasn’t significantly more expensive. It’s Thunderbolt so it’s likely to be a faster performer. So we ordered some. This is what we found.

First of all, considering what this thing is, it’s bloody huge! It’s only half an inch smaller on the width and length than the original Mac Mini and significantly larger than an Intel NUC. The Thunderbolt cable is about half a metre long and is built into the docking station.

It has four monitor outputs: A full-sized DisplayPort, a Mini DisplayPort, an HDMI port and a VGA port. It has a pair of USB 2 ports, a USB 3 Type A port, a Thunderbolt 3/USB C port, a Gigabit Ethernet port, an audio line-out port and the power input on the back of the device and two further USB 3 Type A ports and a combo headphone/microphone socket on the front. The aforementioned captive Thunderbolt cable is on the left hand side of the device when you’re looking at it from the front.

This is where things start to go a little bit wrong for this thing. The port layout is fine and generous but all of the Dell laptops that I’ve seen (various XPSes, Latitudes and a Precision) have their Thunderbolt 3 ports on the left hand side of the device as well. This, along with the relative shortness of the docking cable and the size of the plug that goes into the laptop (about 1.5″!), makes positioning the dock very awkward. Ideally, with a docking station, you want it stuffed out of the way somewhere at the back of your desk but the length of the cable and its position on the left make that difficult to achieve. I tried various positions to find what worked best. Most people seem to prefer putting the dock on the left hand side of their desk with the front ports facing forward (funny that!) and having the Thunderbolt cable loop round. The problem I found with that was that the laptop has to be within 25cm of the dock because the bend radius of the Thunderbolt cable is quite large; that makes the setup take up a lot of space on the desk and makes it hard to access the ports on the front of the dock. The second position that I tried was to put my laptop in the middle of my desk, underneath my monitors, put the dock on the left side of my desk and have the left side of the dock point right towards the laptop. I didn’t like this solution very much either as it meant that the back ports were facing the front, which was messy. I did try turning the dock upside-down so that the front ports were on the front but this just made the dock slide all over the desk and it meant I couldn’t get at the power button on the top of the dock.

Eventually, I found the best way that I could set up the dock for me was to rest the dock on top of one of my speakers with the Thunderbolt cable pointing down like a tail. The ports on the back point to the left, the ports on the front point right and I can position the dock where it’s reasonably accessible. It’s nowhere near ideal but I found it was the best way for me.

Awkward cable positioning aside, how good is this thing otherwise? Well, let’s see. The thing I found most disappointing about this dock is that the audio and Ethernet ports are USB devices. Considering that Thunderbolt is essentially an extension of a computer’s PCI Express bus, it seems a bit, well, cheap to saddle this thing with a USB NIC. A PCI Express one would be better as it would take up fewer system resources and it would be able to share up to 40Gbps of bandwidth with the host system, rather than the 5 or 10Gbps that shoving it on the USB bus restricts it to. Yes, that 5 or 10Gbps for the Ethernet port by itself is fine but as the D3100 proves, contention starts to become an issue as you add more USB peripherals to a system.

Moaning about that aside, when you first open the box for one of these docks, there is a big piece of paper that tells you in no uncertain terms to make sure that you go to the Dell website and download the latest drivers and firmware that are available for this dock and, if you’re using a Dell laptop, to make sure that you’re using the latest BIOS for it.

Do NOT ignore any of these instructions

All of the docks that we received from Dell had a pretty old firmware on them and when using the docks with the OOB firmware, they were a nightmare. They constantly disconnected from their host and when they were connected, they were (somehow) laggy and made the laptop CPU usage spike. Updating the firmware on the dock and the laptop resolved these issues immediately. With that installed, the laptop behaved exactly how you would expect it to. Docking and undocking is simple. You don’t have to jump through any hoops in Windows, just pull the cable out and continue working. When you go back to your desk, you put the cable back in and away you go. All of the laptops that I’ve tried with these docks (Dell Latitude 7380 and 7390, Dell XPS 13 and 15, Dell Precision 5530) work perfectly and support USB charging.

I’ve managed to PXE boot and build laptops with these docks attached and this includes laptops which don’t have built-in Ethernet ports. With the latest firmwares, they do exactly what they’re supposed to do and that in itself is high praise.

So now we come back to the WD15. As I say, I’ve not used the WD15 dock so this is conjecture. However, I’m going to assume that it works as well as the TB16 does. In that case, considering that internally the WD15 and TB16 have very similar hardware, I’m actually struggling to justify the extra expense for the TB16. They both have the same audio and Ethernet connectivity, both driven from the machine’s USB 3.1 bus. They both act as DisplayPort MST hubs so the extra monitors are driven from the laptop’s GPU. The only advantage that the TB16 gives you is that you can drive more monitors from it and those monitors can have higher resolutions. That’s great but in a general office environment, it isn’t actually that big of an advantage. Very few people in our organisation have a monitor with a resolution higher than 1080P and no-one has more than two monitors, so most people get precisely zero benefit if they use a TB16 instead of a WD15. If anything, the WD15 might be the better choice under some circumstances because it has a longer connection cable, albeit still a daft captive one on the left side of the dock. So I guess I’m saying, unless you see a need to drive more than two monitors or monitors with higher resolutions than 1080P, don’t bother with the Thunderbolt dock and get the USB one instead. I think in future, that’s what I’m going to suggest.

Conclusions

It’s still too early to draw any conclusions but the feedback I’ve been getting from staff about the laptop/docking stations has so far been positive. They’re happy that they can just rock up to a desk and be using their own machines. I’m also using a laptop and dock and I couldn’t be happier with the arrangement. I’ll try to revisit this article in another three or four months and see if there’s anything interesting to say.

Basic User Editing Script

Let’s start with a little history; it will hopefully put this script into a bit of context. When I started in my job, one of my first large projects was a change to our Office 365 tenant. When I started there, it was being managed by a system called OpenHive. The vendor that looked after OpenHive was Capita, so anyone who has had the misfortune of having to work with their services will have an inkling as to why we wanted to move away from them. OpenHive was an Active Directory domain hosted by Capita which used ADFS servers at Capita to authenticate people. This meant that we had to maintain two user databases and people had to remember at least two passwords, one for the local domain and one for their email.

We ended up giving Capita notice that we no longer wished to use their service. We evicted them from our Office 365 tenant, de-federated it from their ADFS and moved management of it in-house. We also installed Azure AD Connect to synchronise users and passwords with Office 365 so people didn’t have to remember two passwords. Existing users were matched using SMTP matching, new users were synced across. One thing I didn’t realise was that the user accounts in Azure AD needed the ImmutableID field removed from them before sync would work but I found that one out eventually.
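For anyone who hits the same wall, clearing the attribute is a one-liner with the (old) MSOnline module. The UPN below is a placeholder and this is a sketch of the general approach rather than the exact commands we ran:

```powershell
# Requires the MSOnline module and a suitably privileged connection to the tenant
Import-Module MSOnline
Connect-MsolService

# Clearing ImmutableID on the cloud account lets Azure AD Connect take it over
# via SMTP matching on the next synchronisation cycle.
Set-MsolUser -UserPrincipalName "someone@example.org" -ImmutableId "$null"
```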

One problem that we had was the quality and consistency of the data that was being transmitted over to Azure AD, which was making our address book look messy to say the least. Another, more significant, problem was with the UPN suffix of the users: our domain uses a non-routable suffix (.internal in this case), so whenever a user was synced to Office 365 the account was created with the tenant’s onmicrosoft.com address as its username instead of our default domain name. This was a nuisance.

The system that we use to manage our users is RM Community Connect 4 (CC4). To put it politely, CC4 is a bit shit. That aside, CC4 is basically an interface on top of Active Directory; in theory you’re supposed to create and edit users in there. It creates the AD account, the user area, a roaming profile and other things. However, the AD attributes that CC4 is capable of editing are very limited and one of the things that it can’t change is the user’s UPN.

Admittedly, all of this can be changed relatively easily using the ADUC MMC that comes with the Remote Server Administration Tools but while this would solve the UPN problem, it would still be hard to enforce a consistent pattern of data to be transmitted to Azure. I therefore decided we needed a tool to help us with this.
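To illustrate the sort of thing the tool needed to take care of, bulk-fixing the UPN suffix itself only takes a few lines of the Active Directory module. The suffixes below are placeholders for our real ones, and the routable suffix has to be registered in the forest before it can be assigned:

```powershell
Import-Module ActiveDirectory

# The routable suffix must exist in the forest before users can be given it
Get-ADForest | Set-ADForest -UPNSuffixes @{ Add = "contoso.org" }

# Rewrite every UPN that still carries the non-routable suffix
Get-ADUser -Filter "UserPrincipalName -like '*@contoso.internal'" |
    ForEach-Object {
        $newUpn = $_.UserPrincipalName -replace '@contoso\.internal$', '@contoso.org'
        Set-ADUser -Identity $_ -UserPrincipalName $newUpn
    }
```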

I’m no programmer; it’s been a long time since I did any kind of programming in a serious manner and that was with Turbo Pascal and Delphi. However, I found out that PowerShell can use quite a strong forms library (.NET’s Windows Forms) so I decided to give it a go using that. This is what I came up with:

It’s nothing fancy but I’m quite pleased nevertheless. Most of the fields are entered manually but the UPN suffix and the School Worked At fields are dropdown menus to make sure that consistent data is entered. The bit that I really like is that when you choose a school from the dropdown menu, the rest of the School fields are automatically populated. The “O365 licence” box populates the ExtensionAttribute15 attribute inside the user’s account; I’m using this for another script which licenses users inside Office 365. I’ll post that one another time.

The script is almost 400 lines long so I’m not going to post it into the body of this article. Instead, I’ll attach a zip file for you to download.
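To give a flavour of the approach without dumping all 400 lines here, the dropdown-plus-auto-populate idea boils down to something like the sketch below. The school names and fields are invented for the example and this isn’t lifted from the actual script:

```powershell
Add-Type -AssemblyName System.Windows.Forms

# Hypothetical lookup table; the real script has one entry per school
$schools = @{
    'Example Primary'   = @{ Office = 'Example Primary';   Street = '1 High Street' }
    'Example Secondary' = @{ Office = 'Example Secondary'; Street = '2 Low Road'    }
}

$form      = New-Object System.Windows.Forms.Form
$form.Text = 'New User'

$schoolBox = New-Object System.Windows.Forms.ComboBox
$schoolBox.DropDownStyle = 'DropDownList'            # forces a choice from the list
$schoolBox.Items.AddRange([string[]]$schools.Keys)
$form.Controls.Add($schoolBox)

$officeBox     = New-Object System.Windows.Forms.TextBox
$officeBox.Top = 30
$form.Controls.Add($officeBox)

# When a school is picked, fill the dependent fields in automatically
$schoolBox.Add_SelectedIndexChanged({
    $officeBox.Text = $schools[$schoolBox.SelectedItem].Office
})

[void]$form.ShowDialog()

# The licence dropdown works the same way; its value ends up on the account with
# something along the lines of:
#   Set-ADUser -Identity $sam -Replace @{ extensionAttribute15 = $licenceBox.SelectedItem }
```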

I don’t know how useful people will find this but I thought I’d put it up anyway in the hope that someone might like it. This has been tested on Windows 7, 8.1, 10, Server 2012 and 2016. It seems to work with PowerShell 2.0 and newer. You will need the Active Directory PowerShell module that comes with RSAT for this to work. Do what you like with it, reuse it, repost it, whatever. It’d be nice if you gave me a little credit if you do but other than that, I’m not too fussed. The usual “I’m not responsible if this hoses your system” disclaimer applies.

Download the script from here

Parallels Mac Management – What’s been happening?

Yet another post about the Parallels Mac Management plugin for Configuration Manager. Anyone would think they pay me to write this crap! Well, unfortunately for me, they don’t. I have to admit, this post has been an embarrassingly long time in the making; in between work, parenting and being absolutely cream crackered all the time, I just haven’t had time to work on this.

Despite having not used ConfigMgr or the PMM in almost two years, I’m still very enthusiastic about both subjects. They combine my favourite kind of computer with my favourite professional application and by God, I’ve just realised what a complete spod I am. Oh well. Anyway, I was very curious about the progress Parallels have made with their product. The version that I last used at my old college (v3.5) was already almost unrecognisable (in a good way) compared to the first release of the PMM, and I knew that Parallels were working on some pretty cool stuff for version 4. In the end, my curiosity got the better of me so I reached out to one of my contacts at Parallels and asked if I could take a look at the latest version. He said “Yes” and the seeds for this article were sown; I downloaded the latest version of the product (v5 at the time) and started writing this article.

Quite a lot of time later, the Multi Academy Trust (MAT) of schools that I work for decided to change the management system that it uses for its computers. The schools all run RM Community Connect 4 and I would say that the vast majority of those who have ever worked in a school and have had the misfortune of working with a version of RM Community Connect will understand why. One day, I may write an article about what we’ve done and I’m sure a lot of people will call us insane for doing so but never mind. We chose System Center Configuration Manager to replace the computer management aspect of CC4 because it’s part of our Microsoft OVS-ES licence and because of my experience of installing and using it.

As well as using Configuration Manager to manage our Windows machines, we also needed something to manage our Macs. There are about 200 of them dotted around the organisation and at the moment, they’re just about completely unmanaged. Yes, there are some which are connected to one of three instances of Profile Manager and there are a couple of Munki instances in place but it’s (ugh) organic, unplanned and barely functional. The Macs are taking up a disproportionate amount of our IT Support team’s time compared to the Windows machines and we needed something to manage them properly. There seems to be a relative lack of decent management tools out there for Macs but in the end, we looked at JAMF (formerly Casper and probably the market leader for Mac management), the PMM and KACE, formerly Dell but now spun out of Dell as part of Quest Software. KACE was the cheapest, the PMM was in the middle and JAMF was the most expensive. I got trials for all three but the PMM won the business because it integrates into ConfigMgr seamlessly, which in turn means that we will be able to manage our entire environment properly through a single pane of glass. Admittedly, my experience with the product also helped but I will say that the other two are top notch products and under different circumstances, I would be happy to use either of them. However, using the PMM along with Intune and Configuration Manager means that everything that we need to manage is managed using one console, which hopefully will keep things relatively simple for our IT teams. Time will tell.

So what’s new? We are now on v6.1 of the PMM so there are a fair few new features to look at.

V4’s headline feature was task sequencing for OSD. Instead of having to follow that pain in the arse procedure I developed for OSD, there is a better one built-in to the PMM now. You no longer need to create operating system images using Apple’s tools and put in a myriad of scripts to automate the rest of the process. Parallels have put in a proper task sequence wizard similar to the one built into ConfigMgr for Windows machines. They’ve also given you the tools you require to build and capture a master Mac image. You no longer need a copy of OS X Server to capture your image or to create a workflow.

V4.5’s new feature was update management and deployment. While you could always deploy updates to Macs by downloading the update and deploying it with a Package, it wasn’t an especially elegant way of going about it. There was a lot of manual processing involved and personally I found it fairly awkward to deploy large OS X updates using this method. Well, it’s no longer required. Parallels have done something I never would have thought to do in a million years: they’ve leveraged WSUS to deploy updates to Macs. I’ll just wait for a moment to let you recover from that one.

V5 introduced support for Apple’s Device Enrolment Program so that new Macs can be automatically enrolled into your environment. They also put a license manager into the PMM in this release.

In V6, they added support for Software Metering, the ability to lock and wipe a Mac remotely, support for Maintenance Windows and a few new options for OSD task sequences, the main one being the ability to deploy ConfigMgr Applications during a task sequence as well as Packages.

Finally, V6.1 added support for macOS High Sierra. This has always been a strength of the PMM; High Sierra was released on the 25th September 2017 and Parallels put in official support for the OS on October 10th. The version of KACE that I was evaluating (v8.0) wouldn’t deploy High Sierra to a Mac and Microsoft’s native Configuration Manager client didn’t get support for it until the end of November 2017. Parallels have always been very fast to officially support new versions of macOS and, generally speaking, I’ve found that the PMM has worked even when official support hasn’t been there yet.

Operating System Deployment

When I last used it, Operating System Deployment in the PMM was something of a weak point. It was technically very impressive that they managed to get it into V3 and it was much better than nothing but if you wanted a fully automated workflow, you couldn’t do it without a lot of hacking about. Essentially, you had to create a Netboot image (NBI) with third-party tools then upload that NBI to ConfigMgr. You then published that image to a collection and any Mac in that collection was able to boot from it and do a deployment. If you wanted to deploy more than one image, you had to repeat the process again. For each image, there was a separate Netboot image published and an extra entry for the Mac to boot from. There’s a fairly long article about it on this site that got linked by Parallels. It’s here if you want to read it.

The situation is a lot better now. Instead of having to mess about as described in the linked article, Parallels have implemented proper task sequencing for your Macs. It is very much like the task sequence wizard that Windows machines use. To use it, you need to create a boot image for your Macs to boot from then an operating system image for them to download and deploy.

To create a boot image, you need to download a DMG file from the server that your PMM is running on. Inside that DMG file is a command line utility that generates your boot image. The utility uses the version of macOS that’s installed on the Mac it’s run from to create an NBI file. The NBI contains a cut down image of macOS and a single app which contacts the PMM and requests and runs task sequences. Once the NBI file is generated, you need to copy it to the server hosting the PMM and add it to the Configuration Manager console using a wizard. This is good; instead of having multiple NBIs published to your Macs, you can have just one. The app in the NBI contacts the PMM and decides whether it has a task sequence it needs to run.

You can then start creating task sequences. Again, like Windows, task sequences created by the PMM can either capture or deploy an operating system image. I expect that the first task sequence that you will create is one to capture an operating system.

Capture a macOS Image

To do so, open the Configuration Manager console, go to the Software Library workspace, then Operating Systems, then Task Sequences. Right click on Task Sequences and select “Create OS X Task Sequence”.

This brings up a window which asks you to give the task sequence a name:

The “Steps” tab is well named, it contains all of the steps of the task sequence:

In this case, we are only going to have one step in the sequence: we’re going to capture an image. Unlike Windows, macOS doesn’t require any kind of preparation such as Sysprep before an image is created.

The options here are pretty simple; it wants to know where it should save the captured image to and it wants to know the credentials that it can use to connect to the server. Don’t do what I did here in my development environment and use the domain admin account; that would be bad. Use a proper network access account instead.

Once the form is filled out, press the OK button and the sequence will be saved. You can then deploy the task sequence to a collection just like you do with a Windows one.

Deploy a macOS Image

The other side of this coin is to deploy an image to a Mac. You use the same wizard to create a deployment task sequence as well, so let’s look at that screenshot with the possible steps again:

As you can see, there are nowhere near as many possible steps here as there are for Windows machines but that said, there are probably enough to cover most people. You don’t need to install drivers on a Mac as they’re already all included in the OS image. There isn’t an equivalent of the USMT for macOS that I’m aware of so there are no options for that. Really, the only thing that I’d say is missing is the ability to deploy any updates that are available. Let’s look at an example task sequence:

The sequence that I’m showing is as basic as you can get but it works. It partitions the disk (only HFS+ is supported at the moment; I don’t know how Parallels are going to handle APFS. I’ll ask the question when this article is published and update it later on), downloads and applies the OS, sets the Mac’s hostname, joins it to a domain and installs a mobileconfig file. You can also install software during the task sequence; when I first looked at this in the V4 beta, you could only install software using legacy Packages. I found this disappointing and told Parallels so. However, since V6 you can now use modern style Applications as well. The only slight snag with deploying Applications in a task sequence is the routine that Parallels uses to detect whether an installation was successful or not. It frequently does not update itself quickly enough before the sequence moves on to the next Application to install so, more often than not, it thinks that the installation of the software hasn’t been successful. Parallels therefore recommend that you check the “Continue on Error” option when deploying Applications, otherwise the task sequence will fail. The downside to this is that if the deployment of an Application actually does fail, it’s not going to be immediately obvious that it has and you may find it a little harder to troubleshoot. Parallels acknowledge that this is a problem in their documentation but I suspect that it’s probably not something that’s going to be solved without rearchitecting their solution.

Once the task sequence is finished, you press the OK button to save it and you can deploy it to a collection in the usual manner. An “Unknown Macs” collection is now created inside Configuration Manager when the PMM is installed; this means that a Mac doesn’t have to be known to Configuration Manager before it can run a task sequence.

The PMM’s Task Sequence engine also supports task sequence variables. This means that settings such as the Host Name can be automatically assigned and if you wish, you can assign blank variables to a collection as well. If the wizard detects a blank variable, it will let you fill it in.

 

macOS Operating System Updates

V4.5 of the PMM brought the ability to manage updates. The way that they’ve achieved this is quite interesting. They have leveraged the Local Updates Publishing facility of WSUS to catalog updates from Apple. Because the updates from Apple are cataloged in WSUS, they are entered into the Configuration Manager console in the Updates section.

Because the updates are now in the console, you can create Software Update Groups and deploy them to the collections that your Macs are members of. However, the software update point just catalogues the updates; it doesn’t send them to distribution points and your Macs don’t download and install them from your Configuration Manager servers. Instead, the Parallels Update Point tells the Mac which updates to download, and the Mac then downloads them either directly from Apple’s servers (the default option) or from a local Mac update server.

The process to set this up is very involved and because of this, I haven’t yet had the opportunity to evaluate how well the process works. It’s on my to-do list and I will update this article when I get the opportunity to try it. If you want a more detailed overview of how this works, the Parallels documentation is very thorough and clear. You can find it here: Parallels Administrator Guide from page 153 to 166.

Device Enrolment Program (DEP)

I’m afraid I don’t have much to say about this. In theory, I expect that this would be very useful as it gives you the ability to automatically enrol any new Mac that you buy into your Configuration Manager instance. However, all of the Macs that we own are relatively old and none have been bought using the DEP program so I can’t evaluate how well this facility works. I wish I could say more but I can’t.

License Manager

This one is for Parallels’ benefit more than anything else but I understand why they’ve implemented it. The PMM has always been licensed per Mac and it’s always been a timed license. However, older versions of the PMM had no way of tracking usage or expiry dates. You could buy a license for X number of Macs and use it for as many as you wanted with no recourse. I imagine that a lot of people did as well, so Parallels have put in this facility to limit you to managing the number of Macs that you’ve bought licenses for.

There isn’t much to say about it really. It keeps track of the number of Macs that are enrolled in Configuration Manager and I would imagine that it does a good job of stopping the PMM from working once your licenses expire. As I say, I understand why they’ve done this. Anyway, this is probably the most important part of the license manager that shows in the console:

Seamless…

Software Metering

It works. There isn’t much more to say about it. It works in exactly the same way as it does for Windows machines, you configure it in the same way and you use the same reports as you do for Windows software to view the data. This is a good thing.

Remote Lock and Wipe

This does what it says on the tin, it allows you to remotely lock and wipe a Mac which is lost or stolen. This works by enrolling your Mac into an MDM which is part of the PMM. This is achieved either through DEP or with a separate MDM component inside the PMM.

Again, I haven’t had the opportunity to try this facility but considering that a huge proportion of the laptops that we have are MacBooks which are used by a lot of the senior people in our organisation, and that GDPR is going to be a thing, this is something I’m going to take a closer look at very soon. According to the documentation, you can tell the PMM to automatically enrol any Mac in a particular collection into its MDM. Once you’ve done that, you can send a remote wipe or lock command to the Mac as soon as it connects itself to the Internet. Once I’ve implemented this, I’ll post an update to this article.

Maintenance Windows

As with Software Metering, this just works. You can now use the same method of assigning maintenance windows to your Macs as you do for your Windows machines and your Macs will respect them. Again, a good thing.

Room for improvement?

It’s safe to say that the PMM, already a very good product, has improved considerably in the last two years. Parallels have added new features and brought support for more and more capabilities that were Windows-only before. The product still seems to be stable and it works well. I’m pleased with what I’ve seen and have been happy to spend a not inconsiderable amount of money on the PMM once again. However, that doesn’t mean that I don’t think there is room for improvement in some regards.

Multi-site scenarios

My MAT has three school clusters. Two of the school clusters have three geographical sites, the other one has two. All of these sites are connected by WAN links, either leased lines or point-to-point wireless connections. The point is, although a computer can access any other computer or server at any location in our organisation, it might not have the fastest or lowest latency link to that other computer. Therefore, all of the separate sites that are in the organisation have at least some local resources to use such as a domain controller, a file server and Configuration Manager infrastructure.

My original plan when implementing Configuration Manager was to have a single primary site covering the whole organisation. I would have at least one Management Point and Distribution Point per geographical site and clients would choose which MP and DP to use via their boundary group. This made sense as I’m not managing anything like 150,000 clients so a CAS and multiple primary sites were deemed unnecessary. The hope was that I would be able to install an instance of the PMM at each of the secondary schools and have those PMMs integrate into the site.

Unfortunately, this wasn’t to be. Each Configuration Manager site can only support a single instance of the PMM. It doesn’t matter if you have multiple management points in your site; if you try to install more than one PMM in a single SCCM primary site, the PMM configuration program detects that there is already a PMM in that site and refuses to configure the PMM. So this left me with three choices:

  1. Have a single PMM across the entire organisation and have at least two thirds of our Macs download their policies over a WAN connection
  2. Have a single primary site in one school cluster and deploy secondary sites to the other school clusters
  3. Deploy a Central Administration Site (CAS) and a primary site in each of the school clusters.

The IT teams discussed this and we came to the following conclusions:

  1. We didn’t want management traffic of any kind going over the WAN links. They have limited bandwidth, relatively high latencies and if one of them stopped working for some reason, the Mac management would stop working for at least some of the Macs. We decided that this was unacceptable
  2. Secondary sites are designed for very slow WAN connections and they don’t support all of the available roles for Configuration Manager. While our WAN connections are relatively slow, they’re not that slow and we wanted to be able to install all possible roles locally.
  3. To allow us to have a PMM per cluster and for the reasons above, we decided to have a CAS and a primary site per school cluster. This generated quite a lot of debate but this configuration won out eventually. It does mean that we have a lot more servers set aside for Configuration Manager than we originally intended to have but everything is in place now.

It would be nice, however, if all of this wasn’t necessary and if Parallels would allow you to have multiple PMMs per primary site. Configuration Manager has no issue with multiple management points per site so it can’t be that hard.

Mac App Store

Love it or loathe it, the App Store is here and it isn’t going anywhere. I can’t buy Final Cut Pro, Logic, Pages, Numbers, Keynote or any other application that’s only available through the App Store by any other means. The PMM has an MDM built into it for Remote Lock and Wipe. I’d like to leverage it to deploy App Store apps to my Macs. I don’t want to use Profile Manager to do this because Profile Manager sucks and macOS Server is being severely deprecated in its next release. I also don’t want to use Intune or another MDM because I don’t want my Macs to be managed in two places. Pretty please, Parallels?

Operating System Deployment

So, OSD has been much improved with some nice wizards but there are a couple of areas where it still falls a bit short. The first thing that I’d like to see improved is the “Execute Script” action inside the task sequence wizard. Let me illustrate what I mean:

Don’t get me wrong, it’s adequate, but personally I don’t like the fact that it’s just a box to copy and paste a script into. I would much rather it mirrored what Microsoft does and had the script in a package which the task sequence then downloads from a DP and executes. You might think that I’m being rather petty but consider this: you might have 10/15/20 task sequences and you might want them all to execute the same script. If you needed to change that script, you would have to go through each task sequence and edit each one in turn. You don’t have to do that with a Windows task sequence; all you do is edit the script stored in your definitive software library and update your distribution point. I think it’s obvious which method scales better. The same gripe applies to the “Apply Configuration Profile” step as well; the mobileconfig file is imported into the task sequence and stored in there instead of being retrieved from a package.

The second issue that I have is that this operating system deployment process still requires more action from a technician than I would like. To build or rebuild a Mac, a tech needs to visit the Mac, set it to boot from the network by holding down the Option key or by using the Startup Disk preference pane, wait for the NBI to boot, authenticate, select a task sequence then set it going. When you have an entire room of Macs to rebuild, this is time consuming. It would be so much better if task sequences appeared in the Parallels Application Portal as well as in the boot media, and it would be even better if you could schedule when the task sequence ran, just like what happens with Windows and Software Center. This way a tech could set up a job in the Configuration Manager console and leave it at that. Even if you couldn’t schedule it, having a single-click action inside the Parallels Application Portal which automatically selected your task sequence and authenticated you against the PMM would also be a vast improvement. Maybe in V7?

The last thing I’d like to see is a method of creating bootable preinstallation media. It’s not always possible to netboot, so the ability to boot into the preinstallation environment from a USB stick could be useful.

The End. For Now.

This article has been coming for a downright embarrassing amount of time for which I can only apologise. I’ve wanted to get this completed for a very long time and I’ve only just recently found the time to do so. I hope it’s been informative and useful for you. When Parallels updates the PMM with some new functionality, assuming the functionality merits it, I will post a new article talking about it. However, I don’t want this blog to be a Parallels mouthpiece so I am going to try to post more frequently about other things but they’re probably going to be computer related. Hey, I’m a geek, this is what interests me.

I don’t consider any of the shortcomings above to be show-stoppers. The PMM is a very capable product that’s come a very, very long way in the last four years, I have no hesitation in recommending it for use. I invite Parallels to contact me about this article and inform me of any inaccuracies or mistakes that I’ve made. I’m happy to post corrections to the article if need be. With their permission, I will also publish any comments that they have to make about it as well.

If you’ve read this and have any questions that you’d like to ask, please post a comment in the section below and I will do my best to answer.

Almost two years in…

This is bad. I’m paying money each year to host this blog and for the domain name. OK, I’m not paying much but even so, I’m basically ignoring my blog. I said two years ago that this couldn’t stand yet here we are. I must do better.

So what’s been happening in my job?

Well, I’m happy in my work again. They’re keeping me busy. I’ve done a lot since I’ve started, too much to list here but here are some highlights:

  • I’ve implemented two new VMware farms
  • I’ve commissioned two new SANs
  • I’ve helped commission two new very fast Internet connections
  • I’ve installed two new firewalls (Smoothwall)
  • I’ve migrated us away from an Office 365 solution (badly) managed by Capita Openhive to one that’s managed in-house
  • I’ve designed a new Active Directory domain for my workplace and I’m in the process of implementing it
  • I’ve deployed Configuration Manager to replace RM CC4
  • I’ve redesigned our Wireless authentication system
  • I’ve helped install and configure a ton of new HP switches in three of our schools
  • I’ve been swearing an awful lot at some of the design decisions made by my predecessors and the support companies the MAT have employed and have been slowly but surely correcting them

So yes, they’ve certainly been keeping me busy. I plan to post a few articles about some of this and to put up some of the PowerShell scripts I’ve written to do certain things. Like I say, I must do better.

Almost two weeks in…

Well, I’m almost two weeks into my new job. It feels good to be back in a school again. It feels even better to be back in a job with a wider range of responsibilities. It feels brilliant to be in a job where I feel like I actually have something to do. 

My new boss seems like a good guy so far. He’s very happy to listen to my thoughts and ideas and he is encouraging me to look at the way things are and to suggest improvements. He is quite new in the position too, he’s only been there for about three months. I suspect that if I’d been invited for interview in the first round, I’d probably have started at around the same time as him. 

There is so much to do. With all due respect to the people looking after the network there before me, I think there have been several questionable design choices. Some of the security policies are downright scary. 

The first thing that needs to be done is to install the latest version of Smoothwall. Unfortunately, the virtual farms that the current instances are stored on are too old to install the latest version. This means that either we have to install some Smoothwall appliances or we need to update the virtual farms. I’m in the process of getting pricing for both options. 

Other things I’ve been doing are:

  • Enabling deduplication on one of their shared network drives. More than 350GB of savings on 1.2TB of data! (There’s a rough sketch of the commands involved just after this list.)
  • Attempting to wrap my head around the instance of Veeam they have backing up their virtual farms. 
  • Installing Lansweeper to get an idea of what hardware and software we have in the place. 
  • Installing a KMS server. Seriously, more than 1200 machines using MAK keys is nuts!
  • Installing PasswordState, a locally installed password management system similar to LastPass and Dashlane. 
  • Installing some RADIUS servers to handle wireless authentication. 
  • Planning to deploy WSUS to manage updates on servers. 

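For anyone curious about the deduplication one, it boils down to a couple of cmdlets. This is only a rough sketch rather than the exact commands I ran, and it assumes Windows Server 2012 or later with the shared drive mounted as D:, so adjust it for your own setup:

#Install the deduplication feature, enable it on the data volume and kick off an optimisation job
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:" -UsageType Default
Start-DedupJob -Volume "D:" -Type Optimization
#Check the space savings once the job has finished
Get-DedupStatus -Volume "D:"
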
It may sound hyperbolic but I feel I’ve done more in two weeks there than I did in an entire year at Westminster. I hope it continues like this. I’m sure it will. 

By the way, as and when I get the time, I’m working on a new article about the Parallels Mac Management product for ConfigMgr. There have been some pretty big updates for it in the last year and one of my contacts at Parallels very kindly let me download it to have a look. Suffice to say, I’m very impressed with what I see there. 

SCOM – SQL Management Pack Configuration

SCOM is a bastard of a product, isn’t it? It’s even more so when you’re trying to monitor a SQL instance or two. It’s also quite amusing that Chrome doesn’t recognise SCOM as a word in its dictionary and that it suggests SCAM as a possible word 🙂

My major project at work for the past few months has been SCOM. I am monitoring about 300 Windows VMs, about a third of which have a SQL database instance on them. I’ve stuck with using the LocalSystem account as the SCOM action account and for the majority of the time, that’s enough. However, there have been a few times where it hasn’t been. It’s always a permissions issue: the LocalSystem account doesn’t have access to one or more of the databases, so the discovery and monitoring scripts can’t run and you get a myriad of alerts.

When it comes to adding a management pack into SCOM, always read the damn documentation that comes with the MP. I know it’s tedious but it’s necessary. Reading the documentation for the SQL Management Pack, found on Microsoft’s website, gives you some interesting recommendations. They suggest that you have three action accounts for SQL:

  1. A discovery account
  2. A default action account
  3. A monitoring account

They also recommend that you put the monitoring and discovery accounts into an additional AD group. Once you do that, you have to add the users to SQL, assign them specific permissions to databases, give them access to parts of the Windows registry, assign them permissions to various WMI namespaces, grant them local logon privileges and more. I’m not going to go over the whole process; if you really want to see it, look at Microsoft’s documentation.

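Creating the accounts and the group in the first place is the easy bit. I don’t do it in the script itself because I can’t assume the AD module is present on the SQL boxes, but something like this will do it. It’s only a rough sketch, assuming the ActiveDirectory module from RSAT and the same account and group names that I use in the script below; change them to suit your environment:

#Create the three SQL action accounts and the low privilege group
Import-Module ActiveDirectory
$Password = Read-Host -Prompt "Password for the action accounts" -AsSecureString
"om_aa_sql_da","om_aa_sql_disc","om_aa_sql_mon" | ForEach-Object {
New-ADUser -Name $_ -SamAccountName $_ -AccountPassword $Password -Enabled $true -PasswordNeverExpires $true
}
New-ADGroup -Name "SQLMPLowPriv" -GroupScope Global -GroupCategory Security
#Microsoft recommend putting the discovery and monitoring accounts into the low privilege group
Add-ADGroupMember -Identity "SQLMPLowPriv" -Members "om_aa_sql_disc","om_aa_sql_mon"
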
The point is, it’s a lot of work. Wouldn’t it be nice if we could automate it? Well, I’ve written a script that does precisely that. It’s a big one:


Function Set-WmiNamespaceSecurity
{
[cmdletbinding()]
Param ( [parameter(Mandatory=$true,Position=0)][string] $namespace,
[parameter(Mandatory=$true,Position=1)][string] $operation,
[parameter(Mandatory=$true,Position=2)][string] $account,
[parameter(Position=3)][string[]] $permissions = $null,
[bool] $allowInherit = $false,
[bool] $deny = $false,
[string] $computerName = ".",
[System.Management.Automation.PSCredential] $credential = $null)

Process {
#$ErrorActionPreference = "Stop"

Function Get-AccessMaskFromPermission($permissions) {
$WBEM_ENABLE = 1
$WBEM_METHOD_EXECUTE = 2
$WBEM_FULL_WRITE_REP = 4
$WBEM_PARTIAL_WRITE_REP = 8
$WBEM_WRITE_PROVIDER = 0x10
$WBEM_REMOTE_ACCESS = 0x20
$WBEM_RIGHT_SUBSCRIBE = 0x40
$WBEM_RIGHT_PUBLISH = 0x80
$READ_CONTROL = 0x20000
$WRITE_DAC = 0x40000

$WBEM_RIGHTS_FLAGS = $WBEM_ENABLE,$WBEM_METHOD_EXECUTE,$WBEM_FULL_WRITE_REP,`
$WBEM_PARTIAL_WRITE_REP,$WBEM_WRITE_PROVIDER,$WBEM_REMOTE_ACCESS,`
$READ_CONTROL,$WRITE_DAC
$WBEM_RIGHTS_STRINGS = "Enable","MethodExecute","FullWrite","PartialWrite",`
"ProviderWrite","RemoteAccess","ReadSecurity","WriteSecurity"

$permissionTable = @{}

for ($i = 0; $i -lt $WBEM_RIGHTS_FLAGS.Length; $i++) {
$permissionTable.Add($WBEM_RIGHTS_STRINGS[$i].ToLower(), $WBEM_RIGHTS_FLAGS[$i])
}

$accessMask = 0

foreach ($permission in $permissions) {
if (-not $permissionTable.ContainsKey($permission.ToLower())) {
throw "Unknown permission: $permission`nValid permissions: $($permissionTable.Keys)"
}
$accessMask += $permissionTable[$permission.ToLower()]
}

$accessMask
}

if ($PSBoundParameters.ContainsKey("Credential")) {
$remoteparams = @{ComputerName=$computerName;Credential=$credential}
} else {
$remoteparams = @{ComputerName=$computerName}
}

$invokeparams = @{Namespace=$namespace;Path="__systemsecurity=@"} + $remoteParams

$output = Invoke-WmiMethod @invokeparams -Name GetSecurityDescriptor
if ($output.ReturnValue -ne 0) {
throw "GetSecurityDescriptor failed: $($output.ReturnValue)"
}

$acl = $output.Descriptor
$OBJECT_INHERIT_ACE_FLAG = 0x1
$CONTAINER_INHERIT_ACE_FLAG = 0x2

$computerName = (Get-WmiObject @remoteparams Win32_ComputerSystem).Name

if ($account.Contains('\')) {
$domainaccount = $account.Split('\')
$domain = $domainaccount[0]
if (($domain -eq ".") -or ($domain -eq "BUILTIN")) {
$domain = $computerName
}
$accountname = $domainaccount[1]
} elseif ($account.Contains('@')) {
$domainaccount = $account.Split('@')
$domain = $domainaccount[1].Split('.')[0]
$accountname = $domainaccount[0]
} else {
$domain = $computerName
$accountname = $account
}

$getparams = @{Class="Win32_Account";Filter="Domain='$domain' and Name='$accountname'"}

$win32account = Get-WmiObject @getparams

if ($win32account -eq $null) {
throw "Account was not found: $account"
}

switch ($operation) {
"add" {
if ($permissions -eq $null) {
throw "-Permissions must be specified for an add operation"
}
$accessMask = Get-AccessMaskFromPermission($permissions)

$ace = (New-Object System.Management.ManagementClass("win32_Ace")).CreateInstance()
$ace.AccessMask = $accessMask
if ($allowInherit) {
$ace.AceFlags = $OBJECT_INHERIT_ACE_FLAG + $CONTAINER_INHERIT_ACE_FLAG
} else {
$ace.AceFlags = 0
}

$trustee = (New-Object System.Management.ManagementClass("win32_Trustee")).CreateInstance()
$trustee.SidString = $win32account.Sid
$ace.Trustee = $trustee

$ACCESS_ALLOWED_ACE_TYPE = 0x0
$ACCESS_DENIED_ACE_TYPE = 0x1

if ($deny) {
$ace.AceType = $ACCESS_DENIED_ACE_TYPE
} else {
$ace.AceType = $ACCESS_ALLOWED_ACE_TYPE
}

$acl.DACL += $ace.psobject.immediateBaseObject
}

"delete" {
if ($permissions -ne $null) {
throw "Permissions cannot be specified for a delete operation"
}

[System.Management.ManagementBaseObject[]]$newDACL = @()
foreach ($ace in $acl.DACL) {
if ($ace.Trustee.SidString -ne $win32account.Sid) {
$newDACL += $ace.psobject.immediateBaseObject
}
}

$acl.DACL = $newDACL.psobject.immediateBaseObject
}

default {
throw "Unknown operation: $operation`nAllowed operations: add delete"
}
}

$setparams = @{Name="SetSecurityDescriptor";ArgumentList=$acl.psobject.immediateBaseObject} + $invokeParams

$output = Invoke-WmiMethod @setparams
if ($output.ReturnValue -ne 0) {
throw "SetSecurityDescriptor failed: $($output.ReturnValue)"
}
}
}

Function Add-DomainUserToLocalGroup
{
[cmdletBinding()]
Param(
[Parameter(Mandatory=$True)]
[string]$computer,
[Parameter(Mandatory=$True)]
[string]$group,
[Parameter(Mandatory=$True)]
[string]$domain,
[Parameter(Mandatory=$True)]
[string]$user
)
$de = [ADSI]"WinNT://$computer/$Group,group"
$de.psbase.Invoke("Add",([ADSI]"WinNT://$domain/$user").path)
} #end function Add-DomainUserToLocalGroup

Function Add-UserToLocalLogon
{
[cmdletBinding()]
Param(
[Parameter(Mandatory=$True)]
[string]$UserSID
)
$tmp = [System.IO.Path]::GetTempFileName()
secedit.exe /export /cfg "$($tmp)"
$c = Get-Content -Path $tmp
$currentSetting = ""

foreach($s in $c) {
if( $s -like "SeInteractiveLogonRight*") {
$x = $s.split("=",[System.StringSplitOptions]::RemoveEmptyEntries)
$currentSetting = $x[1].Trim()
}
}

if( $currentSetting -notlike "*$($UserSID)*" ) {
if( [string]::IsNullOrEmpty($currentSetting) ) {
$currentSetting = "*$($UserSID)"
} else {
$currentSetting = "*$($UserSID),$($currentSetting)"
}

$outfile = @"
[Unicode]
Unicode=yes
[Version]
signature="`$CHICAGO`$"
Revision=1
[Privilege Rights]
SeInteractiveLogonRight = $($currentSetting)
"@

$tmp2 = [System.IO.Path]::GetTempFileName()

$outfile | Set-Content -Path $tmp2 -Encoding Unicode -Force

Push-Location (Split-Path $tmp2)

try {
secedit.exe /configure /db "secedit.sdb" /cfg "$($tmp2)" /areas USER_RIGHTS

} finally {
Pop-Location
}
}
}

#Set Global Variables

$Default_Action_Account = "om_aa_sql_da"
$Discovery_Action_Account = "om_aa_sql_disc"
$Monitoring_Action_Account = "om_aa_sql_mon"
$LowPrivGroup = "SQLMPLowPriv"

$WindowsDomain = "Intranet"
#Add users to local groups

Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Performance Monitor Users" -user $Monitoring_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Performance Monitor Users" -user $Default_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Event Log Readers" -user $Monitoring_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Event Log Readers" -user $Default_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Users" -user $LowPrivGroup -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Users" -user $Default_Action_Account -domain $WindowsDomain
<#
#AD SIDs for Default Action Account user and Low Priv group - required for adding users to local groups and for service security settings.

#Define SIDs for Default Action and Low Priv group. To get a SID, use the following command:
#Get-ADUser -identity [user] | select SID
#and
#Get-ADGroup -identity [group] | select SID
#Those commands are part of the AD management pack which is why they're not in this script, I can't assume that this script is being run on a DC or on
#a machine with the AD management shell installed
#>

$SQLDASID = "S-1-5-21-949506055-860247811-1542849698-1419242"
$SQLMPLowPrivsid = "S-1-5-21-949506055-860247811-1542849698-1419239"

Add-UserToLocalLogon -UserSID $SQLDASID
Add-UserToLocalLogon -UserSID $SQLMPLowPrivsid

#Set WMI Namespace Security

Set-WmiNamespaceSecurity root add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\cimv2 add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\default add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement10'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement10 add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity }
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement11'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement11 add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity }

Set-WmiNamespaceSecurity root add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\cimv2 add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\default add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement10'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement10 add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity }
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement11'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement11 add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity }

#Set Registry Permissions

$acl = Get-Acl 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($Default_Action_Account)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$acl = $null
$acl = Get-Acl 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($LowPrivGroup)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$acl = $null

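#Enumerate the installed SQL instance keys from the registry (MSSQL10.*, MSSQL11.*, MSSQL12.* and so on)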
$SQLInstances = Get-ChildItem 'registry::hklm\SOFTWARE\Microsoft\Microsoft SQL Server' | ForEach-Object {Get-ItemProperty $_.pspath } | Where-Object {$_.pspath -like "*MSSQL1*" }

$SQLInstances | Foreach {
$acl = Get-Acl "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($LowPrivGroup)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$acl = $null

$acl = Get-Acl "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($Default_Action_Account)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$acl = $null

}

#Set SQL Permissions

#Get SQL Version
if ($SQLInstances.Count -eq $null) {

$version = Get-ItemProperty "registry::HKLM\Software\Microsoft\Microsoft SQL Server\$($SQLInstances.PSChildName)\MSSQLSERVER\CurrentVersion"

} else {

$version = Get-ItemProperty "registry::HKLM\Software\Microsoft\Microsoft SQL Server\$($SQLInstances[0].PSChildName)\MSSQLSERVER\CurrentVersion"

}
#Import appropriate SQL PowerShell module

if ($version.CurrentVersion -ge 11) {
#Import SQL 2012 Module
Import-Module sqlps
#change out of sql context
c:
} else {
#Add SQL 2008 Snap-in
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
}

#Create database users and assign permissions

$CreateDatabaseUsers = "use master
go

create login [$($WindowsDomain)\$($LowPrivGroup)] from windows
go

grant view server state to [$($WindowsDomain)\$($LowPrivGroup)]
grant view any definition to [$($WindowsDomain)\$($LowPrivGroup)]
grant view any database to [$($WindowsDomain)\$($LowPrivGroup)]
grant select on sys.database_mirroring_witnesses to [$($WindowsDomain)\$($LowPrivGroup)]
go

create login [$($WindowsDomain)\$($Default_Action_Account)] from windows
go

grant view server state to [$($WindowsDomain)\$($Default_Action_Account)]
grant view any definition to [$($WindowsDomain)\$($Default_Action_Account)]
grant view any database to [$($WindowsDomain)\$($Default_Action_Account)]
grant alter any database to [$($WindowsDomain)\$($Default_Action_Account)]
grant select on sys.database_mirroring_witnesses to [$($WindowsDomain)\$($Default_Action_Account)]
go"

#Generate query to assign users and permissions to databases
$DatabaseUsers1 = "SELECT 'use ' + name + ' ;'
+ char(13) + char(10)
+ 'create user [$($WindowsDomain)\$($LowPrivGroup)] FROM login [$($WindowsDomain)\$($LowPrivGroup)];'
+ char(13) + char(10) + 'go' + char(13) + char(10)
FROM sys.databases WHERE database_id = 1 OR database_id >= 3
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''SQLAgentReaderRole'', @membername=''$($WindowsDomain)\$($LowPrivGroup)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''PolicyAdministratorRole'', @membername=''$($WindowsDomain)\$($LowPrivGroup)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
"

$DatabaseUsers2 = "SELECT 'use ' + name + ' ;'
+ char(13) + char(10)
+ 'create user [$($WindowsDomain)\$($Default_Action_Account)] FROM login [$($WindowsDomain)\$($Default_Action_Account)];'
+ 'exec sp_addrolemember @rolename=''db_owner'', @membername=''$($WindowsDomain)\$($Default_Action_Account)'';'
+ 'grant alter to [$($WindowsDomain)\$($Default_Action_Account)];'
+ char(13) + char(10) + 'go' + char(13) + char(10)
FROM sys.databases WHERE database_id = 1 OR database_id >= 3
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''SQLAgentReaderRole'', @membername=''$($WindowsDomain)\$($Default_Action_Account)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''PolicyAdministratorRole'', @membername=''$($WindowsDomain)\$($Default_Action_Account)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
"

#
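#Create the logins and database users on each instance; the default instance is addressed by the computer name alone, named instances as COMPUTERNAME\Instance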
$SQLInstances | Foreach {
if ($_.PSChildName.split('.')[-1] -eq "MSSQLSERVER") {
$InstanceName = $env:COMPUTERNAME
} else {
$InstanceName = "$($env:COMPUTERNAME)\$($_.PSChildName.split('.')[-1])" }

Invoke-Sqlcmd -ServerInstance $InstanceName $CreateDatabaseUsers
$Provision1 = Invoke-Sqlcmd -ServerInstance $InstanceName $DatabaseUsers1
$Provision2 = Invoke-Sqlcmd -ServerInstance $InstanceName $DatabaseUsers2

$Provision1 | foreach {
Invoke-Sqlcmd -ServerInstance $InstanceName $_.ItemArray[0]
}
$Provision2 | foreach {
Invoke-Sqlcmd -ServerInstance $InstanceName $_.ItemArray[0]
}
}

#Grant Default Action account rights to start and stop SQL Services

$SQLServices = Get-Service -DisplayName "*SQL*"

$SQLServices | Foreach {
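#sdset replaces each service's security descriptor with an SDDL string: the first ACE grants the Default Action account generic read (GR) plus service start (RP) and stop (WP) rights; the remaining ACEs are the standard defaults for SYSTEM, Administrators and interactive/service users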
& c:\windows\system32\sc.exe sdset $_.Name D`:`(A`;`;GRRPWP`;`;`;$($SQLDASID)`)`(A`;`;CCLCSWRPWPDTLOCRRC`;`;`;SY`)`(A`;`;CCDCLCSWRPWPDTLOCRSDRCWDWO`;`;`;BA`)`(A`;`;CCLCSWLOCRRC`;`;`;IU`)`(A`;`;CCLCSWLOCRRC`;`;`;SU`)`S`:`(AU`;FA`;CCDCLCSWRPWPDTLOCRSDRCWDWO`;`;`;WD`)
}

There are huge swathes of this script that I can not take credit for, mostly the functions.

The SetWMINameSpaceSecurity function was pilfered directly from here: https://live.paloaltonetworks.com/t5/Management-Articles/PowerShell-Script-for-setting-WMI-Permissions-for-User-ID/ta-p/53646. I got it from Palo Alto’s website but it appears to have been written by Microsoft themselves

The Add-DomainUserToLocalGroup function was stolen from the Hey, Scripting Guy! Blog, found here: https://blogs.technet.microsoft.com/heyscriptingguy/2010/08/19/use-powershell-to-add-domain-users-to-a-local-group/

The Add-UserToLocalLogon function was lifted wholesale from here: https://ikarstein.wordpress.com/2012/10/12/powershell-script-to-add-account-to-allow-logon-locally-privilege-on-local-security-policy/

The rest, however, is all mine, which you can probably tell from the quality of the code. You will need to change the variables in the “Set Global Variables” section (the action account names, the low privilege group, the Windows domain and the two SIDs) to match your environment. That said, it works and that’s all I care about. Enjoy!

Sigh, sometimes, software can be too bloody clever for its own good. The Code Block module that I’m using isn’t doing a very good job of formatting this script and it’s replaced some characters such as &, < and > with their HTML equivalents. I think I’ve weeded them all out but I may not have. If not, let me know.

OMG! It’s been a year!

I have really been neglecting this blog. Considering that I’m paying to have the damn thing hosted, this can’t stand.

So what has happened in the last year? Quite a lot really.

First of all, the new job mentioned in the last post. If I’m honest, I’m not happy in that job. I’m not going to go too far into specifics. Suffice to say, the University of Westminster is a good place to work; they are very generous towards their staff, I am well paid there, I get a ludicrous amount of leave and the benefits package is very good. However, the job isn’t for me. I feel far too pigeon-holed. I miss the depth of work from my old job, the amount of responsibility that I had. I don’t like being third-line only very much. I don’t like being sat at my desk all day, every day. I don’t like not being able to get my hands on hardware occasionally. Perhaps unbelievably, I even miss the small amount of interaction that I had with users. I’ve been keeping my eyes open for new jobs and I saw a new one about two months ago. In fact, I even applied twice; the first time around I didn’t get through to the interview stage. It seems that they didn’t recruit in the first run so they re-advertised. I got some feedback from the man who is in charge of recruitment and I applied again. The second time, I got invited to interview.

I went to the interview. I thought that I had messed it up entirely. The first part of the interview was the technical part. I was expecting a test along the lines of almost every other technical test I’ve taken, i.e. “What is a FSMO Role?”, “Why can’t this printer print?”, “Who do you prioritise, the headmaster or a classroom that can’t work?”, you know the sort of thing. Instead, they gave me a series of scenarios and twenty minutes to put some thoughts down on them. While obviously they were interested in my technical skill set, they were more interested in how I approach problems and how my thought processes worked. When they were analysing my answers, there was one question that they asked which I really made a pig’s ear of and which I couldn’t answer. A very awkward silence ensued while I desperately tried to understand what they wanted from me but in the end, they put me out of my misery. Truth be told, I was almost ready to walk away at that point.

The panel interview came after with the ICT Manager and the Head of Performance and Recruitment. That went a little better although I did give a very arrogant answer to a question: They asked if I thought I could do the job and I said that I wouldn’t be sitting there if I didn’t. I cringed almost immediately. Argh.

So anyway, I couldn’t have done too badly as I got the job. On 1st August 2016 I shall be starting work for the Haberdashers’ Aske’s multi-academy trust in New Cross, London. My job title will be IT Systems Administrator. It’s closer to home, it’s more money, there is going to be a lot more variety and responsibility and it’s in an environment that I think I’ll be much more comfortable in. I’m a lot more optimistic about this job than I was about Westminster and I’m looking forward to starting tremendously.

So what else has happened in the last year? Well, I changed the hosting provider of this site. I originally bought space from GoDaddy as it was cheap for the first year. However, subsequent years were stupidly expensive so I said “Bugger that” and changed host. My site is now hosted by SGIS Hosting, who are a lot more reasonable and migrated this site to their servers for me. They give you less space and bandwidth than GoDaddy but I have more than enough for my purposes and I have some space which I can mess about with outside of WordPress if I want to. The only major disadvantage is that I now have to keep WordPress up to date manually.

More significantly, my girlfriend and I have moved away from Hertfordshire. This made me sad, I really loved it in Harpenden and I miss the place a lot. However, it was necessary for reasons that will probably become clear later on in this article. We moved into a nice flat in Beckenham. The rent is a lot more but travel costs are considerably less so it more or less evens out. With this new job, travel costs will be lower still.

The last major thing that has happened is that I have become a father! My son was born on the 25th May 2016 at 10.47. The preceding month and the first week of his life were by far the most stressful time of my life! Towards the end of my girlfriend’s pregnancy, there were complications and he ended up being born early and very small. He was taken to the Special Baby Care Unit (SCBU) at our local hospital where he was looked after for about a week. He was in an incubator for about two days with a glucose drip in his hand. After that, the drip came out and he was put into a cot. In the end, all they were doing was feeding him so they decided that it would be best to send him home. The birth was also very hard on my girlfriend; she ended up staying in the hospital for as long as our son.

If there was ever a cause worth donating money to, it is a SCBU. If you’re reading this and are feeling generous, please feel free to have a look at the Princess Royal University Hospital SCBU Fund’s JustGiving page and chuck a few quid their way. If you don’t want to donate to my local one, please look one up closer to you and donate to them instead. They all (not just the PRUH’s, all of them everywhere) do wonderful work under extremely difficult circumstances and they all deserve far more support than they get.

Anyway, our son is the primary reason we have moved to Beckenham. My girlfriend’s family is from around here and her sister lives nearby. My girlfriend wanted that support network close to her for when our baby arrived. I understand that and support it so here we are! On the whole, it’s a good thing as my girlfriend and our son are both getting much better care from the hospitals down here. In addition, I wouldn’t have been able to get the job in New Cross if we still lived in Harpenden so I’d probably still be stuck at Westminster for the foreseeable future.

So in summary, on a professional level, the last year has been pretty mediocre. On a personal level, despite the stresses and heartache, it’s been awesome. Once again, I toast the future! I’m looking forward to it once again!

The Future

I have a new job.

As of the 1st September, I am going to be working for a university in central London as a “Windows Specialist”. If I’m honest, I’m not entirely sure what my day-to-day duties are going to be but I have inferred that it’s going to involve helping to migrate from Novell eDirectory to Active Directory, some SQL Server stuff and Commvault.

I have spent almost eight years in my current job. I am happy in it and I wasn’t looking to move on. However, sometimes you see an opportunity and you just have to grab it. I’m going to be moving onto a network that spans a large part of London. They have multiple campuses. Their IT has separate teams for Windows, Unix, Infrastructure and Desktop. It’s going to be second and third line mostly I think; the amount of interaction that I have with users is going to be less than it is at the moment. No more desktop support! It is, probably literally, an order of magnitude bigger than anything I’ve ever done before and I’m simultaneously excited and completely bricking it.

Perhaps unusually, I asked to extend my notice period. I wanted to work one final summer at the college and get my projects finished and loose ends tied up. They are in the final planning stages now and I’ll be putting them in place in a week’s time. Additionally, I wanted to get some proper handover documentation written too. So far, the document is more than 8000 words long and there’s plenty more to do. It’s a shame I couldn’t have met my successor to hand over to them in person but that’s the way things go sometimes.

The other thing that this extended notice period has done for me is given me a chance to get my head around the idea of leaving where I am and moving on. The difference between moving on this time and the last time is that the last time, I was desperate to go. This time around, I’m upset to be leaving and I’m still a little worried that I’m moving on before I’m ready to go. Don’t get me wrong, I know that I’m capable of doing the job, that’s not my concern. My concern is that I’ve been happy where I am and more than a bit settled and that moving on is going to be an upheaval.

Anyway, the end of term has come and I was one of 16 members of staff leaving this summer. I was mentioned in the principal’s end of year speech and he said some extremely kind words, comparing me to Scotty in Star Trek, saying that I worked in the background, quietly and methodically keeping things going and fixing them when they blew up. He also said I’d be incredibly hard to replace, which is always nice to hear.

Anyway, to the future! I’m looking forward to it.

Building, Deploying and Automatically Configuring a Mac Image using SCCM and Parallels SCCM Agent

I touched briefly on using the Parallels Management Agent to build Macs in my overview article but I thought it might be a good idea to go through the entire process that I use when I have to create an image for a Mac, getting the image deployed and getting the Mac configured once the image is on there. At the moment, it’s not a simple process. It requires the use of several tools and, if you want the process to be completely automated, some Bash scripting as well. The process isn’t as smooth as you would get from solutions like DeployStudio but it works and, in my opinion anyway, it works well enough for you not to have to bother with a separate product for OSD. Parallels are working hard on this part of the product and they tell me that proper task sequencing will be part of V4 of the agent. As much as I’m looking forward to that, it doesn’t change the fact that right now we’re on v3.5 and we have to use the messy process!

First of all, I should say that this is my method of doing it and mine alone. This is not Parallels’ method of doing this and it has not been sanctioned or condoned by them. There are some dangerous elements to it; you follow this procedure at your own risk and I will not be held responsible for damage caused by it if you try this out.

Requirements

You will need the following tools:

  • A Mac running OS X Server. The server needs to be set up as a Profile Manager server, an Open Directory server and, optionally, as a Netboot server. It is also needed on Yosemite for the System Image Utility.
  • A second Mac running the client version of OS X.
  • Both the server and the client need to be running the same version of OS X (Mavericks, Yosemite, whatever) and they need to be patched to the same level. Both Macs need to have either FireWire or Thunderbolt ports.
  • A FireWire or Thunderbolt cable to connect the two Macs together.
  • A SCCM infrastructure with the Parallels SCCM Mac Management Proxy and Netboot server installed.
  • This is optional but I recommend it anyway:  A copy of Xcode or another code editor to create your shell scripts in. I know you could just use TextEdit but I prefer something that has proper syntax highlighting and Xcode is at least free.
  • Patience. Lots of patience. You’ll need it. The process is time-consuming and can be infuriating when you get something wrong.

At the end of this process, you will have an OS X Image which can be deployed to your Macs. The image will automatically name its target, it will download, install and configure the Parallels SCCM agent, join itself to your Active Directory domain, attach itself to a managed wireless network and it will install any additional software that’s not in your base image. The Mac will do this without any user interaction apart from initiating the build process.

Process Overview

The overview of the process is as follows:

  1. Create an OS X profile to join your Mac to your wireless network.
  2. Create a base installation of OS X with the required software and settings.
  3. Create an Automator workflow to deploy the Parallels agent and to do other minor configuration jobs.
  4. Use the System Image Utility to create the image and a workflow to automatically configure the disk layout and computer name.
  5. (Optional) Use the Mac OS X Netboot server to deploy the image to a Mac. This is to make sure that your workflow works and that you’ve got your post-install configuration scripts right before you add the image to your ConfigMgr server. You don’t have to do this but you may find it saves you a lot of time.
  6. Convert the image to a WIM file and add it to your SCCM OSD image library
  7. Advertise the image to your Macs

I’m going to assume that you already have your SCCM infrastructure, Parallels SCCM management proxy, Parallels Netboot server and OS X Server working.

Generate an OS X Profile.

Open a browser and go to the address of your Profile Manager (usually https://{hostname.domain}/profilemanager) and go to the Device Groups section. I prefer to generate a profile for each major setting that I’m pushing down. It makes for a little more work getting it set up but if one of your settings breaks something, it makes it easier to troubleshoot as you can remove a specific setting instead of the whole lot at once.

Your profile manager will look something like this:

[Screenshot: the Profile Manager device groups page]

As you can see, I’ve already set up some profiles but I will walk through the process for creating a profile to join your Mac to a wireless network. First of all, create a new device group by pressing the + button in the middle pane. You will be prompted to give the group a name, do so.

[Screenshot: naming the new device group]

Go to the Settings tab and press the Edit button

[Screenshot: the device group’s Settings tab]

In the General section, change the download type to Manual, put a description in the description field and under the Security section, change the profile removal section to “With Authorisation”. Put a password in the box that appears. Type it in carefully; there is no confirm box.

[Screenshot: the General and Security settings]

If you are using a wireless network which requires certificates, scroll down to the certificates section and copy your certificates into there by dragging and dropping them. If you have an on-site CA, you may as well put the root trust certificate for that in there as well.

[Screenshot: the Certificates section]

Go to the Networks section and put in the settings for your network

[Screenshot: the Networks section]

When you’re done, press the OK button. You’ll go back to the main Profile Manager screen. Make sure you press the Save button.

I would strongly suggest that you explore Profile Manager and create profiles for other settings as well. For example, you could create one to control your Mac’s energy saving settings or to set up options for your users’ desktops.

When you’re back on the profile manager window, press the Download button and copy the resulting .mobileconfig file to a suitable network share.

Go to a PC with the SCCM console and the PMA plugin installed. Open the Assets and Compliance workspace. Go to Compliance Settings then Configuration Items. Optionally, if you haven’t already, create a folder for Mac profiles. Right click on your folder or on Configuration Items, go to Create Parallels Configuration Item then Mac OS X Configuration Profile from File.

[Screenshot: the Create Parallels Configuration Item menu]

Give the profile a name and description, change the profile type to System then press the Browse button and browse to the network share where you copied the .mobileconfig file. Double click on the mobileconfig file then press the OK button. You then need to go to the Baselines section and create a baseline with your configuration item in. Deploy the baseline to an appropriate collection.

Create an image

On the Mac which doesn’t have OS X Server installed, install your software. Create any additional local user accounts that you require. Make those little tweaks and changes that you inevitably have to make. If you want to make changes to the default user profile, follow the instructions on this very fine website to do so.

Once you’ve got your software installed and have got your profile set up the way you want it, you may want to boot your Mac into Target Mode and use your Server to create a snapshot using the System Image Utility or Disk Utility. This is optional but recommended as you will need to do a lot of testing which may end up being destructive if you make a mistake. Making an image now will at least allow you to roll back without having to start from scratch.

Creating an Automator workflow to perform post-image deployment tasks

Now here comes the messy bit. When you deploy your image to your Macs, you will undoubtedly want them to automatically configure themselves without any user interaction. The only way that I have found to do this reliably is pretty awful but unfortunately I’ve found it to be necessary.

First of all, you need to enable the root account. The quickest way to do so is to open a terminal session and type in the following command:

dsenableroot -u {user with admin rights} -p {that user's password} -r {what you want the root password to be}

Log out and log in with the root user.

Go to System Preferences and go to Users and Groups. Change the Automatic Login option to System Administrator and type in the root password when prompted. When you’ve done that, go to the Security and Privacy section and go to General. Turn on the screensaver password option and set the time to Immediately. Check the “Show a Message…” box and set the lock message to something along the lines of “This Mac is being rebuilt, please be patient”. Close System Preferences for now.

You will need to copy a script from your PMA proxy server called InstallAgentUnattended.sh. It is located in your %Programfiles(x86)%\Parallels\PMA\files folder. Copy it to the Documents folder of your Root user.

Open your code editor (Xcode if you like, something else if you don’t) and enter the following script:

#!/bin/sh

#Get computer's current name
CurrentComputerName=$(scutil --get ComputerName)

#Bring up a dialog box with the computer's name in and give the user the option to change it. Time out after 60 secs
ComputerName=$(/usr/bin/osascript <<EOT
tell application "System Events"
activate
set ComputerName to text returned of (display dialog "Please Input New Computer Name" default answer "$CurrentComputerName" with icon 2 giving up after 60)
end tell
EOT)

#Did the user press cancel? If so, exit the script

ExitCode=$?
echo $ExitCode

if [ $ExitCode = 1 ]
then
exit 0
fi

#Compare current computername with one set, change if different

CurrentComputerName=$(scutil --get ComputerName)
CurrentLocalHostName=$(scutil --get LocalHostName)
CurrentHostName=$(scutil --get HostName)

echo "CurrentComputerName = $CurrentComputerName"
echo "CurrentLocalHostName = $CurrentLocalHostName"
echo "CurrentHostName = $CurrentHostName"

if [ "$ComputerName" = "$CurrentComputerName" ]
then
echo "ComputerName Matches"
else
echo "ComputerName Doesn't Match"
scutil --set ComputerName "$ComputerName"
echo "ComputerName Set"
fi

if [ "$ComputerName" = "$CurrentHostName" ]
then
echo "HostName Matches"
else
echo "HostName Doesn't Match"
scutil --set HostName "$ComputerName"
echo "HostName Set"
fi

if [ "$ComputerName" = "$CurrentLocalHostName" ]
then
echo "LocalHostName Matches"
else
echo "LocalHostName Doesn't Match"
scutil --set LocalHostName "$ComputerName"
echo "LocalHostName Set"
fi

#Invoke Screensaver
/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

#Join Domain
dsconfigad -add {FQDN.of.your.AD.domain} -user {User with join privs} -password {password for user} -force

#disable automatic login
defaults delete /Library/Preferences/com.apple.loginwindow.plist autoLoginUser
rm /etc/kcpassword

#install Configuration Manager client
chmod 755 /private/var/root/Documents/InstallAgentUnattended.sh
/private/var/root/Documents/InstallAgentUnattended.sh http://FQDN.of.your.PMA.Server:8761/files/pma_agent.dmg {SCCM User} {Password for SCCM User} {FQDN.of.your.AD.Domain}
echo SCCM Client Installed

#Repair disk permissions
diskutil repairPermissions /
echo Disk Permissions Repaired

#Rename boot volume to host name
diskutil rename "Macintosh HD" "$ComputerName"

#disable root
dsenableroot -d -u {User with admin rights on Mac} -p {That user's password}

#Reboot Mac
shutdown -r +60

Obviously you will need to change this to suit your environment.

As you can see, this has several parts. It calls a bit of AppleScript which prompts the user to enter the machine name; the default value is the Mac’s current hostname and the prompt times out after 60 seconds. The script compares what was entered in the box with the Mac’s current names and changes them if they are different. It then invokes the Mac’s screensaver, joins the Mac to your AD domain and removes the automatic logon and the saved password for the Root user. After that it downloads the PMA client from the PMA Proxy Server and installs it, runs a Repair Permissions on the Mac’s hard disk, renames the Mac’s hard drive to match the name set earlier, disables the Root account and sets the Mac to reboot itself after 60 minutes. The Mac is given an hour before it reboots so that the PMA can download and apply its initial policies.

At this point, you will probably want to test the script to make sure that it works. This is why I suggested taking a snapshot of your Mac beforehand. Even if you do get it right, you still need to roll back your Mac to how it was before you ran the script.

Once the script has been tested, you will need to create an Automator workflow. Open the Automator app and create a new application. Go to the Utilities section and drag Shell Script to the pane on the right hand side.

[Screenshot: the Automator workflow with a shell script action]

At this point, you have a choice: You can either paste your entire script into there and have it all run as a big block of code or you can drag multiple shell script blocks across and break your code up into sections. I would recommend the latter approach; it makes viewing the progress of your script a lot easier and if you make a mistake in your script blocks, it makes it easier to track where the error is. When you’re finished, save the workflow application in the Documents folder. I have uploaded an anonymised version of my workflow: Login Script.

Finally, open System Preferences again and go to the Users and Groups section. Click on System Administrator and go to Login Items. Put the Automator workflow you created in as a login item. When the Mac logs in for the first time after its image is deployed, it will automatically run your workflow.

I’m sure you’re all thinking that I’m completely insane for suggesting that you do this but as I say, this is the only way I’ve found that reliably works. I tried using loginhooks and a login script set with a profile but those were infuriatingly unreliable. I considered editing the sudoers file to allow the workflow to work as Root without having to enter a password but I decided that was a long term security risk not worth taking. I have tried to minimise the risk of having Root log on automatically as much as possible; the desktop is only interactive for around 45-60 seconds before the screensaver kicks in and locks the machine out for those who don’t have the root password. Even for those who do have the root password, the Root account is only active for around 5-10 minutes until the workflow disables it after the Repair Disk Permissions command has finished.

Anyway, once that’s all done reboot the Mac into Target mode and connect it to your Mac running OS X Server.

Use the System Image Utility to create a Netboot image of your Mac with a workflow to deploy it.

There is a surprising lack of documentation on the Internet about the System Image Utility. I suppose that’s because it’s so bare bones and that most people use other solutions such as DeployStudio to deploy their Macs. I eventually found some and this is what I’ve managed to cobble together.

On the Mac running OS X Server, open the Server utility and enter your username and password when prompted. When the OS X Server app finishes loading, go to the Tools menu and click on System Image Utility. This will open another app which will appear in your dock; if you see yourself using this app a lot, you can right click on it and tell it to stay in your dock.

[Screenshot: the System Image Utility]

Anyway, once the System Image Utility loads, click on the Customize button. That will bring up a workflow window similar to Automator’s.

[Screenshot: the System Image Utility workflow window]

The default workflow has two actions in it: Define Image Source and Create Image. Just using these will create a working image but it will not have any kind of automation; the Mac won’t partition its hard drive or name itself automatically. To get this to work, you need to add a few more actions.

There will be a floating window with the possible actions for the System Image Utility open. Find the following three actions and add them to the workflow between the Define Image Source and Create Image actions. Make sure that you add them in the following order:

  1. Partition Disk
  2. Enable Automated Installation
  3. Apply System Configuration Settings

You can now configure the workflow actions themselves.

For the Define Image Source action, change the Source option to the Firewire/Thunderbolt target drive.

For the Partition Disk action, choose the “1 Partition” option and check the “Partition the first disk found” box and, optionally, the “Display confirmation dialog before partitioning” box. Checking the second box will give you a 30 second opportunity to create a custom partition scheme when you start the imaging process on your Mac clients. Choose a suitable name for the boot volume and make sure that the disk format is “Mac OS Extended (Journaled)”

For the Enable Automated Installation action, put the name of the volume that you want the OS to be installed to into the box and check the “Erase before installing” box. Change the main language if you don’t want your Macs to install in English.

The Apply System Configuration Settings action is a little more complicated. This is the section which names your Macs. To do this, you need to provide a properly formatted text file with the Mac’s MAC address and its name. Each field is separated with a tab and there is no header line. Save the file somewhere (I’d suggest in your user’s Documents folder) and put the full path to the file including the file name into the “Apply computer name…” box. There is an option in this action which is also supposed to join your Mac to a directory server but I could never get this to work no matter what I tried so leave that one alone.
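
As an illustration, a two-line file would look something like this, with a single tab between the MAC address and the name (both values here are made up):

00:11:22:aa:bb:01	MACLAB-01
00:11:22:aa:bb:02	MACLAB-02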

The last action is Create Image. Make sure that the Type is NetRestore and check the Include Recovery Partition box. You need to put something into the Installed Volume box but it doesn’t appear to matter what. Put a name for the image into the Image Name and Network Disk boxes and choose a destination to save the image to. I would suggest saving it directly to the /{volume}/Library/Netboot/NetbootSP0 folder as it will appear as a bootable image as soon as the image snapshot has been taken without you having to move or copy it to the correct location.

Once you’ve filled out the form, press the Save button to save your workflow then press Run. The System Image Utility will then generate your image ready for you to test. Do your best to make sure that you get all of this right; if you make any mistakes you will have to correct them and run the image creation workflow again, even if it is just a single setting or something in your script that’s wrong. The other problem with this is that if you add any new Macs to your estate you’ll have to update the text file with the Macs’ names and MAC addresses in and re-create the image. This is why I put the “Name your Mac” section into the script.

Test the image

The next step now is to test your Netboot image. To do so, connect your Client Mac to the same network segment as your Server. Boot it to the desktop and open System Preferences. Go to the Startup Disk pane and you should see the image that you just created as an option

[Screenshot: the Startup Disk preference pane showing the new image]

Click on it and press the Restart button. The Mac will boot into the installation environment and run through its workflow. When it’s finished, it will automatically log on as the Root user and run the login script that you created in a previous step.

Convert the image to a WIM and add it to your OSD Image Library

Once you’re satisfied that the image and the login script runs to your satisfaction, you need to add your image to the ConfigMgr image library. Unfortunately, ConfigMgr doesn’t understand what an NBI is so we need to wrap it up into a WIM file.

To convert the image to a WIM file, first of all copy the NBI file to a suitable location on your PMA Proxy Server. Log onto the PMA Proxy using Remote Desktop and open the ConfigMgr console. Go to the Software Library workspace and Operating Systems then Operating System Images. Right click on Operating System Images and click on “Add Mac OS X Operating System Image”.

[Screenshot: the Add Mac OS X Operating System Image wizard]

Click on the first browse button and go the location where you copied the NBI file to. This must be a local path, not a UNC.

Click on the second browse button and go to the share that you defined when you installed the Netboot agent on your PMA Proxy. This must be a UNC, not a local path. Press the Next button and wait patiently while the NBI image is wrapped up into a WIM file. When the process is finished, the image will be in your Operating System Images library. There is a minor bug here: If you click on a folder underneath the Image library, the image will still be added to the root of the library and not in the folder you selected. There’s nothing stopping you moving it afterwards but this did confuse me a little the first time I came across it. Once the image is added, you should copy it to a distribution point.

Advertise the image to your Macs

Nearly finished!

The final steps are to create a task sequence then deploy the task sequence to a collection. To create the task sequence, open the ConfigMgr console on a PC which has the Parallels console extension installed. Go to the Software Library workspace and Operating Systems. Under there, go to Task Sequences and right click on Task Sequences. Select “Create Task Sequence for Macs” and this will appear:

[Screenshot: the Create Task Sequence for Macs wizard]

Put in a name for the task sequence then press the Browse button. After a small delay, a list of the available OS X images will appear. Choose the one that you want and press the Finish button. The task sequence will then appear in your sequence library but like with the images, it will appear in the root rather than in a specific folder. The only step left is to deploy the task sequence to a collection; the process for this is identical to the one for Windows PCs. I don’t know if it’s necessary but I always deploy the sequence to the Unknown Computers collection as well as the collections that the Macs sit in, just to be sure that new Macs get it as well.

Assuming that you have set up the Netboot server on the PMA Proxy properly, all of the Macs which are in the collection(s) you advertised the image to will have your image as a boot option. Good luck and have fun!

Bootnote

Despite me spending literally weeks writing this almost 4,000 word long blog post when I had the time and inclination to do so, it is worth mentioning again that all of this is going to be obsolete very soon. The next version of the Parallels agent is going to have support for proper task sequencing. My contact within Parallels tells me that they are mimicking Microsoft’s task sequence UI so that you can deploy software and settings during the build process and that there will be a task sequence wizard on the Mac side which will allow you to select a task sequence to run. I’m guessing (hoping!) that it will be in the existing Parallels Application Portal, where you can install optional applications from.