Category Archives: Uncategorized

Dell 13″ Ultrabooks

The first few drafts of my last article touched on this but in the end, I edited it out because it was off topic. Nevertheless, I think the subject is worth a few words.

As I mentioned in that article, not long after I joined my current workplace we standardised on Dell for our mobile hardware. There are plenty of advantages for us in doing so: Dell generally give excellent pricing to the Education sector, they sell direct so you don’t have to go through resellers and, for their Enterprise kit and Business line equipment at least, their support is generally excellent. I have never had problems with Dell Business technical support and, without exception, every Dell or Dell-contracted engineer who has come to work on a server, SAN, laptop or desktop has been excellent. But that’s my experience; I’m sure yours is different.

Although we decided to standardise on Dell, we hadn’t decided which specific line until relatively recently. This means that we have a couple of different models of 13″ Ultrabook knocking around, so I thought I’d write a quick piece comparing them. Specifically, we have the Dell XPS 13 (9560) and the Dell Latitude 7000 series (7380 and 7390). I’m not going to do any kind of benchmarking with them but I’m going to compare the specifications of the two lines, attempt to look at their build quality and say which one I prefer.

Dimensions and Weight

The XPS 13 is shaped like a wedge of cheese; it’s taller at the back than it is at the front. At its thickest point, it is 15mm/0.6″ thick. It is 304mm/12″ wide and 200mm/7.9″ deep. The weight of the device is dependent on the spec that you choose but it starts at 1.2kg/2.7lbs.

The Latitude 7390 is more traditionally shaped; it’s as thick at the front as it is at the back. It is 16mm/0.64″ high, 304mm/12″ wide and 208mm/8.2″ deep. Again, the weight of the device depends on the spec that you buy but it starts at 1.17kg/2.6lbs.

Winner – XPS. Just.

The two systems weigh practically the same but the XPS is slightly smaller. However, it’s only 1mm thinner and 8mm shallower, with the same width, so the difference isn’t really significant.

Screen

Both laptops are available with touch and non-touch options. Both come with 1080P screens as their default option but the XPS can be bought with a 3200×1800 screen.

The XPS 13 has what Dell call an InfinityEdge screen. They boast that they’ve managed to squeeze a 13″ screen into what would otherwise be an 11″ frame. This is undeniably true; the laptop does have very narrow bezels and they are a uniform size on both sides and at the top. The laptop is certainly smaller and sleeker because of that.

The Latitude has the same screen and it also has equally narrow bezels on the sides of the screen, but it has a more standard sized bezel at the top than the XPS does. For this model, Dell claim to have put a 13″ screen into the same sized chassis as a 12″ notebook.

Winner – XPS

The XPS has smaller bezels and has a higher resolution screen available for it.

Webcam and Biometrics

The XPS has its webcam on the bottom edge of the screen. It is a “Standard” webcam. The positioning of it is downright stupid, anyone who you talk to with it ends up looking up your nose. It can’t be used with Windows Hello to unlock the laptop. Windows Hello fingerprint scanners are an optional extra.

Since the Latitude has the thicker bezel at the top of the screen, it has room for the webcam in a more sensible position. Infra-red cameras which are compatible with Windows Hello are also an option with this line of laptop, as are fingerprint scanners.

Winner – Latitude

The webcam is in a better place and it has more biometric options. An easy win for the Latitude here, I think.

Connectivity

The XPS has:

  • An SD card slot
  • Two USB 3.0 Type A ports
  • A Thunderbolt/USB Type C port
  • A headphone port
  • A Wireless card manufactured by Killer

The Latitude has:

  • A headphone port
  • A uSIM port for optional WWAN (mobile broadband)
  • A Micro-SD card slot
  • Two USB 3.0 Type A ports
  • An Ethernet Port
  • When bought with 1.6GHz or above CPUs, a Thunderbolt3/USB C port
  • An HDMI port
  • A Smartcard slot
  • A Wireless card manufactured by Intel

Winner – Latitude

From the point of view of a consumer device, the ports that the XPS has are probably good enough, although if you want to connect to an external monitor or a wired network you need a dongle. The uniform thickness and that extra 5mm that the Latitude has certainly gets you some useful additions. The Latitude also has a slot for an optional WWAN card so you can connect to the internet on the move without having to tether it to a mobile phone. I mention the WiFi card because Killer don’t exactly have a reputation for high quality drivers, which may be a concern. From an Enterprise point of view, the Latitude is the clear winner.

Specifications

The XPS is available with 8th Gen Core i5 and i7 CPUs. It comes with an SSD on an M.2 slot and the SSDs that they will sell with it use the NVMe bus. It can come with up to 16GB of LPDDR3 RAM which is soldered to the laptop’s motherboard.

The Latitude is available with the same 8th Gen Core i5 and i7 CPUs. It also comes with SSDs on an M.2 slot which can be either SATA or NVMe. The maximum amount of RAM that it can take is 16GB of DDR4. However, the RAM on a Latitude is a standard DIMM so it can be expanded later on if you so choose. As mentioned in the connectivity part of the article, the Latitude comes with an extra M.2 slot in which a WWAN card can be fitted.

Dell sell both with SSDs up to 512GB but I would be seriously surprised if you couldn’t expand that later on if you wanted to.

Winner – Latitude

Out of the box, the two come with very similar hardware internally. They both use the same CPU lines. However, the fact that you can upgrade the RAM in the Latitude and fit it with a WWAN card later on if you want to means that it wins this category for me. It’s amazing what an extra 5mm of thickness gets you.

Battery

I can’t really do any real world comparisons of battery life with these laptops for two reasons. The first reason is that all of the XPS laptops that we have are assigned to someone so asking for them back to run some battery benchmarks would result in some funny looks. Secondly, the XPS laptops that we have are a couple of years old while the Latitudes are a lot newer. It would be unfair to compare a two year old battery with a brand new one, not to mention that each new generation of Core CPU generally improves on battery life anyway. So I’m just going to quote Dell’s figures here and make a judgement on that.

The XPS has a 60WHr battery built into it. Dell claim you should be able to work for 22 hours from a full charge.

The Latitude has either a 42WHr or 60WHr battery in it. Dell claim up to 19 hours of working life from a full charge. I would imagine that’s with the 60WHr battery fitted.

Winner – XPS

That LPDDR3 memory probably counts for something – the XPS is claimed to have longer battery life.

Touchpad and Keyboard

Both laptops have pretty similar keyboards; they have the same kind of chiclet keys that manufacturers have been using for the past eight to ten years. They both have backlit keyboards. They’re much of a muchness. I’ve used better keyboards but I’ve also used much worse. They’re both on a par with the scissor style keys you got on Unibody MacBooks. The layout of the keyboards is the same, with the CTRL and FN keys in the correct places. The top row of keys on both laptops double as traditional function keys (F1 etc) and as keys to control the brightness of the screen, keyboard backlight, wireless, volume and media playback.

They do have different touchpads however. The XPS has a touchpad similar to one on a Mac where the entire surface is a button whereas the Latitude has two separate hardware buttons. Both touchpads are recognised by Windows as being Precision Touchpads so they support the Windows multi-touch gestures.

Winner – Draw

There is no clear winner here. The keyboards are near enough identical and the touchpads are a matter of personal preference. I prefer the ones on the XPS ever so slightly but there isn’t enough in it to declare an overall winner. That said, both completely suck compared to the touchpad on a Mac using macOS. Seriously PC manufacturers, Apple got the touchpad and touchpad gestures just perfect with the first generation of unibody MacBooks and Snow Leopard. That was coming on for ten years ago. For God’s sake, just copy that already.

Finishes

Both laptops come with a range of different colours and finishes.

The XPS can come in silver, white or rose gold.

The Latitude can come in an aluminium finish, a carbon fibre finish or a dark grey magnesium one.

Winner – Draw

The XPS looks more like a consumer device while the Latitude looks more like a business device. You wouldn’t be ashamed to get either out at a meeting but the XPS would look better at a LAN party!

Support

Out of the box, the XPS comes with a one-year on-site warranty while the Latitude comes with three years. You can buy up to four years of support on the XPS and up to five on the Latitude.

Winner – Latitude

The Latitude comes with a longer warranty as standard and can be covered for longer as well.

Durability

This last one is harder to quantify as the XPS laptops that we have are older, so it’s harder to say which is the more rugged laptop. However, at least one of our XPS laptops appears to be coming apart; there are visible gaps at its seams. I don’t know if we have a dud or if there is a quality control issue with these things. From hands-on experience though, I would personally say that the Latitude feels like the more solid laptop. We will see.

Winner – Not enough data

We don’t have as many XPS laptops as we do Latitudes and the XPS laptops that we do have are older. My thoughts about the Latitude feeling like a more solid laptop are fairly preliminary so I’m not going to say one way or the other.

Price

I’m not going to quote the exact pricing that I get from Dell for these laptops as it won’t be the same as what’s on their website or the same as what your account manager will give you. That said, generally speaking, whenever I get a quote for an equivalently specified Latitude and XPS with the same warranty length, the Latitude is around 3/4 to 4/5 of the price of the XPS.

Winner – Latitude

The Latitude line is always the cheaper one when I ask Dell for a quote, and the same is true on the Dell website when you configure both with the same warranty.

Conclusion

Well, this “few words” has turned into more than two thousand words! The easy thing to do here would be to count how many wins each line has and declare that line the winner. For the record, that’s the Latitude with five wins to three and two draws. I don’t think it’s quite as clear cut as that, though. My personal preference would undeniably be the Latitude. It’s more expandable. It has better support. It has more ports. Its webcam is in a more sensible place and you don’t need dongles to connect it to a monitor or a wired network. It’s also cheaper. However, that’s what’s important to me. You might have other ideas. But from my point of view, buy a Latitude. If nothing else, you get more laptop for your money.

Dell Business Docking Stations

There are a few posts on this site called “The Grand(ish) Experiment” where I talk about using Dell Venue tablets with their respective docking stations, exploring their potential to replace desktop machines. The idea was that every office desk and classroom would have a docking station, teachers would have their own tablets with the software and files that they needed and that they would be able to use any classroom in the building and not have to book a specific one. It didn’t really pan out; the docks were temperamental, the tablets were either underpowered or too big and heavy, the optional hardware keyboards had issues and the current OS at the time was Windows 8. People didn’t like the hardware that we were trying so there wasn’t really the interest to keep the experiment going and it all got forgotten about, as, sadly, did the article series.

A few years and two jobs later, I’m looking at something similar again, albeit under very different circumstances. I now work in the Central Services department for a Multi Academy Trust of schools. The organisation has nine different sites and while I spend most of my time at one particular one, I do spend at least one day a week at two of the other eight and have to visit the other six periodically. The situation is the same for a lot of my colleagues in Central Services, not just the IT people.

The computers in our main office are pretty old. The most modern one is an HP Compaq machine with a second generation Core i5 CPU, 4GB of RAM and a standard magnetic hard drive so it’s safe to say that the hardware is due to be updated. A large proportion of the staff in our Central Services department are like me in that they have to attend the other sites as well so the majority of them also have laptops, again which are pretty old and are due for replacement.

Bearing that in mind, to replace all of the desktops and issue these people with new laptops which get used but not very heavily seemed like a pretty major waste of money. It seemed to be a much better idea to issue everyone with a new laptop and put a docking station on everyone’s desk. That way, everyone still gets a new, faster machine but the offices become a lot more flexible because wherever the person ends up sitting, they get the resources that they need. We’ve recently standardised on Dell hardware, in particular the Dell Latitude 7390 laptop for our Central staff. With a fast SSD, a reasonable amount of RAM and a quad core CPU, there is no reason why these laptops couldn’t function as desktop replacements. Issuing a laptop to everyone would also end the, um, disputes between the people in our Finance department and the people in our other departments as only a couple of the computers in our satellite offices have the Finance software installed. The idea is that if people have a laptop with the software they need already installed, they can hook up to a docking station anywhere and do what they need. Failing that, even just find a desk somewhere and do their work, even if they can’t dock.

With that in mind, I approached our account manager at Dell and asked her what she would suggest for us. The laptops that we’re using don’t have old fashioned docking ports on the bottom of them so we had to look at their USB docks. Dell suggested three:

First of all, the D3100. This dock is based around a DisplayLink chipset. Because of this, everything on it (video, network, audio) is driven using the USB 3.0 bus on your machine. It can drive up to three displays, one of which can be 4K. It connects to your computer with a USB 3.0 A plug. It won’t charge your laptop.

Next up, the WD15. This is a USB 3.1 Gen 2 dock which can drive up to two screens at up to 1080P. It also has Ethernet and audio ports and you can charge your laptop with it, making it a one-cable solution. Unlike the D3100, this dock acts as a DisplayPort MST hub so the displays that it drives are driven from your laptop’s own GPU or APU, rather than from a chip connected to your USB port. This should improve video performance, especially if your laptop has a discrete GPU. It is available with two sizes of power adapter (120W and 180W), the bigger of which is required if you have one of Dell’s larger laptops and want to charge it from the dock.

Lastly, the TB16. This is a Thunderbolt 3 dock, again connected by a USB C connector. Again, it functions as a DisplayPort MST hub but unlike the WD15, it can drive up to three displays at 2560×1600 at 60Hz, up to two 4K displays at 60Hz or one 5K display at 60Hz. It also has another Thunderbolt 3 port as a pass-through and the usual Ethernet, audio and additional USB ports. This dock can also come with one of two power adapters (180W and 240W) and again, the bigger PSU is required if you have one of Dell’s larger laptops and want to charge it from the dock.

I have used two of these docks, the D3100 and the TB16. I’ve not used the WD15 so I will say up front that anything I say about it here is conjecture based on its spec and appearance.

D3100

So first of all, the D3100. As I say, it’s based around a DisplayLink chipset. It has a full sized DisplayPort for your 4K display, two HDMI ports, two USB 2.0 ports on the back and three USB 3.0 ports on the front. Along with that, there is a headphone port on the front and a Line-Out port on the back. The dock is nicely laid out with everything in the place you’d expect it to be. Along with that, it features a PXE boot ROM so you can build workstations from it (rarer than you’d think on a USB networking device) and it supports WOL.

All of the ports are run from your USB 3.0 port, including the displays. That means that the Ethernet port, any other USB peripherals that you connect to it and, in theory, a 4K display and a pair of 1080P displays and therefore 12.5 million pixels being refreshed 60 times a second, all have to share the 5Gbps of bandwidth that the port provides. Considering that an uncompressed 4K stream at 60Hz would use almost 9Gbps of bandwidth, I was sceptical that this was going to work very well and, well, so it turned out.

At the time, I was running a mid 2014 MacBook Pro with Windows 10 and a Core i5 4278U CPU. When I first started using it, I was using a pair of 20″ monitors running at a 1600×900 resolution. With these monitors, it worked well. However, the size and resolution of those two monitors was too low for me and I asked for some bigger monitors. I moved to a pair of 1080P monitors and that’s when I started having performance issues. As soon as I started using those monitors, the displays started glitching, the refresh rates were variable and using it was just annoying. CPU usage was all over the place with no obvious culprit. The Mac that I was using at the time had a lot of external ports (two Thunderbolt 2 ports and an HDMI port) so I connected the monitors directly to the laptop to see if the performance issues would go away. They did, so that was how I used the laptop until it came time to replace it.

I don’t know if the laptop I was using was underpowered for the compression tricks that DisplayLink must have to use to drive more than 4 million pixels over 5Gbps, if I had a bad dock or if there is an inherent problem with the DisplayLink chipset. As I say, it worked fine with two lower resolution monitors so I expect that the Mac was too old to run this properly. I can quite happily recommend the dock from that perspective, i.e. if you’re running a single monitor or two lower resolution displays from it on reasonably modern hardware, but for our purposes we decided that it was inadequate.

WD15 and TB16

So, instead, the two USB C docks running as DisplayPort MST hubs. When I originally looked at the spec of these devices, I thought it was pretty cut and dried. The TB16 wasn’t significantly more expensive. It’s Thunderbolt so it’s likely to be a faster performer. So we ordered some. This is what we found.

First of all, considering what this thing is, it’s bloody huge! It’s only half an inch smaller in width and length than the original Mac Mini and significantly larger than an Intel NUC. The Thunderbolt cable is about half a metre long and is built into the docking station.

It has four monitor outputs: A full-sized DisplayPort, a Mini DisplayPort, an HDMI port and a VGA port. It has a pair of USB 2 ports, a USB 3 Type A port, a Thunderbolt 3/USB C port, a Gigabit Ethernet port, an audio line-out port and the power input on the back of the device and two further USB 3 Type A ports and a combo headphone/microphone socket on the front. The aforementioned captive Thunderbolt cable is on the left hand side of the device when you’re looking at it from the front.

This is where things start to go a little bit wrong for this thing. The port layout is fine and generous but all of the Dell laptops that I’ve seen (various XPSes, Latitudes and a Precision) have their Thunderbolt 3 ports on the left hand side of the device as well. This, along with the relative shortness of the docking cable and the size of the USB plug into the laptop (about 1.5″!), makes positioning the dock very awkward. Ideally, with a docking station, you want it stuffed out of the way somewhere at the back of your desk but the length of the cable and its position on the left makes that difficult to achieve. I tried various positions to find what worked best. Most people seem to prefer putting the dock on the left hand side of their desk with the front ports facing forward (funny that!) and having the Thunderbolt cable loop round. The problem that I found with that was that the laptop has to be within 25cm of the dock because the bend radius of the Thunderbolt cable is quite large; that makes the setup take up a lot of space on the desk and makes it hard to access the ports on the front of the dock. The second position that I tried was to put my laptop in the middle of my desk, underneath my monitors, put the dock on the left side of my desk and have the left side of the dock point right towards the laptop. I didn’t like this solution very much either as it meant that the back ports were facing the front which was messy. I did try turning the dock upside-down so that the front ports were on the front but this just made the dock slide all over the desk and it meant I couldn’t get at the power button on the top of the dock.

Eventually, I found the best way that I could set up the dock for me was to rest the dock on top of one of my speakers with the Thunderbolt cable pointing down like a tail. The ports on the back point to the left, the ports on the front point right and I can position the dock where it’s reasonably accessible. It’s nowhere near ideal but I found it was the best way for me.

Awkward cable positioning aside, how good is this thing otherwise? Well, let’s see. The thing I found most disappointing about this dock is that the audio and Ethernet ports are USB devices. Considering that Thunderbolt is essentially an extension of a computer’s PCI Express bus, it seems a bit, well, cheap to saddle this thing with a USB NIC. A PCI Express one would be better as it would take up fewer system resources and it would be able to share up to 40Gbps of bandwidth with the host system, rather than the 5 or 10Gbps that shoving it on the USB bus restricts it to. Yes, that 5 or 10Gbps for the Ethernet port by itself is fine but as the D3100 proves, contention starts to become an issue as you add more USB peripherals to a system.

Moaning about that aside, when you first open the box for one of these docks, there is a big piece of paper that tells you in no uncertain terms to make sure that you go to the Dell website and download the latest drivers and firmware that are available for this dock and, if you’re using a Dell laptop, to make sure that you’re using the latest BIOS for it.

Do NOT ignore any of these instructions

All of the docks that we received from Dell had pretty old firmware on them and when using the docks with the OOB firmware, they were a nightmare. They constantly disconnected from their host and when they were connected, they were (somehow) laggy and made the laptop CPU usage spike. Updating the firmware on the dock and the laptop resolved these issues immediately. With that done, everything behaved exactly how you would expect it to. Docking and undocking is simple. You don’t have to jump through any hoops in Windows, just pull the cable out and continue working. When you go back to your desk, you put the cable back in and away you go. All of the laptops that I’ve tried with these docks (Dell Latitude 7380 and 7390, Dell XPS 13 and 15, Dell Precision 5530) work perfectly and support USB charging.

I’ve managed to PXE boot and build laptops with these docks attached and this includes laptops which don’t have built-in Ethernet ports. With the latest firmwares, they do exactly what they’re supposed to do and that in itself is high praise.

So now we come back to the WD15. As I say, I’ve not used the WD15 dock so this is conjecture. However, I’m going to assume that it works as well as the TB16 does. In that case, considering that internally the WD15 and TB16 both have very similar hardware, I’m actually struggling to justify the extra expense for the TB16. They both have the same audio and Ethernet connectivity, both driven from the machine’s USB 3.1 bus. They both act as DisplayPort MST hubs so the extra monitors are driven from the laptop’s GPU. The only advantage that the TB16 gives you is that you can drive more monitors from it and those that you can drive can also have higher resolutions. That’s great but in a general office environment, it isn’t actually that big of an advantage. Very few people in our organisation have a monitor with a resolution higher than 1080p and no-one has more than two monitors so most people get precisely zero benefit if they use a TB16 instead of a WD15. If anything, the WD15 might be the better choice under some circumstances because it has a longer connection cable, albeit still a daft captive one on the left side of the dock. So I guess I’m saying, unless you see a need to drive more than two monitors or monitors with higher resolutions than 1080P, don’t bother with the Thunderbolt dock and get the USB one instead. I think in future, that’s what I’m going to suggest.

Conclusions

It’s still too early to draw any conclusions but the feedback I’ve been getting from staff about the laptop/docking stations has so far been positive. They’re happy that they can just rock up to a desk and be using their own machines. I’m also using a laptop and dock and I couldn’t be happier with the arrangement. I’ll try to revisit this article in another three or four months and see if there’s anything interesting to say.

Basic User Editing Script

Let’s start with a little history, it will hopefully put this script into a bit of context. When I started in my job, one of my first large projects was a change to our Office 365 tenant. When I started there, it was being managed by a system called OpenHive. The vendor that looked after OpenHive was Capita so anyone who has the misfortune of having to work with their services will have an inkling as to why we wanted to move away from them. OpenHive was an Active Directory domain hosted by Capita which used ADFS servers at Capita to authenticate people. This meant that we had to maintain two user databases and people had to remember at least two passwords, one for the local domain and one for their email.

We ended up giving Capita notice that we no longer wished to use their service. We evicted them from our Office 365 tenant, de-federated it from their ADFS and moved management of it in-house. We also installed Azure AD Connect to synchronise users and passwords with Office 365 so people didn’t have to remember two passwords. Existing users were matched using SMTP matching, new users were synced across. One thing I didn’t realise was that the existing user accounts in Azure AD needed the ImmutableID attribute cleared before SMTP matching would work, but I found that one out eventually.
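
For anyone hitting the same thing, the ImmutableID can be cleared with the MSOnline PowerShell module. This is only a rough sketch of the idea rather than the exact commands I ran, and the UPN below is a made-up example:

# Assumes the MSOnline module is installed and you are connecting with a Global Admin account
Import-Module MSOnline
Connect-MsolService

# See whether the cloud account still has an ImmutableID left over from the old setup
Get-MsolUser -UserPrincipalName "joe.bloggs@example.org" | Select-Object UserPrincipalName, ImmutableId

# Clear it so that SMTP matching can pair the cloud account with the on-premises one
Set-MsolUser -UserPrincipalName "joe.bloggs@example.org" -ImmutableId "$null"

Once that’s cleared, the next Azure AD Connect sync cycle should match the account on its primary SMTP address rather than trying to hard match on the old ID.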

One problem that we had was the quality and consistency of the data that was being sent over to Azure AD, which was making our address book look messy to say the least. Another, more significant, problem was with the UPN suffix of the users: our domain uses a non-routable suffix (.internal in this case), so whenever a user was synced to Office 365 they were created with a username on the tenant’s onmicrosoft.com address instead of our default domain name. This was a nuisance.

The system that we use to manage our users is RM Community Connect 4 (CC4). To put it politely, CC4 is a bit shit. That aside, CC4 is basically an interface on top of Active Directory; in theory you’re supposed to create and edit users in there. It creates the AD account, the user area, a roaming profile and other things. However, the AD attributes that CC4 is capable of editing are very limited and one of the things that it can’t change is the user’s UPN.

Admittedly, all of this can be changed relatively easily using the ADUC MMC that comes with the Remote Server Administration Tools but while this would solve the UPN problem, it would still be hard to enforce a consistent pattern in the data being sent to Azure. I therefore decided we needed a tool to help us with this.
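
For what it’s worth, the UPN side of it can also be scripted with the Active Directory module rather than clicking through ADUC. A rough sketch of the general approach, using placeholder domain names rather than our real ones:

# Assumes the RSAT Active Directory module; "ad.internal" and "example.org" are placeholders
Import-Module ActiveDirectory

# Add the routable suffix to the forest so it becomes available for users
Set-ADForest -Identity "ad.internal" -UPNSuffixes @{Add="example.org"}

# Swap the non-routable suffix for the routable one on every user under a given OU
Get-ADUser -SearchBase "OU=Staff,DC=ad,DC=internal" -Filter {UserPrincipalName -like "*@ad.internal"} |
    ForEach-Object {
        $NewUPN = $_.UserPrincipalName -replace "@ad\.internal$", "@example.org"
        Set-ADUser -Identity $_ -UserPrincipalName $NewUPN
    }

That sorts the suffix problem in bulk, but it doesn’t do anything about keeping the rest of the data consistent, hence the tool below.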

I’m no programmer; it’s been a long time since I did any kind of programming in a serious manner and that was with Turbo Pascal and Delphi. However, I found out that PowerShell has quite a strong forms library so I decided to give it a go using that. This is what I came up with:

It’s nothing fancy but I’m quite pleased nevertheless. Most of the fields are entered manually but the UPN suffix and the School Worked At fields are dropdown menus to make sure that consistent data is entered. The bit that I really like is that when you choose a school from the dropdown menu, the rest of the School fields are automatically populated. The “O365 licence” box populates the ExtensionAttribute15 attribute inside the user’s account; I’m using this for another script which licenses users inside Office 365. I’ll post that one another time.
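
To give you an idea of the approach without pasting all 400 lines, here’s a heavily stripped-down sketch of the dropdown and auto-populate idea using the Windows Forms classes. The school names and addresses are invented for the example:

Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing

$Form = New-Object System.Windows.Forms.Form
$Form.Text = "New User"

# Dropdown so only known schools can be picked
$SchoolBox = New-Object System.Windows.Forms.ComboBox
$SchoolBox.DropDownStyle = "DropDownList"
[void]$SchoolBox.Items.AddRange(@("Example Primary", "Example Secondary"))
$SchoolBox.Location = New-Object System.Drawing.Point(10, 10)

# Read-only field that gets filled in automatically
$OfficeBox = New-Object System.Windows.Forms.TextBox
$OfficeBox.ReadOnly = $true
$OfficeBox.Width = 250
$OfficeBox.Location = New-Object System.Drawing.Point(10, 40)

# When a school is chosen, fill in the rest of the school details
$SchoolBox.Add_SelectedIndexChanged({
    switch ($SchoolBox.SelectedItem) {
        "Example Primary"   { $OfficeBox.Text = "Example Primary, 1 High Street" }
        "Example Secondary" { $OfficeBox.Text = "Example Secondary, 2 Station Road" }
    }
})

$Form.Controls.Add($SchoolBox)
$Form.Controls.Add($OfficeBox)
[void]$Form.ShowDialog()

The full script just does a lot more of this and then writes the collected values onto the AD account, including the licence flag into ExtensionAttribute15 as mentioned above.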

The script is almost 400 lines long so I’m not going to post it into the body of this article. Instead, I’ll attach a zip file for you to download.

I don’t know how useful people will find this but I thought I’d put it up anyway in the hope that someone might like it. This has been tested on Windows 7, 8.1, 10, Server 2012 and 2016. It seems to work with PowerShell 2.0 and newer. You will need the Active Directory PowerShell module that comes with RSAT for this to work. Do what you like with it, reuse it, repost it, whatever. It’d be nice if you gave me a little credit if you do but other than that, I’m not too fussed. The usual “I’m not responsible if this hoses your system” disclaimer applies.
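
If you’re not sure whether a machine meets those requirements, a quick check before running the script would be something along these lines (just a suggestion, not part of the script itself):

# Show the PowerShell version and confirm the RSAT Active Directory module is present
$PSVersionTable.PSVersion
Get-Module -ListAvailable ActiveDirectory
Import-Module ActiveDirectory -ErrorAction Stop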

Download the script from here

Almost two years in…

This is bad. I’m paying money each year to host this blog and for the domain name. OK, I’m not paying much but even so, I’m basically ignoring my blog. I said two years ago that this couldn’t stand yet here we are. I must do better.

So what’s been happening in my job?

Well, I’m happy in my work again. They’re keeping me busy. I’ve done a lot since I’ve started, too much to list here but here are some highlights:

  • I’ve implemented two new VMware farms
  • I’ve commissioned two new SANs
  • I’ve helped commission two new very fast Internet connections
  • I’ve installed two new firewalls (Smoothwall)
  • I’ve migrated us away from an Office 365 solution (badly) managed by Capita Openhive to one that’s managed in-house
  • I’ve designed a new Active Directory domain for my workplace and I’m in the process of implementing it
  • I’ve deployed Configuration Manager to replace RM CC4
  • I’ve redesigned our Wireless authentication system
  • I’ve helped install and configure a ton of new HP switches in three of our schools
  • I’ve been swearing an awful lot at some of the design decisions made by my predecessors and the support companies the MAT have employed and have been slowly but surely correcting them

So yes, they’ve certainly been keeping me busy. I plan to post a few articles about some of this and put up some of the PowerShell scripts I’ve written to do certain things. Like I say, I must do better.

Almost two weeks in…

Well, I’m almost two weeks into my new job. It feels good to be back in a school again. It feels even better to be back in a job with a wider range of responsibilities. It feels brilliant to be in a job where I feel like I actually have something to do. 

My new boss seems like a good guy so far. He’s very happy to listen to my thoughts and ideas and he is encouraging me to look at the way things are and to suggest improvements. He is quite new in the position too, he’s only been there for about three months. I suspect that if I’d been invited for interview in the first round, I’d probably have started at around the same time as him. 

There is so much to do. With all due respect to the people looking after the network there before me, I think there have been several questionable design choices. Some of the security policies are downright scary. 

The first thing that needs to be done is to install the latest version of Smoothwall. Unfortunately, the virtual farms that the current instances are stored on are too old to install the latest version. This means that either we have to install some Smoothwall appliances or we need to update the virtual farms. I’m in the process of getting pricing for both options. 

Other things I’ve been doing are:

  • Enabling deduplication on one of their shared network drives. More than 350GB of savings on 1.2TB of data! (There’s a quick PowerShell sketch of this just after the list.)
  • Attempting to wrap my head around the instance of Veeam they have backing up their virtual farms. 
  • Installing Lansweeper to get an idea of what hardware and software we have in the place. 
  • Installing a KMS server. Seriously, more than 1200 machines using MAK keys is nuts!
  • Installing PasswordState, a locally installed password management system similar to LastPass and Dashlane. 
  • Installing some RADIUS servers to handle wireless authentication. 
  • Planning to deploy WSUS to manage updates on servers. 
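
For the deduplication item above, the whole thing boils down to a few lines of PowerShell. A rough sketch, assuming the Data Deduplication feature is already installed on the file server and using E: as a stand-in for the real volume:

# Enable dedup on the volume that holds the shared drive
Enable-DedupVolume -Volume "E:"

# Kick off an optimisation job now rather than waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Check the savings once the job has finished
Get-DedupStatus -Volume "E:" | Select-Object Volume, SavedSpace, OptimizedFilesCount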

It may sound hyperbolic but I feel I’ve done more in two weeks there than I did in an entire year at Westminster. I hope it continues like this. I’m sure it will. 

By the way, as and when I get the time, I’m working on a new article about the Parallels Mac Management product for ConfigMgr. There have been some pretty big updates for it in the last year and one of my contacts at Parallels very kindly let me download it to have a look. Suffice to say, I’m very impressed with what I see there. 

OMG! It’s been a year!

I have really been neglecting this blog. Considering that I’m paying to have the damn thing hosted, this can’t stand.

So what has happened in the last year? Quite a lot really.

First of all, the new job mentioned in the last post. If I’m honest, I’m not happy in that job. I’m not going to go too far into specifics. Suffice to say, the University of Westminster is a good place to work; they are very generous towards their staff, I am well paid there, I get a ludicrous amount of leave and the benefits package is very good. However, the job isn’t for me. I feel far too pigeon-holed. I miss the depth of work from my old job, the amount of responsibility that I had. I don’t like being third-line only very much. I don’t like being sat at my desk all day, every day. I don’t like not being able to get my hands on hardware occasionally. Perhaps unbelievably, I even miss the small amount of interaction that I had with users. I’ve been keeping my eyes open for new jobs and I saw a new one about two months ago. In fact, I even applied twice; the first time around I didn’t get through to the interview stage. It seems that they didn’t recruit in the first run so they re-advertised. I got some feedback from the man who is in charge of recruitment and I applied again. The second time, I got invited to interview.

I went to the interview. I thought that I had messed it up entirely. The first part of the interview was the technical part. I was expecting a test along the lines of almost every other technical test I’ve taken, i.e. “What is a FSMO Role?”, “Why can’t this printer print?”, “Who do you prioritise, the headmaster or a classroom that can’t work?”, you know the sort of thing. Instead, they gave me a series of scenarios and twenty minutes to put some thoughts down on them. While obviously they were interested in my technical skill set, they were more interested in how I approach problems and how my thought processes worked. When they were analysing my answers, there was one question that they asked which I really made a pig’s ear of and which I couldn’t answer. A very awkward silence ensued while I desperately tried to understand what they wanted from me but in the end, they put me out of my misery. Truth be told, I was almost ready to walk away at that point.

The panel interview came after with the ICT Manager and the Head of Performance and Recruitment. That went a little better although I did give a very arrogant answer to a question: They asked if I thought I could do the job and I said that I wouldn’t be sitting there if I didn’t. I cringed almost immediately. Argh.

So anyway, I couldn’t have done too badly as I got the job. On 1st August 2016 I shall be starting work for the Haberdashers’ Aske’s multi-academy trust in New Cross, London. My job title will be IT Systems Administrator. It’s closer to home, it’s more money, there is going to be a lot more variety and responsibility and it’s in an environment that I think I’ll be much more comfortable in. I’m a lot more optimistic about this job than I was about Westminster and I’m looking forward to starting tremendously.

So what else has happened in the last year? Well, I changed the hosting provider of this site. I originally bought space from GoDaddy as it was cheap for the first year. However, subsequent years were stupidly expensive so I said “Bugger that” and changed host. My site is now hosted by SGIS Hosting who are a lot more reasonable and migrated this site to their servers for me. They give you less space and bandwidth than GoDaddy but I have more than enough for my purposes and I have some space which I can mess about with outside of WordPress if I want to. The only major disadvantage is that I now have to keep WordPress up to date manually.

More significantly, my girlfriend and I have moved away from Hertfordshire. This made me sad, I really loved it in Harpenden and I miss the place a lot. However, it was necessary for reasons that will probably become clear later on in this article. We moved into a nice flat in Beckenham. The rent is a lot more but travel costs are considerably less so it more or less evens out. With this new job, travel costs will be lower still.

The last major thing that has happened is that I have become a father! My son was born on the 25th May 2016 at 10.47. The preceding month and the first week of his life were by far the most stressful time of my life! Towards the end of my girlfriend’s pregnancy, there were complications and he ended up being born early and very small. He was taken to the Special Care Baby Unit (SCBU) at our local hospital where he was looked after for about a week. He was in an incubator for about two days with a glucose drip in his hand. After that, the drip came out and he was put into a cot. In the end, all they were doing was feeding him so they decided that it would be best to send him home. The birth was also very hard on my girlfriend; she ended up staying in the hospital for as long as our son.

If there was ever a cause worth donating money to, it is a SCBU. If you’re reading this and are feeling generous, please feel free to have a look at the Princess Royal University Hospital SCBU Fund’s JustGiving page and chuck a few quid their way. If you don’t want to donate to my local one, please look one up closer to you and donate to them instead. They all (not just the PRUH’s, all of them everywhere) do wonderful work under extremely difficult circumstances and they all deserve far more support than they get.

Anyway, our son is the primary reason we have moved to Beckenham. My girlfriend’s family is from around here and her sister lives nearby. My girlfriend wanted that support network close to her for when our baby arrived. I understand that and support it so here we are! On the whole, it’s a good thing as my girlfriend and our son are both getting much better care from the hospitals down here. In addition, I wouldn’t have been able to get the job in New Cross if we still lived in Harpenden so I’d probably still be stuck as Westminster for the foreseeable future.

So in summary, on a professional level, the last year has been pretty mediocre. On a personal level, despite the stresses and heartache, it’s been awesome. Once again, I toast the future! I’m looking forward to it once again!

The Future

I have a new job.

As of the 1st September, I am going to be working for a university in central London as a “Windows Specialist”. If I’m entirely honest, I’m not entirely sure what my day to day duties are going to be but I have inferred that it’s going to involve helping to migrate from Novell eDirectory to Active Directory, some SQL Server stuff and Commvault.

I have spent almost eight years in my current job. I am happy in it and I wasn’t looking to move on. However, sometimes you see an opportunity and you just have to grab it. I’m going to be moving onto a network that spans a large part of London. They have multiple campuses. Their IT has separate teams for Windows, Unix, Infrastructure and Desktop. It’s going to be second and third line mostly, I think; the amount of interaction that I have with users is going to be less than it is at the moment. No more desktop support! It is, probably literally, an order of magnitude bigger than anything I’ve ever done before and I’m simultaneously excited and completely bricking it.

Perhaps unusually, I asked to extend my notice period. I wanted to work one final summer at the college and get my projects finished and loose ends tied up. They are in the final planning stages now and I’ll be putting them in place in a week’s time. Additionally, I wanted to get some proper handover documentation written too. So far, the document is more than 8000 words long and there’s plenty more to do. It’s a shame I couldn’t have met my successor to hand over to them in person but that’s the way things go sometimes.

The other thing that this extended notice period has done for me is given me a chance to get my head around the idea of leaving where I am and moving on. The difference between moving on this time and the last time is that the last time, I was desperate to go. This time around, I’m upset to be leaving and I’m still a little worried that I’m moving on before I’m ready to go. Don’t get me wrong, I know that I’m capable of doing the job, that’s not my concern. My concern is that I’ve been happy where I am and more than a bit settled and that moving on is going to be an upheaval.

Anyway, the end of term has come and I was one of 16 members of staff leaving this summer. I was mentioned in the principal’s end of year speech and he said some extremely kind words, comparing me to Scotty in Star Trek saying that I worked in the background, quietly and methodically keeping things going and fixing them when they blew up. He also said I’d be incredibly hard to replace which is always nice to hear.

Anyway, to the future! I’m looking forward to it.

Building, Deploying and Automatically Configuring a Mac Image using SCCM and Parallels SCCM Agent

I touched briefly on using the Parallels Management Agent to build Macs in my overview article but I thought it might be a good idea to go through the entire process that I use when I have to create an image for a Mac, getting the image deployed and getting the Mac configured once the image is on there. At the moment, it’s not a simple process. It requires the use of several tools and, if you want the process to be completely automated, some Bash scripting as well. The process isn’t as smooth as you would get from solutions like DeployStudio but it works and, in my opinion anyway, it works well enough for you not to have to bother with a separate product for OSD. Parallels are working hard on this part of the product and they tell me that proper task sequencing will be part of V4 of the agent. As much as I’m looking forward to that, it doesn’t change the fact that right now we’re on v3.5 and we have to use the messy process!

First of all, I should say that this is my method of doing it and mine alone. This is not Parallels’ method of doing this; it has not been sanctioned or condoned by them. There are some dangerous elements to it, you follow this procedure at your own risk and I will not be held responsible for damage caused by it if you try this out.

Requirements

You will need the following tools:

  • A Mac running OS X Server. The server needs to be set up as a Profile Manager server, an Open Directory server and, optionally, as a Netboot server. It is also needed on Yosemite for the System Image Utility.
  • A second Mac running the client version of OS X.
  • Both the server and the client need to be running the same version of OS X (Mavericks, Yosemite, whatever) and they need to be patched to the same level. Both Macs need to have either FireWire or Thunderbolt ports.
  • A FireWire or Thunderbolt cable to connect the two Macs together.
  • A SCCM infrastructure with the Parallels SCCM Mac Management Proxy and Netboot server installed.
  • This is optional but I recommend it anyway: a copy of Xcode or another code editor to create your shell scripts in. I know you could just use TextEdit but I prefer something that has proper syntax highlighting and Xcode is at least free.
  • Patience. Lots of patience. You’ll need it. The process is time consuming and can be infuriating when you get something wrong.

At the end of this process, you will have an OS X Image which can be deployed to your Macs. The image will automatically name its target, it will download, install and configure the Parallels SCCM agent, join itself to your Active Directory domain, attach itself to a managed wireless network and it will install any additional software that’s not in your base image. The Mac will do this without any user interaction apart from initiating the build process.

Process Overview

The overview of the process is as follows:

  1. Create an OS X profile to join your Mac to your wireless network.
  2. Create a base installation of OS X with the required software and settings.
  3. Create an Automator workflow to deploy the Parallels agent and to do other minor configuration jobs.
  4. Use the System Image Utility to create the image and a workflow to automatically configure the disk layout and computer name.
  5. (Optional) Use the Mac OS X Netboot server to deploy the image to a Mac. This is to make sure that your workflow works and that you’ve got your post-install configuration scripts right before you add the image to your ConfigMgr server. You don’t have to do this but you may find it saves you a lot of time.
  6. Convert the image to a WIM file and add it to your SCCM OSD image library
  7. Advertise the image to your Macs

I’m going to assume that you already have your SCCM infrastructure, Parallels SCCM management proxy, Parallels Netboot server and OS X Server working.

Generate an OS X Profile.

Open a browser and go to the address of your Profile Manager (usually https://{hostname.domain}/profilemanager) and go to the Device Groups section. I prefer to generate a profile for each major setting that I’m pushing down. It makes for a little more work getting it set up but if one of your settings breaks something, it makes it easier to troubleshoot as you can remove a specific setting instead of the whole lot at once.

Your profile manager will look something like this:

[Screenshot: the Profile Manager device groups page]

As you can see, I’ve already set up some profiles but I will walk through the process for creating a profile to join your Mac to a wireless network. First of all, create a new device group by pressing the + button in the middle pane. You will be prompted to give the group a name, do so.

[Screenshot: naming the new device group]

Go to the Settings tab and press the Edit button

[Screenshot: the device group’s Settings tab]

In the General section, change the download type to Manual, put a description in the description field and under the Security section, change the profile removal section to “With Authorisation”. Put a password in the box that appears. Type it in carefully, there is no confirm box.

[Screenshot: the General section of the profile settings]

If you are using a wireless network which requires certificates, scroll down to the certificates section and copy your certificates into there by dragging and dropping them. If you have an on-site CA, you may as well put the root trust certificate for that in there as well.

[Screenshot: the Certificates section of the profile]

Go to the Networks section and put in the settings for your network

[Screenshot: the Networks section of the profile]

When you’re done, press the OK button. You’ll go back to the main Profile Manager screen. Make sure you press the Save button.

I would strongly suggest that you explore Profile Manager and create profiles for other settings as well. For example, you could create one to control your Macs’ energy saving settings or to set up options for your users’ desktops.

When you’re back on the profile manager window, press the Download button and copy the resulting .mobileconfig file to a suitable network share.

Go to a PC with the SCCM console and the PMA plugin installed. Open the Assets and Compliance workspace. Go to Compliance Settings then Configuration Items. Optionally, if you haven’t already, create a folder for Mac profiles. Right click on your folder or on Configuration Items, go to Create Parallels Configuration Item then Mac OS X Configuration Profile from File.

[Screenshot: the Create Parallels Configuration Item menu in the SCCM console]

Give the profile a name and description, change the profile type to System then press the Browse button and browse to the network share where you copied the .mobileconfig file. Double click on the mobileconfig file then press the OK button. You then need to go to the Baselines section and create a baseline with your configuration item in. Deploy the baseline to an appropriate collection.

Create an image

On the Mac which doesn’t have OS X Server installed, install your software. Create any additional local user accounts that you require. Make those little tweaks and changes that you inevitably have to make. If you want to make changes to the default user profile, follow the instructions on this very fine website to do so.

Once you’ve got your software installed and have got your profile set up the way you want it, you may want to boot your Mac into Target Mode and use your Server to create a snapshot using the System Image Utility or Disk Utility. This is optional but recommended as you will need to do a lot of testing which may end up being destructive if you make a mistake. Making an image now will at least allow you to roll back without having to start from scratch.

Creating an Automator workflow to perform post-image deployment tasks

Now here comes the messy bit. When you deploy your image to your Macs, you will undoubtedly want them to automatically configure themselves without any user interaction. The only way that I have found to do this reliably is pretty awful but unfortunately I’ve found it to be necessary.

First of all, you need to enable the root account. The quickest way to do so is to open a terminal session and type in the following command:

dsenableroot -u {user with admin rights} -p {that user's password} -r {what you want the root password to be}

Log out and log in with the root user.

Go to System Preferences and go to Users and Groups. Change the Automatic Login option to System Administrator and type in the root password when prompted. When you’ve done that, go to the Security and Privacy section and go to General. Turn on the screensaver password option and set the time to Immediately. Check the “Show a Message…” box and set the lock message to something along the lines of “This Mac is being rebuilt, please be patient”. Close System Preferences for now.

You will need to copy a script from your PMA proxy server called InstallAgentUnattended.sh. It is located in your %Programfiles(x86)%\Parallels\PMA\files folder. Copy it to the Documents folder of your Root user.

Open your code editor (Xcode if you like, something else if you don’t) and enter the following script:

#!/bin/sh

#Get computer's current name
CurrentComputerName=$(scutil --get ComputerName)

#Bring up a dialog box with the computer's current name in it and give the user the option to change it. Time out after 60 secs
ComputerName=$(/usr/bin/osascript <<EOT
tell application "System Events"
activate
set ComputerName to text returned of (display dialog "Please Input New Computer Name" default answer "$CurrentComputerName" with icon 2 giving up after 60)
end tell
EOT
)

#Did the user press cancel? If so, exit the script

ExitCode=$?
echo $ExitCode

if [ $ExitCode = 1 ]
then
exit 0
fi

#Compare current computername with one set, change if different

CurrentComputerName=$(scutil --get ComputerName)
CurrentLocalHostName=$(scutil --get LocalHostName)
CurrentHostName=$(scutil --get HostName)

echo "CurrentComputerName = $CurrentComputerName"
echo "CurrentLocalHostName = $CurrentLocalHostName"
echo "CurrentHostName = $CurrentHostName"

if [ "$ComputerName" = "$CurrentComputerName" ]
then
echo "ComputerName Matches"
else
echo "ComputerName Doesn't Match"
scutil --set ComputerName "$ComputerName"
echo "ComputerName Set"
fi

if [ "$ComputerName" = "$CurrentHostName" ]
then
echo "HostName Matches"
else
echo "HostName Doesn't Match"
scutil --set HostName "$ComputerName"
echo "HostName Set"
fi

if [ "$ComputerName" = "$CurrentLocalHostName" ]
then
echo "LocalHostName Matches"
else
echo "LocalHostName Doesn't Match"
scutil --set LocalHostName "$ComputerName"
echo "LocalHostName Set"
fi

#Invoke Screensaver
/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

#Join Domain
dsconfigad -add {FQDN.of.your.AD.domain} -user {User with join privs} -password {password for user} -force

#disable automatic login
defaults delete /Library/Preferences/com.apple.loginwindow.plist autoLoginUser
rm /etc/kcpassword

#install Configuration Manager client
chmod 755 /private/var/root/Documents/InstallAgentUnattended.sh
/private/var/root/Documents/InstallAgentUnattended.sh http://FQDN.of.your.PMA.Server:8761/files/pma_agent.dmg {SCCM User} {Password for SCCM User} {FQDN.of.your.AD.Domain}
echo SCCM Client Installed

#Repair disk permissions
diskutil repairPermissions /
echo Disk Permissions Repaired

#Rename boot volume to the new computer name
diskutil rename "Macintosh HD" "$ComputerName"

#disable root
dsenableroot -d -u {User with admin rights on Mac} -p {That user's password}

#Reboot Mac
shutdown -r +60

Obviously you will need to change this to suit your environment.

As you can see, this has several parts. It calls a bit of AppleScript which prompts the user to enter the machine name. The default value is the Mac’s current hostname. The prompt times out after 60 seconds. The script gets the current hostname of the machine, compares it to what was entered in the box and changes the Mac’s name if it is different. It then invokes the Mac’s screensaver, joins the Mac to your AD domain, removes the automatic login for the Root user along with the saved Root password, then downloads the PMA client from the PMA Proxy Server and installs it. It runs a Repair Permissions on the Mac’s hard disk, renames the Mac’s boot volume to match the new computer name, disables the Root account and sets the Mac to reboot itself after 60 minutes. The Mac is given an hour before it reboots so that the PMA can download and apply its initial policies.

At this point, you will probably want to test the script to make sure that it works. This is why I suggested taking a snapshot of your Mac beforehand. Even if you do get it right, you still need to roll back your Mac to how it was before you ran the script.

Once the script has been tested, you will need to create an Automator workflow. Open the Automator app and create a new application. Go to the Utilities section and drag Shell Script to the pane on the right hand side.

[Screenshot: the Automator workflow with a Shell Script action added]

At this point, you have a choice: You can either paste your entire script into there and have it all run as a big block of code or you can drag multiple shell script blocks across and break your code up into sections. I would recommend the latter approach; it makes viewing the progress of your script a lot easier and if you make a mistake in your script blocks, it makes it easier to track where the error is. When you’re finished, save the workflow application in the Documents folder. I have uploaded an anonymised version of my workflow: Login Script.

Finally, open System Preferences again and go to the Users and Groups section. Click on System Administrator and go to Login Items. Put the Automator workflow you created in as a login item. When the Mac logs in for the first time after its image is deployed, it will automatically run your workflow.

I’m sure you’re all thinking that I’m completely insane for suggesting that you do this but as I say, this is the only way I’ve found that reliably works. I tried using loginhooks and a login script set with a profile but both were infuriatingly unreliable. I considered editing the sudoers file to allow the workflow to run as Root without having to enter a password but I decided that was a long-term security risk not worth taking. I have tried to minimise the risk of having Root log on automatically as much as possible: the desktop is only interactive for around 45-60 seconds before the screensaver kicks in and locks the machine out for anyone who doesn’t have the root password, and even for those who do, the Root account is only active for around 5-10 minutes until the workflow disables it after the Repair Disk Permissions command has finished.

Anyway, once that’s all done, reboot the Mac into Target Disk Mode and connect it to your Mac running OS X Server.

Use the System Image Utility to create a Netboot image of your Mac with a workflow to deploy it.

There is a surprising lack of documentation on the Internet about the System Image Utility. I suppose that’s because it’s so bare bones and that most people use other solutions such as DeployStudio to deploy their Macs. I eventually managed to find some, and this is what I’ve cobbled together.

On the Mac running OS X Server, open the Server utility and enter your username and password when prompted. When the OS X Server app finishes loading, go to the Tools menu and click on System Image Utility. This will open another app which will appear in your dock; if you see yourself using this app a lot, you can right click on it and tell it to stay in your dock.

[Screenshot: the System Image Utility]

Anyway, once the System Image Utility loads click on the Customize button. That will bring up a workflow window similar to Automator’s.

[Screenshot: the System Image Utility workflow window]

The default workflow has two actions in it: Define Image Source and Create Image. Just using these will create a working image but it will not have any kind of automation; the Mac won’t partition its hard drive or name itself automatically. To get this to work, you need to add a few more actions.

A floating window listing the available actions for the System Image Utility will be open. Find the following three actions and add them to the workflow between the Define Image Source and Create Image actions. Make sure that you add them in the following order:

  1. Partition Disk
  2. Enable Automated Installation
  3. Apply System Configuration Settings

You can now configure the workflow actions themselves.

For the Define Image Source action, change the Source option to the Firewire/Thunderbolt target drive.

For the Partition Disk action, choose the “1 Partition” option and check the “Partition the first disk found” box and, optionally, the “Display confirmation dialog before partitioning” box. Checking the second box will give you a 30-second opportunity to create a custom partition scheme when you start the imaging process on your Mac clients. Choose a suitable name for the boot volume and make sure that the disk format is “Mac OS Extended (Journaled)”.

For the Enable Automated Installation action, put the name of the volume that you want the OS to be installed to into the box and check the “Erase before installing” box. Change the main language if you don’t want your Macs to install in English.

The Apply System Configuration Settings action is a little more complicated. This is the section which names your Macs. To do this, you need to provide a properly formatted text file with the Mac’s MAC address and its name. Each field is separated with a tab and there is no header line. Save the file somewhere (I’d suggest in your user’s Documents folder) and put the full path to the file including the file name into the “Apply computer name…” box. There is an option in this action which is also supposed to join your Mac to a directory server but I could never get this to work no matter what I tried so leave that one alone.
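To illustrate the format, a file covering two Macs would look something like this, with a single tab between the MAC address and the name on each line (both the addresses and the names here are made up):

00:11:22:33:44:55	MAC-LAB-01
00:11:22:33:44:66	MAC-LAB-02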

The last action is Create Image. Make sure that the Type is NetRestore and check the Include Recovery Partition box. You need to put something into the Installed Volume box but it doesn’t appear to matter what. Put a name for the image into the Image Name and Network Disk boxes and choose a destination to save the image to. I would suggest saving it directly to the /{volume}/Library/NetBoot/NetBootSP0 folder; it will then appear as a bootable image as soon as the image snapshot has been taken, without you having to move or copy it to the correct location afterwards.

Once you’ve filled out the form, press the Save button to save your workflow then press Run. The System Image Utility will then generate your image ready for you to test. Do your best to get all of this right; if you make any mistakes, even if it’s just a single setting or something in your script, you will have to correct them and run the image creation workflow again. The other problem with this is that if you add any new Macs to your estate, you’ll have to add their names and MAC addresses to the text file and re-create the image again. This is why I put the “Name your Mac” section into the script.

Test the image

The next step is to test your Netboot image. To do so, connect your Client Mac to the same network segment as your Server. Boot it to the desktop and open System Preferences. Go to the Startup Disk pane and you should see the image that you just created as an option.

[Screenshot: the Startup Disk pane]

Click on it and press the Restart button. The Mac will boot into the installation environment and run through its workflow. When it’s finished, it will automatically log on as the Root user and run the login script that you created in a previous step.

Convert the image to a WIM and add it to your OSD Image Library

Once you’re satisfied that the image and the login script run correctly, you need to add your image to the ConfigMgr image library. Unfortunately, ConfigMgr doesn’t understand what an NBI is, so we need to wrap it up into a WIM file.

To convert the image to a WIM file, first of all copy the NBI file to a suitable location on your PMA Proxy Server. Log onto the PMA Proxy using Remote Desktop and open the ConfigMgr console. Go to the Software Library workspace, then Operating Systems and Operating System Images. Right click on Operating System Images and click on “Add Mac OS X Operating System Image”.

[Screenshot: the Add Mac OS X Operating System Image wizard]

Click on the first browse button and go to the location where you copied the NBI file to. This must be a local path, not a UNC.

Click on the second browse button and go to the share that you defined when you installed the Netboot agent on your PMA Proxy. This must be a UNC, not a local path. Press the Next button and wait patiently while the NBI image is wrapped up into a WIM file. When the process is finished, the image will be in your Operating System Images library. There is a minor bug here: If you click on a folder underneath the Image library, the image will still be added to the root of the library and not in the folder you selected. There’s nothing stopping you moving it afterwards but this did confuse me a little the first time I came across it. Once the image is added, you should copy it to a distribution point.

Advertise the image to your Macs

Nearly finished!

The final steps are to create a task sequence and then deploy it to a collection. To create the task sequence, open the ConfigMgr console on a PC which has the Parallels console extension installed. Go to the Software Library workspace, then Operating Systems and Task Sequences. Right click on Task Sequences, select “Create Task Sequence for Macs” and this will appear:

[Screenshot: the Create Task Sequence for Macs wizard]

Put in a name for the task sequence then press the Browse button. After a small delay, a list of the available OS X images will appear. Choose the one that you want and press the Finish button. The task sequence will then appear in your task sequence library but, as with the images, it will appear in the root rather than in a specific folder. The only step left is to deploy the task sequence to a collection; the process for this is identical to the one for Windows PCs. I don’t know if it’s necessary but I always deploy the sequence to the Unknown Computers collection as well as the collections that the Macs sit in, just to be sure that new Macs get it too.

Assuming that you have set up the Netboot server on the PMA Proxy properly, all of the Macs which are in the collection(s) you advertised the image to will have your image as a boot option. Good luck and have fun!

Bootnote

Despite me spending literally weeks writing this almost 4,000 word long blog post when I had the time and inclination to do so, it is worth mentioning again that all of this is going to be obsolete very soon. The next version of the Parallels agent is going to have support for proper task sequencing in it. My contact within Parallels tells me that they are mimicking Microsoft’s task sequence UI so that you can deploy software and settings during the build process and that there will be a task sequence wizard on the Mac side which will allow you to select a task sequence to run. I’m guessing (hoping!) that this will be in the existing Parallels Application Portal, where you can already install optional applications.

Kindness of Strangers

So, I was out cycling this evening. I decided to take my bike up to Someries Castle because I’ve driven past the brown sign pointing at it on my way to work every day for the last two years and I was curious to see what, exactly, was there. The answer is not very much, but I digress. The castle is on some land next to a farm and the track that approaches it is very rough. I picked up a puncture there. It was a big one and I couldn’t get enough air into the tyre with my hand pump to get myself home. Of course, I stupidly didn’t have any spare tubes or a puncture repair kit on me, so I faced a five mile walk on Cycle Route 6 to get home.

Just under two miles into my walk, another cyclist passed me. He asked me if I was OK and I asked him if he had a puncture repair kit on him. He said he didn’t, but his house was just around the corner; he had one there and I was welcome to repair my bike at his home. When I got to his garage, he offered me a spare tube and refused payment for it. He lent me some tyre levers and a pump and we had a brief chat about the area, the cycle track between Luton and Harpenden and how we use our bikes.

I thanked him profusely when I finished fixing my bike but I’d like to do so again publicly so to the very nice man who helped me when I needed it: THANK YOU.

The lessons that I’m going to take from this are as follows:

  1. Carry a puncture repair kit or spare tubes with you. Some CO2 canisters are a good idea too. Walking miles home pushing a bike is no fun.
  2. Help people who need it. I intend to carry this man’s kindness forwards; if I ever come across a fellow cyclist in distress I will help them in the same way he helped me.

I’m not going to be trite and say that this restored my faith in humanity or something cheesy like that but it was good to see that there are some decent people out there who will help you for the sake of helping you.

Me

Over the last few years, I have had… issues… with my weight. I have never been morbidly obese but I have been bigger and heavier than I’d like to be. A few years ago, I lost a substantial amount of weight for reasons that I eventually put down to stress; I was in a job that I disliked intensely and I was pretty unhappy on a personal level too. I did come out of the other side of it and had the unexpected benefit of losing about eight inches from my waistline and about 20-25KG of weight.

Since then, I’ve gained, lost and gained weight again. I’m nowhere near as big as I was at my heaviest but I’m still on the wrong side of 100KG and some of my clothes are starting to get uncomfortable. I need to do something about this. So, I am going to start tracking what I’m eating. I’m going to cut the crap out of my diet and start riding my bike on a regular basis. I might even start making my eight-ish mile journey to work on my bike instead of driving it. 16 miles a day? A tall order at the moment, but if I can get my fitness up, I’ll see the benefits.

Anyway, I’m going to start posting my progress on here in the vague hope that making it public will spur me on and keep me on the straight and narrow. Wish me luck!