Tag Archives: Work

SCOM – SQL Management Pack Configuration

SCOM is a bastard of a product, isn’t it? It’s even more so when you’re trying to monitor a SQL instance or two. It’s also quite amusing that Chrome doesn’t recognise SCOM as a word in its dictionary and that it suggests SCAM as a possible word 🙂

My major project at work for the past few months has been SCOM. I am monitoring about 300 Windows VMs, about a third of which have SQL database instances on them. I’ve kept with using the LocalSystem account as the SCOM action account and for the majority of the time, that’s enough. However, there have been a few times where it hasn’t been enough. It’s always a permissions issue: the LocalSystem account doesn’t have access to one or more of the databases, so the discovery and monitoring scripts can’t run and you get a myriad of alerts.

When it comes to adding a management pack into SCOM, always read the damn documentation that comes with the MP. I know it’s tedious but it’s necessary. Reading the documentation for the SQL Management pack found at Microsoft’s website gives you some interesting recommendations. They suggest that you have three action accounts for SQL:

  1. A discovery account
  2. A default action account
  3. A monitoring account

They also recommend that you put the monitoring and discovery accounts into an additional AD group. Once you do that, you have to add the users to SQL, assign them specific permissions to databases, give them access to parts of the Windows registry, assign them permissions to various WMI namespaces, grant them local logon privileges and more. I’m not going to go over the whole process; if you really want to see it, look at Microsoft’s documentation.
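Creating the accounts and the group themselves is the easy part. A rough sketch using the ActiveDirectory PowerShell module would look something like the below; the account and group names match the variables used in the script further down, but the OU and the password handling are placeholders you’d change for your environment:

#Rough sketch only - needs the RSAT ActiveDirectory module and rights to create objects in AD
Import-Module ActiveDirectory

$OU = "OU=Service Accounts,DC=intranet,DC=local" #Placeholder OU
$Password = Read-Host "Password for the SCOM SQL action accounts" -AsSecureString

#The three action accounts
"om_aa_sql_da","om_aa_sql_disc","om_aa_sql_mon" | ForEach-Object {
New-ADUser -Name $_ -SamAccountName $_ -Path $OU -AccountPassword $Password -PasswordNeverExpires $true -Enabled $true
}

#The additional group for the monitoring and discovery accounts
New-ADGroup -Name "SQLMPLowPriv" -GroupScope Global -Path $OU
Add-ADGroupMember -Identity "SQLMPLowPriv" -Members "om_aa_sql_disc","om_aa_sql_mon"

It’s everything that comes after that (the SQL logins, the registry permissions, the WMI namespace permissions and the local logon rights on every SQL server) which is the real grind.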

The point is, it’s a lot of work. Wouldn’t it be nice if we could automate it? Well, I’ve written a script that does precisely that. It’s a big one:


Function Set-WmiNamespaceSecurity
{
[cmdletbinding()]
Param ( [parameter(Mandatory=$true,Position=0)][string] $namespace,
[parameter(Mandatory=$true,Position=1)][string] $operation,
[parameter(Mandatory=$true,Position=2)][string] $account,
[parameter(Position=3)][string[]] $permissions = $null,
[bool] $allowInherit = $false,
[bool] $deny = $false,
[string] $computerName = ".",
[System.Management.Automation.PSCredential] $credential = $null)

Process {
#$ErrorActionPreference = "Stop"

Function Get-AccessMaskFromPermission($permissions) {
$WBEM_ENABLE = 1
$WBEM_METHOD_EXECUTE = 2
$WBEM_FULL_WRITE_REP = 4
$WBEM_PARTIAL_WRITE_REP = 8
$WBEM_WRITE_PROVIDER = 0x10
$WBEM_REMOTE_ACCESS = 0x20
$WBEM_RIGHT_SUBSCRIBE = 0x40
$WBEM_RIGHT_PUBLISH = 0x80
$READ_CONTROL = 0x20000
$WRITE_DAC = 0x40000

$WBEM_RIGHTS_FLAGS = $WBEM_ENABLE,$WBEM_METHOD_EXECUTE,$WBEM_FULL_WRITE_REP,`
$WBEM_PARTIAL_WRITE_REP,$WBEM_WRITE_PROVIDER,$WBEM_REMOTE_ACCESS,`
$READ_CONTROL,$WRITE_DAC
$WBEM_RIGHTS_STRINGS = "Enable","MethodExecute","FullWrite","PartialWrite",`
"ProviderWrite","RemoteAccess","ReadSecurity","WriteSecurity"

$permissionTable = @{}

for ($i = 0; $i -lt $WBEM_RIGHTS_FLAGS.Length; $i++) {
$permissionTable.Add($WBEM_RIGHTS_STRINGS[$i].ToLower(), $WBEM_RIGHTS_FLAGS[$i])
}

$accessMask = 0

foreach ($permission in $permissions) {
if (-not $permissionTable.ContainsKey($permission.ToLower())) {
throw "Unknown permission: $permission`nValid permissions: $($permissionTable.Keys)"
}
$accessMask += $permissionTable[$permission.ToLower()]
}

$accessMask
}

if ($PSBoundParameters.ContainsKey("Credential")) {
$remoteparams = @{ComputerName=$computerName;Credential=$credential}
} else {
$remoteparams = @{ComputerName=$computerName}
}

$invokeparams = @{Namespace=$namespace;Path="__systemsecurity=@"} + $remoteParams

$output = Invoke-WmiMethod @invokeparams -Name GetSecurityDescriptor
if ($output.ReturnValue -ne 0) {
throw "GetSecurityDescriptor failed: $($output.ReturnValue)"
}

$acl = $output.Descriptor
$OBJECT_INHERIT_ACE_FLAG = 0x1
$CONTAINER_INHERIT_ACE_FLAG = 0x2

$computerName = (Get-WmiObject @remoteparams Win32_ComputerSystem).Name

if ($account.Contains('\')) {
$domainaccount = $account.Split('\')
$domain = $domainaccount[0]
if (($domain -eq ".") -or ($domain -eq "BUILTIN")) {
$domain = $computerName
}
$accountname = $domainaccount[1]
} elseif ($account.Contains('@')) {
$domainaccount = $account.Split('@')
$domain = $domainaccount[1].Split('.')[0]
$accountname = $domainaccount[0]
} else {
$domain = $computerName
$accountname = $account
}

$getparams = @{Class="Win32_Account";Filter="Domain='$domain' and Name='$accountname'"}

$win32account = Get-WmiObject @getparams

if ($win32account -eq $null) {
throw "Account was not found: $account"
}

switch ($operation) {
"add" {
if ($permissions -eq $null) {
throw "-Permissions must be specified for an add operation"
}
$accessMask = Get-AccessMaskFromPermission($permissions)

$ace = (New-Object System.Management.ManagementClass("win32_Ace")).CreateInstance()
$ace.AccessMask = $accessMask
if ($allowInherit) {
$ace.AceFlags = $OBJECT_INHERIT_ACE_FLAG + $CONTAINER_INHERIT_ACE_FLAG
} else {
$ace.AceFlags = 0
}

$trustee = (New-Object System.Management.ManagementClass("win32_Trustee")).CreateInstance()
$trustee.SidString = $win32account.Sid
$ace.Trustee = $trustee

$ACCESS_ALLOWED_ACE_TYPE = 0x0
$ACCESS_DENIED_ACE_TYPE = 0x1

if ($deny) {
$ace.AceType = $ACCESS_DENIED_ACE_TYPE
} else {
$ace.AceType = $ACCESS_ALLOWED_ACE_TYPE
}

$acl.DACL += $ace.psobject.immediateBaseObject
}

"delete" {
if ($permissions -ne $null) {
throw "Permissions cannot be specified for a delete operation"
}

[System.Management.ManagementBaseObject[]]$newDACL = @()
foreach ($ace in $acl.DACL) {
if ($ace.Trustee.SidString -ne $win32account.Sid) {
$newDACL += $ace.psobject.immediateBaseObject
}
}

$acl.DACL = $newDACL.psobject.immediateBaseObject
}

default {
throw "Unknown operation: $operation`nAllowed operations: add delete"
}
}

$setparams = @{Name="SetSecurityDescriptor";ArgumentList=$acl.psobject.immediateBaseObject} + $invokeParams

$output = Invoke-WmiMethod @setparams
if ($output.ReturnValue -ne 0) {
throw "SetSecurityDescriptor failed: $($output.ReturnValue)"
}
}
}

Function Add-DomainUserToLocalGroup
{
[cmdletBinding()]
Param(
[Parameter(Mandatory=$True)]
[string]$computer,
[Parameter(Mandatory=$True)]
[string]$group,
[Parameter(Mandatory=$True)]
[string]$domain,
[Parameter(Mandatory=$True)]
[string]$user
)
$de = [ADSI]"WinNT://$computer/$Group,group"
$de.psbase.Invoke("Add",([ADSI]"WinNT://$domain/$user").path)
} #end function Add-DomainUserToLocalGroup

Function Add-UserToLocalLogon
{
[cmdletBinding()]
Param(
[Parameter(Mandatory=$True)]
[string]$UserSID
)
$tmp = [System.IO.Path]::GetTempFileName()
secedit.exe /export /cfg "$($tmp)"
$c = Get-Content -Path $tmp
$currentSetting = ""

foreach($s in $c) {
if( $s -like "SeInteractiveLogonRight*") {
$x = $s.split("=",[System.StringSplitOptions]::RemoveEmptyEntries)
$currentSetting = $x[1].Trim()
}
}

if( $currentSetting -notlike "*$($UserSID)*" ) {
if( [string]::IsNullOrEmpty($currentSetting) ) {
$currentSetting = "*$($UserSID)"
} else {
$currentSetting = "*$($UserSID),$($currentSetting)"
}

$outfile = @"
[Unicode]
Unicode=yes
[Version]
signature="`$CHICAGO`$"
Revision=1
[Privilege Rights]
SeInteractiveLogonRight = $($currentSetting)
"@

$tmp2 = [System.IO.Path]::GetTempFileName()

$outfile | Set-Content -Path $tmp2 -Encoding Unicode -Force

Push-Location (Split-Path $tmp2)

try {
secedit.exe /configure /db "secedit.sdb" /cfg "$($tmp2)" /areas USER_RIGHTS

} finally {
Pop-Location
}
}
}

#Set Global Variables

$Default_Action_Account = "om_aa_sql_da"
$Discovery_Action_Account = "om_aa_sql_disc"
$Monitoring_Action_Account = "om_aa_sql_mon"
$LowPrivGroup = "SQLMPLowPriv"

$WindowsDomain = "Intranet"
#Add users to local groups

Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Performance Monitor Users" -user $Monitoring_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Performance Monitor Users" -user $Default_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Event Log Readers" -user $Monitoring_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Event Log Readers" -user $Default_Action_Account -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Users" -user $LowPrivGroup -domain $WindowsDomain
Add-DomainUserToLocalGroup -computer $env:COMPUTERNAME -group "Users" -user $Default_Action_Account -domain $WindowsDomain
<#
#AD SIDs for Default Action Account user and Low Priv group - required for adding users to local groups and for service security settings.

#Define SIDs for Default Action and Low Priv group. To get a SID, use the following command:
#Get-ADUser -identity [user] | select SID
#and
#Get-ADGroup -identity [group] | select SID
#Those commands are part of the AD management pack which is why they're not in this script, I can't assume that this script is being run on a DC or on
#a machine with the AD management shell installed
#>

$SQLDASID = "S-1-5-21-949506055-860247811-1542849698-1419242"
$SQLMPLowPrivsid = "S-1-5-21-949506055-860247811-1542849698-1419239"

Add-UserToLocalLogon -UserSID $SQLDASID
Add-UserToLocalLogon -UserSID $SQLMPLowPrivsid

#Set WMI Namespace Security

Set-WmiNamespaceSecurity root add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\cimv2 add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\default add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement10'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement10 add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity }
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement11'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement11 add $WindowsDomain\$Default_Action_Account MethodExecute,Enable,RemoteAccess,Readsecurity }

Set-WmiNamespaceSecurity root add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\cimv2 add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity
Set-WmiNamespaceSecurity root\default add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement10'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement10 add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity }
if (Get-WMIObject -class __Namespace -namespace root\microsoft\sqlserver -filter "name='ComputerManagement11'") {
Set-WmiNamespaceSecurity root\Microsoft\SqlServer\ComputerManagement11 add $WindowsDomain\$LowPrivGroup MethodExecute,Enable,RemoteAccess,Readsecurity }

#Set Registry Permissions

$acl = Get-Acl 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($Default_Action_Account)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$acl = $null
$acl = Get-Acl 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($LowPrivGroup)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$acl = $null

$SQLInstances = Get-ChildItem 'registry::hklm\SOFTWARE\Microsoft\Microsoft SQL Server' | ForEach-Object {Get-ItemProperty $_.pspath } | Where-Object {$_.pspath -like "*MSSQL1*" }

$SQLInstances | Foreach {
$acl = Get-Acl "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($LowPrivGroup)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$acl = $null

$acl = Get-Acl "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$Rule = New-Object System.Security.AccessControl.RegistryAccessRule ("$($WindowsDomain)\$($Default_Action_Account)","readkey","ContainerInherit","None","Allow")
$acl.SetAccessRule($Rule)
$acl | Set-Acl -Path "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$($_.PSChildName)\MSSQLSERVER\Parameters"
$acl = $null

}

#Set SQL Permissions

#Get SQL Version
if ($SQLInstances.Count -eq $null) {

$version = Get-ItemProperty "registry::HKLM\Software\Microsoft\Microsoft SQL Server\$($SQLInstances.PSChildName)\MSSQLSERVER\CurrentVersion"

} else {

$version = Get-ItemProperty "registry::HKLM\Software\Microsoft\Microsoft SQL Server\$($SQLInstances[0].PSChildName)\MSSQLSERVER\CurrentVersion"

}
#Import appropriate SQL PowerShell module

if ($version.CurrentVersion -ge 11) {
#Import SQL 2012 Module
Import-Module sqlps
#change out of sql context
c:
} else {
#Add SQL 2008 Snap-in
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
}

#Create database users and assign permissions

$CreateDatabaseUsers = "use master
go

create login [$($WindowsDomain)\$($LowPrivGroup)] from windows
go

grant view server state to [$($WindowsDomain)\$($LowPrivGroup)]
grant view any definition to [$($WindowsDomain)\$($LowPrivGroup)]
grant view any database to [$($WindowsDomain)\$($LowPrivGroup)]
grant select on sys.database_mirroring_witnesses to [$($WindowsDomain)\$($LowPrivGroup)]
go

create login [$($WindowsDomain)\$($Default_Action_Account)] from windows
go

grant view server state to [$($WindowsDomain)\$($Default_Action_Account)]
grant view any definition to [$($WindowsDomain)\$($Default_Action_Account)]
grant view any database to [$($WindowsDomain)\$($Default_Action_Account)]
grant alter any database to [$($WindowsDomain)\$($Default_Action_Account)]
grant select on sys.database_mirroring_witnesses to [$($WindowsDomain)\$($Default_Action_Account)]
go"

#Generate query to assign users and permissions to databases
$DatabaseUsers1 = "SELECT 'use ' + name + ' ;'
+ char(13) + char(10)
+ 'create user [$($WindowsDomain)\$($LowPrivGroup)] FROM login [$($WindowsDomain)\$($LowPrivGroup)];'
+ char(13) + char(10) + 'go' + char(13) + char(10)
FROM sys.databases WHERE database_id = 1 OR database_id >= 3
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''SQLAgentReaderRole'', @membername=''$($WindowsDomain)\$($LowPrivGroup)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''PolicyAdministratorRole'', @membername=''$($WindowsDomain)\$($LowPrivGroup)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
"

$DatabaseUsers2 = "SELECT 'use ' + name + ' ;'
+ char(13) + char(10)
+ 'create user [$($WindowsDomain)\$($Default_Action_Account)] FROM login [$($WindowsDomain)\$($Default_Action_Account)];'
+ 'exec sp_addrolemember @rolename=''db_owner'', @membername=''$($WindowsDomain)\$($Default_Action_Account)'';'
+ 'grant alter to [$($WindowsDomain)\$($Default_Action_Account)];'
+ char(13) + char(10) + 'go' + char(13) + char(10)
FROM sys.databases WHERE database_id = 1 OR database_id >= 3
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''SQLAgentReaderRole'', @membername=''$($WindowsDomain)\$($Default_Action_Account)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
UNION
SELECT 'use msdb; exec sp_addrolemember @rolename=''PolicyAdministratorRole'', @membername=''$($WindowsDomain)\$($Default_Action_Account)'''
+ char(13) + char(10) + 'go' + char(13) + char(10)
"

#
$SQLInstances | Foreach {
if ($_.PSChildName.split('.')[-1] -eq "MSSQLSERVER") {
$InstanceName = $env:COMPUTERNAME
} else {
$InstanceName = "$($env:COMPUTERNAME)\$($_.PSChildName.split('.')[-1])" }

Invoke-Sqlcmd -ServerInstance $InstanceName $CreateDatabaseUsers
$Provision1 = Invoke-Sqlcmd -ServerInstance $InstanceName $DatabaseUsers1
$Provision2 = Invoke-Sqlcmd -ServerInstance $InstanceName $DatabaseUsers2

$Provision1 | foreach {
Invoke-Sqlcmd -ServerInstance $InstanceName $_.ItemArray[0]
}
$Provision2 | foreach {
Invoke-Sqlcmd -ServerInstance $InstanceName $_.ItemArray[0]
}
}

#Grant Default Action account rights to start and stop SQL Services

$SQLServices = Get-Service -DisplayName "*SQL*"

$SQLServices | Foreach {
& c:\windows\system32\sc.exe sdset $_.Name D`:`(A`;`;GRRPWP`;`;`;$($SQLDASID)`)`(A`;`;CCLCSWRPWPDTLOCRRC`;`;`;SY`)`(A`;`;CCDCLCSWRPWPDTLOCRSDRCWDWO`;`;`;BA`)`(A`;`;CCLCSWLOCRRC`;`;`;IU`)`(A`;`;CCLCSWLOCRRC`;`;`;SU`)`S`:`(AU`;FA`;CCDCLCSWRPWPDTLOCRSDRCWDWO`;`;`;WD`)
}

There are huge swathes of this script that I can not take credit for, mostly the functions.

The Set-WmiNamespaceSecurity function was pilfered directly from here: https://live.paloaltonetworks.com/t5/Management-Articles/PowerShell-Script-for-setting-WMI-Permissions-for-User-ID/ta-p/53646. I got it from Palo Alto’s website but it appears to have been written by Microsoft themselves.

The Add-DomainUserToLocalGroup function was stolen from the Hey, Scripting Guy! Blog, found here: https://blogs.technet.microsoft.com/heyscriptingguy/2010/08/19/use-powershell-to-add-domain-users-to-a-local-group/

The Add-UserToLocalLogon function was lifted wholesale from here: https://ikarstein.wordpress.com/2012/10/12/powershell-script-to-add-account-to-allow-logon-locally-privilege-on-local-security-policy/

The rest, however, is all mine, which you can probably tell from the quality of the code. You will need to change the variables in the “Set Global Variables” section to match your environment. That said, it works and that’s all I care about. Enjoy!
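If you need to run it against a batch of SQL servers rather than logging on to each one in turn, something along these lines should do it, assuming PowerShell remoting is enabled and your account is an admin on the targets. The file names here are mine, not part of the script:

$Servers = Get-Content "C:\Scripts\sql-servers.txt"
Invoke-Command -ComputerName $Servers -FilePath "C:\Scripts\Set-SQLMPPermissions.ps1"

Everything the script does is local to whichever machine it runs on, so this should behave the same as running it at the console, but test it against a single server first.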

Sigh, sometimes, software can be too bloody clever for its own good. The Code Block module that I’m using isn’t doing a very good job of formatting this script and it’s replaced some characters such as &, < and > with their HTML equivalents. I think I’ve weeded them all out but I may not have. If not, let me know.

Building, Deploying and Automatically Configuring a Mac Image using SCCM and Parallels SCCM Agent

I touched briefly on using the Parallels Management Agent to build Macs in my overview article but I thought it might be a good idea to go through the entire process that I use when I have to create an image for a Mac, getting the image deployed and getting the Mac configured once the image is on there. At the moment, it’s not a simple process. It requires the use of several tools and, if you want the process to be completely automated, some Bash scripting as well. The process isn’t as smooth as you would get from solutions like DeployStudio but it works and, in my opinion anyway, it works well enough for you not to have to bother with a separate product for OSD. Parallels are working hard on this part of the product and they tell me that proper task sequencing will be part of v4 of the agent. As much as I’m looking forward to that, it doesn’t change the fact that right now we’re on v3.5 and we have to use the messy process!

First of all, I should say that this is my method of doing it and mine alone. This is not Parallels’ method of doing this; it has not been sanctioned or condoned by them. There are some dangerous elements to it: you follow this procedure at your own risk and I will not be held responsible for damage caused by it if you try this out.

Requirements

You will need the following tools:

  • A Mac running OS X Server. The server needs to be set up as a Profile Manager server, an Open Directory server and, optionally, as a Netboot server. It is also needed on Yosemite for the System Image Utility.
  • A second Mac running the client version of OS X.
  • Both the server and the client need to be running the same version of OS X (Mavericks, Yosemite, whatever) and they need to be patched to the same level. Both Macs need to have either FireWire or Thunderbolt ports.
  • A FireWire or Thunderbolt cable to connect the two Macs together.
  • A SCCM infrastructure with the Parallels SCCM Mac Management Proxy and Netboot server installed.
  • This is optional but I recommend it anyway: a copy of Xcode or another code editor to create your shell scripts in. I know you could just use TextEdit but I prefer something that has proper syntax highlighting and Xcode is at least free.
  • Patience. Lots of patience. You’ll need it. The process is time consuming and can be infuriating when you get something wrong.

At the end of this process, you will have an OS X Image which can be deployed to your Macs. The image will automatically name its target, it will download, install and configure the Parallels SCCM agent, join itself to your Active Directory domain, attach itself to a managed wireless network and it will install any additional software that’s not in your base image. The Mac will do this without any user interaction apart from initiating the build process.

Process Overview

The overview of the process is as follows:

  1. Create an OS X profile to join your Mac to your wireless network.
  2. Create a base installation of OS X with the required software and settings.
  3. Create an Automator workflow to deploy the Parallels agent and to do other minor configuration jobs.
  4. Use the System Image Utility to create the image and a workflow to automatically configure the disk layout and computer name.
  5. (Optional) Use the Mac OS X Netboot server to deploy the image to a Mac. This is to make sure that your workflow works and that you’ve got your post-install configuration scripts right before you add the image to your ConfigMgr server. You don’t have to do this but you may find it saves you a lot of time.
  6. Convert the image to a WIM file and add it to your SCCM OSD image library
  7. Advertise the image to your Macs

I’m going to assume that you already have your SCCM infrastructure, Parallels SCCM management proxy, Parallels Netboot server and OS X Server working.

Generate an OS X Profile.

Open a browser and go to the address of your Profile Manager (usually https://{hostname.domain}/profilemanager) and go to the Device Groups section. I prefer to generate a profile for each major setting that I’m pushing down. It makes for a little more work getting it set up but if one of your settings breaks something, it makes it easier to troubleshoot as you can remove a specific setting instead of the whole lot at once.

Your profile manager will look something like this:

[screenshot: Profile Manager]

As you can see, I’ve already set up some profiles but I will walk through the process for creating a profile to join your Mac to a wireless network. First of all, create a new device group by pressing the + button in the middle pane. You will be prompted to give the group a name, do so.

[screenshot]

Go to the Settings tab and press the Edit button

[screenshot]

In the General section, change the download type to Manual, put a description in the description field and under the Security section, change the profile removal section to “With Authorisation”. Put a password in the box that appears. Type it in carefully, there is no confirm box.

[screenshot]

If you are using a wireless network which requires certificates, scroll down to the certificates section and copy your certificates into there by dragging and dropping them. If you have an on-site CA, you may as well put the root trust certificate for that in there as well.

[screenshot]

Go to the Networks section and put in the settings for your network.

[screenshot]

When you’re done, press the OK button. You’ll go back to the main Profile Manager screen. Make sure you press the Save button.

I would strongly suggest that you explore Profile Manager and create profiles for other settings as well. For example, you could create one to control your Mac’s energy saving settings or to set up options for your users’ desktops.

When you’re back on the profile manager window, press the Download button and copy the resulting .mobileconfig file to a suitable network share.

Go to a PC with the SCCM console and the PMA plugin installed. Open the Assets and Compliance workspace. Go to Compliance Settings then Configuration Items. Optionally, if you haven’t already, create a folder for Mac profiles. Right click on your folder or on Configuration Items, go to Create Parallels Configuration Item then Mac OS X Configuration Profile from File.

[screenshot]

Give the profile a name and description, change the profile type to System then press the Browse button and browse to the network share where you copied the .mobileconfig file. Double click on the mobileconfig file then press the OK button. You then need to go to the Baselines section and create a baseline with your configuration item in. Deploy the baseline to an appropriate collection.

Create an image

On the Mac which doesn’t have OS X Server installed, install your software. Create any additional local user accounts that you require. Make those little tweaks and changes that you inevitably have to make. If you want to make changes to the default user profile, follow the instructions on this very fine website to do so.

Once you’ve got your software installed and have got your profile set up the way you want it, you may want to boot your Mac into Target Mode and use your Server to create a snapshot using the System Image Utility or Disk Utility. This is optional but recommended as you will need to do a lot of testing which may end up being destructive if you make a mistake. Making an image now will at least allow you to roll back without having to start from scratch.

Creating an Automator workflow to perform post-image deployment tasks

Now here comes the messy bit. When you deploy your image to your Macs, you will undoubtedly want them to automatically configure themselves without any user interaction. The only way that I have found to do this reliably is pretty awful but unfortunately I’ve found it to be necessary.

First of all, you need to enable the root account. The quickest way to do so is to open a terminal session and type in the following command:

dsenableroot -u {user with admin rights} -p {that user's password} -r {what you want the root password to be}

Log out and log in with the root user.

Go to System Preferences and go to Users and Groups. Change the Automatic Login option to System Administrator and type in the root password when prompted. When you’ve done that, go to the Security and Privacy section and go to General. Turn on the screensaver password option and set the time to Immediately. Check the “Show a Message…” box and set the lock message to something along the lines of “This Mac is being rebuilt, please be patient”. Close System Preferences for now.

You will need to copy a script from your PMA proxy server called InstallAgentUnattended.sh. It is located in your %Programfiles(x86)%\Parallels\PMA\files folder. Copy it to the Documents folder of your Root user.

Open your code editor (Xcode if you like, something else if you don’t) and enter the following script:

#!/bin/sh

#Get computer's current name
CurrentComputerName=$(scutil --get ComputerName)

#Bring up a dialog box with the computer's name in and give the user the option to change it. Times out after 60 seconds
ComputerName=$(/usr/bin/osascript <<EOT
tell application "System Events"
activate
set ComputerName to text returned of (display dialog "Please Input New Computer Name" default answer "$CurrentComputerName" with icon 2 giving up after 60)
end tell
EOT)

#Did the user press cancel? If so, exit the script

ExitCode=$?
echo $ExitCode

if [ $ExitCode = 1 ]
then
exit 0
fi

#Compare current computername with one set, change if different

CurrentComputerName=$(scutil --get ComputerName)
CurrentLocalHostName=$(scutil --get LocalHostName)
CurrentHostName=$(scutil --get HostName)

echo "CurrentComputerName = $CurrentComputerName"
echo "CurrentLocalHostName = $CurrentLocalHostName"
echo "CurrentHostName = $CurrentHostName"

 if [ $ComputerName = $CurrentComputerName ]
 then
 echo "ComputerName Matches"
 else
 echo "ComputerName Doesn't Match"
 scutil --set HostName $ComputerName
 echo "ComputerName Set"
 fi

 if [ $ComputerName = $CurrentHostName ]
 then
 echo "HostName Matches"
 else
 echo "HostName Doesn't Match"
 scutil --set ComputerName $ComputerName
 echo "HostName Set"
 fi

 if [ $ComputerName = $CurrentLocalHostName ]
 then
 echo "LocalHostName Matches"
 else
 echo "LocalHostName Doesn't Match"
 scutil --set LocalHostName $ComputerName
 echo "LocalHostName Set"
 fi

#Invoke Screensaver
/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

#Join Domain
dsconfigad -add {FQDN.of.your.AD.domain} -user {User with join privs} -password {password for user} -force

#disable automatic login
defaults delete /Library/Preferences/com.apple.loginwindow.plist autoLoginUser
rm /etc/kcpassword

#install Configuration Manager client
chmod 755 /private/var/root/Documents/InstallAgentUnattended.sh
/private/var/root/Documents/InstallAgentUnattended.sh http://FQDN.of.your.PMA.Server:8761/files/pma_agent.dmg {SCCM User} {Password for SCCM User} {FQDN.of.your.AD.Domain}
echo SCCM Client Installed

#Repair disk permissions
diskutil repairPermissions /
echo Disk Permissions Repaired

#Rename boot volume to host name
diskutil rename "Macintosh HD" $HOSTNAME

#disable root
dsenableroot -d -u {User with admin rights on Mac} -p {That user's password}

#Reboot Mac
shutdown -r +60

Obviously you will need to change this to suit your environment.

As you can see, this has several parts. It calls a bit of AppleScript which prompts the user to enter the machine name. The default value is the Mac’s current hostname. The prompt times out after 60 seconds. The script gets the current hostname of the machine, compares it to what was entered in the box and changes the Mac’s name if it is different. It then invokes the Mac’s screensaver, joins the Mac to your AD domain, renames the Mac’s hard drive to the hostname of the Mac and downloads the PMA client from the PMA Proxy Server and installs it. It removes the automatic logon for the Root user, removes the saved password for Root, runs a Repair Permissions on the Mac’s hard disk then disables the Root account and sets the Mac to reboot itself after 60 minutes. The Mac is given an hour before it reboots so that the PMA can download and apply its initial policies.

At this point, you will probably want to test the script to make sure that it works. This is why I suggested taking a snapshot of your Mac beforehand. Even if you do get it right, you still need to roll back your Mac to how it was before you ran the script.

Once the script has been tested, you will need to create an Automator workflow. Open the Automator app and create a new application. Go to the Utilities section and drag the Run Shell Script action to the pane on the right hand side.

[screenshot]

At this point, you have a choice: you can either paste your entire script in there and have it all run as one big block of code or you can drag multiple shell script blocks across and break your code up into sections. I would recommend the latter approach; it makes viewing the progress of your script a lot easier and if you make a mistake in your script blocks, it makes it easier to track where the error is. When you’re finished, save the workflow application in the Documents folder. I have uploaded an anonymised version of my workflow: Login Script.

Finally, open System Preferences again and go to the Users and Groups section. Click on System Administrator and go to Login Items. Put the Automator workflow you created in as a login item. When the Mac logs in for the first time after its image is deployed, it will automatically run your workflow.

I’m sure you’re all thinking that I’m completely insane for suggesting that you do this but, as I say, this is the only way I’ve found that reliably works. I tried using loginhooks and a login script set with a profile but those were infuriatingly unreliable. I considered editing the sudoers file to allow the workflow to run as Root without having to enter a password but I decided that was a long term security risk not worth taking. I have tried to minimise the risk of having Root log on automatically as much as possible; the desktop is only interactive for around 45-60 seconds before the screensaver kicks in and locks the machine out for those who don’t have the root password. Even for those who do have the root password, the Root account is only active for around 5-10 minutes until the workflow disables it after the Repair Disk Permissions command has finished.

Anyway, once that’s all done reboot the Mac into Target mode and connect it to your Mac running OS X Server.

Use the System Image Utility to create a Netboot image of your Mac with a workflow to deploy it.

There is a surprising lack of documentation on the Internet about the System Image Utility. I suppose that’s because it’s so bare bones and that most people use other solutions such as DeployStudio to deploy their Macs. I eventually managed to find some and this is what I’ve managed to cobble together.

On the Mac running OS X Server, open the Server utility and enter your username and password when prompted. When the OS X Server app finishes loading, go to the Tools menu and click on System Image Utility. This will open another app which will appear in your dock; if you see yourself using this app a lot, you can right click on it and tell it to stay in your dock.

[screenshot]

Anyway, once the System Image Utility loads click on the Customize button. That will bring up a workflow window similar to Automator’s.

[screenshot]

The default workflow has two actions in it: Define Image Source and Create Image. Just using these will create a working image but it will not have any kind of automation; the Mac won’t partition its hard drive or name itself automatically. To get this to work, you need to add a few more actions.

There will be a floating window with the possible actions for the System Image Utility open. Find the following three actions and add them to the workflow between the Define Image Source and Create Image actions. Make sure that you add them in the following order:

  1. Partition Disk
  2. Enable Automated Installation
  3. Apply System Configuration Settings

You can now configure the workflow actions themselves.

For the Define Image Source action, change the Source option to the Firewire/Thunderbolt target drive.

For the Partition Disk action, choose the “1 Partition” option and check the “Partition the first disk found” and, optionally, “Display confirmation dialog before partitioning”. Checking the second box will give you a 30 second opportunity to create a custom partition scheme when you start the imaging process on your Mac clients. Choose a suitable name for the boot volume and make sure that the disk format is “Mac OS Extended (Journaled)”

For the Enable Automated Installation action, put the name of the volume that you want the OS to be installed to into the box and check the “Erase before installing” box. Change the main language if you don’t want your Macs to install in English.

The Apply System Configuration Settings action is a little more complicated. This is the section which names your Macs. To do this, you need to provide a properly formatted text file with the Mac’s MAC address and its name. Each field is separated with a tab and there is no header line. Save the file somewhere (I’d suggest in your user’s Documents folder) and put the full path to the file including the file name into the “Apply computer name…” box. There is an option in this action which is also supposed to join your Mac to a directory server but I could never get this to work no matter what I tried so leave that one alone.
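For reference, the file is nothing fancy: two columns separated by a single tab, one Mac per line, no header, along these lines (the MAC addresses and names here are made up):

00:1c:42:9f:12:ab	ART-MAC-01
00:1c:42:9f:34:cd	ART-MAC-02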

The last action is Create Image. Make sure that the Type is NetRestore and check the Include Recovery Partition box. You need to put something into the Installed Volume box but it doesn’t appear to matter what. Put a name for the image into the Image Name and Network Disk boxes and choose a destination to save the image to. I would suggest saving it directly to the /{volume}/Library/Netboot/NetbootSP0 folder as it will appear as a bootable image as soon as the image snapshot has been taken without you having to move or copy it to the correct location.

Once you’ve filled out the form, press the Save button to save your workflow then press Run. The System Image Utility will then generate your image ready for you to test. Do your best to make sure that you get all of this right; if you make any mistakes you will have to correct them and run the image creation workflow again, even if it is just a single setting or something in your script that’s wrong. The other problem with this is that if you add any new Macs to your estate you’ll have to update the text file with the new Macs’ names and MAC addresses and re-create the image again. This is why I put the “Name your Mac” section into the script.

Test the image

The next step now is to test your Netboot image. To do so, connect your Client Mac to the same network segment as your Server. Boot it to the desktop and open System Preferences. Go to the Startup Disk pane and you should see the image that you just created as an option.

[screenshot]

Click on it and press the Restart button. The Mac will boot into the installation environment and run through its workflow. When it’s finished, it will automatically log on as the Root user and run the login script that you created in a previous step.

Convert the image to a WIM and add it to your OSD Image Library

Once you’re satisfied that the image and the login script run to your satisfaction, you need to add your image to the ConfigMgr image library. Unfortunately, ConfigMgr doesn’t understand what an NBI is so we need to wrap it up into a WIM file.

To convert the image to a WIM file, first of all copy the NBI file to a suitable location on your PMA Proxy Server. Log onto the PMA Proxy using Remote Desktop and open the ConfigMgr console. Go to the Software Library workspace and Operating Systems then Operating System Images. Right click on Operating System Images and click on “Add Mac OS X Operating System Image”.

[screenshot]

Click on the first browse button and go to the location where you copied the NBI file to. This must be a local path, not a UNC.

Click on the second browse button and go to the share that you defined when you installed the Netboot agent on your PMA Proxy. This must be a UNC, not a local path. Press the Next button and wait patiently while the NBI image is wrapped up into a WIM file. When the process is finished, the image will be in your Operating System Images library. There is a minor bug here: If you click on a folder underneath the Image library, the image will still be added to the root of the library and not in the folder you selected. There’s nothing stopping you moving it afterwards but this did confuse me a little the first time I came across it. Once the image is added, you should copy it to a distribution point.

Advertise the image to your Macs

Nearly finished!

The final steps are to create a task sequence then deploy the task sequence to a collection. To create the task sequence, open the ConfigMgr console on a PC which has the Parallels console extension installed. Go to the Software Library workspace and Operating Systems. Under there, go to Task Sequences and right click on Task Sequences. Select “Create Task Sequence for Macs” and this will appear:

[screenshot]

Put in a name for the task sequence then press the Browse button. After a small delay, a list of the available OS X images will appear. Choose the one that you want and press the Finish button. The task sequence will then appear in your sequence library but like with the images, it will appear in the root rather than in a specific folder. The only step left is to deploy the task sequence to a collection; the process for this is identical to the one for Windows PCs. I don’t know if it’s necessary but I always deploy the sequence to the Unknown Computers collection as well as the collections that the Macs sit in, just to be sure that new Macs get it as well.

Assuming that you have set up the Netboot server on the PMA Proxy properly, all of the Macs which are in the collection(s) you advertised the image to will have your image as a boot option. Good luck and have fun!

Bootnote

Despite me spending literally weeks writing this almost 4,000 word long blog post when I had the time and inclination to do so, it is worth mentioning again that all of this is going to be obsolete very soon. The next version of the Parallels agent is going to have support for proper task sequencing. My contact within Parallels tells me that they are mimicking Microsoft’s task sequence UI so that you can deploy software and settings during the build process and that there will be a task sequence wizard on the Mac side which will allow you to select a task sequence to run. I’m guessing (hoping!) that will be in the existing Parallels Application Portal where you can install optional applications from.

The Grand(ish) Experiment – Two weeks in

It’s been two weeks since I started using the Dell tablet in anger. You may be wondering where the promised updates have gone. Well, here’s one.

I think that the biggest question that I needed to answer was “Can one of these tablets cope as my primary workstation?”. The answer to that is an unequivocal “Yes”. I have been using a Dell Venue 11 Pro 7139. This tablet has a dual core 1.6GHz Core i5, 4GB RAM and a 128GB SSD in there. It is more powerful than a significant portion of the desktop machines in our college and frankly, I would be shocked if it couldn’t cope. The only thing I’d really want from it would be some more RAM; 4GB is a bit tight these days and sometimes the tablet would freeze while I was using certain System Center components which can be a bit RAM hungry.

The dock that we received was a revision A00 dock which appears to have some issues when using it with multiple monitors. You may recall in my last blog post that I mentioned that I was having difficulty getting a DisplayPort to DVI adapter to work and that I thought I would need an Active one instead. Well, I ordered an Active one and that didn’t work either. This should be a supported configuration. After a bit of research, I found out that Dell have put out an A01 revision of this dock which fixes these issues. It looks like Dell still have a load of stock of the A00s as the order number on the box was from the end of November and it’s a complete lottery as to which revision you’ll get when you order one. We ordered ours from BT Business so maybe you’d have more luck if you ordered from Dell directly.

This aside, the dock still worked with the DisplayPort to VGA adapter that we ordered so I have been using that to connect my second monitor. This has been OK but there has been the odd occasion where the tablet “forgets” that there is a monitor attached after the displays or the tablet wakes up after going to sleep. Sometimes telling Windows to reactivate the display works, sometimes you need to undock and redock the tablet to force it to start working again. However, I don’t think that this will be an issue for the people who are going to end up using them as most of them won’t have two monitors attached.

The DPI difference between the tablet’s display and the external monitors has been a source of annoyance for me. Each time I undocked the tablet to use elsewhere, I ended up logging it off and back on so that the desktop was usable. When I redocked afterwards, again I logged off and on so that everything wasn’t massive. Again, I don’t know if a teacher would find this to be an issue.

As a point of interest, when the new Windows 10 build (9926) appeared, I installed it on another 7139 I had lying around and the same resolution issues were still there.

There are still a few things for me to test; I’ve not brought it home to try yet and I haven’t had the opportunity to take it to many meetings. I haven’t tried it in a classroom scenario with visualisers and interactive whiteboards either which is something I will need to do.

The next step is to give a dock and tablet to a teacher and see what they make of it!

The Grand(ish) Experiment, 1st day in

Just a quick post here. The docking stations and cables for the Dell Venues arrived today and I wanted to post my first impressions.

In terms of the hardware, it is surprisingly solid. It’s very weighty and feels like a quality piece of kit. Docking the tablet is easy, it goes smoothly in and out although I think it would be better if the dock had the guiding lugs like the keyboards have but with the docking station it’s just the dock connector that holds the tablet on. You can twist the tablet on the connector a little which feels a bit scary when you first notice it.

The dock itself has three SuperSpeed USB 3 ports, an HDMI port, a DisplayPort and a USB 2.0 Fast Ethernet (100 megabit only) port. The tablet is charged by the dock when it’s plugged in.

We bought a DisplayPort to DVI and a DisplayPort to VGA adapter to go with it as well as an HDMI to DVI cable. The HDMI cable worked as expected but the DisplayPort to DVI cable didn’t. Looking further at the spec of the cable that we bought, it appears to be a passive adapter and for a passive adapter to work, the port needs to be a DisplayPort++ port and I don’t think the port on the dock is. I think we would need an Active adapter instead. The VGA adapter works though so I’ve been using that.

It all seems to work nicely but there is something of a quirk. The tablet has a 10.8″ 1080p screen. This means that it has a rather small dot pitch and to use the tablet’s desktop comfortably, you need to turn scaling on. When you plug the tablet into the dock with a standard DPI monitor attached to it, Windows attempts to run one level of scaling on one monitor and another on the other but this isn’t entirely successful. By the looks of it, Windows seems to run the screen at the scaling level of the main display across all screens but resizes and resamples the contents of the windows which are on the higher/lower DPI screen. This makes the contents of the windows look rather blurred. It also does strange things to the taskbar and the chrome of the windows; they are either really tiny or really large depending on which screen is nominated as your primary when you log on. I probably haven’t explained this very well, it’s quite hard to describe.

Otherwise it’s more or less like using a standard Windows 8.1 desktop. I will say it’s nice to have a third screen on which your email client and helpdesk can sit on their own while the “real” work stuff can sit on the other two.

[image: Dell Venue 11 Pro with keyboard]

The Grand(ish) Experiment

There has been a lot of discussion at work recently about the future and how we’re going to embrace it. Specifically, a lot of the discussion has been around tablet computers and how the college is going to start using them.

There are a lot of people who hear the word “Tablet” and think “iPad”. This is understandable; while the iPad was far from the first tablet on the market, it was the first to really grab the attention of the general public and it’s probably safe to say that it’s the best known brand of tablet out there.

We looked very hard at the iPad and we also looked very hard at various Android tablets, Windows RT tablets and even some full blown Intel Windows tablets. We bought some of each and have found some good uses for them. For example, our Security teams have cellular enabled Nexus 7 tablets with which they can log onto our MIS System and identify students and check to see if they should be in lessons or not. Our Music department have been using iPads as music sequencers. To my considerable surprise, there are even some people who like the Surface RT, a device that I absolutely loathe. For the most part, people seem to like them because they have a desktop, keyboard and mouse and that they can use them for remoting onto a terminal server relatively easily. In other words, it seems to me that they like them because they’re like thin and light laptops.

In the end, we, perhaps rather predictably, decided to attempt to standardise on the full Windows tablets. This was because, primarily, iPads and the like are single user devices; the user “owns” the device, has their email/apps/documents/settings on there and woe betide anyone else who wants to use it. That’s OK if you’ve got the budget to buy 2500 devices and assign one per member of staff and student but not so good if you want to buy classroom sets and have different people use them. Yes, you can put Office on an iPad but to use it effectively and save your documents, you have to sign the Office apps into Office 365 and you have to remind people to log out of it when they’re finished. It would only be a matter of time before someone gets to data they shouldn’t be able to get at.

With a full Windows device, we can join it to the domain. Different people can log on to it and get their own settings and documents. The device can be managed by standard management systems like ConfigMgr, KACE, Altiris or whatever else tickles your fancy. They can run standard Windows software on the desktop so you don’t have to get a whole load of new applications or retrain your users that much. Windows Enterprise has a very nifty feature called DirectAccess which acts as a transparent VPN connecting the user to the corporate network wherever they are as long as they have an internet connection. In addition, a lot of the Windows devices out there are “Hybrid” devices; they frequently come with or have optional keyboard docks so they convert into an almost standard laptop for when a user wants to do a lot of “conventional” work and, despite the noise about the Modern environment, Windows is as good a tablet operating system as any other and it’s been improving steadily since Windows 8 first came onto the market. There is a lot of flexibility with Windows tablets which I’ve come to appreciate since I’ve started looking at them.

Using Windows does bring its own set of disadvantages; the only way to get “Modern” (i.e. touch friendly, tablet enabled) applications is through the Windows Store. The Windows Store is somewhat behind Apple’s and Google’s in terms of the number and usefulness of apps. In terms of manageability, it’s way behind Apple’s app store. At the moment, there is no way to bulk-buy apps from there. Modern apps are installed in the user’s profile, not assigned to machines, so anyone who wants a Modern app needs to either have a Microsoft account attached to their domain account or we need to obtain a sideloading key and get unsigned AppX packages from software publishers to push out to machines. I haven’t looked much at the latter option as I suspect the number of companies willing to do that can be counted on one hand.

Anyway, despite the disadvantages presented by Windows 8.1 and the Modern interface, we decided to use Windows tablets. We looked at the lines from various manufacturers and settled on the Dell Venue range. It is vast; it ranges from the small 8″ tablets going to 720p 10″ 32 bit Atom tablets to 1080p 64 bit Atoms to dual core Core Ms to Core i3s, i5s and i7s. You can get them with large amounts of storage and RAM. They also come with a standard range of accessories such as docking stations, keyboard docks, styluses and network cards. The styluses and NICs work with the entire range; the keyboards and the docking stations work with all of the 10″ tablets. The other advantage of choosing Dell as a manufacturer is that for their corporate lines at least, they tend to standardise on parts and accessories for a number of years, which is attractive if you need to support a device for the long term.

So, we’ve selected our software platform and we’ve selected our hardware platform. The question has been, what exactly are we going to do with them? There have been various ideas proposed. The 8″ Venues are really nice devices, they perform relatively well for the spec, they’re light and they’re just the right size and weight to use in portrait mode and type on. We’re considering using those as classroom machines to replace a load of god-awful Intel Classmate convertibles we bought a few years back. However, the idea that’s really caught the imagination so far is to replace all of the teachers’ computers with them.

At the moment, our college has a desktop PC in each classroom attached to an interactive whiteboard, projector, monitor, sometimes a visualiser, a keyboard and mouse. Each department also has a staff workroom where the majority of teachers either have a dedicated PC for them to work on or a space for them to work on a laptop issued to them by the college. The idea has been to issue a 10″ Venue to each member of staff, replace the classroom and workroom PCs with docking stations for the Dell Venues and take the laptops off those who have them. The teacher could come in at the beginning of the day, dock their Venue at their desk and do some work. When it’s time for a class, they undock the tablet and go to their class. Once they’re in the class, they dock the tablet again and they’re connected to their whiteboard, projector, visualiser and any other equipment they need. There would be no need for them to log in, they’d just wake the tablet up and unlock it. They would no longer be tied to a specific classroom if they need a special piece of software installed; they’d just take it around with them. With accessories like wireless projection systems such as WiDi or Miracast, they could have the interactive whiteboard software open on their tablet and wander around the room scribbling on their tablets with a stylus and still have what they’re working on displayed on the screen. They could go back to their desk and write on the whiteboard again. Then when they’ve finished the class, they could go back to their workroom, find a free space and dock again. If there is no free space, they could grab a keyboard dock and sit at an empty desk or on a sofa. The idea has a lot of potential and the teachers that I’ve spoken to about it so far have seen the advantages.

So with this in mind, we have ordered some Dell tablets, one 7140 based on the Core M CPU and some refurbished 7139s based on a 1.6GHz Core i5. Both tablets have an 11″ 1080p screen, 4GB RAM and a 128GB SSD. We have ordered some Dell “Slim” tablet keyboards for them. We have also ordered and received a Dell “Mobile” tablet keyboard, which is the same as the “Slim” except it has an additional battery in it, and also a Folio keyboard. We have ordered a pair of docking stations and a couple of styluses. We want to give the idea a go with a few people and see how well it works.

So, what’s the big experiment you ask? Well, I think that it’s unfair that I foist all of this stuff on teachers without trying it for myself first. I intend to use one of these tablets as my primary workstation for a couple of weeks and see how I get on. I’ll put the software that I need on it and see how it copes. While my workflow is completely different to a teacher’s, it’s probably reasonably safe to say that I put at least as much stress on a computer as a teacher does, possibly more. I’ll carry it around with me everywhere I go, take notes on it in any meetings that I go to and maybe even bring it home once or twice to see how well DirectAccess works. Then when I’ve done that, I’ll offload the setup onto someone else (possibly the college Principal, he’s expressed an interest and he’s one of the Surface RT proponents). I’ll write about my experience on this blog. I’ll talk about how well it performs, the good bits, the bad bits and everything else besides. I have to admit, I’m actually quite looking forward to this.


DCM Script – Detect Office Activation Status on Windows 7 and Activate if Unactivated

This one was a lot of fun and by “fun”, I mean a complete pain.

Recently, several of my helpdesk calls have been along the lines of “When I open Word, it says that it needs activating”. As I’d hope most people with more than 20 PCs to manage do, we use a Key Management Services (KMS) Server to activate all of our Windows and Office clients. Windows and Office are supposed to activate themselves either during the build process or very soon afterwards. However, the PCs need to phone back to the KMS server every 180 days to remain activated so either the PC hasn’t activated Office during the build process or its activation ticket has expired and it hasn’t managed to get a new one. Therefore, I needed a way to detect whether Office is activated on a computer and activate it if it wasn’t. Detect a state? Remediate it if it isn’t in a desired state? Hmm, this sounds like something that’s perfect for DCM! So I went a-looking, seeing what I could see.

First of all, this post is written for 64 bit machines which are running 32 bit Office. However, if you’re running 64 bit Office or 32 bit Office on 32 bit Windows, it’s just a matter of adjusting the paths for the Office VBS script accordingly.

At first, I hoped that I could use pure PowerShell to fix this. There is a very handy CIM class called SoftwareLicensingProduct which lists the activation status for the Microsoft products installed on your computer. I thought a simple PowerShell command like

Get-CimInstance SoftwareLicensingProduct -Filter "Description LIKE '%KMSCLIENT%'" | select ID, Description, LicenseStatus, Name, GenuineStatus

would give me a nice base to work from. On my Windows 8.1 machine, it does; it lists all of the KMS products on the PC and their activation statuses. However, on Windows 7, that CIM class only lists the operating system, not Office, and unfortunately Windows 7 is what is installed on the vast majority of the computers in my workplace. So that meant going back to the drawing board.
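For what it’s worth, if your machines are on Windows 8.1 or later, you can probably do the whole detection from that class without any of the text parsing that follows. LicenseStatus comes back as 1 for a licensed product and something else for the various grace and notification states. Here’s a rough sketch of what that might look like; I haven’t used it in anger since our estate is on Windows 7, and note that the filter catches Windows itself as well as Office:

# Sketch only: on Windows 8.1 and later, SoftwareLicensingProduct includes Office.
# Look at every KMS product that actually has a product key installed and check
# whether its LicenseStatus is 1 (Licensed).
$KmsProducts = Get-CimInstance SoftwareLicensingProduct -Filter "Description LIKE '%KMSCLIENT%'" |
               Where-Object { $_.PartialProductKey }

if ($KmsProducts | Where-Object { $_.LicenseStatus -ne 1 }) {
    echo "Office Not Activated"
}
else {
    echo "Office Activated"
}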

I needed another way to get the activation status for Office. From Office 2010 onwards, there is a VBS script called ospp.vbs. It needs to be run with the cscript interpreter as it’s purely command line rather than GUI driven. There are several switches for it which perform operations like attempting an activation, clearing the activation status, setting the KMS server name and port and displaying the activation status of the various Office products. Running the following command:

cscript "C:\Program Files (x86)\Microsoft Office\Office 15\ospp.vbs" /dstatus

returned the following output on my PC with Office 2013 Pro Plus, Project 2013 Standard and Visio 2013 Pro installed on it:

---Processing--------------------------
---------------------------------------
SKU ID: 427a28d1-d17c-4abf-b717-32c780ba6f07
LICENSE NAME: Office 15, OfficeProjectStdVL_KMS_Client edition
LICENSE DESCRIPTION: Office 15, VOLUME_KMSCLIENT channel
LICENSE STATUS: ---LICENSED---
REMAINING GRACE: 177 days (256304 minute(s) before expiring)
Last 5 characters of installed product key: 8QHTT
Activation Type Configuration: ALL
KMS machine name from DNS: kmsserver.domain:1688
Activation Interval: 120 minutes
Renewal Interval: 10080 minutes
KMS host caching: Enabled
---------------------------------------
SKU ID: b322da9c-a2e2-4058-9e4e-f59a6970bd69
LICENSE NAME: Office 15, OfficeProPlusVL_KMS_Client edition
LICENSE DESCRIPTION: Office 15, VOLUME_KMSCLIENT channel
LICENSE STATUS: ---LICENSED---
REMAINING GRACE: 177 days (256304 minute(s) before expiring)
Last 5 characters of installed product key: GVGXT
Activation Type Configuration: ALL
KMS machine name from DNS: kmsserver.domain:1688
Activation Interval: 120 minutes
Renewal Interval: 10080 minutes
KMS host caching: Enabled
---------------------------------------
SKU ID: e13ac10e-75d0-4aff-a0cd-764982cf541c
LICENSE NAME: Office 15, OfficeVisioProVL_KMS_Client edition
LICENSE DESCRIPTION: Office 15, VOLUME_KMSCLIENT channel
LICENSE STATUS: ---LICENSED---
REMAINING GRACE: 177 days (256304 minute(s) before expiring)
Last 5 characters of installed product key: RM3B3
Activation Type Configuration: ALL
KMS machine name from DNS: kmsserver.domain:1688
Activation Interval: 120 minutes
Renewal Interval: 10080 minutes
KMS host caching: Enabled
---------------------------------------
---------------------------------------
---Exiting-----------------------------

Apart from the KMS server name, that output is verbatim. There is some very useful information in there: the product license, the activation information, the KMS server it’s using to activate and how long the activation has left. It’s great! Unfortunately, it’s also a big lump of text which isn’t especially useful by itself.

At this point, I could have just created a package which ran

cscript "C:\Program Files (x86)\Microsoft Office\Office 15\ospp.vbs" /act

and called it a day. It certainly would have worked to an extent but I still wanted to use DCM. Using DCM would have been better because:

  • I can, in theory, set it to detect whether Office is activated and only run the activation script if it isn’t, whereas a package with that command line in it will attempt activation whether Office needs it or not
  • Using a package would be a set-once kind of affair: if Office decides to deactivate itself or fails reactivation after the KMS grace period expires, the package won’t run the script again, whereas with DCM I can re-run the detection script every hour, every day, every week, every month or whatever

So I turned back to PowerShell and, eventually, came up with this:

# Run the Office activation VBScript and dump the /dstatus output into a text file in the user's TEMP directory
C:\Windows\System32\cscript.exe 'C:\Program Files (x86)\Microsoft Office\Office15\OSPP.VBS' /dstatus | Out-File $env:temp\actstat.txt

# Read the file back in, swap colons for equals signs, split on the runs of dashes and
# drop the "Processing" and "Exiting" lines, then turn each remaining block into an
# object carrying the four properties we care about
$ActivationStatus = $($Things = $(Get-Content $env:temp\actstat.txt -raw) `
                            -replace ":"," =" `
                            -split "---------------------------------------" `
                            -notmatch "---Processing--------------------------" `
                            -notmatch "---Exiting-----------------------------"
                       $Things | ForEach-Object {
                       $Props = ConvertFrom-StringData -StringData ($_ -replace '\n-\s+')
                       New-Object psobject -Property $Props | Select-Object "SKU ID", "LICENSE NAME", "LICENSE DESCRIPTION", "LICENSE STATUS"
        })

$Var = "Office Activated "
for ($i=0; $i -le $ActivationStatus.Count-2; $i++) {
    if ($ActivationStatus[$i]."LICENSE STATUS" -eq "---LICENSED---") {
        $Var = $Var + "OK "
        }

    else {
        $Var = $Var + "Bad "
        }
        }

If ($Var -like "*Bad*") {

    echo "Office Not Activated"
}
else
{
    echo "Office Activated"
}

That script runs the Office activation VBScript and saves the output to a text file in the user’s TEMP directory. It reads the created text file and dumps the entire lot into a variable called Things (I was experimenting, I couldn’t think of a better name once I had finished and hey, it worked! If it ain’t broke don’t fix it). It converts the text into a series of PowerShell objects, using the series of dashes to separate them, replacing any colons with equals signs and excluding the “Processing” and “Exiting” lines. It uses the ConvertFrom-StringData command to add and populate properties on the objects, which is why the colons needed replacing. It then selects the particular properties that I’m interested in. The whole lot gets put into an array called ActivationStatus which I can now use to do what I need to do.
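If you want to eyeball what the parsing produced before wiring it into DCM, run the script in an interactive session and then list the name and status of everything it picked up with something like this:

# Quick manual check of what ended up in the array (not part of the DCM script itself)
$ActivationStatus | Format-List "LICENSE NAME", "LICENSE STATUS"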

The script creates another variable called Var and pre-populates it with a bit of text. It runs through all but the last object in the ActivationStatus array (if you look at the text file output, you’ll see that the series of dashes appears twice at the end, so my little routine creates a blank but not null object at the end of the array) and checks to see if the “LICENSE STATUS” property is equal to “---LICENSED---”. If so, it appends “OK ” onto the end of Var; if not, it adds “Bad ”. Finally, the script looks at Var to see if the word “Bad” appears in it and echoes back to ConfigMgr that Office is either activated or not activated accordingly.

The remediation script looks like this:

cscript "C:\Program Files (x86)\Microsoft Office\Office 15\ospp.vbs" /act

Simple, no?
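Incidentally, if your clients aren’t picking the KMS host up from DNS automatically, ospp.vbs can also point Office at a specific host and port before you attempt the activation. I haven’t needed these here, so treat them as a reminder rather than part of the fix; substitute your own KMS server name:

cscript "C:\Program Files (x86)\Microsoft Office\Office15\ospp.vbs" /sethst:kmsserver.domain
cscript "C:\Program Files (x86)\Microsoft Office\Office15\ospp.vbs" /setprt:1688
cscript "C:\Program Files (x86)\Microsoft Office\Office15\ospp.vbs" /act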

When you’ve created the Detection and Remediation scripts inside ConfigMgr, create a Compliance Rule which looks for a string called “Office Activated”. Then, as always, either create a new baseline and deploy it to a collection or add it to an existing one.

DCM Script – Detect if a Mac is a Member of the Domain and Join If Not

As I’ve said before, Macs can be a pain in the backside when you’re trying to manage a lot of them. One of the particular bugbears that I’ve found is that they have a habit of unbinding themselves from your Active Directory domain for no apparent reason. Usually this means a helpdesk call because someone can’t log on, followed by disruption, annoyance and… well, you get the idea.

This script is a bit of a kludge. My Bash isn’t the best by any stretch of the imagination and I’ve put detection and remediation into the same snippet because, for some reason, I couldn’t get a separate remediation script to work. No matter. It’s not ideal but it still works. Anyway, the script looks like this:


# Pull the AD forest name out of dsconfigad's output and append "_Member" to it
DOMAIN_STATUS=$(dsconfigad -show | awk "/Active Directory Forest/" | cut -d "=" -f 2)"_Member"

# Already bound to the right domain? Report back and stop here
if [[ ${DOMAIN_STATUS} == "{{domain.fqdn}}_Member" ]]; then

echo "OK"

exit 2 # already a domain member, exit script

fi

# Not bound, so try to join the domain
dsconfigad -add {{domain.fqdn}} -user {{user}} -password {{password}} -force
EXIT_CODE=$?

# If the join failed, pass the error code back to ConfigMgr
if [[ ${EXIT_CODE} != 0 ]]; then

echo "There was an error. Code is " $EXIT_CODE
exit ${EXIT_CODE}

fi

echo "OK"

Change anything in dual braces to reflect your environment.

The script runs a command called dsconfigad which gets information about the Active Directory domain that the Mac belongs to. It trims out the FQDN of the domain, appends _Member onto the end of it and puts it into a variable. I’m adding _Member to the end of the string because if the Mac isn’t a member of a domain, dsconfigad returns a null value and the variable doesn’t get created.

The script compares the output with what it should be. If it matches, it returns “OK” to ConfigMgr and exits. If not, it joins the Mac to the domain and returns “OK” to ConfigMgr. If for some reason the domain join fails, the script sends the error code back to ConfigMgr instead.

As always, you set the detection rule to look for a string called “OK”, add the rule to a new or pre-existing baseline and deploy the baseline to a collection. After you do, any Mac which is managed by ConfigMgr but which is not a member of your domain will find itself joined.

As I say, I know that my Bash scripting skills are fairly minimal, so if you see a better way for this script to work, please feel free to contact me. The usual “I’m not responsible if this script hoses your Mac and network” disclaimers apply.

Controlling Dual Monitor Modes via the Command Line

This one is absurdly simple but pretty useful nevertheless.

At work, we have been getting a lot of calls recently where a teacher complains that their interactive whiteboard isn’t working properly and all that they can see on the projected surface is their wallpaper. I’m sure that anyone with experience of these things will immediately see that, of course, the whiteboards are fine and the PCs are set to extend the desktop onto a secondary display rather than clone it.

There are some big advantages to extending the desktop and I think a few of the more IT literate teachers have figured this out and decided to use it. However, they’re also forgetting to set it back when they’re finished, which upsets the next teacher who goes to use the room. That of course generates a call to us and wastes everybody’s time.

I wanted to see if there was a way to control extending or cloning displays using a script or a PowerShell command. I googled for a while and found a few third party programs which claimed they could do it but I found that they didn’t work that well. I eventually came across this page which informed me about a program built into Windows from Windows 7 onwards called displayswitch.exe. It even has some command line switches!

displayswitch.exe /clone
displayswitch.exe /extend
displayswitch.exe /internal
displayswitch.exe /external

Those are pretty self explanatory I think! I then created a couple of GPOs with WMI filters which detect interactive whiteboards (there’s a sketch of the sort of query you could use at the end of this post). Inside those GPOs are startup and logoff scripts with the following command:

displayswitch.exe /clone

So each time a PC with an interactive whiteboard attached to it is started up or logged off, it puts itself back into clone mode. Easy!
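In case you’re wondering what the WMI filter might look like, something along these lines is a reasonable starting point. The “SMART Board” match string is only an example; check what your whiteboards actually call themselves in Device Manager and adjust it, and test the query in PowerShell before putting it into the GPO (namespace root\CIMV2):

# Test a candidate WMI filter query; "SMART Board" is an example device name only
Get-WmiObject -Namespace "root\CIMV2" -Query "SELECT * FROM Win32_PnPEntity WHERE Name LIKE '%SMART Board%'"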

DCM Script – Detect and disable Intel Graphics Card Service

As I imagine the majority of corporate PCs do these days, all of the computers at my workplace have integrated Intel graphics chipsets. And why not? For a business PC they’re perfectly adequate; their 3D acceleration is good enough for Aero on Windows 7 and for anything else the vast majority of users need.

However, there is a rather… annoying feature of the drivers which I like to suppress. The driver puts an application into the System Notification Area which makes it easy for people to mess around with graphical settings and lets them change the orientation of the screen by pressing certain key combinations. I’m sure that for a lot of corporate settings this isn’t too much of a problem, but for a school or college it generates a lot of helpdesk calls because the little sods darlings like hitting those keys and turning the screens upside down.

Anyway, this DCM script detects whether the service is running, and kills and disables it if it is:

# Find any Intel graphics services; their names all start with igfx
$IntelGFXService = Get-Service | Where-Object {$_.Name -like 'igfx*'}

if ($IntelGFXService -ne $null) {

    # Pull the start mode from WMI to go with the status from Get-Service
    $IntelGFXServiceName = $IntelGFXService.Name
    $IntelFGXStartupMode = Get-CimInstance Win32_Service -Filter "Name='$IntelGFXServiceName'"

    # These two lines just write the current status and start mode into the script output
    $IntelGFXService.Status
    $IntelFGXStartupMode.StartMode

    # Report the combination of service state and start mode back to ConfigMgr
    if ($IntelGFXService.Status -eq "Running" -and $IntelFGXStartupMode.StartMode -eq "Auto")
        {
            echo "Service Started, Startmode Automatic"
        }
    elseif ($IntelGFXService.Status -eq "Stopped" -and $IntelFGXStartupMode.StartMode -eq "Auto")
        {
            echo "Service Stopped, Startmode automatic"
        }
    elseif ($IntelGFXService.Status -eq "Running" -and $IntelFGXStartupMode.StartMode -eq "Disabled")
        {
            echo "Service Started, Startmode Disabled"
        }
    else
        {
            echo "all disabled"
        }
}
else
{
    echo "all disabled"
}

That checks the status of the service and reports the status back to ConfigMgr. The remediation script looks like this:

# Find the Intel graphics service again, disable it, stop it and kill any
# igfx processes that are still hanging around
$IntelGFXService = Get-Service | Where-Object {$_.Name -like 'igfx*'}

Set-Service -Name $IntelGFXService.Name -StartupType Disabled
Stop-Service -Name $IntelGFXService.Name
Get-Process igfx* | Stop-Process

That stops the service, disables it and kills any relevant processes running alongside the service.
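If you want to check by hand that the remediation has stuck on a particular machine, a quick one-liner like this shows the state and start mode of anything igfx related:

# Manual check: list any Intel graphics services along with their state and start mode
Get-CimInstance Win32_Service -Filter "Name LIKE 'igfx%'" | Select-Object Name, State, StartMode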

Set the compliance rule to look for a string called “all disabled” and apply the rule to either a new or existing baseline. That’s it for today!


Mac Servers in a Post Xserve World

About three years ago, Apple discontinued the Xserve line of servers. This presented a problem. While the Xserve was never top tier hardware, it was at least designed to go into a server room; you could rack mount it, it had proper ILO and it had redundant power supplies. You would never run an Apache farm on the things but, along with the Xserve RAID and similar units from Promise and Active Storage, it made a pretty good storage server for your Macs and it was commonly used as an Open Directory server and a Workgroup Manager server to manage them too.

Discontinuing it was a blow for the enterprise sector, who had come to rely on the things, as Apple didn’t really produce a suitable replacement. The only “servers” left in the line were the Mac Pro Server and the Mac Mini Server. The only real difference between the Server lines and their peers was that the Servers came with an additional hard drive and a copy of OS X Server preinstalled. The Mac Mini Server was underpowered, didn’t have redundant PSUs, only had one network interface, didn’t have ILO and couldn’t be racked without a third party adapter. The Mac Pro was a bit better in terms of spec; it at least had two network ports and internally it was pretty much identical to its contemporary Xserve, so it could at least do everything an Xserve could do. However, it couldn’t be laid down in a cabinet as it was too tall, so Apple suggested you stood two side by side on a shelf. That effectively meant using 10U to house four CPU sockets and eight hard drives. Not a very efficient use of space, the things still didn’t come with ILO or redundant power supplies and they were hideously expensive, even more so than the Xserve. It also didn’t help that Apple didn’t update the Mac Pro for a very long time, so it was rapidly outclassed by contemporary hardware from other manufacturers, both in terms of performance and price.

Things improved somewhat when Thunderbolt enabled Mac Mini Servers came onto the scene. They came with additional RAM which could be expanded, an extra hard drive and another two CPU cores. Thunderbolt is essentially an externally presented pair of PCI Express lanes; it gives you a bi-directional interface providing 10Gbps of bandwidth to external peripherals. Companies like Sonnet and NetStor started manufacturing rack mountable enclosures into which you could put one or more Mac Minis. A lot of them included Thunderbolt to PCI Express bridges with actual PCIe slots, which meant you could connect RAID cards, additional network cards, faster network cards, fibre channel cards and all sorts of exciting serverish type things. It meant that, for a while, a Mac Mini Server attached to one of these could actually act as a semi-respectable server. They still didn’t have ILO or redundant PSUs, but Mac servers could at least be reasonably easily expanded and their performance wasn’t too bad.

Of course, Apple being Apple, this state of affairs couldn’t continue. First of all they released the updated Mac Pro. On paper, it sounds wonderful: up to twelve CPU cores, up to 64GB RAM, fast solid state storage, fast GPUs, two NICs and six(!) Thunderbolt 2 ports. It makes an excellent workstation. Unfortunately it doesn’t make such a good server; it’s a cylinder, which makes it even more of a challenge to rack. It only has one CPU socket, four memory slots, one storage device and no internal expansion. There is still no ILO or redundant power supply. The ultra powerful GPUs are no use for most server applications and it’s even more expensive than the old Mac Pro was. The Mac Pro Server got discontinued.

Apple then announced the long awaited update for the Mac Mini. It was overdue by a year and much anticipated in some circles. When Apple finally announced it in their keynote speech, it sounded brilliant. They said it was going to come with an updated CPU, a PCI Express SSD and an additional Thunderbolt port. Sounds good! People’s enthusiasm for the line was somewhat dampened when they appeared on the store though. While the hybrid SSD/hard drive was still an option, Apple discontinued the option for two hard drives. They soldered the RAM to the logic board. The Mac Mini Server was killed off entirely, which means you have to have a dual core CPU or nothing. It also means no memory expansion, no RAIDed boot drive and the amount of CPU resource available being cut in half. Not so good if you’re managing a lot of iPads and Macs using Profile Manager or if you have a busy file server. On the plus side, they did put in an extra Thunderbolt port and upgraded to Thunderbolt 2, which would help if you were using more external peripherals.

Despite all of this, Apple still continue to maintain and develop OS X Server. It got a visual overhaul similar to Yosemite and it even got one or two new features, so it clearly matters to somebody at Apple. Bearing this in mind, I really don’t understand why Apple have discontinued the Mac Mini Server. Fair enough getting rid of the Mac Pro Server; the new hardware isn’t suitable for the server room under any guise and it’s too expensive, and you wouldn’t want to put an iMac or a MacBook into a server room either. But considering what you’d want to use OS X Server for (Profile Manager, NetRestore, Open Directory, Xcode), the current Mac Mini is really too underpowered and unexpandable. OS X Server needs some complementary hardware to go with it and there isn’t any now. There is literally no Apple product being sold at this point that I’d want to put into a server room and that’s a real shame.

At this point, I hope that Apple do one of two things. Either:

Reintroduce a quad or hex core Mac Mini with expandable memory in the next Mac Mini refresh

Or

Start selling a version of OS X Server which can be installed on hypervisors running on hardware from other manufacturers. OS X can already be run on VMware ESXi; the only restriction that stops people doing this already is licensing. This would solve so many problems: people would be able to run OS X on server class hardware with whatever they want attached to it again. It wouldn’t cause any additional work for Apple, as VMware and others already have support for OS X in their consumer and enterprise products. And it’d make Apple even more money. Not much perhaps, but some.

So Tim Cook, if you’re reading this (unlikely I know), give your licensing people a slap and tell them to get on it. kthxbye
