Installing and Using Command Line Tools

There are many command line tools and scripts you can download and install that are very useful for Mac Admins.


Some of these tools provide installer packages that deploy the tool to the proper directory – usually /usr/local/bin – so that you and other users can run it. (/usr/local/bin is in the default macOS PATH.)

However, many of these tools, such as munkipkg or my own quickpkg, just come as a git project folder, with few or no instructions on how to set them up. The assumption is that when you use these tools, you are familiar enough with the shell to make them work.

There are several approaches to getting these tools to work for you, each with different upsides and downsides. This post will talk about a few of them.

Getting the Tool

Before you can choose a method to run the tool, you need to get it. Many admins share their scripts and tools through a hosted service like Github. My quickpkg tool, for example, is a Python script hosted as an open Github repository. When you follow that link you will see the main project page. The page has a menu area up top, a file list in the middle, and below that an area showing an introduction to the project (the ReadMe file). It is worth reading the ReadMe in case there are special installation instructions.

Download the Release

Git is a version management tool that lets you track changes throughout the coding process. Github is one popular service that hosts these projects online. Contributors to a project have the option of marking a point in the project’s history as a ‘release.’ Releases are considered tested and stable points in between less reliable development steps.

Releases will be shown on the project’s ‘releases’ page (link in the middle of the page, above the file list; see the quickpkg releases page for an example).

On the releases page you will see a list of releases with the newest on top. At the very least each release will have a snapshot of the project’s code as a zip or tar.gz archive. Some projects provide other archives or installers such as dmg or pkg as well.

Download the Current Project

Some projects do not manage releases. (You will see ‘0 releases’ in the toolbar.) You can still download the most recent version of the project: there is a large green ‘Clone or download’ button on the right, above the project’s file list. When you click that button it will expand to show some more options.

‘Download ZIP’ will simply download an archive of the current state of the project, much like the release download would.

When you download the archives, either from the releases page or with the ‘Download ZIP’ button, the resulting project folder will not be connected to the Github project any more. If you just want to use the current version, that is fine and will serve you well. If you want an updated version in the future, you simply download the newer version and replace the tool you already have.
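
If you prefer the command line, curl can fetch the same archive that the ‘Download ZIP’ button provides. The URL below follows GitHub’s archive link pattern for a branch; the repository path and branch name (‘master’ here is an assumption) should be adjusted for your project:

$ curl -L -o quickpkg.zip https://github.com/scriptingosx/quickpkg/archive/master.zip
$ unzip quickpkg.zip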

If you would rather use git to download and manage the code, you can do that here, too. However, that is a topic for another post.

Using the Tool

However you get the project, you will now have a directory with the tool and any supporting files. You can already change directory to this folder in Terminal (drag the folder onto the Terminal icon to open a new window already set to that directory) and run the tool directly:

$ cd ~/Projects/quickpkg/
$ ./quickpkg
usage: quickpkg [-h] [--scripts SCRIPTS] [--preinstall PREINSTALL]
                [--postinstall POSTINSTALL]
                [--ownership {recommended,preserve,preserve-other}]
                [--output OUTPUT] [--clean] [--no-clean] [--relocatable]
                [--no-relocatable] [--sign SIGN] [--keychain KEYCHAIN]
                [--cert CERT] [-v] [--version]
                item_path
quickpkg: error: too few arguments
This will do for tools that you use rarely. But for tools that you want to use frequently, typing the full path to the tool is quite cumbersome.

Put it in the PATH

The PATH environment variable lists the directories where the shell looks for commands. You could add the tool’s project directory to the PATH, but doing that for every tool you download would be tedious to manage.

An easier solution is to copy the tool to /usr/local/bin. This is the designated directory for custom commands. /usr/local/bin is also in the default macOS PATH.
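
You can check which directories are currently in your search path; the exact entries and their order will vary with your setup, but /usr/local/bin should be among them:

$ echo $PATH | tr ':' '\n'
/usr/bin
/bin
/usr/sbin
/sbin
/usr/local/bin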

However, copying the tool has some downsides. When the tool gets updated, you will have to copy the newer version as well. Also, some tools may require additional resources or libraries that reside in their project directory.

Instead of copying the tool, you can create a symbolic link to it in /usr/local/bin.

I keep the project folders of tools in ~/Projects so I use the command:

$ sudo ln -s ~/Projects/quickpkg/quickpkg /usr/local/bin
$ ls -al /usr/local/bin/quickpkg 
lrwxr-xr-x  1 root  wheel /usr/local/bin/quickpkg -> /Users/armin/Projects/quickpkg/quickpkg
Since symbolic links use paths, this has the advantage that when you download a newer version of the project to the same location, the link will point to the new version.

Putting links to a tool in /usr/local/bin has a few downsides (or upsides, depending on your perspective):

  • you need administrator privileges to change /usr/local/bin
  • links/tools you add to /usr/local/bin affect all users on that Mac

Set your own PATH

When you want the tools to affect only your own shell environment, you need to do a bit more work.

First you need to choose a location where your tools or links should live. I have created a directory ~/bin for that purpose.

$ mkdir ~/bin
When you don’t want anyone else on the Mac to see what you are doing in that directory, you can remove everyone else’s access to it:
$ chmod 700 ~/bin
If you want you can also hide the directory in the Finder:
$ chflags hidden ~/bin
(Use the same command with nohidden to make Finder show it again.)

(To test the following properly, you need to delete the symbolic link we created earlier in /usr/local/bin. If that still exists the shell will use that, since it comes earlier in the PATH.)

You can create a symbolic link to the tool in ~/bin with

$ ln -s ~/Projects/quickpkg/quickpkg ~/bin
However, this will still not work, since we need to add the ~/bin directory to your personal PATH.

To do that you need to add this line to your ~/.bash_profile or ~/.bashrc:

export PATH=$PATH:~/bin
(Read more about how to create a bash profile here and here. This assumes you are using bash, the default on macOS. Other shells will have other locations where you can change environment variables.)

Then open a new Terminal window or type source ~/.bash_profile so that the new profile is loaded in the current window and try running the command.

Success!

Single Brackets vs Double Brackets

In my recent post I mentioned in passing that you should be using double brackets [[…]] for tests in bash instead of single brackets.

This is the post where I explain why. I also talked about this briefly in my MacSysAdmin session: Scripting Bash

Double Brackets are a bashism

Double brackets were originally introduced in ksh and later adopted by bash and other shells. To use double brackets your shebang should be #!/bin/bash not #!/bin/sh.

Since sh on macOS is bash pretending to be sh, double brackets will still work with the wrong shebang, but your script might then break on other platforms where a different shell is pretending to be sh. Consistent behavior across platforms is the main reason sh is still around, so don’t use double brackets in sh scripts (or switch the shebang to bash if you want double brackets).

I go into detail on why to use bash over sh in this post: On the Shebang

Side note on syntax

In shell scripts you usually use tests in if or while clauses. These are tedious to write in the interactive shell. The ‘and’ operator && will execute the following statement only if the preceding statement returns 0 (success). So you can use && to write simple if … then … clauses in a single line.

if [ -d Documents ]
then
    echo "found docs"
fi

and

[ -d Documents ] && echo "found docs"

have the same effect. The second is much shorter, but as soon as the test or the command gets more complex you should revert to the longer syntax.

Alternatively, the ‘or’ operator || will only execute the following statement when the previous statement returns non-zero or fails:

[ -d Documents ] || echo "no docs"

is the same as

if [ ! -d Documents ]
then
    echo "no docs"
fi

What’s wrong with the single brackets?

The single bracket [ is actually a command. It has the same functionality as the test command, except that its last argument needs to be the closing square bracket ].

$ [ -d Documents && echo "found docs"
-bash: [: missing `]'
$ [ -d Documents ] && echo "found docs"
found docs
$ test -d Documents  && echo "found docs"
found docs

Note: in bash on macOS both test and [ are built-in commands, but as usual for built-in commands there are also executables /bin/test and /bin/[.
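
You can verify this with bash’s type builtin:

$ type -a [
[ is a shell builtin
[ is /bin/[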

A single bracket test will fail when one of its arguments is empty and gets substituted to nothing:

$ a="abc"
$ b="xyz"
$ [ $a = $b ] || echo "unequal"
unequal
$ unset a
$ [ $a = $b ] || echo "unequal"
-bash: [: =: unary operator expected
unequal

You can prevent this error by quoting the variables (always a prudent solution).

$ [ "$a" = "$b" ] || echo "unequal"
unequal

Double brackets in bash are not a command but a part of the language syntax. This means they can react more tolerantly to ‘disappearing’ arguments:

$ [[ $a = $b ]] || echo "unequal"
unequal

With single brackets you will also get an error if one of the arguments is substituted with a value that contains whitespace; double brackets can deal with this.

$ a="a"
$ b="a space"
$ [ $a = $b ] || echo "unequal"
-bash: [: too many arguments
unequal
$ [[ $a = $b ]] || echo "unequal"
unequal

Note: the = operator in sh and bash is for string comparison. To compare numerical values you need to use the -eq (equals), -ne (not equals), -gt (greater than), -ge (greater than or equal), -lt (less than), and -le (less than or equal) operators. With double brackets you can also use two equals characters == for a more C-like syntax. (Or, better, use the ((…)) syntax for arithmetic expressions.)
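
For example, a numerical comparison written in all three forms (a quick sketch):

$ a=5
$ [ "$a" -lt 10 ] && echo smaller
smaller
$ [[ $a -lt 10 ]] && echo smaller
smaller
$ (( a < 10 )) && echo smaller
smaller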

Also, when using the = to assign variables, you cannot have spaces before and after the =, while the spaces are required for the comparison operator (both with single and double brackets):

a="a"           # no spaces
b="b"           # no spaces
[ "$a" = "$b" ] # spaces!
[[ $a = $b ]]   # spaces!

Since the single bracket is a command, many characters it uses for its arguments need to be escaped to work properly:

$ [ ( "$a" = "$b" ) -o ( "$a" = "$c" ) ]
-bash: syntax error near unexpected token `"$a"'
$ [ \( "$a" = "$b" \) -o \( "$a" = "$c" \) ]

You could alternatively split this example into two tests: [ "$a" = "$b" ] || [ "$a" = "$c" ].

Double brackets interpret these characters properly. You can also use the (again more C like) && and || operators instead of -a and -o.

 [[ ( $a = $b ) || ( $a = $c ) ]]

In general, you can work around most of the issues with single bracket syntax, but the double bracket syntax is more straightforward and hence more legible and easier to type.

Double bracket features

Aside from the cleaner syntax, there are a few ‘bonus’ features you gain with double brackets.

With double brackets you can compare against the * and ? wildcards and bracket globbing patterns […]:

$ a="Documents"
$ [[ $a = D* ]] && echo match
match
$ a=hat
$ [[ $a = ?at ]] && echo match
match
$ [[ $a = [chrp]at ]] && echo match
match

You can also use < and > to compare strings lexicographically:

$ a=cat
$ b=hat
$ [[ $a < $b ]] && echo sorted
sorted

And you get an operator =~ for regular expressions:

$ a=cat
$ b="the cat in the hat"
$ [[ $a =~ ^.at ]] && echo match
match
$ [[ $b =~ ^.at ]] && echo match

Note that you should not quote the globbing patterns or the regex pattern.
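
A quick illustration of why: quoting the pattern makes bash compare it as a literal string, so the match fails:

$ a=hat
$ [[ $a = ?at ]] && echo match
match
$ [[ $a = "?at" ]] && echo match    # no output: 'hat' is compared literally to '?at'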

Summary

  • you should use bash for shell scripting on macOS
  • when using bash, you should use double brackets instead of single brackets
  • double brackets are safer, easier to type and read, and also add a few neat features

defaults – the Plist Killer

Last week, fellow MacAdmin Kyle Crawford discovered that on macOS High Sierra the defaults command will delete a property list file with invalid plist/XML syntax, even when you just attempt to read data. Erik Holtham has a more detailed OpenRadar bug.

Patrick Wardle has found the relevant code, which deletes the file when it doesn’t validate.

This is bad. Thanks to all involved for finding, documenting and sharing this.

This is new behavior in High Sierra. I am not yet sure in which version of High Sierra this behavior was introduced. It makes sense in the context of an application attempting to read a settings file, but the defaults tool deleting arbitrary files is, of course, dangerous.

Update: This behavior has been fixed in 10.13.4. However, it is still good practice to avoid defaults for anything other than actual preferences files.

What to do?

As usual, don’t panic. This only affects Macs running High Sierra, and only corrupt or broken files. However, if a script accidentally points the defaults command at the wrong file and that file does not parse as a valid plist, defaults will delete it. So you have to use it with care.

In a script, it is probably good practice to verify a file with the plutil command before you attempt to modify it with the defaults command:

if ! plutil -lint path/to/file.plist; then
    echo "broken plist"
    exit 1
else
    defaults path/to/file …
fi

Alternatives to defaults

Alternatively, you can and should use plutil or PlistBuddy to read and modify property list files.

Learn more about plutil, PlistBuddy and other tools to read and write property lists in my book: “Property Lists, Preferences and Profiles for Apple Administrators”

plutil is unfortunately not really useful for reading a single value from a property list file: the extract verb will show any value as its own plist. The plutil command is useful for editing existing plist files, though. (Read details on the plutil command.)
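
For example, extracting the size value from the hypothetical berry.plist used below produces a complete plist document rather than the bare value:

$ plutil -extract size xml1 -o - berry.plist    # prints an XML plist whose only content is the value of 'size'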

PlistBuddy, however, is very useful for both reading and writing values in a property list file:

$ /usr/libexec/PlistBuddy -c "print size" berry.plist
$ /usr/libexec/PlistBuddy -c "set size enormous" berry.plist

PlistBuddy has the additional advantage of allowing you to add or edit values nested deep inside dict or array structures.
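
For example, building and then reading a nested structure in the same hypothetical berry.plist (the attribute names are made up for illustration):

$ /usr/libexec/PlistBuddy -c "Add :attributes dict" berry.plist
$ /usr/libexec/PlistBuddy -c "Add :attributes:colors array" berry.plist
$ /usr/libexec/PlistBuddy -c "Add :attributes:colors:0 string red" berry.plist
$ /usr/libexec/PlistBuddy -c "Print :attributes:colors:0" berry.plist
red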

You can get more information on PlistBuddy in its man page or my book.

The interactive mode of PlistBuddy is also very useful.

So the defaults command is dead?

No.

Apple has been warning us for quite a while, in the defaults command’s man page, not to use defaults for generic property list file editing and parsing:

WARNING: The defaults command will be changed in an upcoming major release to only operate on preferences domains. General plist manipulation utilities will be folded into a different command-line program.

As this warning states, the defaults tool reads and writes data to plist files through macOS’s preferences system. This has the advantage that the tool gets (and changes) the current value, whether it is cached in memory or not. When an application is listening for notifications that a preference has changed (not many do), it will be notified.

Files for preference domains are usually stored in /Library/Preferences/, ~/Library/Preferences or their ByHost subfolders. Sandboxed applications will have their preference plist files in their container.
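
For example, the Dock’s tile size lives in the com.apple.dock preference domain; defaults addresses it by domain and key rather than by file path (64 is just an example value):

$ defaults write com.apple.dock tilesize -int 64
$ defaults read com.apple.dock tilesize
64
$ killall Dock    # restart the Dock so it picks up the new value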

There are, however, many other property list files which are not part of the user defaults system: launchd files, configuration profiles, and AutoPkg recipes, to name just a few.

Mac Admins commonly use the defaults tool, despite Apple’s warning, to create, read and edit generic plist files. As mentioned above, plutil, PlistBuddy or direct manipulation through Obj-C, Python or Swift, are better choices for generic plist files.

You can learn about all these options in my book: “Property Lists, Preferences and Profiles for Apple Administrators”

And now Server.app, too!

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

There is a common understanding that celebrity deaths come in groups of three. Maybe Apple was aiming for that, too. After killing off Imaging and NetBoot/NetInstall, now there is a new support article:

Prepare for changes to macOS Server – Apple Support.

In this article Apple announces they will change the macOS Server app “to focus more on management of computers, devices, and storage on your network.” All other services will be deprecated.

The article lists the deprecated services and provides links to some open source alternatives.

  • Calendar
  • Contacts
  • DHCP
  • DNS
  • Mail
  • Messages (Jabber)
  • NetInstall (NetBoot)
  • VPN
  • Websites (Apache)
  • Wiki

Initially, these services will remain available when you upgrade from an older version where they are activated, but they will be hidden in new installations. In some unspecified future version of macOS Server, the services will be removed.

There are a few services not listed here. They were already deprecated or moved into the ‘normal’ macOS in the last Server release. Open Directory and Software Update Server were deprecated and automatically hidden in Server 5.4 (the version which was released with macOS High Sierra). At the same time, Content Caching (Caching Server), File Sharing, and Time Machine services moved from the Server app to the Sharing preference pane on macOS (and are available on every Mac, without having to purchase macOS Server). Xcode Server has moved into Xcode 9.

If you are using macOS Server for one of the above solutions, what should you do?

Don’t Panic

Apple is not killing off these services immediately. Server 5.5, which was released together with macOS 10.13.3, still has all the ‘normal’ services. Apple will hide the services in the UI in a future release to discourage their use. For the time being you can continue to use them. However, you need to start planning your move away from macOS Server.

While many Mac administrators would argue that macOS Server is not and never was a “professional” server, or even a server for any kind of deployment, it has found a niche in some small network environments. While the UI was certainly never perfect, it has always been somewhat easier than messing with config files.

The replacements that Apple suggests in the article are worthy solutions if you need to maintain the services locally. Many of them are the open source projects that Apple itself used inside macOS Server. While this removes the UI for monitoring and configuring the services, it also takes Apple out of the loop for updates and security patches. By getting the software directly you can get more timely updates. It also requires more maintenance and effort from the administrator, especially when you are using multiple services.

To the Cloud!

However, many of the above services are better replaced by cloud-hosted services, such as Office 365 or Google for Business/Education. These will also cover user identity management (replacing Open Directory) and file sharing with cloud storage systems.

For obvious reasons, DNS, DHCP and VPN cannot be run in the cloud. For small networks, these services are usually run on the router. However, if your router cannot run these services then you can run them on a dedicated box.

For my home network I am considering (i.e. finally found an excuse for) a Raspberry Pi.

NetBoot is still dead

Apple recommends NetSUS and BSDPy for NetBoot and NetInstall. These are certainly worthy solutions to host your nbi folders.

However, NetInstall functionality is not present on the iMac Pro (this has been discussed before). It is to be expected that future new Mac hardware releases will follow the iMac Pro.

If you currently have a NetBoot/NetInstall-based imaging or installation workflow hosted on macOS Server, you need to be exploring alternative onboarding/setup workflows instead. DEP + MDM is the solution that Apple is pushing here.

Whatever solution you find for your setup, it will require a lot of effort to get working smoothly. Rather than spending time and effort to move your NetBoot setup to BSDPy or NetSUS, leave it where it is for as long as it still works and spend the time on building a new supportable and supported workflow instead.

Whither macOS Server?

The Apple support article states:

macOS Server is changing to focus more on management of computers, devices, and storage on your network.

I would guess that ‘storage on your network’ means Xsan, which some people still use. It seems weird to leave this as part of macOS Server and not split it out like the other services. On the other hand, it is hard to imagine that this refers to some new server management feature.

What remains is Profile Manager.

Profile Manager is considered Apple’s reference implementation of the MDM protocol. Most would not recommend using it in professional environments and few do (even fewer happily).

Now that Apple is effectively reducing the functionality of macOS Server to Profile Manager, the question is: will it remain a mere reference implementation, or will Apple finally put the resources behind Profile Manager to make it a usable, affordable, and scalable solution?

Or maybe I will get to write Profile Manager’s eulogy in a few years’ time as well. Only time will tell.

Does this mean Apple is leaving Enterprise business?

Really!? No.

In some ways Apple has never been able to enter Enterprise business with their own server products, hardware and software.

But they have been able to enter the Enterprise with their devices: Macs, iPhones, and iPads. And because those devices are popular and trendy with Enterprise users, the Enterprises need to support them. That is what the MDM protocol and DEP are for.

With this step, Apple is making it clear that they are not even trying to play in the server business. They are happy to provide the MDM protocol and a reference implementation. They will support the infrastructure necessary to make DEP, MDM and VPP work. Apple is not interested in being the hardware that runs DNS, DHCP, file shares, Mail, calendaring and chat etc. Maybe not even the MDM server. Apple is very happy to leave this business to others. Apple sells devices.

macOS Server has been a neglected stepchild since the demise of the Xserve. I am surprised it took Apple this long to make it obvious.

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

Get an Icon for your Mac

A few weeks ago I had a post about getting the “Marketing Name” for a Mac.

At that time I was also trying to get an icon or image file for the current Mac model, but could not find a way to do it.

Since then I have found that the AppKit framework provides a method to get an image for the Mac.

[NSImage imageNamed: NSImageNameComputer] # Objective-C

NSImage(named: .computer) # Swift

To get this image data into a file requires passing it through some other classes. However, this is possible in Python on macOS. (I had some trouble, but figured it out with some help in the MacAdmins Slack #python channel, thanks!) Several posts were recommended as reading and watching along the way.

In case you need an image file for the Mac, here is the code. It will generate a 512px image for the current Mac. The two lines you may want to change are line 7 for the size of the image and line 16 for the filename.

Update: improved version here (not by me)

NetInstall is Dead, too

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

Tim Perfitt of Twocanoes Software (Winclone, SD Clone, etc.) got an iMac Pro.

For obvious reasons he immediately looked at the details of the new boot process, and found some details that had so far only been speculated about or were unknown. Most of this happened on Twitter, which is quite hard to piece together afterwards, so here is a summary. (Several members of the Mac Admin community were involved; thanks to all of them!)

Update: Tim Perfitt now has an excellent, detailed post on his findings here.

There are probably a few more details which will come out as other admins get their iMac Pros in the following days and weeks. But this gives us enough confirmation of facts to know:

NetBoot is dead!

Don’t Panic!

So the news of NetBoot’s demise has not been exaggerated. Also, it is to be expected that all new hardware from Apple going forward will have Secure Boot and probably no NetBoot.

MacAdmins will all need to plan ahead and look at the options that are on the table for Mac management going forward.

There is speculation that the current TouchBar MacBook Pros might get Secure Boot added in a future update to 10.13. Even if that is not the case, it is a safe assumption that future Mac releases will contain the T2 system controller or something similar and have the same Secure Boot features (or lack thereof).

Deployment Strategies Going Forward

Device Enrollment Program and Mobile Device Management (DEP + MDM) is certainly Apple’s deployment method of choice. They have been pushing towards this for a few years now, and it has also been the way to manage iOS devices.

DEP is a process where a new device (iOS or Mac) is registered to your organization at purchase and you can assign it to your Mobile Device Management server through Apple’s website.

At its very first boot, the Mac will check with Apple’s DEP servers and get the MDM’s information, register with the MDM, and then the management settings take over, adding configuration, software and, with some management systems, local tools to install and manage non-App Store software.

When a Mac’s system volume is erased and macOS is re-installed the process starts over, keeping the Mac managed by the same MDM.

Apple has also made DEP+MDM a requirement for managing Kernel Extensions without user interaction. Furthermore, Apple states that the “approved” level of MDM (granted either by DEP or by explicit user interaction) will be required for more configurations in the future.

This is similar to “supervised” iOS devices. However, Apple provides two means to supervise an iOS device: with DEP, or by manually connecting the device to a Mac running Apple Configurator. The process with Configurator can be automated, aside from the manual connection. On macOS, the manual approval (“user-approved” MDM enrollment) cannot be automated and cannot even be performed over remote control.

In general, DEP+MDM works well. It enables certain management styles and workflows that were not possible before. An organization can order a device from Apple or a reseller and have it sent directly to an employee. When the employee unboxes the new device it is registered with the organization’s MDM and receives configuration profiles and software, even when the device is off-site.

Apple and MDM vendors like to call this workflow “zero-touch” deployment, because the IT department does not have to touch the device. This is a great improvement for many Mac administrators.

However, there are a few downsides to DEP+MDM:

External Dependency

Apple’s DEP servers are an external dependency and a single-point-of-failure in the deployment workflow. There were a few outages of the DEP system this year. Even worse, Apple does not include the DEP service in their status overview page. This leaves Mac admins wondering if a problem is on their side or with Apple.

DEP availability

DEP is not available in all regions where Macs are sold. With imaging and NetInstall off the table, this leaves only manual installation and MDM enrollment/approval for management.

Also, a client has to be online and the network has to allow access to Apple’s servers. This requires un-proxied access to Apple’s 17.* IP range. However, especially when you are outside of the US, the processes at installation may attempt to connect to other IP addresses as well.

With Apple Configurator an administrator can also add existing iOS 11 devices into DEP, not just new ones. This option is not available for existing Macs.

User Interaction

DEP + MDM allows to automatically enroll devices without requiring IT to touch a device. However, the process is not automated. There has to be a user present to interact at several points with the Mac for the initial setup. While profiles can manage and reduce the user input required during setup, there are a few steps you cannot automate away.

Also, the enrollment process will only install the management tools and then show the user the desktop. Actual installation of software packages takes place in the background and might take a long time. This can leave the user confused as to what is going on. Certain management options, such as enabling FileVault, may require a logout or restart, interrupting whatever the user started to do on their new Mac or leaving the Mac in an insecure state until the user restarts.

This downside of DEP is so glaring that many open source solutions have sprung up to provide a user interface for the post-DEP initial configuration cycle.

(I am probably missing some, let me know!)

This innovation and initiative of individual admins and the community as a whole is admirable. Thanks to all who provide!

Either way, administrators using DEP + MDM have to be aware of the time required for the download and installation of large software packages and choose which pieces are absolutely required and which can be deferred to be installed later at a time of the user’s choice through a self service portal.

Software and Configuration Management

DEP handles the initial connection to the MDM. The MDM can push and enforce profiles to control some settings. The MDM can also initiate installation of (Mac) App Store software through the Volume Purchase Program (VPP).

To manage software and configurations that are not in the Mac App Store or not supported by configuration profiles, administrators need to install a local tool on the client system.

The MDM protocol provides a command called InstallApplication that instructs a client Mac to download a pkg file and install it. For example, the Jamf Pro management suite uses this to install the jamf binary, which then takes over and performs many other management tasks that the MDM protocol does not provide.

Some management systems (so far I know of SimpleMDM and AirWatch, let me know if I missed any) allow admins to provide their own custom installer to install a local management tool (e.g. Munki or Puppet).

Notably, Apple’s reference MDM implementation, Profile Manager (part of the macOS Server app) does not allow for custom installs.

Erik Gomez has done outstanding work documenting his experiences with this process. The entire series is worth reading, but if you want to catch up quickly, the recent posts have a good summary of the status quo and a real-world implementation.

Offboarding, Re-installation and Re-purposing

Once the initial configuration is complete, MDM + VPP + management system will take care of installations and software updates. However, there are situations where you will want to ‘nuke and pave’ or ‘wipe and re-install.’

There are many reasons an admin may want to do this; most of them involve ‘configuration drift’, i.e. over time, as more and more software gets installed and configured on a given system, errors and conflicts pile up and cause problems. At this point it is usually easier to ‘nuke and pave’ or ‘erase and install’ than to track down the actual conflicts.

In an ideal world, all the configuration owned by a user would be exclusively in that user’s home directory and you would only have to delete and recreate that user, rather than the entire system. However, we do not live in that world.

Many pieces of software store configuration in central locations, but still assume that these central locations are writable by the user. In most setups this is the case, because by default users are admins on macOS. However, this spreads configuration and other data all through the system, making it impossible to isolate all changes.

With imaging and NetInstall, admins could use the same workflow for the initial installation and configuration as for subsequent re-installations.

Furthermore, the process could be automated to the point where no local interaction was necessary, or just the minimal interaction of someone restarting a Mac and holding the ‘N’ key. From then on, all steps could run fully automatically and without interaction. On top of that, imaging with block copy was fast.

With High Sierra, Imaging is not supported anymore, except in very specific circumstances. Automated Installations with NetInstall are broken. And with the iMac Pro the option for NetInstall goes away completely. (Which may explain why automated NetInstall was not fixed in High Sierra.)

This leaves manually booting to (Internet) Recovery, erasing the startup volume in Disk Utility and re-installing macOS as the only means of re-installing a Mac. After the installation DEP should re-connect the Mac with the MDM and management should take over.

However, you cannot completely automate the DEP interaction, so after waiting ~30 minutes for the installation to complete, someone has to confirm a few dialogs before managed installation can kick in and do the rest of the work. All this interaction is time-consuming and error-prone.

“Erase all Contents”

On iOS, you rarely have to re-install the system. Instead there is a function ‘Erase All Content and Settings’ which restores a device to a clean unconfigured state, from which DEP and then the MDM can take over. You can even send the wipe command over the air with an MDM. (On iOS you also have to manually confirm a few dialogs before DEP and MDM can take over, but the entire process is much faster.)

Until Apple provides this feature on macOS locally and remotely, admins who rely on fast restores either have to stay on Sierra, postpone new hardware purchases, or revisit and redesign their workflows.

This mostly affects education customers with lab deployments. Other large Mac deployments, where a Mac is “owned” by a single user, are less affected by this.

Sidenote on Mac App Store and VPP

Mac App Store applications have to be sandboxed, which means they can’t even access all of the user home directory, only their designated sandbox. Managing these applications and their data is much easier than managing other Mac applications and tools, where anything goes.

On iOS, the App Store and VPP are the only means of distributing applications, and this is much simpler and more manageable. However, the App Store rules prohibit entire classes of tools and services. While the Mac App Store enforces similar rules, users can still download and install applications and tools outside of the Mac App Store. For most Mac users, this is the defining advantage of macOS.

However, this higher complexity of software and deployment methods requires more complex deployment and configuration workflows.

MDM + VPP cannot handle this complexity, which is why management systems, such as Jamf Pro, Filewave, and Munki, exist and need to exist for macOS management.

I also need to say that not all software and installers need to be as complex as they are. Most software that comes with complex installation tools is unnecessarily complex and error-prone. Often the developers are just taking a cheap shortcut by assuming the current user has write access to the application bundle, or has admin privileges, etc.

However, there are still entire classes of professional software that, even if they did simplify their applications and installers, would not currently be allowed in the Mac App Store (IDEs and developer tools, hardware drivers, certain virtualization software features, anything that needs root access, etc.)

Also, the App Stores (on macOS and iOS) have features that cannot be purchased or distributed through VPP: In-App Purchases and subscriptions, for example. Recently, pre-orders at a special price were added as a feature to the App Stores, but these cannot be used with VPP either.

Overall, I would love it if all software were available in the (Mac) App Store and could be managed and distributed with VPP, but we are a long way from that reality.

Why all this?

By now it seems fairly obvious that Apple wants to get macOS system security to a point where only Apple can ever affect and change system software and firmware. That is a worthwhile goal. It means that your data is secure on an encrypted drive, the decryption key is locked in the Secure Enclave, but Apple can design solutions like Touch ID and Face ID to unlock everything quickly.

To close the loop on all this security, the system needs to be able to verify and confirm that the software running the system (both the OS and the firmware) is up to date and in its original state.

Imaging, NetBoot and NetInstall bypass most of this security. I believe it could be possible to create a networked installation workflow with all the security in mind, but it might just not be worth the effort. Apple seems to think this is not worth doing right now.

And remember that imaging and NetInstall are not valuable in themselves, but they are valuable as tools to achieve something useful, namely: automated installation and configuration of Macs.

DEP + MDM + VPP gets us there in many situations. In many use cases, DEP allows for workflows that were not possible before. Other technologies in macOS High Sierra (snapshots) promise some more useful tools, but they are not quite there yet.

Right now the gap between what we currently use as admins and what will come down the road is getting really wide.

Where to go from here?

There are two things a system administrator needs to balance:

  • provide a stable and efficient environment to manage the computers, software, configurations, and users
  • adapt the environment and workflows for new and future requirements and technologies

These two goals are often at odds, and balancing them is a circus act in the best of times. Right now Apple is making our collective lives harder by shaking the rope we are standing on and throwing a few new balls into the juggling act at the same time.

Apple and the MDM vendors are providing a powerful new solution, DEP + MDM, which works well for some deployment styles (1-to-1 deployments).

History has shown that going against Apple’s vision of how their devices should be used will not result in a smooth experience. Take a good look at your deployments using imaging and NetInstall: might a different deployment scenario work?

In education, labs are often used because certain software is too complex or expensive to be provided on all laptops. In this case, virtualization, switching to another software solution, or a different OS might be a solution. (And please let your Apple rep know you are considering switching to another OS. That is the greatest leverage we have to push Apple to support better workflows.)

To buy some time, you could hold on to Sierra for a while longer. Right now, all Macs except the iMac Pro still support Sierra. As new hardware gets released next year, your options will dwindle. Maybe your organization can accelerate or postpone purchases with that in mind. This cannot last forever, but it will buy you some time.

However, any new Mac releases will, like the iMac Pro, require High Sierra and (most probably) have the same Secure Boot features. How will you support those when they are purchased? The high price of the iMac Pro might discourage purchases, but for how much longer? You will need to have an answer in place.

(Also, this is great argument that you need an iMac Pro for testing, now… 🙂 )

Your answer may very well be that you will have to accept the extra manual effort required to (re-)install High Sierra based Macs. However, then you had better have an idea of how much more effort and time will be required, to justify the extra workload to your organization.

Test, test, test!

Do you have a DEP + MDM solution in place? Did you get the budget for it? Are you testing deployment workflows with it? For the past year and more, the writing has been on the wall that this is the way to go. If you haven’t started on this by now, you really, really have to.

Do you have an idea or solution for how to smooth out the new deployment workflow for you or your users? Whether it is just an idea or a finished workflow, please discuss and share it in the MacAdmin community. The MacAdmins Slack is a great place to start. Maybe someone in the community will figure out how to use APFS snapshots to quickly and reliably restore a Mac to a well-known state before Apple does.

There are already some interesting ideas out there.

Maybe you’ve already done all this and found a setup that works for you. Well done! This would be a great time to present your solution and how you got there at a Mac Admin meeting or conference. Many other admins would love to learn from you. (Or just write a blog post.)

Talk with your Apple Reps, file bugs, etc. Don’t expect Apple to bring back imaging or NetInstall, but do point out the shortcomings of Apple’s solutions going forward.

The orchestration for Apple to get the new hardware and software components and pieces in place must be enormous. Some pieces take longer and with patience we will see how everything fits together.

We are living in interesting times!

Happy New Year 2018!

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

iMac Pro Implications for Mac Admins

The first iMacs Pro will ship this week to some lucky buyers, just in time to keep Apple’s promise of shipping this year.

Now that we have all gotten over the sticker shock when you max out the configuration in the Store, what does the new tech in iMac Pro mean for admins?

Secure Boot

First is the new secure boot. iMac Pro comes with Secure Boot enabled and External Boot disabled. You can disable (or moderate) the settings in the new ‘Startup Security Utility’ in Recovery.

With secure boot enabled, a Mac will verify the integrity of the OS and confirm with Apple before booting. It may require an update to be installed before continuing to boot.

Somewhat surprisingly, Secure Boot on the iMac Pro will verify the integrity of a BootCamp/Windows installation as well as macOS. (The continued persistence of BootCamp makes me wonder what Apple uses it for internally.)

The support article seems to imply that on the strongest setting, the iMac Pro might force an update before you can boot. We will have to wait and see how far back Apple will “trust” older versions of macOS.

By default an iMac Pro will not boot from an external device. This setting can be changed in the ‘External Boot’ area of the Startup Security Utility.

You can still boot to the Startup Manager with the option key but when you select an external drive you will get an error message. You can only select internal drives with the option key.

Both of these settings can probably only be disabled manually in Recovery mode. This renders most automated installation and imaging procedures useless. Also, the support article states that you have to enter a local administrator password to change the setting. This can be difficult in environments where a tech or admin might not know a local password.

NetBoot

Prohibiting External Boot will (probably) also prohibit NetBoot and NetInstall. In fact, Apple updated their support article “Create a NetBoot, NetInstall, or NetRestore image” with the note:

iMac Pro computers don’t support starting up from network volumes.

Also the support article “Mac startup key combinations” has added this to the description of the ‘N’ key:

iMac Pro doesn’t support this startup key.

It is as yet unclear whether this means that the iMac Pro will not NetBoot under any circumstances, or whether it will NetBoot, but not in the default configuration, so that you have to disable the boot security first.

The phrasing in the articles seems clear, but it may be an error or omission. If you happen to get your hands on an iMac Pro and can test NetBoot/NetInstall, please let me (and everybody else) know.

The other question that remains is whether Internet Recovery still works on the iMac Pro. There has not (yet) been an amendment to the Internet Recovery support article. Internet Recovery is a form of NetInstall as well, albeit with a different discovery method.

Imaging is dead and NetInstall is not doing so well

So, as predicted, the iMac Pro puts yet another nail in the coffin of imaging. You will have to run the iMac Pro in a lowered security mode for it to accept an OS that was not installed on the machine itself and verified by the internal T2 system controller chip.

While it is still possible to disable the boot security, this has to be done manually. There is no way to automate the deactivation, much like you cannot automate disabling SIP.

Finally, NetInstall might not work at all, even when the boot security is disabled. And even if NetInstall does still work on the iMac Pro, NetInstall is still quite broken in High Sierra: additional pkgs have to be in just the right format to work, automated installations are broken, and you cannot initiate a NetInstall remotely through a script or the management system, but have to be physically at the machine and hold the ‘N’ key. (All of these issues affect all Macs, not just the iMac Pro.)

And even when you have managed to get all of these to work, new security features like UAKEL and UAMDM might still require an administrator to touch all the machines again after re-imaging.

“Zero-touch” deployment

When you consider the standard use case, where a Mac is in the possession of a single user (whether it is owned by that user or by the organisation), then most of these problems are fairly easy to work around with some user guidance and education. DEP enforces enrollment, and tools like SplashBuddy and DEPNotify can make the process more understandable for the user.

“Zero-touch” deployment in this case means that the IT department will not have to touch the device. Even though you can automate much of the configuration, the enrollment is not entirely automatic: the process still requires the user to be at the Mac and fill in or confirm some dialogs.

However, for other deployment scenarios, especially general access labs in education, this breaks existing workflows. You never know what the users (and applications) are going to do to a system, even if they don’t have administrative privileges. Re-imaging rather than figuring out which configuration is broken is a quick and efficient remediation for many problems.

Many professional software packages are notoriously hard to install in an automated fashion and even harder to de-install cleanly. In addition, this kind of software tends to have very strict licensing terms and high prices. “Wipe and re-install” is a simple and fast workflow to ensure software and drivers are removed cleanly and to repurpose a Mac (or an entire lab of Macs) for a different task (e.g., switch a video or audio lab to a lab with engineering and math software). Many admins have fully automated, touch-free workflows that can be started remotely through ssh, Apple Remote Desktop, or a management system.

Not only do all of these workflows have to be re-visited and re-built without imaging, but they will not be able to run without user interaction. Without NetInstall (or if NetInstall remains broken) the user interaction may be non-trivial.

To wipe and re-install an iMac Pro, an admin has to boot to Recovery, manually erase the drive in Disk Utility and then start the installation process. The tech or admin will have to know and enter a local administrator password. Even with DEP, there are a few dialogs after the installation that need to be confirmed manually before DEP and any automation from the management system can start their work.

True “zero-touch” (re-)deployment, where no one has to physically touch the Mac, is not possible with Apple’s currently supported toolset for High Sierra and the iMac Pro.

The Missing Piece

If macOS had an “Erase All Content and Settings” option like iOS does, then you could do a quick reset and, with DEP + a management system, quickly restore a Mac to the previous (or a new) configuration. On iOS this is achieved by keeping the system on a separate volume from apps and user data. This separation would not be quite so easy on macOS, but with APFS snapshots the system could create (and preserve) a snapshot after a clean installation and provide hooks for scripts and management systems to restore to it.

It is quite frustrating that this option does not yet exist. Apple is removing older workflows from the toolset without providing a functioning alternative. If Apple decides to implement this function in macOS and enable its automation from an MDM, then you get the best of both worlds: the advanced security, plus automation and management for admins!

Make Noise

Apple seems to be unaware of or indifferent to these methods and workflows. Most enterprise customers might not be affected by this. Those customers that are affected need to let Apple know through the usual means: your sales reps, your support contact (if you have one), and by filing bugs.

If you are at an institution that is considering buying a classroom full of iMacs Pro, you will have considerable financial leverage with this deal. So let your sales reps and engineers know of your issues, but also be understanding that they might not have a solution for you right away.

Even so, DEP and MDM will be a major part of whatever solution you will have to use in the future. If you have not started working on your implementation yet, there is no time like the present.

Maybe you can use this article to convince your management to purchase an iMac Pro so you can test it. If that actually works, let me know. 😉

Interesting Software on Sale in the AppStores

I am working on a post on the iMac Pro, but then Apple dropped a few interesting support articles and I have to re-write the entire thing.

Until then, I found a few interesting sales going on in the App Stores (iOS and Mac). I am not sure how long these sale prices will last, so go get them! I use all of these apps regularly and recommend them often. App Store links are affiliate links, so every purchase supports Scripting OS X.

Happy Holidays!

Edovia Screens: iOS, Mac

My favorite VNC/screen sharing application on iOS. Sale is for both Mac and iOS versions.

Byword: iOS, Mac

This is the app I use to write the posts for the newsletter and this blog. It is a simple yet useful markdown editor. Publishing directly from Byword to a weblog site is a free In-App-Purchase. (Not sure if this is part of the sale.)

Duet Display: iOS

Turn your iPad or iPhone into a second (or third) screen for your Mac. Duet Display requires a USB-to-Lightning cable connection to the Mac, but then you can use that nice Retina iPad screen as extra desktop space.

Junecloud Deliveries: iOS, Mac

My favorite package tracking software. This sale is for the Mac App Store.

Get the “Marketing Name” for a Mac

I am working on making our onboarding/installation process nicer. For that, I decided it would be nice for SplashBuddy to say “Welcome to your MacBook (Air|Pro)/iMac/Mac (mini|Pro).”

You can get the Mac model name easily enough from System Profiler:

$ system_profiler SPHardwareDataType
Hardware:
    Hardware Overview:
      Model Name: MacBook Pro
      Model Identifier: MacBookPro14,3

However, this is only the “model family.” When you go into “About this Mac” it gives you a more detailed “name” like “MacBook Pro (15-inch, 2017).”

Friendly people on the MacAdmins Slack pointed me towards a property list file in the ServerInformation framework:

/System/Library/PrivateFrameworks/ServerInformation.framework/Resources/English.lproj/SIMachineAttributes.plist

This contains the “Marketing Model Name”, among other information, for each type of Mac, keyed by the model identifier, which you can also get from system_profiler or with sysctl -n hw.model. Even better, this plist is localized, so you can get the marketing name in the current user’s preferred language.
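
If you do not need localization, you can also read the English localization of that file directly in the shell. This is just a quick sketch; the _LOCALIZABLE_ and marketingModel key names below are what the file used at the time of writing, so verify them on your system first:

$ model=$(sysctl -n hw.model)
$ /usr/libexec/PlistBuddy -c "Print :$model:_LOCALIZABLE_:marketingModel" /System/Library/PrivateFrameworks/ServerInformation.framework/Resources/English.lproj/SIMachineAttributes.plist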

The easiest way to get the correct localization is to use the system’s NSBundle class and methods and then read the property list. I chose Python to do this, since it offers access to NSBundle through the Cocoa bridge, and reading and working with property lists is very easy. (Learn more about working with property lists in Python in my book: ‘Property Lists, Preferences and Profiles for Apple Administrators’)

You can run this script with a parameter:

$ ./modelinfo.py MacBookAir7,2
13" MacBook Air (Early 2015)

When you don’t give a parameter, it will use sysctl -n hw.model to determine the current Mac’s model identifier and use that:

$ ./modelinfo.py
15" MacBook Pro with Thunderbolt 3 and Touch ID (Mid 2017)

Since the script uses NSBundle and does not go directly to a specific localization of the property list file, it will give different language results depending on the preferred language of the user:

$ ./modelinfo.py     # (Dutch)
15-inch MacBook Pro met Thunderbolt 3 en Touch ID (medio 2017)

$ ./modelinfo.py     # (Japanese)
15インチMacBook Pro、Touch Barを搭載(Mid 2017)

Imaging is Dead… Long Live the Installer!

The writing has been on the wall for a long time. With the release of macOS High Sierra, Apple has finally confirmed that imaging is dead.

Apple doesn’t recommend or support monolithic system imaging for macOS upgrades.

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

Update 2019-07-16: While most of the information in this post is still relevant,  I wrote a new, updated post regarding the macOS 10.15 Catalina Upgrade here: “Imaging is still dead

The Final Nail

The final nail in the coffin for imaging is this support article: Upgrade macOS on a Mac at your institution

It states the limitations on installing and upgrading macOS with High Sierra:

  • the Mac being installed or updated must be connected to the internet
  • installations and updates cannot be done on external devices, like those connected via Target Disk Mode, Thunderbolt, USB, or Firewire
  • there are four supported methods of installing macOS High Sierra

These methods are also reiterated in this section of the macOS Deployment Reference.

Some of the features have changed since this article was written. You can find an updated post for macOS 10.13.4 here.

Why?

Apple’s stated reason for requiring the installer is to ensure that a Mac’s firmware is all up to date and matches the OS installed on it.

Only the macOS Installer can download and install the firmware update. Firmware updates can’t be done on external devices, like those connected via Target Disk Mode, Thunderbolt, USB, or Firewire.

This is especially important in High Sierra, because to boot into a system on an APFS-formatted (or converted) disk, the Mac’s firmware needs to be able to mount and read APFS. The firmware that was installed with 10.12 or earlier is not able to read APFS volumes. When you image a Mac with High Sierra and APFS without updating the firmware, you will get the question mark at boot, because the firmware cannot find a system.

However, the EFI, which manages (among other things) the boot process, is not the only “firmware” that needs to be managed on your Mac. Many of the hardware components in your Mac, such as the SSD, the power controller (SMC), and the TouchBar controller on the new MacBooks Pro, have their own firmware that needs to be installed and updated.

Apple has been increasing the protection of the vital parts of the system and hardware. The firmware in these components cannot just be changed by any process. Only Apple’s macOS Installer application has sufficient privileges and entitlements to perform these updates. The installer process has to run on the Mac itself; it cannot run over Target Disk Mode.

Future Mac hardware might introduce even more components that require firmware.

On iOS, a secure boot chain prevents tampering with the system on a device after it has been installed. The secure boot chain also prevents replacing the system with an image from another device.

It is conceivable that Apple wants to implement a secure boot system on future Macs as well. Current Macs probably do not have the hardware required to implement this. (The TouchBar MacBooks Pro have a Secure Enclave chip like the iPhone and iPad and might already have the necessary pieces in place.)

So now what?

This has been coming for a long time. Even though APFS is not, as originally predicted, the direct culprit, it is still the end of the line for imaging.

However, the news is not entirely dire. Apple has been surprisingly forthright about the direction they want to go. While the documentation is a bit lacking, there are instructions on what the solutions for Mac System Administrators should be.

The four supported means of installing and upgrading macOS and the firmware for Macs give a clear direction for what needs to be done. However, Apple is a bit shy on how administrators can and should implement them.

There are a few options:

Put the Burden on the Users

This will work in some deployments where users are in control of their Macs: either a full BYOD (Bring Your Own Device) scenario, or one where devices provided by the organisation are fully controlled by the user.

Even when the devices are enrolled in an MDM, users are still administrators and in control. Administrators can use reporting tools to determine which Macs are capable of installing High Sierra and are not upgraded yet.

You can even use reporting tools to gather information on the firmware and whether it matches the latest version.
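As a sketch of what such a report could gather (not specific to any particular reporting tool), the following script reads the OS version, model identifier, and Boot ROM version. The field name ‘Boot ROM Version’ is what system_profiler reports on High Sierra–era Macs, so verify it against your own inventory data:

#!/bin/bash
# sketch: collect values a reporting tool could use to decide
# whether a Mac still needs the High Sierra upgrade

os_version=$(sw_vers -productVersion)
model=$(sysctl -n hw.model)
boot_rom=$(system_profiler SPHardwareDataType | awk -F": " '/Boot ROM Version/ {print $2}')

echo "OS version: ${os_version}"
echo "Model:      ${model}"
echo "Boot ROM:   ${boot_rom}"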

You can then instruct users with email or notifications to download the High Sierra installer and initiate the upgrade themselves. You should warn them to have a current backup and that the process might take some time, so it should be run overnight.

If you have a software management system in place, you can use that to load the macOS Installer application on the clients and notify the user when it is ready.

The Mac App Store only downloads a “stub” installer application which is then filled in with an extra download. If you have blocked access to Apple Software Update Servers or redirected clients to a local, managed Software Update Server, clients might not get the complete installer application. Greg Neagle has a great post on this.
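One way to check whether a client received the complete installer rather than the stub is to look for the large InstallESD.dmg payload inside the application bundle. This is a heuristic based on how the High Sierra installer bundle is laid out, not an official check:

#!/bin/bash
# heuristic sketch: the stub installer lacks the multi-gigabyte
# InstallESD.dmg payload that the full installer carries
installer="/Applications/Install macOS High Sierra.app"
if [[ -f "${installer}/Contents/SharedSupport/InstallESD.dmg" ]]; then
    echo "full installer present"
else
    echo "stub or missing installer"
fi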

To save download time for the users, you can also provide USB/Thunderbolt drives with a bootable installer drive. This might speed things up a bit, though it does not really change the process.
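You can build such a drive with the createinstallmedia tool inside the installer application. Note that createinstallmedia erases the target volume; the volume name ‘InstallerUSB’ is just an example:

$ sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia \
    --volume /Volumes/InstallerUSB \
    --applicationpath /Applications/Install\ macOS\ High\ Sierra.app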

How do we Automate this?

As system administrators we want to automate the process, so that it ideally does not require any human interaction. That way we can replicate the process hundreds and thousands of times.

Ideally, we also want to inject some custom steps into the process. Apple provides two means of achieving both of these goals, and some open source tools provide solutions as well.

NetInstall

A custom NetInstall set built with Apple’s System Image Utility is one of the supported means of installing and updating macOS (and the firmware) to High Sierra.

You can find System Image Utility in /System/Library/CoreServices/Applications. You can customize the process and even add your own installer packages, scripts or profiles. Packages used in System Image Utility have to be Distribution Packages.
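If you have a plain component package, you can wrap it in a Distribution Package with productbuild; the file names here are placeholders:

$ productbuild --package my_component.pkg my_distribution.pkg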

Since a NetInstall system is like a Recovery system, you can also use scripts here to control Mac settings such as the allowed NetBoot server IPs or ‘User-Approved Kernel Extension Loading’ (UAKEL).
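As a sketch of what such a script could do, assuming it runs in the NetInstall/Recovery environment and that the csrutil netboot and spctl kext-consent subcommands are available in the macOS version of your NetInstall set (kext-consent requires a later 10.13 update), something like this would allow a NetBoot server and pre-approve a kernel extension Team ID:

#!/bin/bash
# sketch, intended for the NetInstall/Recovery environment (assumption)

# allow NetBoot from a specific server IP (example address)
csrutil netboot add 10.0.0.1

# pre-approve kernel extensions signed by a Team ID (placeholder ID)
spctl kext-consent add EXAMPLETEAMID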

You can even automate the NetInstall process to a point where, once you have chosen the NetInstall volume (while holding the option key at boot), the remaining process runs without interaction. (Though this is a dangerous choice, as it might simply wipe and re-install Macs. Use the other limitation options, such as filtering by MAC address or hardware type, to keep this safe.)

NetInstall requires a Mac running macOS Server. However, BSDPy can replace a NetBoot/NetInstall server and runs on many platforms, including VMs.

Note: as of 10.13.0 there still seem to be a few bugs with NetInstall on High Sierra. It mostly works, but seems exceedingly slow. Also, some admins have reported problems with adding multiple packages, profiles or scripts for configuration. 10.13.1 is already in beta and seeding, and you should be testing it.

NetInstall on USB

If you do not have the infrastructure to run a NetInstall or BSDPy server, you can also restore the NetInstall.dmg image that System Image Utility creates to an external drive. When mounted on a Mac, it will contain the High Sierra installer application, and double-clicking it will start the proper installation process.

Any additional packages, profiles or scripts will be included with this custom external install application as well.

The startosinstall Command

When you already have a management system (Munki, Jamf, FileWave, etc.), you will want to initiate the update process with rules or policies. At first glance the supported means of installing macOS seem to be at odds with managed client workflows, since they require user interaction.

However, there is a tool hidden inside the macOS Installer application (since macOS 10.12 Sierra) called startosinstall. The full path to the tool is

/Applications/Install macOS High Sierra.app/Contents/Resources/startosinstall

Note: I believe it was Rich Trouton who first documented this tool in his notes for WWDC 2016. Since then many admins and open source projects have worked to figure out how to use this tool in the best way.

When you run it with the --usage argument you get the following:

$ /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/startosinstall --usage
Usage: startosinstall

Arguments
--applicationpath, a path to copy of the OS installer application to start the install with.
--license, prints the user license agreement only.
--agreetolicense, agree to license the license you printed with --license.
--rebootdelay, how long to delay the reboot at the end of preparing. This delay is in seconds and has a maximum of 300 (5 minutes).
--pidtosignal, Specify a PID to which to send SIGUSR1 upon completion of the prepare phase. To bypass "rebootdelay" send SIGUSR1 back to startosinstall.
--converttoapfs, specify either YES or NO on if you wish to convert to APFS.
--installpackage, the path of a package to install after the OS installation is complete; this option can be specified multiple times.
--usage, prints this message.

Example: startosinstall --converttoapfs YES 

There is also an undocumented --nointeraction flag which can be used to run the tool without any user interaction. This is obviously useful for management systems.

Once you have used your management system to make sure the macOS Installer application is on the client system, you can execute a script with the startosinstall command to initiate the installation process. Remember that the installation process can take a long time, so it should be initiated by the user in a Self Management portal or run during off-hours for kiosk-like Macs in labs or classrooms.

startosinstall --applicationpath /Applications/Install\ macOS\ High\ Sierra.app \
    --agreetolicense \
    --nointeraction
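In a management context you would usually wrap this in a short script that verifies the installer application is actually on the client before starting; this is just a sketch, and the installer path is an assumption:

#!/bin/bash
# sketch of a management-system script to start the upgrade
installer_app="/Applications/Install macOS High Sierra.app"
startosinstall="${installer_app}/Contents/Resources/startosinstall"

if [[ ! -x "${startosinstall}" ]]; then
    echo "macOS installer not found at ${installer_app}, aborting" >&2
    exit 1
fi

# this will prepare the upgrade and eventually reboot the Mac
"${startosinstall}" --applicationpath "${installer_app}" \
    --agreetolicense \
    --nointeraction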

The --converttoapfs [YES|NO] argument allows you to suppress automatic APFS conversion on SSD Macs.
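For example, adding --converttoapfs NO to the invocation above keeps the boot volume on HFS+ during the upgrade (same assumptions as the sketch above):

startosinstall --applicationpath /Applications/Install\ macOS\ High\ Sierra.app \
    --agreetolicense \
    --converttoapfs NO \
    --nointeraction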

There is also a --volume argument to target the non-boot volume. However, this will only work when SIP is disabled or when you run startosinstall from a Recovery/NetInstall disk.

The --installpackage option allows you to add one or more custom packages that will be installed after the OS installation is complete. This is very useful for customization and cleanup. Packages used with startosinstall --installpackage also have to be Distribution Packages.

Note: even though the usage states that you can repeat the --installpackage argument, as of 10.13.0 the installation will fail when more than one package is given. Make that one package count. (Note: edited this paragraph. Thanks to Greg for clarifying.)
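A sketch of adding a single post-install package; the package path is a placeholder, and per the caveat above it should be one Distribution Package:

startosinstall --applicationpath /Applications/Install\ macOS\ High\ Sierra.app \
    --agreetolicense \
    --installpackage /Library/Management/firstboot_config.pkg \
    --nointeraction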

Tool Support

Is Imaging completely dead?

The imaging tools (like Disk Utility, hdiutil and asr) will work with APFS volumes. However, the support article states:

You can use system images to re-install the existing operating system on a Mac.

So when you need a workflow that requires quick re-imaging, you can use one of the supported methods to install or update the Mac (firmware and OS) and then use monolithic (or thin) imaging over the network or Thunderbolt for fast restores. This is useful for scenarios where a fast imaging turnaround is required, such as classrooms, labs, or loaner laptop setups. However, you have to take extra care to make sure the image’s system version matches the version that was installed.
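As a sketch of the restore step in such a workflow (after firmware and OS have been brought up to date with a supported method), asr can restore a prepared image to the target volume. The image and volume paths are placeholders, and the image must contain the same system version that the supported installation put down:

$ sudo asr restore --source /path/to/lab_image.dmg \
    --target /Volumes/Macintosh\ HD \
    --erase --noprompt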

Going forward, I expect imaging to be less and less feasible as future Mac hardware and security features will make it harder and harder to use. The times where we could have just one image which will run on all supported Macs might be over as hardware (and the software required to run the hardware) becomes more and more fractured. Note that there is not a unified single ‘iOS’ image/installer for all iOS devices.

Summary

macOS High Sierra 10.13.0 works well for individual users. However, there are still quite a few issues that are relevant for managed deployments: there are many problems with Active Directory and FileVault, NetInstall is slow, and adding multiple packages to an installation is broken, to mention just a few.

Even though it makes sense for some deployments to hold back from High Sierra right now, you will want or have to upgrade soon.

Apple said at WWDC that the iMac Pro will ship in December 2017. Its tech specs page states it will run High Sierra. You can expect it to require High Sierra.

Also, critical security patches might only be pushed for High Sierra.

Imaging is dead. In an unusual move Apple has come right out and said it loud and clear. If you have not done so already, start testing and implementing one of the above strategies right now, so you are ready to move to High Sierra.

Read about more changes with the macOS 10.13.4 updates here.

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”