So Long, Cassini, and Thanks for all the Pics!

(Please indulge me while I stray from my usual topics.)

Today the Cassini orbiter will burn up in Saturn’s atmosphere, nearly twenty years after it was launched from Earth and thirteen years after it entered orbit around Saturn.

It makes me terribly sad to see the (physical) end of its mission. But I believe it is better to celebrate what an amazing and mind-boggling mission it has been.

Not only did the Huygens probe land on another planet’s moon, but every time the controllers pointed Cassini’s camera and sensors at something new, it discovered a surprise.

Cassini saw the hexagonal storm on Saturn’s pole, a moon that looks like a dried-out sponge, moons with ridges, hydrocarbon lakes on Titan, water volcanoes on Enceladus, gravity waves in the rings…

The list goes on and on.

And the pictures sent back to Earth kept being utterly beautiful, showing a planetscape and moonscapes vastly different from our terrestrial experience. We could never have imagined the views and features revealed to us by the orbiter’s camera.

But now we can.

This is, of course, the point of science and exploration. Wherever humanity starts to explore, we always discover the unexpected.

Cassini, Rosetta, Juno, New Horizons and all the other probes in space prove over and over again that the universe is stranger than we can imagine. But once someone has measured or seen it, our universe has expanded. We have learned.

After today there will be two spacecraft left in the outer solar system: Juno is orbiting Jupiter and is scheduled to be scuttled into the Jovian atmosphere in 2018. New Horizons has zipped through the Plutonian system and is preparing to meet a small icy body in the Kuiper Belt on its way out of the solar system.

As of now, no space agency is preparing another mission to the outer solar system. There are plans, but the preparation and travel such missions require take many years.

Once Juno is scuttled, we will not see new pictures from the outer solar system for at least a decade.

This makes me far sadder than seeing the end of this glorious, astounding and wonderful mission.

Thank you, Cassini, and everyone who made this happen!

New Support Articles for High Sierra

Apple has released a few support articles relevant for Mac Administrators:

These contain a few very interesting and useful pieces of information.

System Installation and Upgrades

The article lists four supported methods of installing or upgrading macOS:

The article explicitly states that installing over Target Disk Mode is not supported. It also states that “monolithic imaging” is neither supported nor recommended for “upgrading or updating”. The reason given is that firmware updates, which may be required for the new version of macOS, will not be applied when imaging monolithically or installing over Target Disk Mode.

Interestingly enough, the article goes on to say that imaging can be used to restore a Mac to the currently installed macOS version. You can build images of APFS volumes with Disk Utility (or diskutil) and with System Image Utility.

This is surprisingly detailed guidance from Apple. It does not matter whether you use “fat imaging,” where you capture a fully installed image from an existing installation, or “thin imaging,” where you create a base system image with a few small additional installations using a tool like AutoDMG.

You should not use imaging to upgrade an OS, whether for major or minor upgrades. You can, however, still use imaging to quickly restore a Mac to the currently installed OS version. Keeping the firmware of the Mac in sync with the OS is the obvious reason, but remember that Touch Bar MacBook Pros have separate firmware for the Touch Bar/Secure Enclave controller. Also, the APFS file system conversion that happens during the macOS High Sierra upgrade rearranges the system volume layout.

If you don’t need to quickly restore Macs often, you should interpret this as the official direction to abandon imaging. You should use one of the supported installation and upgrade methods for the OS and a software management system such as Munki, Jamf Pro, Filewave etc. for the additional software and configuration.

If you are in an environment where you frequently need to quickly restore Macs (classrooms and loaner laptops), then you need two workflows: one to upgrade the OS and firmware, and another for the quick restoration using imaging (which I assume will still work with Target Disk Mode).

Imaging is dead!

(except for some particular use cases)

Some people are already working on extracting the firmware update part from the system image installer and that may be useful for some workflows. But in general it will be less effort and trouble to go with the recommended, supported solutions.

If you need to check if the firmware of a given Mac matches the OS, you can use this table provided by Pepijn Bruienne.

You can see your firmware version in the System Information application, listed under ‘Hardware’ as ‘Boot ROM Version.’ You can also use the system_profiler command:

$ system_profiler SPHardwareDataType
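If you want to compare the firmware version against the table in a script, you can isolate the relevant line from the report. This is only a sketch: the parsing assumes the report’s “key: value” line layout, and the sample version string below is illustrative, not tied to any particular Mac model.

```shell
#!/bin/bash
# Sketch: pull the Boot ROM version out of system_profiler's report.
# parse_bootrom only does the text parsing, so it can be demonstrated
# on sample text; on a real Mac you would pipe in the actual report:
#   system_profiler SPHardwareDataType | parse_bootrom

parse_bootrom() {
    # split each line on "colon plus spaces" and print the value
    # of the "Boot ROM Version" line
    awk -F': *' '/Boot ROM Version/ {print $2}'
}

# sample of the relevant report line (version string is illustrative):
sample='      Boot ROM Version: MBP133.0226.B25'
echo "$sample" | parse_bootrom
```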

Secure Kernel Extension Loading

The article on Secure Kernel Extension Loading (SKEL) recaps what we already know from Technical Note TN2459 but has two very interesting additions:

In macOS High Sierra, enrolling in Mobile Device Management (MDM) automatically disables SKEL. The behavior for loading kernel extensions will be the same as macOS Sierra.
In a future update to macOS High Sierra, you will be able to use MDM to enable or disable SKEL and to manage the list of kernel extensions which are allowed to load without user consent.

Once again this provides fairly obvious direction: you should use MDM in some form to manage Macs.

You do not have to use a combined solution for MDM and software management (e.g. Jamf or Filewave) but can combine an MDM with a different management solution (e.g. Munki and/or Chef or Puppet). SimpleMDM and AirWatch are leading with solutions that support installing the client agents over the MDM InstallApplication command, which means you can distribute Munki etc. to Mac clients even over DEP.


The article on APFS also recaps much of what we already know. However, there is one sentence that clarifies when a Mac will be upgraded to APFS and when it will remain on HFS+:

When you upgrade to macOS High Sierra, systems with all flash storage configurations are converted automatically. Systems with hard disk drives (HDD) and Fusion drives won’t be converted to APFS. You can’t opt-out of the transition to APFS.

In other words: spinning disks (including Fusion drives) remain on HFS+, “pure” SSDs get APFS. You get no choice either way. And the conversion to APFS encompasses more than merely the filesystem conversion: APFS system volumes have a different partition layout.

The APFS conversion is another step that happens when you run the macOS Installer, rather than when you image. Technically you will be able to build an APFS macOS image on an SSD Mac and then image that onto a Mac with a spinning disk, but the result is not supported, according to the upgrade article.

It will be interesting to see whether Fusion drives will gain APFS support in a future update. It might be that the parts needed to support multi-drive APFS aren’t quite ready yet, or that Apple considers the benefits not worth the effort and Fusion drives a fading technology from now on.

Disk Utility can format external drives as APFS, but consider that those will only be readable by Macs running 10.12.6 and 10.13.


I am sure I missed a few pieces; things are still fresh and not even entirely out of beta yet. New workflows and methods will definitely emerge once High Sierra is released. However, we now actually have some specific “dos and don’ts” from Apple. Use these to plan your future workflows and infrastructure. If you have not started testing with the developer beta or public beta yet, now is the time.

Thoughts on the iPod (2001–2017)

“Dead!?” Mine still works!
I started working for Apple Germany in January 2001.

It was a few years after the release of the iconic Bondi Blue iMac and the deal with Microsoft. The worst of the dark ages were over. Still, many of my friends were astounded that I would join a company with no future, convinced that Microsoft and Linux would obviously split the market between them. My main reason for joining was that I was excited about Mac OS X, which was (in my opinion) the only OS that successfully combined a decent UI with a Unix core. I got the job because I knew both Mac and Unix.

Even though Mac OS X 10.0 (later known as Cheetah) had been released earlier that year, the uptake with customers was still minuscule as major third party vendors were slow to adopt the new platform. (You’re thinking Adobe, but the main laggard was actually Quark. Adobe would gain market share from adopting Mac OS X quickly, but I am leaping ahead here.) Apple’s main “bread’n butter” business was selling PowerMac G4 and the new Titanium PowerBook G4 to print and design professionals.

Because of time zones, keynotes were shown in the Apple Germany office after hours. For the keynote in October 2001, only three people stayed to watch. The rumor mill had predicted some ‘iTunes announcement.’ Most co-workers dismissed the iPod as a toy that wouldn’t help us sell Macs.

I had been eyeing other MP3 players and really liked the idea of a ‘1000 Songs in Your Pocket.’ So I took advantage of the employee deal to buy an iPod for half price. When I finally got my iPod it quickly became a “must carry” device along with my cell phone.

It is really hard to remember that we used to carry at best a handful or two of music CDs or cassettes with us when we were on the road.

Over the next few months and years, I would count the increasing number of people wearing the iconic white headphones on my commute on the Munich S-Bahn (metro). At first there’d only be a few and we would exchange knowing nods, as if we were the members of some secret club. From that anecdotal count and from the sales numbers it was obvious that Apple was on to something. Interestingly, the Mac sales numbers rose together with iPod sales. The press and Apple executives called this the ‘halo effect.’

My main job during the early 2000s was consulting for IT departments, mainly on Xserve, Xserve RAID and Xsan (which were introduced just a few months later). Many customers were, however, much more interested in the latest iPod. Macs and Xserves were work, but iPods were fun! I remember giving a workshop at a university on Mac/Unix/servers and storage when the news dropped that the iTunes Store was now (finally) available in Germany. The announcement got cheers and applause from an IT crowd.

The iPod certainly turned out to be a successful ambassador for Apple. It gave Apple good press and helped raise the image of Apple out of permanent ‘beleaguered’ status. It also showed that design and user experience could be successful against mere feature lists and price. The combination of operating system, interface and hardware from the same designer mattered and made the difference. This emboldened Apple to stick to this same philosophy with the Mac.

Once cell phones started storing and playing music, the demise of the iPod was obvious, though it was a slow decline. Apple did the ‘courageous’ thing and cannibalized their own product with the iPhone.

Yesterday, Apple removed the last dedicated iPods from the Store and the webpage. The iPod lives on in the iPod touch and the Music app on iPhones and iPads. It still marks the end of an era.

The iMac demonstrated that Apple could and would keep doing what Apple was good at: building great personal computers. The iPod showed that Apple could be more than that.

The iPod transformed Apple from a one-product company (the Mac) into a consumer electronics company with multiple product lines and platforms. Apple had attempted this before with the Newton, but had not been successful. Ironically, they probably were not even planning this with the iPod. The iPod also led to the iTunes Store, which became the platform the App Store is built on.

Not bad for a “lame” product.

Relocatable Package Installers and quickpkg Update

In my book “Packaging for Apple Administrators” I show a great use of pkgbuild to wrap an application in a package installer:

$ pkgbuild --component /Applications/Numbers.app Numbers.pkg

If the application is not already in the /Applications folder, you have to add the --install-location:

$ pkgbuild --component /Volumes/Firefox/Firefox.app --install-location /Applications Firefox.pkg

This is great and wonderful, but has one drawback: the installers pkgbuild creates this way are ‘relocatable’. When the installer does not find the application in the target location, it will check whether the application is installed elsewhere on the system. If it finds the ‘relocated’ application bundle, it will happily try to update it in that location.

Usually this is not a big problem on managed systems. However, if users have copies of applications in unusual locations, e.g., because they do not have permission to install in /Applications or because they themselves are admins with dozens of versions in ~/Library/AutoPkg, then this can lead to unexpected behavior or failure.

The common solution to this is to create ‘non-relocatable’ installer packages.

What makes a pkg relocatable

The relocate element in the PackageInfo file in an installer package controls this behavior. You can see the PackageInfo file in Pacifist or with pkgutil:

$ pkgutil --expand Firefox.pkg Firefox_expanded
$ more Firefox_expanded/PackageInfo 

Among much other data you will see this xml element:

    <relocate>
        <bundle id="org.mozilla.firefox"/>
    </relocate>

This tells the Installer to look for an application bundle with the given identifier and install in that location. To disable this behavior, you can replace the above element with an empty relocate element:

    <relocate/>

Then the Installer will install or upgrade in the given install-location (e.g. /Applications) only.

You can apply this change to the expanded PackageInfo file with a text editor and re-create the pkg file with

$ pkgutil --flatten Firefox_expanded/ Firefox-nr.pkg

(‘nr’ for ‘non-relocatable’)

However, applying these steps after creating each package is tedious and error-prone, so we want to look for a better solution.

Telling pkgbuild to not re-locate

The pkgbuild man page mentions an option to create non-relocatable installer pkgs: the BundleIsRelocatable key in a ‘component property list’. This is great, since it is better to use documented options than to hack the PackageInfo directly. However, to use the --component-plist option with pkgbuild you have to use the --root option rather than the --component option. This requires a bit more effort.

First create a project folder:

$ mkdir -p Firefox/payload
$ cd Firefox

And copy the application to the payload directory:

$ cp -R /Volumes/Firefox/ payload/

Then you can use pkgbuild’s --analyze to create a template component property list:

$ pkgbuild --analyze --root payload Firefox-component.plist
pkgbuild: Inferring bundle components from contents of payload
pkgbuild: Writing new component property list to Firefox-component.plist

You can then open the generated property list file in a text or property list editor. You will see several values for different settings and a list of ChildBundles. Change the value of the BundleIsRelocatable key from <true/> to <false/>. You can do this in the editor or with the plutil command (the leading 0 addresses the first, and in this case only, component dict in the plist’s root array):

$ plutil -replace 0.BundleIsRelocatable -bool NO Firefox-component.plist

Then build the package with pkgbuild:

$ pkgbuild --root payload --identifier org.mozilla.firefox --version 53.0.3 --install-location /Applications --component-plist Firefox-component.plist Firefox-53.0.3.pkg

This will build the package installer with an empty relocate element.

Note: munki-pkg has an option suppress-bundle-relocation which achieves the same result.


This approach can be useful but is still complicated. To simplify the process I have updated my quickpkg tool to create non-relocatable packages by default. You can restore the old behavior with the --relocatable option.

$ quickpkg ~/Downloads/Firefox\ 53.0.3.dmg 


Tab Completion for autopkg

Tony Williams aka ‘honestpuck’ has built a script to enable tab-completion for autopkg in bash.

This means that you can type

$ autopkg s⇥

(where ⇥ is the tab key) and it will autocomplete to

$ autopkg search 

This will also work for recipe names:

$ autopkg run BBEdit⇥⇥
BBEdit.install   BBEdit.jss       BBEdit.munki     BBEdit.pkg

This is really useful. Auto-completion not only saves on typing, but helps to avoid errors.
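To illustrate the mechanism behind such completion scripts, here is a minimal, hypothetical sketch (the function name is my own and the verb list is abridged; Tony’s script is far more thorough):

```shell
#!/bin/bash
# Sketch: the basic shape of a bash completion function.
# compgen filters the word list down to entries matching what the user
# has typed so far; bash reads the candidates from the COMPREPLY array.

_autopkg_sketch() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local verbs="audit info install list-recipes make-override repo-add run search version"
    COMPREPLY=( $(compgen -W "$verbs" -- "$cur") )
}

# register the function as the completion handler for 'autopkg'
complete -F _autopkg_sketch autopkg
```

With this sourced into a shell, typing `autopkg sea⇥` completes to `autopkg search`.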

Installing autocompletion in your profile

Tony has provided instructions on how to install the script with brew. However, it is not hard to install this manually in your .bash_profile or .bashrc. First, clone the GitHub repository onto your system (I keep all projects like this in an un-creatively named ‘Projects’ folder):

$ cd ~/Projects
$ git clone

This will download the project to autopkg_complete. The file we need is the autopkg file inside that folder.

Then add the following lines to your .bash_profile or .bashrc:

if [[ -r "$HOME/Projects/autopkg_complete/autopkg" ]]; then
    source "$HOME/Projects/autopkg_complete/autopkg"
fi

You will need to adjust the path if you are using a different location. Basically these lines say: if this file exists and is readable, then read and interpret it as bash source. Since the completion functions need to be defined in the context of your interactive shell, you need to `source` the file rather than execute it as a script. (When you run the file as a script, the functions are defined in the context of the script’s shell and ‘forgotten’ when the script ends.)
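A quick demonstration of the difference (the temporary file and the greet function are made up for this example):

```shell
#!/bin/bash
# Demo: functions defined by an *executed* script live in a child shell
# and vanish with it; *sourcing* defines them in the current shell.

demo="$(mktemp)"
echo 'greet() { echo "hello from a function"; }' > "$demo"

bash "$demo"                       # executed: child shell defines and forgets
type greet 2>/dev/null || echo "greet is unknown here"

source "$demo"                     # sourced: defined in the current shell
greet                              # now the function exists and runs
```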

Save your new profile and open a new Terminal window or type

$ source ~/.bash_profile

to update an existing shell.

Thanks again to Tony Williams, this is very useful!

Typefaces for Coding and Terminal

Since the previous posts were about customizing the shell, I thought I’d update an older post and look at some monospaced fonts suitable for Terminal and text editors, to get a change from Menlo.

Not so serious, but fun…

C64 TrueType is a fun addition at the end. As the name implies, this font recreates the 8×8-pixel characters of the C64. Together with some extra settings in Terminal and your bash_profile, you can take your terminal back to the ’80s.

If you have been following along with my loose series on Terminal in macOS, this serves as a nice example of some more exotic Terminal customization.

MacSysAdmin 2017 Conference Open for Registration

You can now register for the MacSysAdmin conference in Göteborg, Sweden, which runs from October 3 to 6.

It is a great conference and I have wanted to go for years, but never managed to make it. The list of speakers, past and present, is really impressive.

So I am very proud to announce that this year I will not only be attending, but also presenting a session on macOS bash scripting! (So, literally, a session on ‘Scripting OS X… macOS.’)

Looking forward to seeing you all there!

Weekly News Summary for Admins – 2017-04-07

On Scripting OS X

Mac Pro: Signs of Life

This week’s big surprise is that Apple has let out some early news that they are working on a new Mac Pro. Even more interesting than new hardware is (for me) that Apple is trying to re-assure the pro and prosumer market that Apple cares about them.

Michael Tsai has a great summary post.

Other News

To Listen