macOS Installation Book – Update!

When I published my new book on “macOS Installation” I was very aware of the fact that I was trying to capture a moving target. The good thing about digital books is that they are software and, as such, can be easily updated.

Today, I pushed the first update to “macOS Installation” to include some extra information from the last few weeks.

I am somewhat surprised that neither of the two 10.13 updates released since the book nor the news about macOS Mojave (10.14) at WWDC has led to major changes.

Even the release of the 2018 MacBook Pro last week confirmed our expectations rather than upsetting them. Nevertheless, the updates and other new information have added up to the point where I thought it was time for an update. I have listed the changes here. You can also find the list of changes (with links to the relevant sections within the book) in the ‘Version History’ section of the book itself.

  • Updated Secure Boot sections to include the 2018 MacBook Pro
  • Added a few notes on Recovery and Content Caching changes with 10.13.5
  • Restructured and re-wrote the first section of Chapter 5. It is now two sections with some new figures.
  • Older macOS Versions: added a link to the El Capitan download
  • APFS: replaced mentions of ‘Flash’ drives with ‘solid-state storage (SSD)’, added a note on Apple’s APFS plans in macOS Mojave
  • Corrected the description of non-removable MDM profiles in ‘Avoiding DEP’

Most of the changes are in anything related to Secure Boot (because of the new MacBook Pro). I also re-wrote and clarified the first section of Chapter 5, the ‘Strange New World’ section, and added a few new figures to visualize the workflows better. (You can still read the original version in the book sample.)

If you have bought the book, the update is free and you should be notified about it in the iBooks app. If you have not purchased the book yet, you can get it in the iBooks Store!

Thank you!

Hasta la Vista, Imaging…

New MacBook Pros! With T2 chips!

The new features, improved RAM and SSD capacity, keyboard (!) and screens are all nice and interesting. Even more remarkable is that Apple mentions the T2 chip in the headline.

Of course, the T2 chip means that, like the iMac Pro, the 2018 MacBook Pros will not NetBoot (at all) and will not boot from external devices (without going through a convoluted setup process).

Until now, it was possible to downgrade 2017 MacBook Pros to Sierra and keep using the same imaging procedures as before. Apple has now moved their flagship Mac model to the new architecture.

If you do not have an installation-based deployment workflow prepared yet, it is high time to get one in place. I explain what you can do, with some examples of how you can do it, in my new book: “macOS Installation for Apple Administrators” (sample chapter here).

The Next Age of the Mac

Yesterday was the day on which Mac OS X had been available to the public for longer than the previous Macintosh operating system (known as ‘Mac OS’ [with a space] or ‘Classic’ towards the end of its lifetime).

This has been noted by many on Twitter and on the Mac news sites.

I do think it is a milestone worth noting. Darwin/Mac OS X/OS X/macOS has served as a stable foundation not just for the Mac but also for the iPhone, iPad, Apple TV, Apple Watch and now even the HomePod.

However, I have also seen comments that the “next age of the Mac” will have to happen soon, because Mac OS X/macOS is so old now. That statement really bugs me.

Mac OS X was not Apple’s first attempt at an operating system to replace ‘Classic.’ In the late 80s and early 90s it was already obvious that the Macintosh System architecture would not scale to modern CPUs and work requirements. An application that crashed would usually take down the entire OS. You had to assign memory to an application manually. System Extensions would frequently conflict with each other and also crash the entire system. There was no concept of protected memory or of segregating processes from each other. There was no support for multiple users with access privileges in the file system.

The operating system had been designed for an 8 MHz 16-bit CPU with 128 kilobytes of RAM, where every cycle and every byte had to count! Apple was now in the PowerPC era and the requirements for the system were vastly different.

Taligent, Copland and Gershwin were successive efforts and promises for a better system that each failed for various reasons. Some of the parts of each did find their way into Mac OS. Then Apple bought NeXT (and Steve Jobs) and the rest is history.

So, for most of the nineties, it was obvious that the classic Macintosh system needed to be replaced with something newer. Microsoft had Windows NT, and alternative operating systems like BeOS and NeXTStep were showing the way. By the time Mac OS X arrived, classic Mac OS was old, but users needed to hang on to it because of critical applications and workflows.

Now, in 2018, macOS might be as old as Classic was in 2001, but it doesn’t feel as old. Like the original system for the Macintosh 128K, Mac OS X 10.0 was designed for entirely different hardware and use cases. In 2001 only the high-end PowerMacs had two CPUs. Mac OS X required at least 64MB of RAM (a 500-fold increase over the original Macintosh). Laptop batteries would last for three to four hours under the most ideal circumstances. Digital photography and video were still vastly inferior to analog. Music came from CDs. Screens had far lower resolutions, and security meant requiring a password to log in. Wifi was new, and hardly ubiquitous. Bluetooth was brand new and used in expensive cell phones, which were used for talking, not data. There was no App Store.

All of this would change, sometimes quickly, over the next years.

Mac OS X and Apple as a whole were able to adapt to these changes. Hardware and software were optimized to deal with video and media. Multi-tasking and threading were improved as multiple CPUs and cores became cheaper and common, even in laptops, tablets and phones. As mobility and power consumption became more important, the hardware and software were adapted to take that into account. Security and privacy became more and more important and were integrated into the operating systems and file systems.

Apple used Mac OS X as the basis for iPhone OS, porting a Unix system to a phone. There has also been much back and forth of software and technology between the two systems (or three or four). macOS and iOS have evolved, changed and adapted in a way that the classic Mac OS in the 80s and 90s did not.

I am not claiming that macOS in its current form is perfect and cannot or should not be improved. In a few days at WWDC Apple will show us how they plan to further evolve macOS and iOS to adapt further to the future and I am looking forward to it.

When you wonder ‘what is next for the Mac’ you are ignoring that in 2018, the Mac and macOS are not an isolated platform any more.

All my Apple devices talk with each other and exchange data. My Mac shows me which website I am reading on the phone. My phone unlocks my watch and my watch can unlock my Mac. I can create a note on my tablet, add pictures from the phone and finish it on my Mac. I can read messages on my Mac or have them read to me by the phone through my headphones. When I say ‘Hey Siri’ the devices that can hear me decide among themselves which should answer.

macOS and the Mac are now just a part of a larger ‘system.’ This system runs on different devices: from my headphones to the iMac on my desk to servers in Apple’s data centers. It includes custom silicon, software, and data stores, and relies on protocols and locally cached data to communicate both locally and all around the world.

The digital hub has grown into the ‘digital net’, where everything is connected and (ideally) everything is available everywhere.

Not all of this works all the time yet. Why the iPhone cannot pick up a playlist from where I paused it on the Mac is still an absolute mystery to me.

When this new system fails, we get very frustrated: there is no ‘shell’ we can drop down to to fix things. It is often quite impossible to even figure out in which part of this net of devices and services the problem is occurring.

macOS, iOS, iCloud, Siri, HomeKit, Bluetooth and Wifi, Messaging, email, App Stores and third party apps, devices and services like Google, Office 365, 1Password, etc.

Hardware, software and services. All have to work together in the digital net.

When Apple introduced Mac OS X one of the main benefits was that you could easily manage multiple users on one device.

Now, 17 years later, we have multiple devices per user.

What I want is for the Mac to keep evolving and adapting with my digital net, so I can continue to use its strengths (large screen, CPU/GPU power, storage, high-throughput I/O) and supplement its weaknesses (not mobile, few sensors) with the other devices and services. I don’t want the Mac to fall behind or out of the digital net.

I want to stop having to think about whether something I want to do is a “Mac” task or a “phone” task, and instead think about whether I’d rather have a keyboard and a large screen, do it from a chair in the backyard, or talk with Siri while walking somewhere. Not all those options will work for every task, but I’d like the options to increase. And I want the Mac to be part of that.

I don’t expect a new age for the Mac. I don’t want a new age for the Mac. That would be too small, too myopic, too limiting.

The next age of the Mac is with the digital net and it has already begun.

Dutch MacAdmins Meeting: 8 June

The (ir)regular meeting of Dutch MacAdmins will happen again! We will meet on June 8, 14:00-17:00 at SAP Netherlands in ‘s-Hertogenbosch. Main topics will be:

  • WWDC news and how it affects MacAdmins
  • Mac management at SAP
  • anything else you may want to bring up

Join the #thenetherlands channel on the MacAdmins Slack for questions, feedback and great discussions, or if you want to volunteer to present.

Registration (Eventbrite, free)

Converting Composer dmg ‘installers’ to pkg

Jamf Composer has always offered two formats to build installers: the standard pkg and the seemingly standard (but not really) dmg. The pkg option builds a standard pkg installer file, which will install on any system that can install pkg files.

The dmg option builds a standard dmg disk image file with the payload of the installer as its contents. On its own, however, this dmg cannot do anything. The Jamf Pro management system understands what to do with it and how to install the files from the dmg onto a system. Certain features in Jamf Pro which install and distribute files to user directories and user templates (called ‘Fill User Templates’ [FUT] and ‘Fill Existing Users’ [FEU]) only work with dmg installers.

However, Jamf themselves have been recommending the standard pkg format over their proprietary use of dmg. The Composer application is also 32-bit, and its future is uncertain.

Luckily, there are plenty of other great third-party tools to build installer packages. I cover many of them in my book: Packaging for Apple Administrators

In general, it is probably preferable to re-visit your imaging process and rebuild any installer you may still have in dmg format from scratch. However, in some cases that might not be possible or necessary.

Since Composer-generated dmgs contain all the files of the payload in the proper folder structure, you can just use the entire mounted volume as your payload root for pkgbuild. You can easily convert a Composer-generated installer dmg to a standard pkg with these commands:

1) mount the dmg:

$ hdiutil attach /path/to/Sample.dmg

This will output a bunch of info; the very last bit is the mount point of the dmg, e.g. /Volumes/Sample (the name will depend on the dmg).

2) build a pkg with the contents of the mounted dmg as a payload:

$ pkgbuild --root /Volumes/Sample --version 1.0 --identifier com.example.sample --install-location / Sample-1.0.pkg

This will create Sample-1.0.pkg in your current working directory. (I like to include the version in the pkg file name, but that is entirely optional.)

3) cleanup: unmount the dmg

$ hdiutil detach /Volumes/Sample
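
If you have several of these dmgs to convert, you can string the three steps together in a small script. This is a minimal sketch, not a hardened tool: the script name, identifier, and version are hypothetical placeholders, and it assumes the dmg mounts as a single volume whose mount point is the last tab-separated field of hdiutil’s output.

#!/bin/sh

# dmg2pkg.sh -- convert a Composer-style dmg into a standard pkg
# usage: ./dmg2pkg.sh /path/to/Sample.dmg com.example.sample 1.0

dmg_path="$1"
identifier="$2"
version="$3"

# mount the dmg; the mount point is the third tab-separated field
# of the last line of hdiutil's output
mount_point=$(hdiutil attach -nobrowse "$dmg_path" | tail -n 1 | cut -f 3)

# name the pkg after the dmg, with the version appended
pkg_name="$(basename "$dmg_path" .dmg)-$version.pkg"

# build the pkg with the mounted volume as the payload root
pkgbuild --root "$mount_point" \
         --version "$version" \
         --identifier "$identifier" \
         --install-location / \
         "$pkg_name"

# clean up: unmount the dmg
hdiutil detach "$mount_point"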

Obviously this will not work with other kinds of dmgs, such as full system dmgs or dmgs downloaded from the web that contain an app which should be dragged to /Applications to install (use quickpkg for those dmgs).

macOS 10.13.4 Spring Update for Mac Admins

With the recent release of 10.13.4 (the ‘spring update’) a few things have changed for the deployment of macOS. The initial premise is still unchanged: Imaging is still dead.

(NetInstall got a bit of a life extension, though.)

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

Quick Recap

High Sierra came with many new features for both users and admins. For Mac admins, however, it also brought the support article HT208020: “Upgrade macOS on a Mac at your institution”. In this article Apple lists the supported means of installing and upgrading macOS and explicitly states that ‘monolithic imaging’ is not recommended and will not ensure that the firmware of a Mac is of the correct version to run the OS image that was just laid down. (I posted an article on this back in October.)

Then, with the release of the iMac Pro in December, it became clear that NetBoot and NetInstall will not be supported on the new hardware. The assumption is that this will also be true for all new Macs with the T2 or a newer system controller. (My post on that, from December.)

NetBoot and NetInstall are used by many administrators to provide a centralized workflow to (re-)deploy or re-purpose a Mac. Common tools are macOS Server’s NetInstall, DeployStudio, AutoCasperNBI, or Imagr.

On top of that, some NetInstall features, such as automated installation and adding custom packages, were broken in earlier releases of 10.13.

Booting off external drives also became much harder on the iMac Pro, since its default security setting prohibits external boot and you have to go through the entire installation process before you can re-enable it.

Apple also announced that the macOS Server app will lose most of its features, including NetBoot/NetInstall, in a future release later this year. (My post on that, from January.)

What changed in 10.13.4?

The ‘spring update’ macOS 10.13.4 brought a few welcome changes.

We got a glimpse of some of these in February when HT208020 was briefly updated with new information. Interestingly, now that 10.13.4 is released, HT208020 has not been updated.

However, we got new detailed information in this article:

Enterprise content:

- No longer disables User Approved Kernel Extension Loading on MDM-enrolled devices. For devices with DEP-initiated or User Approved MDM enrollment, administrators can use the Kernel Extension Policy payload.
- Improves Spotlight search results for files stored on network mounts.
- Properly evaluates ACLs on SMB share points.
- Adds the --eraseinstall flag to the startosinstall command in the macOS Installer app at Contents/Resources/startosinstall. Use this flag to erase and install macOS on a disk. For details, run startosinstall with the --usage flag.
- Updates System Image Utility to allow creating NetInstall images that erase and install macOS to a named target volume.

Robert Hammen posted a great summary on Slack.

Not documented here: 10.13.4 also fixes the bug where the defaults command would delete non-plist files.

UAKEL

The first one is really important. Apple introduced a new security feature called “User Approved Kernel Extensions” (UAKEL) in 10.13. This means that third party kernel extensions have to be approved by the user at the Mac (within 30 minutes after installing) before they can be loaded.

Prepare for changes to kernel extensions in macOS High Sierra – Apple Support

In 10.13.0–10.13.3 Apple simplified the life of Mac admins by disabling UAKEL on Macs which were enrolled with an MDM. In 10.13.4 Apple added a Kernel Extension Policy profile payload which allows Mac admins to whitelist certain Kernel Extensions centrally from the MDM.

This is a useful addition and allows Mac admins to manage Kernel Extensions before they are installed and without the necessity of user interaction.

However, 10.13.4 also changes the previous behavior for UAKEL on MDM-managed Macs. Since admins now have a way of whitelisting Kernel Extensions, UAKEL will be enabled even on MDM-managed Macs.

If you installed a Kernel Extension on a managed Mac running 10.13.0 to 10.13.3, it would work, since MDM disabled UAKEL. Once you upgrade that Mac to 10.13.4, UAKEL will become active and block the Kernel Extension from loading, unless there is a profile whitelisting the extension!

When the extension was previously approved manually on 10.13, or grandfathered in when the Mac was upgraded from 10.12, it will still run under 10.13.4. While all of this has some internal logic, it will lead to strange situations where Kernel Extensions load on some clients and not on others.
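
To check which third-party kernel extensions are actually loaded on a given client, you can filter the output of kextstat:

$ kextstat | grep -v com.apple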

The best way to avoid the confusion is to have the Kernel Extension Policy profile ready and in place before your clients update to 10.13.4. In other words: now!
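
As a rough sketch, the interesting part of such a profile is a payload along these lines (the Team ID is a hypothetical placeholder, and the usual profile boilerplate such as PayloadUUID and PayloadIdentifier is omitted; check your MDM’s documentation for the exact keys it supports):

<dict>
    <key>PayloadType</key>
    <string>com.apple.syspolicy.kernel-extension-policy</string>
    <key>AllowUserOverrides</key>
    <true/>
    <key>AllowedTeamIdentifiers</key>
    <array>
        <!-- hypothetical Developer Team ID of the kext vendor -->
        <string>EXAMPLE1234</string>
    </array>
</dict>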

User Approved MDM

However (yes, there is another ‘however’ here): the Kernel Extension Policy profile has to “be delivered via a user approved MDM server.” This is another level of security introduced to keep the user in the loop.

Apparently there are trojans that trick users into accepting MDM profiles to connect their iOS devices or Macs to malicious MDMs. I am not convinced these new measures will be effective against this kind of trickery, though.

Macs deployed with the Device Enrollment Program (DEP) are considered “user approved” by default. Otherwise the user has to approve the MDM profile when it is installed. Again, this cannot be automated or approved over remote control.

This can throw a wrench into non-DEP installation workflows (sometimes also called user-initiated enrollment). Some management systems let the user download a pkg that installs the MDM profile and the necessary certificates. Some solutions install the MDM profile through an agent software (which is still necessary for many tasks that the mdm client software cannot perform). Either of these workflows will require the user to go to the Security pane in System Preferences within 30 minutes after installation.

Note: Jeremy Baker found a creative way that uses user-interface scripting to click the approve button. However, his script requires Accessibility access, which also cannot be granted in an automated fashion. (The database in question is protected by SIP.)

However, in 10.13.4, when a user installs the MDM profile directly by double-clicking it, they will be prompted to approve the MDM as part of the profile installation dialogs, streamlining the process.

Rich Trouton has documented the new workflows for the user in these articles:

For Jamf, you will need to upgrade your Jamf Pro server to version 10.3 to get the new workflow. Other management solutions may already implement this, or may also need to be on their latest version to work well with these new requirements.

startosinstall

The next interesting new feature is for startosinstall, which gains a new --eraseinstall option.

Graham Pugh has already documented this very well:

Erase All Contents And Settings – erase and reinstall macOS in situ

This allows for automated workflows where you wipe and re-install macOS and add a few custom packages to the installation process with --installpackage arguments, which can configure your management system. Then, after first boot, your management system takes over and installs and configures the rest.
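
As a sketch, such an invocation might look like this (the enrollment pkg path is a hypothetical placeholder):

$ sudo "/Applications/Install macOS High Sierra.app/Contents/Resources/startosinstall" \
    --eraseinstall \
    --agreetolicense \
    --installpackage /path/to/enrollment.pkg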

It is important to note that startosinstall uses some APFS volume creation trickery to make --eraseinstall work. This means that you cannot run --eraseinstall on a Mac with an HFS+ system volume. You have to already be on a 10.13 system with APFS to use this option.

Nevertheless, --eraseinstall is a welcome and necessary addition to startosinstall. What struck me most about this is that it is the first time Apple has even mentioned the startosinstall command in any documentation. Since this tool is central to many approaches to automating the installation process, I am happy it is finally being recognized as ‘official’.

NetInstall

The last feature isn’t really new. Apple fixed a bug that had been around since 10.13.0. In previous versions of 10.13, when you built a NetInstall set with System Image Utility and chose the option for “Automated Erase Install” on a certain volume (by name), the installation would just stall at a grey screen. Now this option works as expected when you build the NetInstall set on 10.13.4.

The days of NetInstall are still numbered, because the iMac Pro (and presumably future Macs with similar controllers) cannot NetBoot at all, and macOS Server is losing NetInstall along with many other services. Nevertheless, this provides another Apple-supported workflow for automated erase and re-install which you can customize with your own packages.

NetInstall (with or without the erase) is also a good workflow to upgrade Macs with older versions of macOS to 10.13. (In this case we don’t care that it doesn’t work for iMac Pros, since they already come with 10.13.)

Since iMac Pros are currently a small minority in most deployments (if present at all), this allows administrators to use a well-known workflow (NetInstall) for the existing fleet this year, while figuring out new workflows (startosinstall + DEP?) for future Macs.

There are, however, still a few kinks left with NetInstall workflows.

More Options

There was no Spring Update resurrection: Imaging is still dead.

It is obvious that, while Apple may not be going in exactly the direction we would like them to, they are listening to criticism and providing solutions, albeit slowly. The additions to startosinstall and the fix to NetInstall now allow for automated wipe-and-install workflows. These were not really possible before.

Installation workflows, however you start them, are much slower than block-copying a prepared image. This is an important consideration for education deployments, which usually re-image dozens, hundreds, or even thousands of Macs over the summer break.

But this update now provides a few useful options for High Sierra:

  • use NetInstall Automated Erase to wipe and upgrade existing Macs from earlier macOS versions to 10.13
  • use startosinstall to upgrade and update Macs (including iMac Pros) to the latest version of 10.13
  • use startosinstall with --eraseinstall to ‘wipe and install’ existing 10.13 Macs
    • this only works on 10.13 Macs with APFS (i.e. SSDs). Since Fusion drives and hard drives cannot get APFS yet, you will have to use NetInstall to convert/upgrade those Macs, until (if ever) they support APFS
    • iMac Pros cannot use automated NetInstall; however, since they definitely come with 10.13 on APFS, you can use startosinstall --eraseinstall
  • if fast restore times are important (like a loaner MacBook scenario, where devices need to be reset to a well-known state quickly), use any of the above to get the Macs up to the latest 10.13, then you can still use imaging to quickly lay down a ‘fresh’ image of the same macOS version
    • you will need to put in extra effort to keep the image’s system version in sync with the version installed (or upgraded) on the target Macs

Now is the time to start testing, testing, testing to get your workflows ready for the summer re-installation marathon and the 10.14 release in the fall. (And to give Apple another chance to fix the remaining issues in the next update.)

I have ignored the file share changes in this post. While these are certainly important, they don’t really influence deployment strategy.

Join the MacAdmins on the MacAdmins Slack to share experiences and solutions!

Apple’s new Upgrade/Update Strategy

There is another aspect of the Spring Update.

Apple switched to the yearly upgrade cycle for Mac OS X with 10.7. (Upgrade meaning a ‘major’ version change, i.e. 10.8 -> 10.9. Never mind that Apple uses the second version number for the major version. More on macOS version numbers here.) Apple did summer releases for 10.7 and 10.8 and then switched to the Fall release schedule. Since 10.9, releases have reliably been in late September or October.

The rule used to be that upgrades would bring lots of new features, both visible to the user and under the hood, and then Apple would release updates (the third number in the version) to fix bugs and issues. Sometimes new features would be introduced in updates, but those were rare exceptions, usually done to match new iOS or iCloud (.Mac, MobileMe) features.

The rule of thumb was that the first two or three releases were for ‘early adopters’ only and that it would be fairly safe to migrate to the new major version by the third or fourth update. Admins could join the developer program to get access to the developer releases of the next upgrade after WWDC, but getting your hands on early releases of the updates was more difficult.

iOS, on the other hand, has had a different pattern. Apple released iPhone OS 3.2 in the spring of 2010 with new features to support the then-new iPad. Since the iPad and iPhone hardware releases rarely happened at the same time of year, iOS has had a pattern with a major new release in the Fall (usually with a new iPhone) and a ‘Spring Update’ with new features to match a new iPad.

Even though Mac hardware follows yet other cycles, we now see this Fall Upgrade/Spring Update pattern with macOS as well.

Apple has added new features in 10.13.4. Some are visible to users (eGPU support, Business Chat, new privacy dialogs), some are for client management (UAMDM, UAKEL profiles, new configuration profile documentation).

We now also have a public beta program for iOS and macOS which covers not just the major upgrades but ‘minor’ updates as well. The beta versions for iOS 11.4 and macOS 10.13.5 were released right after 11.3 and 10.13.4. And it looks like they will contain yet more new features (Messages in iCloud and AirPlay 2).

The fact that Apple is willing to add new features or change existing functionality at any time during the update cycle is a win for users. It is also a reaction to competitors’ more cloud-based solutions, which can be updated at any time.

However, for us admins this means change:

  • the notion of a ‘stable’ release is a thing of the past.

Features might be added or changed at any time during the upgrade cycle. Different parts of the deployment and workflow will be in different stages of ‘maturity’. Additionally, parts of the workflow (DEP, MDM, and VPP) exist in the cloud and might also change at any time (Apple School Manager was not introduced with a major iOS release and it looks like Apple Business Manager will not sync with a major iOS or macOS release, either).

macOS and iOS don’t stand alone. Apple (and third parties) are building networks of operating systems, software, devices and cloud-based services. Scheduling these releases into a yearly major update cycle must be nearly impossible. It is also not necessary, as software distribution has become reliable, secure, and fast enough to push frequent updates.

In some ways Apple is reacting to their more cloud-based competitors, which can and will push incremental updates to their systems continually.

  • ‘permanent beta’ mode

The beta versions for iOS 11.4 and macOS 10.13.5 were released hours after the releases of 11.3 and 10.13.4. If you are concerned about how the new updates will work (or not) in your environment, you have to be testing now.

This is being a pro-active administrator. Rather than waiting for problems to occur and trying to fix them, you are anticipating problems and trying to pre-empt them. The traditional release cycle of the past allowed us to switch between the two roles over the year. Now, we are either in permanent beta-test or in permanent break/fix.

I don’t think anyone likes it, but it is the situation we have to deal with, and I don’t see it changing any time soon. You will have to adjust your own and your organization’s workflows to this new situation.

  • Apple is listening and communicating the changes

Provide feedback (bug reports) to Apple. Not all the issues you find will be fixed (but some might be).

However, this gives you time to document issues for your users and allow you to implement management strategies to ameliorate them. (Even if all you can do is write knowledge base articles along the line of “we know this is broken” you are saving some people a lot of time and nerves.)

I find it very interesting and encouraging that we are learning about these changes in official support articles. The changes to UAKEL and UAMDM were also based on user feedback, mainly from administrators who complained that the feature as initially implemented was unmanageable.

Of course there are many other challenges and issues which have not been fixed (yet). But it is encouraging to see this kind of feedback work.

I have written a book which expands on this topic and is regularly updated. Please check it out: “macOS Installation for Apple Administrators”

Setting the PATH in Scripts

A discussion that comes up frequently on the MacAdmins Slack and in other admin discussions is:

Should commands in scripts have their full path hardcoded or not?

Or phrased slightly differently, should you use /bin/echo or just echo in your admin scripts?

I talked about this briefly in my MacSysAdmin session: Scripting Bash

Why can’t I just use the command?

When you enter a command without a path, e.g. echo, the shell will use the PATH environment variable to look for the command. PATH is a colon-separated list of directories:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

The shell will look through these directories in order for the given command.

You can read more detail about the PATH and environment variables in these posts:

PATH is Unreliable

The example PATH above is the default on macOS on a clean installation. Yours will probably look different – mine certainly does:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/usr/local/munki:/Users/armin/bin

Third party applications and tools can and will modify your PATH. You yourself might want to change your PATH in your shell profile.

But on top of that, the PATH will be different in different contexts. For example, open the Script Editor application, make a new script document, enter do shell script "echo $PATH" and run the script by hitting the run/play button.

The small AppleScript we just built runs the shell command echo from the AppleScript context. The result is:

/usr/bin:/bin:/usr/sbin:/sbin

Note how the PATH in this context is different from the default macOS PATH in Terminal and also different from your PATH.

I also built a simple payload-free installer package whose postinstall script only runs the echo "Installer PATH: $PATH" command. The entire postinstall script looks like this:
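
#!/bin/sh
echo "Installer PATH: $PATH"

You will have to search through /var/log/install.log to get the output: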

installd[nnn]: ./postinstall: Installer PATH: /bin:/sbin:/usr/bin:/usr/sbin:/usr/libexec

Which is yet another, different PATH.

Solutions to the PATH confusion

The PATH may be different in the different contexts your script may run in.

That means we cannot be certain whether a command will be found at all, or which command will be found, when a script runs in different contexts. This is, obviously, bad.

Mac administrative scripts can run in unusual contexts, such as the Login Window, NetInstall, the Recovery system, or over Target Disk Mode, and they usually run with root privileges. You really want to avoid any uncertainty.

There are two solutions to make your scripts reliable in these varying contexts:

  1. hardcode the full path to every command
  2. set the PATH in the script

Both are valid and have upsides and downsides. They are not exclusive and can both be used in the same script.

Going Full PATH

Update: 2020-08-25 changed some of the sample commands.

You can avoid the unreliability of the PATH by not using it. You will have to give the full path to every command in your script. So instead of

#!/bin/sh

systemsetup -settimezone "Europe/Amsterdam"

you have to use:

#!/bin/sh

/usr/sbin/systemsetup -settimezone "Europe/Amsterdam"

When you do not know the path to a command you can use the which command to get it:

$ which systemsetup
/usr/sbin/systemsetup

Note: the which command evaluates the path to a command in the current shell environment, which, as we have seen before, is probably different from the one the script will run in. As long as the resulting path starts with one of the standard directories (/usr/bin, /bin, /usr/sbin, or /sbin) you should be fine. But if a different path is returned, you want to verify that the command is actually installed in all contexts the script will run in.

Using full paths for the commands works for MacAdmin scripts because Mac administrative scripts will all run on some version of macOS (or OS X or Mac OS X), which are very consistent in regard to where the commands are stored. When you write scripts that are supposed to run on widely different flavors of Unix or Linux, the location of certain commands becomes less reliable.

Choosing your own PATH

The downside of hardcoding all the command paths is that you will have to memorize or look up many command paths. Also, the extra paths before the command make the script less legible, especially with chained (piped) commands.

If you want to save effort on typing and maintenance, you can set the PATH explicitly in your script. Since you cannot rely on the PATH having a useful value or even being set in all contexts, you should set the entire PATH.

This should be the first line after the shebang in a script:

#!/bin/sh
export PATH=/usr/bin:/bin:/usr/sbin:/sbin

systemsetup -settimezone "Europe/Amsterdam"

Note: any environment variable you set in a script is only valid in the context of that script and any sub-shells or processes this script calls. So it will not affect the PATH in other contexts.

This has the added benefit of providing a consistent and well known PATH to child scripts in case they don’t set it themselves.
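
A quick way to see this inheritance in action is a minimal sketch like this, which compares the script’s PATH with the PATH a child shell receives:

#!/bin/sh
export PATH=/usr/bin:/bin:/usr/sbin:/sbin

echo "script PATH: $PATH"

# a child process inherits the exported PATH ...
sh -c 'echo "child PATH:  $PATH"'

# ... but the shell session that started this script keeps its own PATH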

The downside of this is that even with a known PATH you cannot be entirely sure which tool will be called by the script. If something installed a modified copy of echo in /usr/bin, it would be called instead of the expected /bin/echo.

However, on macOS the four standard locations (/usr/bin, /bin, /usr/sbin, /sbin, as well as the less standard /usr/libexec) are protected by System Integrity Protection (SIP) so we can assume those are ‘safe’ and locked down.

/usr/local/bin is a special case

But notice that I do not include /usr/local/bin when I set the PATH for my scripts, even though it is part of the default macOS PATH. The PATH seen in the installer context does not include /usr/local/bin, either.

/usr/local/bin is a standard location where third party solutions can install their commands. It is convenient to have this directory in your interactive PATH under the assumption that when you install a tool, you want to use it easily.

However, this could create conflicts and inconsistent results for administrative scripts. For example, when you install bash version 4, it will commonly be installed as /usr/local/bin/bash, which (with the standard PATH) overrides the default /bin/bash version 3.

Since you chose to install bash v4, it is a good assumption that you would want the newer version over the older one, so this is a good setting for the interactive shell.

But this might break or change the behavior of administrative scripts, so it is safe practice to not include /usr/local/bin in the PATH for admin scripts.

Other Tools

When you use commands from other directories (like /usr/libexec/PlistBuddy, or third-party tools like the Munki or Jamf tools), it is your choice whether you want to use the full path for these commands or (when you use the commands frequently in a script) add their directory to the PATH in your script:

E.g. for Munki

export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/munki

or Jamf

export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/jamf/bin

Since the third party folders are not protected by SIP, it is safer to append them at the end of the PATH, so they cannot override built-in commands.

Commands in Variables

Another solution that is frequently used for single commands with long paths is to put the entire path to the command in a variable. This keeps the script more readable.

For example:

#!/bin/sh

# use kickstart to enable full Remote Desktop access
# for more info, see: http://support.apple.com/kb/HT2370

kick="/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart"

#enable ARD access
$kick -configure -access -on -users remoteadmin,localadmin -privs -all
$kick -configure -allowAccessFor -specifiedUsers
$kick -activate

Note that you need to quote the variable when the path to the command contains spaces or other special characters.
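
For example, with a hypothetical helper tool installed at a path that contains a space:

#!/bin/sh

# hypothetical example path containing a space
tool="/Library/Application Support/Example/exampletool"

"$tool" --version    # quoted: the path is passed as one argument
$tool --version      # unquoted: the shell splits the path at the space and fails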

Summary

As a system administrator it is important to understand the many contexts and environments that a script might be run in.

Whether you choose to write out all command paths or explicitly set the PATH in the script is a matter of coding standards or personal preference.

You can even mix and match, i.e. set the PATH to the ‘default four’ and use command paths for other commands.
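
A minimal sketch of this mixed approach, using PlistBuddy as an example of a command outside the ‘default four’:

#!/bin/sh
export PATH=/usr/bin:/bin:/usr/sbin:/sbin

# standard commands resolve through the PATH set above
sw_vers -productVersion

# commands outside the 'default four' keep their full path
/usr/libexec/PlistBuddy -c "Print :ProductVersion" \
    /System/Library/CoreServices/SystemVersion.plist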

My personal preference is the solution where I have to memorize and type less, so I set the PATH in the script.

Either way you have to be aware of what you are doing and why you are doing it.

All the Articles

While organizing links for my upcoming MacAD.UK presentation, I noticed that there are quite a few series of articles I have written over the past year or so. Some weren’t quite intended as series but turned into loose sequels. Others were intended as series from the start, but usually turned out longer than intended.

While I was organizing the links, I also created a new page on this site that organizes the series:

Article Series on Scripting OS X

I intend to keep it updated as new articles and series are added.

Enjoy!