Prefs CLI Tools for Mac Admins

Recently I have been working on some… well… “stuff” that uses custom configuration profiles. Very custom, and since I am testing things, they need to be updated a lot.

The issue with defaults

When you are working with defaults/preferences/settings/property lists on macOS, you will be familiar with the defaults command line tool. But, as useful as defaults can be, it has some downsides.

One of the great advantages of macOS’ preference system is that settings can be provided on multiple levels or domains. In my book “Property Lists, Preferences and Profiles for Apple Administrators,” I have identified 19 different levels where settings for a single application can originate.

You will be most familiar with plist files in /Library/Preferences (system), ~/Library/Preferences (user), and managed configuration profiles (managed). When an app or tool requests a setting, the preferences system will merge all those levels together and present only the most relevant value. When the developer uses the system APIs (correctly), they do not have to worry about all the underlying levels, domains and mechanisms very much, but automatically gain support for things like separated system and user level settings files and support for management through configuration profiles.

The macOS defaults command line tool can work with settings on different levels or domains, but will only show the settings from one at a time. By default it only works with the user domain settings stored in ~/Library/Preferences/. When you have settings in multiple levels or from configuration profiles, you may be able to point defaults directly at the files. Or in the case of managed settings from profiles, you have to use a different tool. Either way, you have to determine which setting might override another and which final value might be visible to the app or process.
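For example (the domain and key here are just an illustration), checking a single key across levels means running separate commands and comparing the results yourself — and for managed settings from a configuration profile you would need a different tool entirely:

% defaults read com.apple.screensaver idleTime
% defaults -currentHost read com.apple.screensaver idleTime
% defaults read /Library/Preferences/com.apple.screensaver idleTime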

A new prefs tool

Years back, I had built a Python script called prefs.py, which would not only show the coalesced set of settings but also their origin level. When macOS removed Python 2 in macOS 12.3, this tool obviously broke.

While working with preferences and profiles recently, this feature would have been quite useful to debug and verify preferences. I could have adapted the existing tool to work with MacAdmins Python 3, but felt I would learn something from recreating it in Swift. I had already started down that road just a bit for my sample project in this post.

So, you can find the new Swift-based prefs command line tool on GitHub. You can also download a signed and notarized pkg which will install the binary in /usr/local/bin/.

In its most basic form, you run it with a domain or application identifier. It will then list the merged settings for that preference domain, showing the level where the final value came from.

% prefs com.apple.screensaver
moduleDict [host]: {
    moduleName = "Computer Name";
    path = "/System/Library/Frameworks/ScreenSaver.framework/PlugIns/Computer Name.appex";
    type = 0;
}
PrefsVersion [host]: 100
idleTime [host]: 0
lastDelayTime [host]: 1200
tokenRemovalAction [host]: 0
showClock [host]: 0
CleanExit [host]: 1

I find this useful when researching where services and applications store their settings and also to see if a custom configuration profile is set up and applying correctly. There is a bit of documentation in the repo’s ReadMe and you can get a description of the options with prefs --help.

plist2profile

Another tool that would have been useful to my work, but that was also written in python 2 is Tim Sutton’s mcxToProfile. Back in the day, this tool was very useful when transitioning from Workgroup Manager and mcx based management to the new MDM and configuration profile based methods. If you have a long-lived management service, you will probably find some references to mcxToProfile in the custom profiles.

Even after Workgroup Manager and mcx based settings management were retired, Tim’s tool allowed you to create a custom configuration profile from a simple property list file. Configuration profiles require a lot of metadata around the actual settings keys and values, and mcxToProfile was useful in automating that step.

Some management systems, like Jamf Pro, have this feature built in. Many other management systems, however, do not. (Looking at you, Jamf School.) But even then, creating a custom profile on your admin Mac, or as part of an automation, can be useful.

So, you probably guessed it, I also recreated mcxToProfile in Swift. The new tool is called plist2profile and available in the same repo and pkg. I have focused on the features I need right now, so plist2profile is missing several options compared to mcxToProfile. Let me know if this is useful and I might put some more work into it.

That said, I added a new feature. There are two different formats or layouts that configuration profiles can use to provide custom settings. The ‘traditional’ layout goes back all the way to the mcx data format in Workgroup Manager. This is what mcxToProfile would create as well. There is another, flatter format which has less metadata around it. Bob Gendler has a great post about the differences.

From what I can tell, the end effect is the same between the two approaches. plist2profile uses the ‘flatter’, simpler layout by default, but you can make it create the traditional mcx format by adding the --mcx option.

Using it is simple. You just need to give it an identifier and one or more plist files from which it will build a custom configuration profile:

% plist2profile --identifier example.settings com.example.settings.plist
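If you want the traditional mcx layout described above instead, the same call with the --mcx option added should do it:

% plist2profile --identifier example.settings --mcx com.example.settings.plist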

You can find more instructions in the ReadMe and in the command’s help with plist2profile --help.

Conclusion

As I had anticipated, I learned a lot putting these tools together. Not just about the preferences system, but some new (and old) Swift strategies that will be useful for the actual problems I am trying to solve.

I also learned more about the ArgumentParser package to parse command line arguments. This is such a useful and powerful package, but its documentation fails in the common way: it describes what you can do, but not why or how. There might be posts about that coming up.

Most of all, these two tools turned out to be useful to my work right now. Hope they will be useful to you!

Build a notarized package with a Swift Package Manager executable

One of the most popular articles on this blog is “Notarize a Command Line Tool with notarytool.” In that post I introduced a workflow for Xcode to build and notarize a custom installer package for a command line tool. This workflow also works with apps and other projects that require a customized installer package building workflow. I use it in many of my own projects.

But Xcode is not the only way to build Swift binaries. Especially for command line tools, you can also use Swift Package Manager. This provides a mostly command line based interface to building and organizing your project, which you might prefer if you want to use an IDE that is not Xcode, or have Swift projects that need to run cross-platform.

I also have an older article on building a command line tool with Swift Package Manager. But then, I did not create an installer package or notarize the resulting binary.

Placing the binary in an installer package file is the best way to distribute a binary as you can control where in the file system the binary is installed. Notarizing the pkg file is necessary when you are distributing a command line tool, since it enables installations without scary dialogs or handling quarantine flags.

Also, some of the behavior of Swift Package Manager (SPM) and Xcode has changed since the previous posts. So, this article will introduce an updated workflow using Swift Package Manager tools and show how to sign, package, and notarize a command line tool for distribution.

Note on nomenclature: Swift Package Manager projects are called ‘packages.’ On macOS, installer files (with the pkg file extension) are also called ‘packages.’ We will be using SPM to build a notarized installation package (a pkg file) from a Swift package project. This is confusing. There is not much I can do about that other than using ‘installer package’ and ‘Swift package project’ to help distinguish.

Prerequisites

I wrote this article using Xcode 14.3.1 and Swift 5.8.1. It should also work with somewhat older or newer versions of Xcode and Swift, but I have not tested any explicitly.

Since I said earlier that using Swift Package Manager allows us to not use Xcode and maybe even build a cross-platform project, you may be wondering why we need Xcode. While we don’t need Xcode for our project, it is one way of installing all the tools we need, most importantly the swift and notarytool binaries. You can get those from the Developer Command Line Tools, as well. We will also see that we can combine Xcode with the command line Swift Package Manager workflow, which I find a very useful setup.

To submit a binary to Apple’s notarization process you will need a Personal or Enterprise Apple Developer account, and access to the Developer ID Application and Developer ID Installer certificates from that account. A free Apple Developer account does not provide those certificates, but they are necessary for notarization.

You can follow the instructions in the Xcode article on how to get the certificates and how to configure notarytool with an application specific password. If you had already done this previously you should be able to re-use all of that here. When you reach the ‘Preparing the Xcode Project’ section in that article, you can stop and continue here. Apple also has some documentation on how to configure notarytool.
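For reference, the credential setup boils down to a single store-credentials call; the profile name, Apple ID, and team ID below are placeholders, and notarytool will prompt for the app-specific password:

> xcrun notarytool store-credentials notary-example.com --apple-id "name@example.com" --team-id "ABCDEFGHJK"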

The sample code we will be using will only work on macOS as it uses CoreFoundation functions. Installer packages and notarization are features of macOS, too, so this is not really a problem here. You can use this workflow to build macOS specific signed binaries and notarized installation pkg files from a cross-platform Swift package project. This will work as long as you keep in mind that the tools to sign, package and notarize only exist and/or work on macOS.

The sample code

We will build the same simple sample tool as in the last article. The prf command (short for ‘pref’ or ‘preference’) reads a default setting’s effective value using the CFPreferencesCopyAppValue function.

The macOS defaults command will read preferences, but only from the user level, or from a specified file path. This ignores one of the main features of macOS’ preferences system as it will not show if a value is being managed by a different preference level, such as the global domain, a file in /Library/Preferences, or (most importantly for MacAdmins) a configuration profile.

You can learn all about preferences and profiles in my book “Property Lists, Preferences and Profiles for Apple Administrators.”

We will build a really simple command line tool, named prf which shows the effective value of a setting, no matter where the value comes from. You could make this tool far more elaborate, but we will keep it simple, since the code is not the actual topic for this article.

We will also be using the Swift Argument Parser package to parse command line arguments and provide a help message. We could build this simple tool without using Argument Parser, but using an external package module is one of the strengths of using Swift Package Manager.

Create the Swift Package project

With all the preparations done, it is time to create our Swift package. We will do all the work in the shell, so open Terminal or your other favorite terminal app and navigate to the directory where you want to create the project.

> cd ~/Projects

Then create a new directory with the name swift-prf. This will contain all the files from the Swift package project. Change directory into that new directory. All following commands will assume this project directory is the current working directory.

> mkdir swift-prf
> cd swift-prf

Then run the swift tool to set up the template structure for our command line tool or ‘executable.’

> swift package init --type executable 
Creating executable package: swift-prf
Creating Package.swift
Creating .gitignore
Creating Sources/
Creating Sources/main.swift

You can inspect the hierarchy of files that the init tool created in the Finder (open .) or in your preferred editor or IDE.

.gitignore
Package.swift
Sources
    main.swift

You can open this package project in Xcode. In older versions of Xcode you had to run a special swift package command to generate the Xcode project, but now, Xcode can open Swift package projects directly. Use xed (the ‘Xcode text editor invocation tool’) to open the current directory in Xcode.

> xed .

There is a pre-filled .gitignore (which will be hidden in Finder and probably your IDE), a Package.swift, and a Sources directory with a single main.swift inside. If you want to use git (or another version control) system, now is the time to initialize with git init.

Build the project with swift build and/or run it with swift run. Not surprisingly, the template prints Hello, world!.

> swift build
Building for debugging...
[3/3] Linking swift-prf
Build complete! (0.92s)
> swift run  
Building for debugging...
Build complete! (0.11s)
Hello, world!

After building, there will also be a .build directory (also hidden in Finder, unless you toggle the visibility of invisible files using shift-command-.) which contains all the interim files. In the debug folder, you can find the swift-prf executable. You can run it directly:

> .build/debug/swift-prf
Hello, world!

You can clean all the generated pieces from the .build directory with swift package clean. This will leave some empty folders behind but remove all the interim and final products. This means the next build is going to take much longer, but this can be helpful after reconfiguring the Package.swift file or when the compiler gets confused.

Sidenote: when you use Xcode to edit your Swift package project, and choose Build or Run from the Xcode interface, then it will build and run in a different location (~/Library/Developer/Xcode/DerivedData/swift-prf-<random-letters>/Build). You need to be aware of this when you alternate between Xcode and the command line.

Configuring the Package

The Package.swift file contains the configuration for a Swift package project. You can see that the executable package template has a single target named swift-prf that builds from the files in Sources.

To change the name of the executable file, change the value of name: in the .executableTarget to just prf. There is another name: earlier in the file that sets the name of the entire project; you can leave that as swift-prf. They do not have to match.

Then build the project in the command line and run it directly:

> swift build
Building for debugging...
[3/3] Linking prf
Build complete! (0.51s)
> .build/debug/prf          
Hello, world!

We want to add the Swift Argument Parser package to our project as a dependency, so we can use its functionality in our code. For that, we will have to add a ‘dependency’ to the project and then to the target, as well. Modify the Package.swift file to match this:

// swift-tools-version: 5.8
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
  name: "swift-prf",
  products: [
    .executable(name: "prf", targets: ["prf"]),
  ],
  dependencies: [
    .package(url: "https://github.com/apple/swift-argument-parser", from: "1.2.0"),
  ],
  targets: [
    .executableTarget(
      name: "prf",
      dependencies: [.product(name: "ArgumentParser", package: "swift-argument-parser")],
      path: "Sources")
  ]
)

This means that our project uses the package available at the given URL, and our target is going to use the specific product (or module or framework) named ArgumentParser from that package. Some packages have several products combined out of several targets.

You can find more information on the format of the Package.swift file in this overview, and the full documentation.

The next time you build after this change, it will download the repository, then build and link it into your executable. That might take a while. The next build should be much faster again. Also, a Package.resolved file will appear in the project. This file caches the current versions of the included packages, protecting you from unexpected changes when a package repo dependency updates. You can force Swift Package Manager to update the file with swift package update.

Sprinkle some code

Now that we have the Swift package project prepared, we can add the code to actually do something.

First, let’s keep the ‘Hello, world!’ for a moment, but put it in the right struct to use ArgumentParser. Change main.swift to:

import Foundation
import ArgumentParser
@main
struct PRF: ParsableCommand {
  func run() {
    print("Hello, world!")
  }
}

This should build and run fine from the command line with swift build and swift run. However, when you open this now in Xcode, you will see an error: 'main' attribute cannot be used in a module that contains top-level code

This comes from a long-running issue in Swift. In older versions of Swift, it appears on the command line as well. The workaround is easy, though: the error only seems to appear when the @main designator is in a file named main.swift. We can rename our main file to PRF.swift.

You may want to close the Xcode project window before you do this because this can confuse Xcode. If you manage to get Xcode into a confused state where the project in Xcode does not match what is on disk any more, quit Xcode and delete the .swiftpm/xcode directory, which is where Xcode keeps its generated files.

> mv Sources/main.swift Sources/PRF.swift

Now the project should build and run the same with the Swift Package Manager tools and in Xcode.

Now we can add the ‘full’ code for our tool. Keep in mind that the goal of this tutorial is not to learn how to write complex Swift code for command line tools, but to learn the infrastructure required to create and distribute them, so this code is intentionally simple and basic.

import Foundation
import ArgumentParser
@main
struct PRF: ParsableCommand {
  static var configuration = CommandConfiguration(
    commandName: "prf",
    abstract: "read effective preference value",
    version: "1.0"
  )
  @Argument(help: "the preference domain, e.g. 'com.apple.dock'")
  var domain: String
  @Argument(help: "the preference key, e.g. 'orientation'")
  var key: String
  func run() {
    let plist = CFPreferencesCopyAppValue(key as CFString, domain as CFString)
    print(plist?.description ?? "<no value>")
  }
}

When you compare that to the code from the last article, there are a few differences. We are using the @main attribute to designate the main entry point for the code (this was added in Swift 5.3) and I have added some help text to the tool and argument declarations.

When you use Swift Argument Parser, you should study the documentation on adding help to commands and to flags, arguments, and options. (To be honest, you should read the entire documentation, a lot has changed since the last article.)

When you now run the tool:

> swift run  
Building for debugging...
[3/3] Linking prf
Build complete! (0.54s)
Error: Missing expected argument '<domain>'
OVERVIEW: read effective preference value
USAGE: prf <domain> <key>
ARGUMENTS:
  <domain>                the preference domain, e.g. 'com.apple.dock'
  <key>                   the preference key, e.g. 'orientation'
OPTIONS:
  --version               Show the version.
  -h, --help              Show help information.

We get the help text generated by Swift Argument Parser with the extra information we provided in the code.

If you want to provide arguments to swift run, you have to add the executable name as well:

> swift run prf com.apple.dock orientation       
Building for debugging...
Build complete! (0.11s)
left

Or you can run the executable directly from the .build/debug directory. (This will not automatically re-build the command like swift run does.)

> .build/debug/prf com.apple.dock orientation
left

Since we provided a version in the CommandConfiguration, ArgumentParser automatically generates a --version option:

> .build/debug/prf --version       
1.0

Now that we have a simple but working tool, we can tackle the main part: we will package and notarize the tool for distribution.

Preparing the binary

When you run swift build or swift run it will compile the tool in a form that is optimized for debugging. This is not the form you want to distribute the binary in. Also, we want to compile the release binary as a ‘universal’ binary, which means it will contain the code for both Intel and Apple silicon, no matter which CPU architecture we are building this on.

The command to build a universal release binary is

> swift build --configuration release --arch arm64 --arch x86_64

When that command is finished, you will find the universal binary file at .build/apple/Products/Release/prf. We can check that it contains both the Intel (x86_64) and Apple silicon (arm64) code with the lipo tool:

> lipo -info .build/apple/Products/Release/prf
Architectures in the fat file: .build/apple/Products/Release/prf are: x86_64 arm64 

For comparison, the debug version of the binary only contains the platform you are currently on:

> lipo -info .build/debug/prf
Non-fat file: .build/debug/prf is architecture: arm64

Apple’s notarization process requires submitted binaries to fulfill a few requirements. They need a timestamped signature with a valid Developer ID and need the ‘hardened runtime’ enabled.

Xcode will always sign code it generates, but the swift command line tool does not. We will have to sign it ourselves using the codesign tool. You will need the full name of your “Developer ID Application” certificate for this. (Don’t confuse it with the “Developer ID Installer” certificate, which we will need later.)

You can list the available certs with

> security find-identity -p basic -v

and copy the entire name (including the quotes) of your certificate. Then run codesign:

> codesign --sign "Developer ID Application: Your Name (ABCDEFGHJK)" --options runtime  --timestamp .build/apple/Products/Release/prf

You can verify the code signature with

> codesign --display --verbose .build/apple/Products/Release/prf

Build the installation package

Now that we have prepared the binary for distribution, we can wrap it in an installer package file.

To cover all deployment scenarios, we will create a signed ‘product archive.’ You can watch my MacDevOps presentation “The Encyclopedia of Packages” for all the gory details.

First, create a directory that will contain all the files we want to put in the pkg. Then we copy the binary there.

> mkdir .build/pkgroot
> cp .build/apple/Products/Release/prf .build/pkgroot/

Then build a component pkg from the pkgroot:

> pkgbuild --root .build/pkgroot --identifier com.scriptingosx.prf --version 1.0 --install-location /usr/local/bin/ prf.pkg

The --identifier uses the common reverse domain notation. This is what the installer system on macOS uses to determine whether an installation is an upgrade, so you really need to pay attention to keep using the same identifier across different versions of the tool. The --version value should change on every update.

The --install-location determines where the contents of the payload (i.e. the contents of the pkgroot directory) get installed to. /usr/local/bin/ is a useful default for macOS, but you can choose other locations here.

Next, we need to wrap the component pkg inside a distribution package.

> productbuild --package prf.pkg --identifier com.scriptingosx.prf --version 1.0 --sign "Developer ID Installer: Your Name (ABCDEFGHJK)" prf-1.0.pkg

It is important that you use the “Developer ID Installer” certificate here. The --identifier and --version are optional with productbuild, but this data is required for some (admittedly rare) deployment scenarios, and we want to cover them all.

You can inspect the installer pkg file with a package inspection tool such as the amazing Suspicious Package. The package file should show as a signed “Product Archive.”

We don’t need the component pkg anymore, and its presence might be confusing, so let’s remove it:

> rm prf.pkg

Note: If you want to learn more about building installation packages, check out my book “Packaging for Apple Administrators”

Notarization

We are nearly there, just two more steps.

It is important to notarize pkgs that will be installed by a user, because otherwise they will get a scary warning that Apple can’t verify the pkg for malicious software.

notarytool submits the installer package to Apple’s Notarization process and returns the results. Use the keychain profile name you set up, following the instructions in the previous article or the instructions from the Apple Developer page.

> xcrun notarytool submit prf-1.0.pkg --keychain-profile notary-example.com --wait

This will print a lot of logging, most of which is self-explanatory. The process might stall at the “waiting” step for a while, depending on how busy Apple’s servers are. You should eventually get status: Accepted.

If you got a different status, or if you are curious, you can get more detail about the process, including rejection reasons, with notarytool log. You will need the ‘Submission ID’ from the submission output:

> xcrun notarytool log <submission-uuid> --keychain-profile notary-example.com

As the final step, you should ‘staple’ the notarization ticket to the pkg. This means that the (signed) notarization information is attached to the pkg-file, saving a round trip to Apple’s servers to verify the notarization status when a system evaluates the downloaded installer package file.

> xcrun stapler staple prf-1.0.pkg
Processing: /Users/armin/Desktop/swift-prf/prf-1.0.pkg
Processing: /Users/armin/Desktop/swift-prf/prf-1.0.pkg
The staple and validate action worked!

And with that, we have a signed and notarized installation pkg file! You can verify this with spctl:

> spctl --assess --verbose -t install prf-1.0.pkg 
prf-1.0.pkg: accepted
source=Notarized Developer ID

Automation

While it is instructive to do this process manually, it is also quite complex and error-prone. If you have been following this blog for any time, you will know that I don’t stop at detailed step-by-step instructions with explanations.

You can find a script to automate all of these steps here. The enclosing repository includes the entire project (all three files) for your reference.

There is a section at the beginning with variables to modify with the information specific to your environment and project, such as your developer ID information and the name of the credential profile for notarytool. Then there are a few variables, such as the product name, and the installation package identifier.
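The variable names below are purely illustrative (check the script itself for the actual names), but the kind of information you need to fill in looks roughly like this:

# illustrative only -- the actual variable names in pkgAndNotarize.sh may differ
sign_cert_app="Developer ID Application: Your Name (ABCDEFGHJK)"
sign_cert_installer="Developer ID Installer: Your Name (ABCDEFGHJK)"
credential_profile="notary-example.com"

product_name="prf"
pkg_identifier="com.scriptingosx.prf"
install_location="/usr/local/bin/"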

Run the pkgAndNotarize.sh script from the root of the Swift package project directory.

> ./pkgAndNotarize.sh

The script creates the installer pkg file in the .build directory. The last line of output is the path to the final, signed, notarized and stapled pkg file.

The script mostly follows the process described above, with a few extras. For example, the script determines the version dynamically by running the tool with the --version option. It also uses the modern compression options I described in this post.
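The version detection might look something like this (a sketch, not the literal code from the script):

# sketch: ask the freshly built release binary for its own version
binary=".build/apple/Products/Release/prf"
version=$("$binary" --version)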

If any of the steps in the script fail, you can determine what exactly failed from the output, error message and error code.

(I believe that this could probably be a makefile, but I have no experience with that (yet). I guess I will need to ‘make’ time for this…)

Conclusion

Apple provides developers and MacAdmins with amazing platforms and tools to build all kinds of powerful apps, tools and automations. But then they don’t really document any of the processes or best practices at all. The amount of searching, guesswork, and frustrating trial and error necessary to piece all of this together for a workflow like this one is quite the shocking condemnation of Apple’s documentation.

There are glimmers of hope. The documentation for the notarization process and notarytool is exemplary.

But they only cover one piece of this puzzle. A developer building a tool still has to figure out how to

  • sign all the binaries properly
  • assemble the binaries and resources necessary into an installation package payload
  • use pre- and postinstall scripts (and when, and when not, to use them)
  • choose which kind of installer package to build and which tools to use
  • securely manage the Developer ID certificates (this is especially challenging for developer teams)
  • automate this workflow with Xcode or Swift Package Manager or a CI/CD system

MacAdmins often complain about poorly built installer pkgs, and often for good reasons. But to be fair, there are no best practices and little to no documentation for this from Apple. How are developers supposed to know all of this? Most MacAdmins can define what an installer package should do and not do, but wouldn’t be able to explain to a developer how to build such an installer package, let alone integrate that into their build automations. And most developers don’t even know a MacAdmin to ask about this.

Apple requires that developers create signed and notarized archives for software distribution. And I agree wholeheartedly with their motivations and goals here. But when you impose requirements for distribution, you have to make the process of creating the installers the correct way easy, or at least well documented, whether you use Xcode or a different tool set, whether you want to distribute a simple self-contained app, a single command line tool, or a complex suite of tools and resources.

Apple has their work cut out to improve this. Official best practices and sample workflows for installer creation and distribution that consider and respect the requirements of MacAdmins for deployment, have been disgracefully lacking for a long time. The more requirements and security Apple piles on to application and tool distribution, the more desperately they need to provide documentation, best practices and guidance.

Until that happens, you have my paltry scripts.

RAID and macOS

RAIDs are a strange edge case that are rarely useful outside of servers, but when they are useful, they are very important. RAID is an acronym for ‘Redundant Array of Independent/Inexpensive Disks.’ It is a technology where you combine multiple, physical disks into a single virtual drive for redundancy, speed, or both.

Often the RAID system is handled by a dedicated controller in an external enclosure, but sometimes, you want or need to work with drives directly. macOS has some basic RAID functionality built-in and there are good third party options if you want to go further.

RAID Levels

If you are unfamiliar with RAIDs, we need to get some terminology sorted out first. There are different kinds of RAIDs which are called ‘levels.’

In this post, I am going to focus on level 1 or ‘RAID 1.’ With RAID 1, the data is ‘mirrored’ between two or more drives, so that each drive carries a complete copy of the data. This protects from data loss because of drive failure. Note that RAID 1 does not protect from other common reasons for data loss, such as file system or individual file corruption, or accidental or malicious file deletion. A RAID is never a replacement for a good backup strategy. Since the data is mirrored on all the devices, a RAID 1 will only have the capacity of the smallest drive in the set (two 2 TB drives mirrored still yield just 2 TB of usable space). It is generally recommended that drives in any RAID set should be of the same capacity and type, for best performance and efficiency.

A level 0 RAID (or ‘RAID 0’) is not actually redundant. In a RAID 0, the data is ‘striped’ across two or more drives so that writes and reads happen in parallel, which increases the data bandwidth available. Since the data is spread evenly (striped) across all drives in the RAID 0 set, failure of a single drive will result in complete data loss. The capacity of the striped RAID is the capacity of the smallest drive in the set multiplied by the number of drives in the set (two 2 TB drives striped yield 4 TB). Having drives of the same type and capacity is even more relevant for RAID 0 performance.

There are more RAID levels, such as 0+1, 10 and 5, and dedicated disk controllers will have more options (such as combining multiple drives of different sizes more efficiently), but we will focus on RAID 1 (mirror).

Here, there be dragons

Warning: many of the commands shown here to setup and experiment with disk drives and RAIDs may or will lead to loss of the data on the drives involved, so be careful. I strongly recommend disconnecting any drives other than those you are experimenting with from the Mac you are working on.

I would also recommend experimenting with a set of drives that contain no relevant data at all. Two USB sticks will do just fine to explore and test the functionality. Drives do not have to be of the same capacity and type for testing, but I do recommend that for actual use.

Apple RAID

macOS has built-in support for software-based mirror and stripe RAID called “AppleRAID.” This also provides a third option to concatenate drives, but concatenation provides neither redundancy, nor performance, so I do not recommend using it.

You can use the Disk Utility app to set up a RAID. It has a nice assistant, called RAID Assistant, that you can access from the File menu. It will ask you what kind of RAID you want to set up and allow you to select the drives and create a new RAID volume. However, this will delete the data on the disk drives, and there are a few features that are not exposed in the Disk Utility UI, so I will focus on how to do it from the command line.

You can keep Disk Utility open to get a visual representation of what is going on, though the Disk Utility app often has problems keeping up with changes done from the command line. You may have to quit and restart the app to force it to update its status. You want to enable “Show all Devices” from the View menu to see the physical drives as well as the file systems and virtual drives.

First, we need to identify the disks that we want to work with. When you run diskutil list it will list all the disks on the system. Usually disk0 will be the built-in drive, and disk3 will be the (synthesized) APFS container inside (with the System and Data volumes). But depending on what Mac you are using, what your configuration is, and what devices you had attached to the Mac before you started this, the numbers may be different.

Again, to be safe, unmount and disconnect any drives or file servers with data that you care about at this point, and then connect the two drives you want to use for experimentation. Their data will be erased!

Run diskutil list again and identify the device identifiers for the drives you will be working with. They should look like this:

/dev/disk4 (external, physical):

For me, the two drives were disk4 and disk5, so I will be using those numbers in my examples, but be sure to replace them with the device numbers on your system, otherwise you might be working with the wrong disk or volume.

Promoting a drive to mirror RAID

One of the features you can use from the command line is to ‘promote’ an existing drive to a mirror RAID without data loss. Apple RAID promotion works (as far as I can tell) only with HFS+ formatted Volumes, so let us reformat the first disk (disk4) as such:

> diskutil eraseDisk JHFS+ DiskName disk4 
Started erase on disk4
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk4s2 as Mac OS Extended (Journaled) with name DiskName
Initialized /dev/rdisk4s2 as a 59 GB case-insensitive HFS Plus volume with a 8192k journal
Mounting disk
Finished erase on disk4

Using the command line, we will be able to promote this HFS+ drive without having to erase it (again), so copy some (unimportant) files to it now.

diskutil has a sub-group of commands dedicated to the RAID functions, called appleRAID or ar for short. I am going to use the short form. You can run diskutil ar to get a list of the sub-commands for working with AppleRAID. You can read the diskutil man page for details.

Next we have to enable AppleRAID on the drive. Enabling RAID on a single drive seems a bit pointless, but this prepares everything on that drive, so that we can add more drives later.

> diskutil ar enable mirror DiskName
Started RAID operation on disk4s2 (DiskName)
Resizing disk
Unmounting disk
Adding a booter for the disk
Creating a RAID set
Bringing the RAID partition online
Waiting for the new RAID to spin up "8D05B6EB-DCFB-426D-885B-ED8C76DC2484"
Finished RAID operation on disk4s2 (DiskName)

The volume and the files you had copied earlier are still there. But the volume is now listed under “RAID Sets” in Disk Utility. You can see the single drive in the RAID Set in the UI there. You can also get this info on the command line with

> diskutil ar list   
AppleRAID sets (1 found)
===============================================================================
Name:                 DiskName
Unique ID:            8D05B6EB-DCFB-426D-885B-ED8C76DC2484
Type:                 Mirror
Status:               Online
Size:                 63.8 GB (63816400896 Bytes)
Rebuild:              manual
Device Node:          disk6
-------------------------------------------------------------------------------
#  DevNode   UUID                                  Status     Size
-------------------------------------------------------------------------------
0  disk4s2   467826B1-BBA2-4671-99CE-5CBB04E06882  Online     63816400896
===============================================================================

To make this a real mirror RAID, we need to add the second drive:

> diskutil ar add member disk5 DiskName                
Started RAID operation on disk6 (DiskName)
Unmounting disk
Repartitioning disk5 so it can be in a RAID set
Unmounting disk
Creating the partition map
Adding disk5s2 to the RAID Set
Finished RAID operation on disk6 (DiskName)

This will add the drive to the RAID. This will delete any data that might be on disk5!

When you look at the RAID in Disk Utility (you might have to restart the app for it to pick up the new status) you will now see both drives, but one of them has the status “Rebuilding” with a percentage. The status of the entire RAID set is now “Degraded.” The RAID system is still in the process of mirroring data to the new member. You can use the volume to read and write data at this time, but it is not redundant yet. If the first drive fails during rebuilding, the data will be gone.

Once the rebuilding is done, the status of the RAID will change to “Online,” which is the “good” status. At this point the data on the RAID will be resilient to the failure or removal of either of the drives.

You can also create the RAID with both drives from the start, but this will erase all the data on both drives (this is what RAID Assistant in Disk Utility does).

Before we can try the other way of creating a mirror RAID set, we need to “break” the mirror we have built so far.

> diskutil ar delete DiskName
Started RAID operation on disk6 (DiskName)
Unmounting volume for RAID set 8D05B6EB-DCFB-426D-885B-ED8C76DC2484
Destroying the RAID set 8D05B6EB-DCFB-426D-885B-ED8C76DC2484
Finished RAID operation on disk6 (DiskName)

If you waited for the rebuilding to be complete, both individual drives will each contain the data of the former mirror RAID. If the rebuilding was not complete yet, only the first drive will contain the data, the second drive will be empty.

Create a new RAID set

Now let’s build a new empty mirror RAID with both drives included from the start:

> diskutil ar create mirror DiskName APFS disk4 disk5
Started RAID operation
Unmounting proposed new member disk4
Unmounting proposed new member disk5
Repartitioning disk4 so it can be in a RAID set
Unmounting disk
Creating the partition map
Using disk4s2 as a data slice
Repartitioning disk5 so it can be in a RAID set
Unmounting disk
Creating the partition map
Using disk5s2 as a data slice
Creating a RAID set
Bringing the RAID partitions online
Waiting for the new RAID to spin up "FE3E7A3F-E4BF-4AEE-BC3C-094A9BFB3251"
Mounting disk
Finished RAID operation

Note that we can build an APFS volume on the RAID set with the command line tool. RAID Assistant in Disk Utility will build an HFS+ volume. In this new, empty RAID set, both drives will be online immediately.

When you replace mirror with stripe in this command, you will get a striped RAID 0 volume (performance instead of redundancy).

Drive failure

We can simulate a drive failure by unplugging one of the members. Sadly, macOS does not seem to have a notification for this event. Once you have removed the drive, you can run diskutil ar list and see that the status is “Degraded” and which member is “Missing/Damaged.” You can keep using the volume in degraded mode.

Once you plug the drive back in it will appear as ‘failed.’ You can start the process of repairing or rebuilding the mirror with

> diskutil ar repairMirror FE3E7A3F-E4BF-4AEE-BC3C-094A9BFB3251 disk5  
Started RAID operation
Unmounting disk
Repartitioning disk5 so it can be in a RAID set
Unmounting disk
Creating the partition map
Adding disk5s2 to the RAID Set
Finished RAID operation

Note: Syncing data between mirror partitions can take a very long time.
Note: The mirror should now be repairing itself.  You can check its status using 'diskutil appleRAID list'.

The UUID is the Unique ID of the RAID set you see with diskutil ar list. The warning you get at the end is fair. The rebuilding process will take a while. How long it takes depends on how full the volume is and how fast the new member drive is.

Downsides of AppleRAID

There are quite a few downsides to the built-in AppleRAID functionality. There is no notification or warning when one of the drives in a mirror goes offline and the RAID is running in a degraded state. The RAID will also not automatically rebuild when a missing drive re-appears. (There is an AutoRebuild option mentioned in the man page, but whenever I tried to enable it, the entire disk management stack froze in a way that required a reboot.)
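Since there is no built-in alert, you have to poll for a degraded state yourself. A minimal sketch (assuming a single AppleRAID set, and simply parsing the diskutil output shown above):

#!/bin/sh
# print the status of the first AppleRAID set, if there is one
status=$(diskutil appleRAID list | awk '/^Status/ {print $2; exit}')
echo "${status:-no AppleRAID sets found}"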

AppleRAID can be useful to quickly stripe some random disks for performance. But generally, if the data was of so much concern that I am considering RAID 1, I would not rely on AppleRAID.

SoftRAID

There is a wonderful third party tool for managing software-based RAIDs on macOS called SoftRAID (14-day free trial, then a tiered license). And, much to my delight, it also comes with a command line tool. Creating a RAID is not something you do regularly, so I went ahead and did this in the GUI app. Once that was set up, I used the command line tool to get the RAID’s status:

> softraidtool volume DiskName info

Info for "DiskName":

Mountpoint: /Volumes/DiskName
BSD disk: disk4
Total Bytes: 59.6 GB (64,016,478,208)
Free Bytes: 59.6 GB (64,016,478,208)
Volume format: unknown
Volume is DiskName
RAID level: RAID 1
DiskName ID: 09DF05C72BFFAD20
Optimized for: Workstation
Created: Jul 17, 2023 at 3:33:39 PM
Last Validated: never
Volume state: normal, 
Volume Safeguard: enabled
Total I/Os: 5,610
Total I/O Errors: 0
Total number secondary disks (including offline ones): 1

Disks used for this volume:
bsd disk:    SoftRAID ID:         Location and Size:
disk7     09DF053ECE82F980     (USB3 bus 0, id 4 - 59.8 GB)  secondary disk, 
disk6     09DF053D23386500     (USB3 bus 0, id 5 - 59.8 GB)  primary disk, 

The SoftRAID software also comes with a menubar app that shows the status of the RAID.

When you unplug one of the drives, the ‘Volume state’ changes to ‘missing disk.’ When you plug the missing drive back in, SoftRAID will automatically detect it and rebuild the RAID, when necessary. Rebuilding went so quickly that I had a hard time capturing the state from the command line. The more changes you apply to the degraded RAID the longer the rebuild takes.

> softraidtool volume SoftRAID info
SoftRAIDTool status: waiting for disk5 to finish (00:00:01)

Info for "SoftRAID":

Mountpoint: /Volumes/SoftRAID
BSD disk: disk5
Total Bytes: 59.6 GB (64,016,478,208)
Free Bytes: 59.6 GB (64,016,478,208)
Volume format: unknown
Volume is SoftRAID
RAID level: RAID 1
SoftRAID ID: 09DF06D95024CBE0
Optimized for: Workstation
Created: Jul 17, 2023 at 3:53:16 PM
Last Validated: never
Volume state: rebuiding, out of sync, 
Volume Safeguard: enabled
Volume progress: 15%
Current offset: 7,343,685,632
Time remaining: 00:07:24
Total I/Os: 22,752
Total I/O Errors: 0
Total number secondary disks (including offline ones): 1

Disks used for this volume:
bsd disk:    SoftRAID ID:         Location and Size:
disk4     09DF053ECE82F980     (USB3 bus 0, id 4 - 59.8 GB)  primary disk, 
disk7     09DF053D23386500     (USB3 bus 0, id 5 - 59.8 GB)  secondary disk, rebuiding, out of sync, 

You can parse this output using awk to get just the volume state. This is useful for reporting the state to Jamf Pro with an extension attribute:

#!/bin/sh

# reports the SoftRAID volume state as a Jamf Pro extension attribute

softraidtool="/usr/local/bin/softraidtool"

# if the softraidtool binary is missing, report that and stop
if [ ! -x "$softraidtool" ]; then
    echo "<result>SoftRAID not installed</result>"
    exit 0
fi

volumestate=$("$softraidtool" volume SoftRAID info | awk -F ': ' '/Volume state/ {print $2}')

echo "<result>$volumestate</result>"

Keep in mind that Jamf Inventory Updates (aka recon) may run very infrequently (the recommended default is once per day, and it shouldn’t run much more often than that to avoid database bloat), so the data in your Jamf Pro may be hours, or sometimes longer, out of date. If you want to react to changes in the RAID status more quickly, you will have to rely on tools other than Jamf Pro.

Conclusion

The best solution for RAIDs will always be a dedicated hardware controller. But there are also good reasons (cost) to just put together a “bunch of disks” into a RAID. The built-in AppleRAID functionality works, but has limitations, especially for mirror RAIDs. SoftRAID is a great tool to overcome these limitations. For Mac admins, both can be managed and monitored with command line tools, which allows automation and integration with your management system.

Shell Loop Interaction with SSH

If you have a bash script with a while loop that reads from stdin and uses ssh inside that loop, the ssh command will drain all remaining data from stdin. (This is not only true for ssh but for any command in the loop that reads from stdin by default.) This means that only the first line of data will be processed.

I encountered this issue yesterday. (I won’t go into details here, since it is for a very specialized purpose. I will say that it involved jot, ssh, an aging Solaris-based network appliance, and some newfangled XML/Web 2.0.) This website explains why the behavior occurs and how to avoid it.

A flawed method to run commands on multiple systems entails a shell while loop over hostnames, and a Secure Shell SSH connection to each system. However, the default standard input handling of ssh drains the remaining hosts from the while loop

from Shell Loop Interaction with SSH.
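A minimal sketch of the problem and of the usual workaround (the hostnames are placeholders; redirecting ssh’s stdin with -n, or from /dev/null, keeps it from eating the rest of the list):

# only host1 gets processed: the first ssh call drains host2 and host3 from stdin
printf 'host1\nhost2\nhost3\n' | while read host; do
    ssh "$host" uptime
done

# with -n, ssh reads from /dev/null instead, and the loop sees every host
printf 'host1\nhost2\nhost3\n' | while read host; do
    ssh -n "$host" uptime
done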

Some ssh Tricks

I found this website with a bunch of ssh tricks. Some highlights:

Compare a Remote File with a Local File

ssh user@host cat /path/to/remotefile | diff /path/to/localfile -

Useful for checking if there are differences between local and remote files.

opendiff (part of the Developer Tools installed with Xcode) and bbdiff (one of the tools installed by BBEdit) do not use stdin for their input, but you can work around that by copying the file to /tmp first:

scp user@host:/path/to/remotefile /tmp/remotefile && opendiff /path/to/localfile /tmp/remotefile

SSH Connection through host in the middle

ssh -t reachable_host ssh unreachable_host

Unreachable_host is unavailable from local network, but it’s available from reachable_host’s network. This command creates a connection to unreachable_host through “hidden” connection to reachable_host.

Using the -t option uses less overhead on the intermediate host. The same trick is used later in the article, where you directly attach to a remote screen session:

ssh -t remote_host screen -r

Though I prefer using screen -DR. Read the man page for details.

The next one however didn’t do anything for me; I suspect there is a piece missing in the command somewhere (most likely the empty backup-extension argument that macOS’ BSD sed expects after -i, as in sed -i '' 8d):

Remove a line in a text File

sed -i 8d ~/.ssh/known_hosts

However there is a dedicated tool for this: use

ssh-keygen -R host

instead. I re-image some machines over and over again and then run into the ssh host key errors. This is very useful.