macOS 26.4 brings more default app confirmation prompts

The 26.4 updates have been released and, among the many documented changes, there is one that the Apple documentation team seems to have neglected to tell us about. There are now several URL schemes and file types that will prompt for user confirmation when the default app is changed.

What happened so far

Some years ago—I believe it was in macOS High Sierra 10.13—Apple added a prompt for the user when an app or process changed the default application for the http URL scheme, i.e. the default web browser.

This was presumably implemented to prevent malware from sneakily switching out the default browser to capture web traffic and passwords.

Since the prompt allows the user to keep the current browser, management scripts have to consider that the user might reject the new browser, not notice the prompt, or notice the prompt, move it to the side, and ignore it. This made managing the default browser quite difficult. I have written about this.

Default apps for file types and other URL schemes could still be changed using the proper APIs without user interaction.
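As an illustration, here is a hedged sketch using my utiluti command line tool. The sub-command syntax follows its ReadMe, and the bundle identifiers are just examples; verify both against the version you have installed. The script is guarded so it does nothing on systems without utiluti.

```shell
# Hedged sketch: utiluti verbs as documented in its ReadMe; the bundle
# identifiers are examples. Guarded to no-op where utiluti is absent.
if command -v utiluti >/dev/null 2>&1; then
  # default handler for the http URL scheme, i.e. the web browser
  utiluti scheme http set com.google.Chrome
  # default app for plain text files (prompts the user on macOS 26.4)
  utiluti type public.plain-text set com.apple.TextEdit
  status="ran"
else
  status="skipped: utiluti not installed"
fi
echo "$status"
```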

What changes in macOS 26.4

In macOS 26.4 Apple has added user confirmation prompts to all file type/UTI default app changes.

Note: exactly which kinds of default app changes cause the prompt to appear changed during the beta phase of macOS 26.4. There might be further changes in later updates. If so, I will update this article accordingly.

Note 2: users can still change default apps for file types in the Finder Info window without an additional prompt.

This is really annoying when you want to pre-set default apps in a managed environment

Yes.

If you have scripts or automations that are changing the default apps for any of these URL schemes, and they run on a Mac with macOS 26.4, they will prompt the user and wait until the user either clicks the “Use” or “Keep” button.

In the best case, the user clicks the “Use” button and your script continues as before. A worse scenario is that the user clicks the “Keep” button and the change is not applied. Scripts will have to be updated to account for this case. At the very worst, a user can ignore the prompt and move it to the side, stalling your script or automation, and the process or management service that started it, indefinitely.

When your script or automation sets a list of default apps for many file types, an operation that is very common in development environments, the user will have to confirm every single change. A horrible user experience.

My command line tool utiluti has an option to change multiple default apps for file types read from a plist file, which is now rendered pretty much useless.

Apple has not provided a means to manage default app assignment using a configuration profile or declaration on macOS. (There is an MDM command, which can set the default browser, calling, and messaging app—but only for iOS and visionOS.)

Please file feedback with Apple for this through your AppleSeed for IT accounts.

Management with a configuration profile or declaration would set and lock the default application, which is often not a desirable configuration. In managed environments, admins often want to set a reasonable default (say, Xcode or Visual Studio Code for scripts and code files) but let the user change that later, should they want to.

I built desktoppr and utiluti for exactly these kinds of “set-once” workflows, but the user prompt now limits utiluti’s functionality severely.

What can we do?

First, Mac Admins will have to identify any scripts that set default apps for file types, which will now show (and wait for) a user prompt. You will then have to determine how to adapt each script.

For scripts that are initiated by the user, for example from a self service application, the prompt should not interfere with the workflow too much, as long as the script isn’t setting many defaults and creating many prompts. You may still have to update the script to react to the user rejecting the change.
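In script form, “reacting to the rejection” means reading the default back after attempting to set it. A hedged sketch (utiluti verbs per its ReadMe, guarded to no-op on systems without the tool):

```shell
# Hedged sketch: verify whether the user accepted the default app change.
# utiluti verbs follow its ReadMe; guarded to no-op without the tool.
desired="com.apple.TextEdit"
if command -v utiluti >/dev/null 2>&1; then
  utiluti type public.plain-text set "$desired"
  current="$(utiluti type public.plain-text)"
else
  current="$desired"   # pretend success where utiluti is unavailable
fi

if [ "$current" = "$desired" ]; then
  echo "default app set"
else
  echo "user kept the previous default app" >&2
fi
```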

For scripts that set defaults in an automated, silent way, commonly, but not exclusively, as part of an enrollment workflow, you will have to find a different solution. There are some scripts available that bypass the prompts by changing the Launch Services property list directly, but they are generally only reliable when applied early in the enrollment process and followed by a logout or restart.
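For reference, the file those scripts typically manipulate is Launch Services’ per-user settings plist. The path below is an observation from my own Macs, not a documented interface, and the format can change without notice:

```shell
# Observation, not documented API: Launch Services stores per-user
# handler settings (LSHandlers) in this plist. Editing it directly
# bypasses the prompt, but usually requires a logout or restart.
ls_plist="$HOME/Library/Preferences/com.apple.LaunchServices/com.apple.launchservices.secure.plist"

if [ -f "$ls_plist" ]; then
  # print the first few entries for inspection (macOS only)
  plutil -p "$ls_plist" | head -n 20
else
  echo "no Launch Services plist found at $ls_plist"
fi
```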

Use the User Template

Recently, Christopher Smith shared an approach on Mac Admins Slack, which he details a bit more in this blog post of his.

This approach has the advantage of placing the configuration before the user is created and the LaunchServices daemon is running. When a new user account is created, the system copies the plist from the User Template to the new home directory and the LS daemon loads it when it launches.

I have a sample project that builds a pkg to install the proper file in the user template.

Work with the user: Setup Checklist

If you need to change default app settings for file types on a “running” system, any time after the user account creation, you will have to find a way to deal with the user interaction.

We have recently published a public beta of “Setup Checklist” for Jamf Pro and Jamf School. Setup Checklist is a tool that guides the user through important configurations that the IT and/or security department wants set, where Apple does not give us the option to manage them without user approval.

This app can (among other things) ask a user to confirm a default app and wait until the prompt is confirmed. Should the user choose the “Keep” option, the prompt will be presented again (and again…). Setup Checklist can also present a list of apps that the user can choose from.

We designed this step to “manage” setting the default web browser without manipulating undocumented plist files. It now also works for changing default apps for file types. If you need to change the default app for many different file types, then this will still be tedious, since it will generate a prompt for every single change.

Please file feedback

I have said it before: while I hope this article helps you understand what is going on and gives you some idea of what you can and need to do, we need to let Apple know that this change is very detrimental to managed environments. Please file feedback with Apple through your AppleSeed for IT accounts that Mac Admins require a means to manage default apps without user interaction.

Swift re-write for QuickPkg and some thoughts on LLM aided coding

I have created a new version of quickpkg; you can get it here.

If all you care about is the new version of quickpkg, just follow the link. There are also a few new features. I hope you like them.

Full disclosure: I used Claude Code to create this new version. I recently got access to Claude Code through work and I chose quickpkg as an experiment to understand where modern “agentic” coding tools are and how they fit in my workflows, coding and learning processes.

I have been, and (spoiler) remain, a skeptic of the modern “AI” hype and the companies whose business it is. I am not a skeptic with regard to there being useful aspects to Large Language Models (LLMs) and machine-learning-based solutions in general. For example, I have been living in countries where the main language is not my first for more than twenty years, and the recent progress of translation software, whether text, visual, or audio based, has massively simplified that experience.

I have been trying out various LLM-based tools over the past few years. I always got frustrated very quickly. I was told a lot that I was “holding them wrong,” but the frustration always seemed to outgrow the benefit in short order. None of the upsides outweighed my concerns about the social, economic, ecological, and ethical impact of the tech. (More on that later.) Certainly not enough to purchase any of the subscriptions that would give me access to the better models, which would be so much better, I was repeatedly told.

I have always believed that I should know and understand the things I criticize, so it was time for an experiment.

Why quickpkg?

This seemed like the perfect experimental project to me. quickpkg addresses a very specific problem that I happen to know quite a bit about. It is simple, but not trivially simple. It is a command line tool, which is far less complex than a tool with a graphical interface.

quickpkg was originally written in Python 2 and when the demise of that version of Python was evident, I put in minimal effort to make it work with Python 3. Re-building it with Swift to remove that dependency had been on my to-do list for a long time, but it never made it high enough on my priority list.

Converting code from one programming language to another is tedious for humans (part of the reason I procrastinated on this), but something that coding assistants are supposedly very good at. On the other hand, building macOS installer packages is woefully under-documented, so I expected a bit of a struggle there.

How it went: the translation

To prepare the project, I created a new branch in the existing repository and created a template Swift Package Manager folder structure for a ‘tool’ (a command line executable using the swift-argument-parser library). I set the Swift language version in the Package.swift to 6.0, expecting/hoping that this would make it use the latest Swift concurrency. I told the agent that I wanted to translate the Python code to Swift using swift-argument-parser and the new swift-subprocess package.
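For reference, that scaffolding step can look like this. The ‘tool’ template type exists in recent Swift toolchains and pre-wires swift-argument-parser; the fallback to the plain ‘executable’ template is there for older toolchains. The project name is a placeholder.

```shell
# Sketch: create a template SPM command line tool project. '--type tool'
# (recent toolchains) includes swift-argument-parser; fall back to the
# plain 'executable' template on older toolchains.
mkdir -p quickpkg-swift
cd quickpkg-swift
if command -v swift >/dev/null 2>&1; then
  swift package init --type tool --name quickpkg 2>/dev/null ||
    swift package init --type executable --name quickpkg
else
  echo "swift toolchain not installed"
fi
```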

The agent went off for a few minutes to analyze the existing project, created a Claude.md file with its findings, and presented me with a plan for how it would split the functionality contained in a single Python file into various Swift files. The plan looked reasonable, so I told it to go ahead, and it started its work. I could watch the code it generated, and it asked for a few confirmations.

I had to interrupt it at this point, since it apparently had no idea about the swiftlang/subprocess package I had asked it to use and kept choosing either an older, long-unmaintained subprocess repo hosted on the Apple GitHub or one from Jamf, which uses Foundation.Process for running shell commands. Then the agent even preferred building its own functions (also with Foundation.Process) instead of using the subprocess package I wanted. I had to explicitly add the swiftlang subprocess repo to the Package.swift myself and reference its documentation before the agent consistently used it over the alternatives.

Once I had overcome that problem, the rest of the translation went fairly smoothly. It took maybe 10-15 minutes, which is obviously far faster than I could have done it.

Towards the end of that process, I could watch the agent repeatedly compiling the command line tool and fixing errors that occurred. This seemed a very human approach to me. When the compile succeeded, it started running the command line tool against a local app to test whether it actually did something. The only outcome it tested for was whether a pkg file with the expected name existed, not whether it was a valid installer pkg. It’s a good start, but there are obviously more things that would need to be tested.

It even ran the correct security command to determine a Developer ID certificate to test the --sign option. Then I realized I had documented that command in the ReadMe file for the Python tool, which gave me insight into where it got the information from.

The local application the agent chose to re-package was /System/Applications/Calculator.app which is a poor example for many reasons, but works for generating the pkg file. The resulting pkg file is useless because that folder is part of the signed system volume. I wondered for a moment, whether it had picked that up from the ReadMe, too, but I had used /Applications/Numbers.app in those examples. I had Numbers.app installed on the machine I was running this on, so why it didn’t respect that information from the documentation remains a mystery.

Once the agent told me it was ready, I did some more detailed tests, trying a few more input file types and several combinations of options. Since one of the main use cases I have for quickpkg is re-packaging Xcode, which is also the only real-world example of an app delivered in a xip archive, this took a while, even on an M4 MacBook Pro. Overall, about 90 minutes after giving the first set of instructions to Claude, I determined that the translation had worked.

Success?

Remember that Claude had a working Python script to start from. Nevertheless, aside from getting Claude to accept the (admittedly quite new) subprocess repository, this was a smooth process. I could, and probably should, have written up a list of commands and sample apps to use for testing, and Claude would have run those for me as well, saving some time in between, as I invariably got distracted while larger packages built.

At this point, I could have stopped and called it a success. The code works. I can’t tell for sure how long the translation would have taken me manually (more on that later), but I am certain that I couldn’t have done it in 90 minutes, let alone 15.

So, huge gain in efficiency, right?

Technical and cognitive debt

When I mentor people on scripting and coding, I always stress that “working” is the most fundamental success criterion and that everyone should be proud when they achieve it.

However, passing “it works” is only the first step along the way. If you plan to support, maintain, and possibly build on the code going forward, you need to take the time to clean up, refactor, and document it. Especially if you are planning to share the code.

Since the tool was working, I really wanted to publish and share it on my GitHub. But that means I will be responsible for supporting the tool and the code going forward. Regardless of how the code was created, it is now my responsibility. So, I have an obligation to review and understand the code. This is another reason I chose a small project with a limited scope: I anticipated that I wouldn’t have the time and energy to review and understand the code of a larger project, even though an agent could have generated one in a fairly short time.

I actually started the code review while I was testing whether the package build process worked as it should. As I said, some of those packages take a long time to build. Unfortunately, I started editing the generated code immediately, without creating a commit in the repo first. I regret this now, as I cannot link to those first changes.

Most of the code was good. There were a few cases of code repetition, as if a lazy programmer had copy/pasted certain code instead of abstracting it into a function or method. I have certainly been guilty of this a lot. But this is exactly what the “clean up” phase of a project is for.

There was one big four-way if-then clause in the ShellExecutor type that was partially redundant. It checked for a nil value on workingDirectory and used two different calls to Subprocess.run, even though that function already takes an optional value. Then it did the same check for input, resulting in a big, unwieldy if-then clause with four calls to Subprocess.run that were only slightly different. Not wrong, the code did the right thing, but it was very hard to read.
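In pattern form, the redundancy looks like this (signatures simplified, not the real Subprocess API): when the callee already accepts optionals, branching on nil at the call site only multiplies near-identical calls.

```swift
// Simplified sketch, not the real Subprocess API: the callee accepts
// optionals, so there is no need to branch on nil at the call site.
func execute(_ cmd: String, in dir: String? = nil, feeding input: String? = nil) -> String {
    "\(cmd) dir:\(dir ?? "-") input:\(input ?? "-")"
}

// The generated code effectively did this, in four nearly identical branches:
// if dir != nil { if input != nil { … } else { … } } else { … }

// Passing the optionals straight through collapses it to one call:
let result = execute("pkgbuild", in: "/tmp", feeding: nil)
print(result)
```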

I actually think the entire ShellExecutor type is redundant and comes from the very many projects that use Process to run shell commands and need a wrapper type. At that moment I was happy to fix only the most egregious issues. (I have since refactored and removed the ShellExecutor type for the 2.0.1 release.)

Again, the code was working before. This is cleanup and refactoring to make the code more readable and understandable. I strongly believe more readable, clean code is easier to understand, maintain, and extend at a later time. I value putting in this extra effort, whether I have written the code myself, or get it from somewhere else. This process also forces me to understand the code, not just read over it and nod and feel “that’s good.”

Until this point, I had mostly been editing the code myself. Going from thinking about a code change to making it in the editor is a long-trained habit for me. But then I remembered that I could tell Claude to do the refactoring. This worked surprisingly well. However, for small code changes, it felt slower and more complicated to phrase the change in ‘normal’ English rather than just applying it myself.

For example, I told the agent to create an extension on URL wrapping isFileURL and FileManager.default.fileExists(atPath:) to make all the file existence checks more readable. It did that and replaced all uses of the less readable FileManager.default.fileExists(atPath:) method. But I needed three attempts to phrase the request correctly, and I feel I would have been faster just writing the extension myself and using find and replace.
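The extension I had in mind looks roughly like this; the property name is my own choice, not an existing API:

```swift
import Foundation

// Sketch: wrap isFileURL and FileManager's existence check in one
// readable property (the name 'fileExists' is my choice, not an API).
extension URL {
    var fileExists: Bool {
        isFileURL && FileManager.default.fileExists(atPath: path)
    }
}

// usage: the temporary directory should exist on any system
let tmp = URL(fileURLWithPath: NSTemporaryDirectory())
print(tmp.fileExists)
```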

The run() function the agent originally generated was very long (again, something I have been guilty of a lot), and I asked it to refactor it with functions to make it more readable. The result was quite good, but I needed to review these changes again to understand them and be sure the functionality remained the same, and I feel that took at least as much time as doing it myself.

After a bit of refactoring and cleanup, I felt I understood the generated code. There was more cleanup to be done, which I put into the 2.0.1 update. But I was itching to add a few features that I wanted an updated version of quickpkg to have in 2026.

  • quarantine flags are removed from the payload before packaging
  • minimum OS version is picked up from the app bundle and applied to the pkg
  • pkgbuild‘s compression option is set to latest with a command line option to revert to legacy
  • quickpkg now builds distribution packages/product archives by default
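The first three items map onto well-known commands and flags. This is a guarded sketch with hypothetical paths and identifiers; check `man xattr` and `man pkgbuild` on your system, since the `--compression` and `--min-os-version` options need a reasonably recent macOS.

```shell
# Guarded sketch: the commands behind three of the new features.
# Paths and identifiers are hypothetical examples.
payload="./payload"
if command -v pkgbuild >/dev/null 2>&1 && [ -d "$payload" ]; then
  # remove quarantine flags recursively from the payload
  xattr -dr com.apple.quarantine "$payload"
  # build with the newer compression and a minimum OS version
  pkgbuild --root "$payload" \
           --identifier com.example.example-app \
           --version 1.0 \
           --compression latest \
           --min-os-version 11.0 \
           example.pkg
else
  echo "pkgbuild unavailable or no payload directory; skipping"
fi
```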

These weren’t complicated additions, and the agent handled them just fine. I really appreciated that it often (but not always) updated the ReadMe file to match the new options. The inconsistency was a bit frustrating.

Packaging the tool

I did try to use Claude to build a script which compiles, packages and notarizes the command line tool, which quickly turned into a frustrating experience. If the LLM could feel frustration I am sure it would have been mutual. Building, signing, and notarizing are famously under-documented tasks, even though my articles on the subject have been around for a while.

I gave up on that and copied the pkgAndNotarize script from another project. But I couldn’t let it be: I asked Claude for suggestions on how to improve that script, and it suggested checking whether the signing certificates and keychain profile entries actually exist, which I thought was a good idea.

However, it confabulated a notarytool store-credentials --list command to determine whether the keychain entry exists, and I didn’t catch that until later, when I actually tried to build the final pkg. That should teach me to trust the LLM at the edge of its competence.

Efficiency?

Compared to my earlier experiments with LLMs for coding, I was surprised how far the ‘agentic coding models’ have come. You cannot argue that they are completely useless anymore.

Translating working code from one language to another is an easier task than generating code from scratch, but still: the fifteen minutes or so it took to generate a working Swift version is impressively fast.

Human developers are generally quite bad at judging how long a task will take. They are also very bad at judging how long a task would have taken with or without LLM support, compared to however they actually did it. There is research supporting this claim.

So, take my estimates with a grain of salt, but I estimated (before I started on the Claude project) that re-writing quickpkg and adding the new features would take me four to eight hours.

Now that I have seen and reviewed the generated code, I could re-create it much faster than my original estimate. Had I done the translation by hand before putting the agent to the task, the prompts would have been different and my review of the generated code would have been faster, because I would have known what to expect. Either way, there is no fair control test.

Fifteen minutes compared to four to eight hours. I can see how someone might get excited at this point, call it a day and claim a huge efficiency gain.

There is a word for trusting the output of a coding agent without testing and verification: “vibe coding.” I consider it a horrendous lack of standards.

It took me more than an hour to verify that the generated code was actually doing what it was supposed to do. I consider this really important, since quickpkg generates installer packages that install files on potentially thousands of devices. I might have been able to save some time by giving the agent more detailed instructions on how to test. Automating tests is good. But it wouldn’t have been much faster, and defining the tests would have taken quite some time as well. Re-packaging Xcode simply takes a long time and is an essential test. And I would still have had to verify that the agent was performing and evaluating the tests properly.

Then it took me another three to four hours to understand, review, and clean up the code.

I would have had to test, review, and clean up the code if I had done the translation myself, but much of that would have happened during the re-write, so it is part of my original estimate. And, of course, I understand and trust code I wrote myself much better than code I get from elsewhere.

I do not dare declare my own code always perfect, but neither is LLM-generated code, so that’s a fair comparison. When I have to debug issues in the future, though, I will be faster at understanding the problem when it is my own code, or when I have invested the time to review, understand, and clean up the code.

In the end, we have five and a half hours of time spent with Claude versus the four-to-eight-hour estimate without. Much less exciting.

There’s a lot of discussion that could be had here. How good is my estimate? Would I be more efficient with an agent if I spent more time learning the tool and how to write proper prompts? Will future models or agents be much better? Is it necessary to review and understand the generated code, as long as it works?

A comparison

Indulge me for a moment. I will get back to the topic.

For my lunch break, I usually go for a walk. There is a shopping area nearby with a supermarket and a bakery, so I usually pick up some groceries. Depending on how much time I have available, I walk either a 2 km, 3 km, or 5 km loop. This gets me out of the house for some scenery, sun (weather permitting), and fresh air; provides some exercise; and allows—no, forces—me to disconnect for a while from whatever I am doing at the desk and screens. It keeps the groceries stocked, and I also get something nice from the bakery for lunch.

I could go get the groceries with the car. It would take less time, so if that is your metric, it would be “more efficient.”

Yet I have no desire at all to replace my walk with a car trip. Less time is not what I value for my lunch break.

A car trip would have several downsides. Instead of a relaxing walk through parks and backstreets, I’d have to focus on the road and traffic, bikes, and pedestrians while driving and looking for a parking spot. I wouldn’t get the exercise, little as it is. I wouldn’t get a mental break, which I know will reduce my focus and productivity in the afternoon and evening. I couldn’t enjoy the sun. (Or rain, as it may be.) A car trip would also use far more energy and be more of a burden on the environment.

If I really wanted to optimize my grocery shopping for time spent, I could go to the big supermarket once per week and not leave the house at all during the week.

It’s not that taking a walk or the car, or going to the big store once a week are “better” or “worse” solutions. Each is an optimization for a different goal. Each has a different metric, different values that it is more optimal for.

quickpkg is a simple project. This was an intentional choice for this experiment, since I didn’t want to spend too much time on it. The quickpkg rewrite was also the first time I used the new Subprocess package in one of my projects, so one of my goals was to learn how that worked. Had I let the agent use the old Process way of launching shell commands that it wanted to use initially, or had I not reviewed and cleaned up the generated code afterwards, I would have learned nothing about the new Subprocess package.

There are other code projects I am currently working on, which are far more complex than quickpkg. Yet I feel no desire to use the assistance of an agent on these projects. For these projects, my main goal is to have full ownership and understanding of the code and their workflows. I am learning a lot about how I can control aspects of the system with Swift code and the macOS native frameworks. A lot of this is new to me, or I am re-visiting things that I thought I knew from a different perspective and challenging my knowledge.

Obviously, a result has to be delivered eventually, but gathering knowledge about the system and how to code these particular problems, and exploring the limits of what is possible and, more importantly, what is not possible, has been a goal of these projects from the very beginning. In the course of this work we have already found some limitations we hadn’t anticipated, but also found solutions we had thought completely out of reach when we started planning.

If I didn’t challenge myself to explore the possibilities and craft the code and design the workflows, I believe the project would be far less useful than it is right now. I also believe I will be better at my profession and at implementing future projects because of these experiences.

Keep in mind that we consultants in the Mac management and system administration space live very much on the edge of what is commonly documented. Since LLMs work on probabilistic data from large data sets, they get worse where there is less documentation. I could tell that Claude was fairly solid with common tasks, such as building a command line tool and refactoring code, but started confabulating with pkgbuild and notarytool. When your project is within well-documented domains, you will have better results.

This is also the reason I don’t use LLMs for writing. For me, the process of writing is a fundamental part of sorting out, challenging, and clarifying the half-formed ideas in my head. I also generally enjoy the process, or at least gain satisfaction from the finished text. I would not and could not ask another person to do this process for me. How could I ask a machine? Why would I ask a machine?

Why would I take a car trip for my lunch break?

The upside

However, I will admit that I have used the built-in Xcode LLM functionality on a few occasions and found it helpful.

The first situation was a gnarly SwiftUI layout problem that I couldn’t find a solution for on the web. When I asked the Xcode 26 ChatGPT integration, it built a solution that worked, even though it seemed quite elaborate. Just last week, I found a weird crash that would happen when the window was resized a certain way, and I couldn’t understand why. I fed the crash log to the ChatGPT assistant, and it pointed to a recursion generated by the interaction of the generated layout code and a seemingly unrelated view object. The assistant’s suggestions to fix the issue turned out to be dead ends, but it would have taken me much longer to identify the problem without its analysis. (I was able to remove the problem by reviewing, refactoring, and simplifying the code. At least I hope so…)

When you have ‘Coding Intelligence’ enabled in Xcode, there will be a “Generate Fix for this Issue” button next to the error, which can be very helpful for explaining obscure compiler errors. SwiftUI certainly generates a few of those. Even though I rarely use the suggested fixes, the explanations of the issues are usually very helpful.

I believe it says more about the sad state of modern IDEs, systems, and frameworks when you need a large language model built with thousands of GPUs and hundreds of billions of tokens to understand a crash log or compiler error than it does about the supposed “intelligence” of the model. But I will admit that it has saved me a ton of time and frustration.

Should we focus on improving the frameworks, logs, and developer environments, rather than building monstrous data centers? Well, I guess that depends, like my lunch break walk, on what you are optimizing for…

Conclusion

I have been talking about efficiency and how we measure it, or don’t. I have not addressed all the other externalities that concern me with regards to LLMs and the general AI business these days.

My example illustrates that different solutions can be “best” when you are valuing different outcomes. I think a lot of the discussion around coding agents and LLM help in general is based on a mismatch of values.

You may care more about “immediate time spent,” with no concern for future ramifications and the time you may have to spend later improving the code. Technical and cognitive debt may not be part of your metrics. (They are difficult to measure.) You may not value the habit of building a tool as a means to learn about a particular topic. You may not care about the exploitative practices of the AI industry, which gathered and stole source material from wherever it could with no regard to ownership and licensing and now wants to re-sell the digested slop back to us. You may not care about the unintentional—or sometimes fully intentional—political, ethnic, sexist, and countless other biases in the data models. You may not care about the impact on your personal learning and growth, and on education in general. You may not care how the next generation of experts is supposed to build their experience. You may not care about the ecological impact of the industry and the massive data centers they are planning to build. You may not care about the skewed and possibly fraudulent economics, as the infusion of absolutely insane amounts of venture capital papers over the actual costs. You may be starting to care about the secondary economic impacts of the bubble, as prices for RAM and other components skyrocket.

You may disagree on some, or even all, of these points, which will change your evaluation of this technology.

The benefits you gain from this technology also depend very much on what you are using it for. The more data about a certain topic the LLM has ingested, the better the recommendations will be. When you ask it for code to build web solutions and related automations, the recommendations will be much better than when you ask it about building package installers for macOS, since there are orders of magnitude more data for the former than the latter.

The agent was very prone to inventing options for pkgbuild, productbuild, and notarytool, even after I had instructed it to consider the man pages. This is an important warning for people using agents to write automations in the Mac Admins space. For the same reason, LLMs are “weak” on recent developments, so you may get code that would have worked fine five years ago but doesn’t take recent changes to macOS and Apple platform deployment into account.

I am glad I did this experiment. For the first time, working with the agent felt really useful. I am not sure I would have ever overcome the writer’s block inherent in the tedious process of translating code. Using the agent to overcome that block was freeing. I experienced the wonder of a fascinating new technology. I can see how that can overshadow the concerns.

I believe the technology has merit. There is undoubtedly a usefulness to it. But in the current form, I think it is irresponsible to focus solely on the technical features and ignore all the other negative side effects. The benefits, when put under scrutiny, are much smaller than they initially appear.

I have to hope that society will eventually find a way to build and use these tools in an effective, ethical, and responsible way. I don’t believe this is the case today. I don’t think the benefits outweigh the downsides. For now, I will continue to stay away.

Swift Argument Parser: Exiting and Errors

I have introduced the Swift Argument Parser package before. I have been using this package in many of my tools, such as swift-prefs, utiluti, emoji-list and, just recently, translate.

Argument Parser is a Swift package library which adds complex argument, option, and flag parsing to command line tools built with Swift. The documentation is generally quite thorough, but a few subjects are a little, well…, under-documented. This may be because the implementations are obvious to the developers and maintainers.

One subject which confused me was how to exit early out of the tool’s logic, and how to do so in case of an error. Now that I understand how it works, I really think the solution is quite brilliant, but it is not immediately obvious.

Our example

Swift error handling uses values of types that conform to the Error protocol, which are thrown in the code. Surrounding code can catch the error or pass it on to higher-level code. Argument Parser takes errors thrown in your code and exits your tool early.

An example: in my 2024 MacSysAdmin presentation “Swift – The Eras Tour (Armin’s Version),” I built a command line tool to set the macOS wallpaper live on stage. To replicate that, follow these steps:

In Terminal, cd to a location where you want to store the project and create a project folder:

$ mkdir wallpapr
$ cd wallpapr

Then use the swift package command to create a template command line tool project that uses Argument Parser:

$ swift package init --type tool

The project will look like this:

📝 Package.swift
📁 Sources
   📁 wallpapr
      📝 wallpapr.swift

Xcode can handle Swift Package Manager projects just fine. You can open the project in Xcode from the command line with:

$ xed .

Or you can open Sources/wallpapr/wallpapr.swift in your favored text editor.

Replace the default template code with this:

import Foundation
import ArgumentParser
import AppKit

@main
struct wallpapr: ParsableCommand {
  @Argument(help: "path to the wallpaper file")
  var path: String

  mutating func run() throws {
    let url = URL(fileURLWithPath: path)
    for screen in NSScreen.screens {
      try NSWorkspace.shared.setDesktopImageURL(url, for: screen)
    }
  }
}

This is even more simplified than what I showed in the presentation, but will do just fine for our purposes.

You can build and run this command with:

$ swift run wallpapr /System/Library/CoreServices/DefaultDesktop.heic
Building for debugging...
[1/1] Write swift-version--58304C5D6DBC2206.txt
Build of product 'wallpapr' complete! (0.16s)

(I’ll be cutting the SPM build information for brevity from now on.)

This is the path to the default wallpaper image file. You can of course point the tool to another image file path.

Just throw it

When you enter a file path that doesn’t exist, the following happens:

$ swift run wallpapr nosuchfile
Error: The file doesn’t exist.

This is great, but where does that error come from? The path argument is defined as a String. ArgumentParser will error when the argument is missing, but it does not really care about the contents.

The NSWorkspace.shared.setDesktopImageURL(_:for:) method, however, throws an NSError when it cannot set the wallpaper. That NSError has an errorDescription property, which ArgumentParser picks up and displays, with a prefixed Error:.

This is useful. By just marking the run() function of ParsableCommand as throws and adding the try to functions and methods which throw, we get pretty decent error handling in our command line tool with no extra effort.

Custom errors and messages

Not all methods and functions will throw errors, though. When they do, the error messages might not be helpful, or might be too generic for the context. In more complex tools (and, honestly, nearly everything will be more complex than this simple example) you will want to provide custom messages and custom error handling, so you will need custom errors.

Since we have seen that ArgumentParser deals very nicely with thrown errors, let’s define our own.

Add this custom error enum to wallpapr.swift (anywhere, either right after the import statements or at the very end):

enum WallpaprError: Error {
  case fileNotFound
}

Then add this extra guard statement at the beginning of the run() function:

    guard FileManager.default.fileExists(atPath: path) else {
      throw WallpaprError.fileNotFound
    }

The code checks that the file path given in the path argument actually exists, instead of relying on the functionality in the setDesktopImageURL(_:for:) method. When you run this code, you get our custom error:

$ swift run wallpapr nosuchfile
Error: fileNotFound

This is nice, and while fileNotFound is a good name to use in the code, it is not very descriptive. We could add more description with a print just before the throw statement, but we already saw that the NSError thrown by setDesktopImageURL(_:for:) had a detailed description. How do we add one of those to our custom error?

Turns out there are two ways. Either the custom error conforms to LocalizedError and implements errorDescription (which is what NSError does), or it conforms to CustomStringConvertible and implements description (or both).

There are many good reasons to implement CustomStringConvertible on your types anyway, since description is also used in the debugger and the print statement. There are also situations where you might want a different message for the error description, so it is good to have options. For our example, we are just going to implement CustomStringConvertible. Change the code for the WallpaprError enum to:

enum WallpaprError: Error, CustomStringConvertible {
  case fileNotFound

  var description: String {
    switch self {
    case .fileNotFound: "no file exists at that path!"
    }
  }
}

And when you run again, you see the custom message:

$ swift run wallpapr nosuchfile
Error: no file exists at that path!

Note that the error message is written to standard error.
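To see that separation of standard output and standard error in action, here is a quick, generic shell sketch. The fail function is my own stand-in for wallpapr failing, so the example runs anywhere:

```shell
# stand-in for the tool: write the message to stderr and fail
fail() { echo "Error: no file exists at that path!" >&2; return 1; }

# stderr can be redirected independently of stdout
fail 2>/dev/null || code=$?   # discards the message, keeps the exit code
echo "exit code: $code"

fail 2>err.log || true        # captures the message in a file
cat err.log
```

Because the message goes to standard error, redirecting 2> affects it while ordinary stdout output stays untouched.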

Clean exits

In some workflows, you may want to exit the tool early, but without an error (an exit code of 0). When you try to use exit(0), you will get an error, since ArgumentParser overloads that function. Instead, ArgumentParser provides a CleanExit error that you can throw:

throw CleanExit.message("set wallpaper to \(path)")

Generally it is best to just let the run() function complete for a successful exit, but there are situations where this comes in handy.

Custom Exit Codes

ArgumentParser generally does the right thing and returns a 0 exit code upon successful completion of the tool and an exit code of 1 (non-zero represents failure) when an error is thrown. It also returns an exit code of 64 when it cannot parse the arguments. According to sysexits, this value (EX_USAGE) represents a command line usage error, i.e. badly entered options and arguments.

(You can customize your shell prompt to show the exit code of the previous command.)
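You can also inspect the previous command’s exit code directly with the $? shell parameter. A quick sketch, using sh -c 'exit N' as a stand-in for a tool returning code N (the EX_USAGE and EX_NOINPUT values come from sysexits):

```shell
# $? holds the exit code of the most recent command;
# the || guard keeps the failing command from aborting scripts under set -e
sh -c 'exit 64' || echo "exit code: $?"   # 64: EX_USAGE, bad options/arguments
sh -c 'exit 66' || echo "exit code: $?"   # 66: EX_NOINPUT, missing input file

usage_code=$(sh -c 'exit 64' || echo "$?")
```

This is handy when testing your tool’s error handling from a script rather than from an interactive prompt.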

In complex tools, you may want to return other exit codes mentioned in that man page, or custom errors for certain situations. ArgumentParser does have a built-in option: you can throw the ExitCode() error with a custom code. For example, we can replace our custom error with

throw ExitCode(EX_NOINPUT)

This will return an exit code of 66, but now we have lost the custom error message. This is a long-standing missing feature of ArgumentParser (see the discussion in this forum thread), but it is fairly easy to provide a workaround.

Add this extension to your tool:

extension ParsableCommand {
  func exit(_ message: String, code: Int32 = EXIT_FAILURE) throws -> Never {
    print(message)
    throw ExitCode(code)
  }
}

And then you can use this to get a custom message and a custom exit code.

try exit("no file exists at that path!", code: EX_NOINPUT)
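The same message-plus-custom-exit-code pattern is common in shell scripts, too. A minimal sketch (the die helper and its name are mine, not part of any tool):

```shell
# die: print a message to stderr, then fail with a custom code (default 1)
die() {
  echo "$1" >&2
  return "${2:-1}"    # in a standalone script, use exit instead of return
}

die "no file exists at that path!" 66 2>msg.log || code=$?
echo "exit code: $code"
```

A caller can then branch on the specific code to distinguish “missing input” from other failures.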

Building Simple Component Packages

Building packages can become complex very quickly. We will start with a few simple examples and build up from there.

You should read the first two posts in this series, first:

Boring Wallpaper

For our first simple example, imagine your organization insists that every Mac use the same wallpaper (or desktop picture). The first step for an admin is to provide the image file on each Mac.

Apple changed the naming for the background image from ‘desktop picture’ to ‘wallpaper’ in macOS Ventura to match the naming across all their platforms. As we will see, the old name still appears in some places.

Technically, an image file used as a wallpaper image could be stored anywhere on the disk. When you place image files in /Library/Desktop Pictures/ (still the old name) a user will see them as a choice among the pictures in the ‘Wallpaper’ pane in System Settings. They will have to scroll all the way to the right in the ‘Dynamic Wallpapers’ or ‘Pictures’ section but the images you add in that folder will appear there.

The /Library/Desktop Pictures folder does not exist on a ‘clean’ installation of macOS, but the installation package will create it for us.

All the resources for building the package have to be provided in a certain folder structure. It is easiest to contain all of that in a single project folder.

I will provide the instructions to create and build the projects in the command line as they are easier to represent than descriptions of how to achieve something in the graphical user interface. It does not really matter whether you create a folder from Terminal or in Finder. Do what you are comfortable with. However, getting comfortable with command line tools in Terminal is an important step towards automated package creation workflows.

My book “macOS Terminal and Shell” can teach you the fundamentals. Buy it on Ko-Fi or Apple Books!

I often find it helpful to have a Finder window for the current working directory open next to Terminal. You can open the current working directory in Finder with the open . command.

Create a folder for the project named BoringWallpaper in a location of your choosing.

> mkdir BoringWallpaper
> cd BoringWallpaper

You can download the sample image file I will be using here. Of course you can use another wallpaper image of your choice.

The BoringWallpaper folder will be our project folder. Create another folder inside BoringWallpaper named payload. Inside the payload folder, we will re-create the path where we want the file to be installed in the file system, in this case /Library/Desktop Pictures.

Since the Desktop Pictures folder name contains a space, we need to quote the path in the shell and scripts. You can also escape the space with a backslash \. The effect will be the same. In general, I recommend using tab-completion in the command line for file paths which will take care of special characters.
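Both forms resolve to the same path. A quick sketch with throwaway folder names of my choosing:

```shell
# quoting the whole path and escaping just the space are equivalent
mkdir -p "demo1/Library/Desktop Pictures"
mkdir -p demo2/Library/Desktop\ Pictures

ls -d "demo1/Library/Desktop Pictures" demo2/Library/Desktop\ Pictures
```

Either way, the shell passes a single argument containing the space to mkdir.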

We can create the entire folder structure at once with mkdir -p:

> mkdir -p "payload/Library/Desktop Pictures"

Then copy the first desktop picture BoringBlueDesktop.png into the Desktop Pictures folder inside payload:

> cp /path/to/BoringBlueDesktop.png "payload/Library/Desktop Pictures"

The payload folder will gather all the files we want the package to install.

Your folder structure inside the BoringWallpaper project folder should now look like this:

📁 payload
    📁 Library
        📁 Desktop Pictures
            📄 BoringBlueDesktop.png

The payload folder represents the root of the target volume during installation. This will usually be the root of the system volume /. We recreated the folder structure where we want the file to be installed in the file system. The installer will create intermediate folders that do not yet exist during installation.

In this example, the /Library folder will already exist, but the Desktop Pictures subfolder will not yet exist on a clean system, so it will be created and the image file will be placed inside.

When you run the installer on a system where the Desktop Pictures subfolder already exists, the image will be placed inside that. Should a file with the same name already exist in that location, it will be overwritten. Other files that might be in that folder will generally not be affected.
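You can sketch this “payload merged onto the target volume” behavior with plain shell commands. This is not what the installer actually runs, just an illustration; demo-payload and fake-root are made-up stand-ins:

```shell
# build a tiny payload and a stand-in target volume root
mkdir -p "demo-payload/Library/Desktop Pictures"
touch "demo-payload/Library/Desktop Pictures/BoringBlueDesktop.png"

mkdir -p fake-root/Library           # /Library already exists on a real system
touch fake-root/Library/other.txt    # unrelated file, should survive

# merge the payload contents onto the target
cp -R demo-payload/. fake-root/

ls "fake-root/Library/Desktop Pictures"
```

Note that the unrelated file in Library survives the merge, just as the installer leaves other files in an existing folder alone.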

Introducing pkgbuild

macOS provides the pkgbuild command line tool to create installer package components. Make sure your current working directory is BoringWallpaper and run

> pkgbuild --root payload --install-location / --identifier com.example.BoringWallpaper --version 1 BoringWallpaper.pkg

The --root option designates the root of our payload, so we pass the payload folder.

The identifier should be a string modeled on the reverse DNS of your organization, e.g. com.scriptingosx.itservices.BoringWallpaper. The exact form of the identifier does not really matter as long as it is unique.

We will use com.example.BoringWallpaper for the identifier. For the exercise, you can use those or replace them with your own. When you build packages for production or deployment they should use your organization’s reverse DNS format.

This should create a BoringWallpaper.pkg file in your project folder.

You should now inspect the resulting pkg file with the tools from earlier:

> lsbom $(pkgutil --bom BoringWallpaper.pkg )
.    40755   0/0
./Library    40755   0/0
./Library/Desktop Pictures    40755   0/0
./Library/Desktop Pictures/BoringBlueDesktop.png    100644  0/0 1739334 2618871431

(You may see a ._BoringBlueDesktop.png file appear here. That is a resource fork file. Preview.app sometimes creates these for file previews. You can safely ignore those.)

There are two relevant things to notice here: the payload contains the intermediate folders /Library and /Library/Desktop Pictures, which means they will be created, should they not exist on the system yet. This is generally what we want to happen, but a good thing to keep in mind.

Also notice that pkgbuild set the owner and group ID for the folders and the image file to 0/0 or root:wheel. This is the default ownership for files installed by packages, which ensures that non-admin users cannot change, delete or overwrite the files. This is a useful default, but there are options to have more granular control.

pkgbuild will always preserve the file mode or access privileges. When you change the file mode of a file in the payload folder before you run pkgbuild, the tool will use that mode in the payload and BOM. In this case, the 644 or rw-r--r-- file mode works quite well, but as a test, let’s change the mode to 444 (removing write access for the owner) and re-run the pkgbuild command.

> chmod u-w "payload/Library/Desktop Pictures/BoringBlueDesktop.png"
> pkgbuild --root payload --install-location / --identifier com.example.BoringWallpaper --version 1 BoringWallpaper.pkg
pkgbuild: Inferring bundle components from contents of payload
pkgbuild: Wrote package to BoringWallpaper.pkg
> lsbom $(pkgutil --bom BoringWallpaper.pkg )
.    40755   0/0
./Library    40755   0/0
./Library/Desktop Pictures    40755   0/0
./Library/Desktop Pictures/BoringBlueDesktop.png    100444  0/0 1739334 2618871431

Note that running the pkgbuild command again overwrote the previously generated pkg file without warning. This is generally not a problem, but something you need to be aware of.

We will want to change the image file in the following steps, so add the writable flag back:

> chmod u+w "payload/Library/Desktop Pictures/BoringBlueDesktop.png"

Handling extended attributes

In recent versions of macOS, pkgbuild will preserve extended file attributes in the payload.

This is a change in behavior to earlier versions of macOS, where you had to use the undocumented --preserve-xattr option to preserve extended attributes.

Most extended attributes contain metadata for Finder and Spotlight. For example, when you open the image file in Preview, you will get a com.apple.lastuseddate#PS extended attribute. You can use the -@ option of the ls command to see extended attributes:

> open "payload/Library/Desktop Pictures/BoringBlueDesktop.png"
> ls -l@ "payload/Library/Desktop Pictures"
total 3400
-rw-r--r--@ 1 armin  staff  1739334 Aug  1 14:03 BoringBlueDesktop.png
    com.apple.lastuseddate#PS        16 

You generally do not want to have extended attributes be part of your package payload. This is especially true of quarantine flags!

There are some exceptions. For example, signed shell scripts store the signature information in an extended attribute. In these cases you will have to carefully build your package creation workflow to ensure only the desired extended attributes are preserved in the package and installed to the target file system.

You can remove extended attributes recursively with the xattr command:

> xattr -cr payload

Then rebuild the package:

> pkgbuild --root payload --install-location / --identifier com.example.BoringWallpaper --version 1 BoringWallpaper.pkg
pkgbuild: Inferring bundle components from contents of payload
pkgbuild: Wrote package to BoringWallpaper.pkg

Creating a Build Script

The command line tools to create installer package files have a large number of options. Even in our simple example, pkgbuild requires several options and arguments. Each one needs to be entered correctly so the installer process does the right thing. An error in the identifier or a version number will result in unexpected behavior that may be very hard to track down. In addition, there are steps like running xattr -c that need to be performed before creating the package.

To avoid errors and simplify the process, we will create a shell script which runs the required commands with the correct options. The script will always repeat the commands with the proper arguments in the correct order, reducing the potential for errors. Updating the version is as simple as changing the variable in the script.

In your favored text editor, create a file named buildBoringWallpaperPkg.sh with the following code:

#!/bin/sh

pkgname="BoringWallpaper"
version="1.0"
install_location="/"
identifier="com.example.${pkgname}"

export PATH=/usr/bin:/bin:/usr/sbin:/sbin

projectfolder=$(dirname "$0")
payloadfolder="${projectfolder}/payload"

# recursively clear all extended attributes
xattr -cr "${payloadfolder}"

# build the component
pkgbuild --root "${payloadfolder}" \
         --identifier "${identifier}" \
         --version "${version}" \
         --install-location "${install_location}" \
         "${projectfolder}/${pkgname}-${version}.pkg"

The script first stores all the required pieces of information in variables. This way you can quickly find and change data in one place and do not have to search through the entire script.

The identifier variable is composed from our com.example reverse-DNS prefix and the pkgname variable set earlier.

Then the script sets the shell PATH variable, which is always a prudent step.

In the next line, the script determines the folder enclosing the script by reading the $0 argument, which contains the path to the script itself, and applying the dirname command, which returns the enclosing folder. This way we can use the projectfolder variable later to write the resulting pkg file into a fixed location (the project folder), instead of the current working directory.
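As a sketch of what that line computes (the path is a made-up example):

```shell
# dirname strips the last path component, leaving the enclosing folder
script="/Users/armin/Projects/BoringWallpaper/buildBoringWallpaperPkg.sh"
dirname "$script"
# prints: /Users/armin/Projects/BoringWallpaper
```

In the build script, $0 takes the place of the hard-coded example path.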

Finally the pkgbuild command is assembled from all the variables.

The backslash \ in a shell script allows a command to continue in the next line. Instead of a command in a single very long line we can have one line per argument. This makes the script easier to read and update.

In Terminal, set the script file’s executable bit with

> chmod +x buildBoringWallpaperPkg.sh

Delete the original BoringWallpaper.pkg and run the build script.

> rm BoringWallpaper.pkg
> ./buildBoringWallpaperPkg.sh
pkgbuild: Inferring bundle components from contents of ./payload
pkgbuild: Wrote package to ./BoringWallpaper-1.0.pkg 

This will create a new package file named BoringWallpaper-1.0.pkg.

Now you do not have to remember every option exactly but instead can run ./buildBoringWallpaperPkg.sh. If you need to change options like the version number, it is easy to do by changing a variable. Package creation is easy to repeat, and if you use a version control system (e.g. git), changes to the script are tracked.

You have literally codified the package creation process. If you are working in a team, you can point to the script and say: “This is how we create this package!” and everyone who has access to the script can recreate the package creation workflow. They can also read the script and understand what is going to happen. A script like this does not replace the need for documentation, but is better than no documentation at all.

To be precise, the build script and the files and folder structure in the payload are required together for the package creation workflow. They should be kept and archived together.

Ideally together with documentation describing:

  • motivation/reason for building this package
  • where and how the package is intended to be used
  • whether the software installed requires certain configurations that are not provided by the pkg, such as licenses or default settings through a script or a configuration profile and where and how to obtain and set those
  • macOS and platforms (Intel or Apple silicon) the package was built on
  • macOS versions and platforms the package was tested on
  • where to obtain the resources in the payload, should they get lost or need to be updated
  • person(s) or team responsible for this package project
  • version history or change log
  • an archive of older versions of the pkg file
  • uninstall process or script
  • any other relevant links and history, for example problems and issues that led to certain design choices

For developers, scripts like this can be part of an automated release or CI/CD workflow.

You do not have to include the version number in the package name, but it helps in many situations. It also helps when you build an archive or history of installers. You never know when you will need an older version of an installer. When vendors/developers provide the version number in their file names, it helps admins and users identify new or outdated versions.

You should (again) inspect the new package file you just created in Suspicious Package and using pkgutil.

Testing the Package

Finally, you can install this package on your test machine.

For this simple package, you can also use the Mac you are currently working on. With more complex packages, especially when we get into installation scripts and launchd configuration, a virtual machine or separate testing device is strongly recommended.

When you run the package in the Installer application, note that the dialogs are the defaults and very terse. System administrators will rarely build packages that are meant to be installed with the user interface, so this is not a problem. Most administrative package files will be installed by management systems in the background and never show the UI.

Developers can customize the user interface for installer packages with distribution files, which we will get to in a future post.

You can also use the installer command to install the package:

> sudo installer -pkg BoringWallpaper-1.0.pkg -tgt / -verbose
installer: Package name is BoringWallpaper-1.0
installer: Installing at base path /
installer: The install was successful.

After installing, go to /Library/Desktop Pictures and look for the BoringBlueDesktop.png image file, then open System Settings and go to “Wallpaper.” You will have to scroll down to the “Pictures” section and all the way to the right, but the picture will appear there!

You can also open Terminal and run the pkgutil commands to inspect what was installed (replace com.example.* with your identifier):

> pkgutil --pkgs="com.example.*"         
com.example.BoringWallpaper

> pkgutil --info com.example.BoringWallpaper
package-id: com.example.BoringWallpaper
version: 1.0
volume: /
location: 
install-time: 1754058568

> pkgutil --files com.example.BoringWallpaper
Library
Library/Desktop Pictures
Library/Desktop Pictures/BoringBlueDesktop.png

> pkgutil --file-info /Library/Desktop\ Pictures/BoringBlueDesktop.png                        
volume: /
path: /Library/Desktop Pictures/BoringBlueDesktop.png

pkgid: com.example.BoringWallpaper
pkg-version: 1.0
install-time: 1754058568
uid: 0
gid: 0
mode: 100644

If you created an installer package that attempted to install the image file in /System/Library/Desktop Pictures, it would fail. This directory is protected by two technologies, System Integrity Protection and the read-only sealed System volume. Figuring out the proper location to install management files is an important, but sometimes complicated task.

Removing the installed files

If you tested this package on your main Mac, you can remove the installed files with the following commands:

> sudo rm /Library/Desktop\ Pictures/BoringBlueDesktop.png
> sudo rmdir /Library/Desktop\ Pictures
> sudo pkgutil --forget com.example.BoringWallpaper

Note that the rmdir command will error when there are other files remaining in the folder. This is intentional here: we only want to remove the folder if it is empty, and not remove files other than those we installed. The /Library folder is part of the base macOS installation, so we are not going to touch it.
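A quick illustration of that rmdir behavior, using a throwaway folder:

```shell
mkdir -p demo/folder
touch demo/folder/leftover.txt

# rmdir refuses to delete a folder that still contains files
rmdir demo/folder 2>/dev/null || echo "not empty, left alone"

rm demo/folder/leftover.txt
rmdir demo/folder && echo "empty, removed"
```

This is exactly why rmdir is the safer choice over rm -r in an uninstall script.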

Similar to the build script, it can be useful to maintain an uninstall script while developing the package, especially for more complex installs. An uninstall script for this project might look like this:

#!/bin/sh

# uninstall Boring Wallpaper

# reverts the installation of com.example.BoringWallpaper


# remove the file
rm -vf "/Library/Desktop Pictures/BoringBlueDesktop.png"

# remove folder
# fails when there are other files remaining inside
# we do not want to affect files we did not install
rmdir -v "/Library/Desktop Pictures"

# forget the pkg receipt
pkgutil --forget com.example.BoringWallpaper

Since the installed file and folder are owned by root, you need to run the entire script with root privileges, e.g. using sudo:

> sudo ./uninstallBoringWallpaper.sh

Note that many device management services offer the option to run scripts on the Mac clients and they generally run the scripts with root privileges. Consult your management service’s documentation for details.

You have to remember to update the uninstall script when you change the payload and other settings in the package. This will be useful for your testing.

Developers or vendors can also provide the uninstall script to customers in case they need to uninstall the software. Administrators could use the uninstall script in a self service portal to allow end users to remove software they no longer require.

Note that software might create other files while it is running that are not part of the installer package. Use your judgment whether they need to be removed as part of the uninstall script. Some files might contain user data that should be preserved, even when the software is deleted.

Installing Packages

What is a package?

If you have been using a Mac, you will have encountered an installation package file or pkg.

Package files are used to install software and configuration files on your Mac.

Package files come in different flavors and formats, but they can be reduced to these relevant components:

  • a payload
  • installation scripts

A package file may have only a payload, only scripts, or both.

The payload is an archive of all the files that the package will install on the system. The package also contains a “bill of materials” (BOM) which lists where each file should be installed and what the file privileges or mode should be.

Installation scripts can be executed before and after the payload is installed.

Additionally, packages contain some metadata, which provides extra information about the package and its contents. They can also contain images and text files, such as license agreements that can customize the default Installer app interface.

Installer application

The most common way to install a package file and start its installation process is to open it with a double-click. This opens the default application for the pkg extension: Installer. The Installer app can be found in /System/Library/CoreServices/. However, you rarely need to open it directly. It is usually started indirectly by opening a package file.

After launch, the Installer app presents different panels to the user. The exact order and content of the panels depends on what the developer of the package configured. At the very least, it will show:

  • a short introduction
  • a prompt to authenticate for administrative privileges
  • a progress bar of the installation
  • whether the installation succeeded or failed

A package may also show:

  • a custom background image
  • a detailed introduction
  • a license agreement that needs to be accepted
  • an alternative installation location
  • options to select certain subsets of files and apps (components)
  • more custom steps implemented by the developer

The Installer app can also list the contents of a package file without installing it first. Choose ‘Show Files…’ (⌘I) from the ‘File’ menu to get a list of files in the payload. This list can be very extensive, and there is a search field to filter it.

If you want to see more than just the progress bar during the installation, you can select “Installer Log” from the “Window” menu (⌘L) and then increase the output to “Show all Errors and Logs” (⌘3).

You can also review the installation log afterwards, by opening the Console application from Utilities and choosing “install.log” under “Log Reports.” You can also look at /var/log/install.log directly.

This log is notoriously verbose. There will be many entries for each installation and some related system services like software update will log here, too.

Security

Packages can place files and executables in privileged locations in the file system. When you open a package file with the installer application, it will usually prompt for administrator privileges.

In the early days of Mac OS X, package installers had no limitations at all. These days, however, digital security and privacy are important features and criteria for platforms, and Apple has implemented several features in macOS which set strong limits on what third-party package installers can do.

Most importantly, System Integrity Protection (SIP, introduced in OS X El Capitan 10.11) and the read-only, signed system volume (introduced in macOS Catalina 10.15 and macOS Big Sur 11, respectively) prevent third-party packages from affecting system files.

Even with these protections in place, packages still provide many options for abuse. Packages are very useful to install legitimate software but can also be used to install and persist malicious tools.

File Quarantine

File quarantine does not directly limit the abilities of package installers, but it is important to understand how it works together with other security features in macOS.

Quarantine flags were introduced in Mac OS X as early as version 10.4 (Tiger) but were not actually used for much until later in 10.5 (Leopard).

When a file arrives on your Mac from an external source, such as a download in a web browser, an email attachment, or an external drive, the process that downloads or copies the file generally adds a quarantine flag. The quarantine flag is stored as an extended attribute on the file.

Note: the examples here reference desktoppr-0.5-218.pkg, the installer for an open source command line tool I wrote. You can download its installer pkg from the GitHub repository. The version and build number might be different.

After the download, you can see some of the metadata added to the download in the Finder Info window. Select the downloaded pkg file and choose ‘Get Info’ (⌘I) from the context or ‘File’ menu. In the Info window that appears, you need to expand the ‘More Info’ section, where you will see the URL it was downloaded from. There might be more than one URL if the browser was redirected.

You can get more information on the command line using the xattr tool (xattr stands for ‘extended attribute’ and is often pronounced like ‘shatter’):

> xattr desktoppr-0.5-218.pkg         
com.apple.metadata:kMDItemDownloadedDate
com.apple.metadata:kMDItemWhereFroms
com.apple.quarantine

The quarantine flag has the attribute name com.apple.quarantine. You can also use xattr to show the contents of the extended attribute:

> xattr -p com.apple.quarantine desktoppr-0.5-218.pkg
0083;6842fcbe;Safari;A2964F09-ACDF-430F-8CFF-48BD75C464CD

The contents are (in order, separated by semi-colons):

  • a hexadecimal flag number: e.g. 0083
  • a hexadecimal timestamp: e.g. 6842fcbe
  • the process that created the file: e.g. Safari
  • a universal identifier (UUID)

You can convert the hexadecimal timestamp into a human-readable time with:

> date -jf %s $((16#6842fcbe)) 
Fri Jun  6 16:35:42 CEST 2025
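You can also split the attribute value into its fields in a script. This is a minimal sketch in bash, using the sample attribute value from above; the `16#` prefix in the arithmetic expansion converts the hexadecimal timestamp into a decimal unix epoch:

```shell
#!/bin/bash

# sample value of com.apple.quarantine, taken from the xattr output above
quarantine="0083;6842fcbe;Safari;A2964F09-ACDF-430F-8CFF-48BD75C464CD"

# split the four semicolon-separated fields into variables
IFS=';' read -r flags hextime agent uuid <<< "$quarantine"

# convert the hexadecimal timestamp into a decimal unix epoch
epoch=$((16#$hextime))

echo "flags: $flags"
echo "epoch: $epoch"
echo "agent: $agent"
echo "uuid:  $uuid"
```

On macOS you can then turn the decimal epoch into a human-readable date with `date -r "$epoch"`.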

There are two more interesting extended attributes attached to the downloaded file. The awkwardly named com.apple.metadata:kMDItemDownloadedDate and com.apple.metadata:kMDItemWhereFroms contain the download date and the web addresses the file was downloaded from, respectively. When you look at them with xattr, you see the data is stored in a binary property list format.

> xattr -p com.apple.metadata:kMDItemDownloadedDate desktoppr-0.5-218.pkg
bplist00?3A????r

To show this in a human readable format, you have to pipe the output through xxd and plutil:

> xattr -x -p com.apple.metadata:kMDItemDownloadedDate desktoppr-0.5-218.pkg | xxd -r -p | plutil -p -
[
  0 => 2025-06-06 14:35:42 +0000
]

The name of the extended attribute gives us a hint that the information is also accessible through the file’s metadata, which is a bit easier:

> mdls -name kMDItemDownloadedDate desktoppr-0.5-218.pkg
kMDItemDownloadedDate = (
    "2025-06-06 14:35:42 +0000"
)

The kMDItemWhereFroms attribute contains the URLs the file was downloaded from. There might be more than one URL because of redirections.

The URLs for GitHub downloads tend to be very long. After manually downloading Firefox, I see these URLs:

> mdls -name kMDItemWhereFroms Firefox\ 139.0.1.dmg 
kMDItemWhereFroms = (
    "https://download-installer.cdn.mozilla.net/pub/firefox/releases/139.0.1/mac/en-US/Firefox%20139.0.1.dmg",
    "https://www.mozilla.org/"
)

It is important to remember that not all processes add quarantine flags to downloaded or copied files. As a general rule, applications with a user interface add quarantine flags and command line tools and background processes do not. But there can be exceptions either way.

For example, when you download a file using the curl command line tool, it will have no quarantine flag or other metadata extended attributes. This might be exploited by malicious software.

The quarantine flag serves as a signal to the system that this file or archive needs to be checked before opening or running. The part of the system that performs this check is called Gatekeeper.

Gatekeeper

Gatekeeper was introduced with OS X Mountain Lion (10.8) in 2012 and backported to Lion (10.7.5). Gatekeeper verifies the integrity, signature and notarization status of an app or executable before opening it.

A package installer file can be:

  • not signed at all
  • signed, but not notarized
  • signed and notarized

Gatekeeper works somewhat differently for installer packages compared to applications that you download in disk image (dmg) or other archive formats (e.g. zip). When you copy an application out of a disk image into the Applications folder or unarchive it from an archive, the system transfers the quarantine flag and metadata to the application bundle on the local disk. When you then open the app for the first time, the presence of the quarantine flag signals Gatekeeper to verify the integrity of the application using the signature and verify the notarization status. You can see the dialogs the system might present in this Apple support document.

The signature verifies the integrity and the source of an application or installer package. With an intact signature, you can be certain the package file has not been modified since it was signed. If the signature uses an Apple Developer ID, you can be fairly certain that the package was signed by that developer, or someone from that organization. There have been cases where Apple Developer ID certificates were stolen, but Apple will usually invalidate those fairly quickly.

Notarization is an extra step where the developer uploads the signed package file to Apple’s notarization servers. Apple then scans the package for certain security features and for known malware, and adds the package file’s hash to their notarization database. The developer can also ‘staple a ticket’ to the package file, so that Gatekeeper doesn’t have to reach out to Apple’s servers during the check.

When the Gatekeeper verification of the signature and notarization status succeeds, the user gets a prompt to confirm that they want to launch the application they just downloaded.

When either verification fails, most commonly because the app is not notarized, the user gets a different, quite scary, prompt, stating that the system cannot verify the package is free of malware. You will get the same dialog when the package is not signed at all.
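You can ask Gatekeeper for its assessment of a package from the command line with the spctl tool. This is a minimal sketch (macOS only), wrapped in a function so it can be reused in scripts; the pkg file name is from the example above:

```shell
#!/bin/bash

# ask Gatekeeper to assess an installer package (macOS only)
# for a signed and notarized pkg, spctl prints 'accepted' along with the
# assessment source; for an unsigned or un-notarized pkg it prints 'rejected'
assess_pkg() {
  local pkg="$1"
  /usr/sbin/spctl --assess --verbose --type install "$pkg"
}

# usage: assess_pkg desktoppr-0.5-218.pkg
```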

Generally, software developers should sign and notarize their packages, but this is not always the case. Open source projects often have neither the financial nor the logistical means to obtain an Apple Developer account, which provides the required Apple Developer ID certificates.

Bypassing Gatekeeper

Before you choose to install a package file which is not validly signed or notarized, you should first be certain the source is trustworthy. Then you should probably inspect the package using the tools in this post to verify that it only installs what it is supposed to, before actually installing it.

After attempting to open the pkg file with the Installer app by double-clicking and receiving the dialog that it could not be verified, click ‘Done.’ Then navigate to the ‘Privacy & Security’ pane in System Settings. Under the ‘Security’ section, you will see a message that the package ‘was blocked to protect your Mac’ with a button to ‘Open Anyway.’

This extra option will disappear after a few minutes.

You can also remove the quarantine flag using the command line.

> xattr -d com.apple.quarantine desktoppr-0.1.pkg

Without the quarantine flag, the Gatekeeper verification will not be triggered when the file is opened.

Packages installed by a device management service are not checked by Gatekeeper and do not need to be notarized. With some services, the packages may need to be signed, but not necessarily with an Apple Developer ID. Consult the documentation of your device management service for details.

Installer command line tool

You can also install package files from the command line using the installer tool.

To install a package file on the current system volume, use the installer command like this:

> sudo installer -package desktoppr-0.5-218.pkg -target /

There are shorter flags for both these options:

> sudo installer -pkg desktoppr-0.5-218.pkg -tgt /

Installing a package with the installer command will not enforce a restart or logout, whether the package requires one or not. You will have to perform or schedule the reboot manually.

Installing with installer will also not trigger Gatekeeper checks, whether the quarantine flag is set or not.

The -verbose flag increases the output of the installer tool, which can help when you need to analyze problems. The installer process also logs to /var/log/install.log, so you can monitor or review the installation log in the Console app.
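In a management script you will usually want to check installer’s exit status and point to the install log when something goes wrong. This is a minimal sketch (macOS only; the pkg file name is from the example above):

```shell
#!/bin/bash

# install a pkg on the current system volume and surface errors (macOS only)
# installer requires root privileges, so run this script with sudo or
# from a management agent running as root
install_pkg() {
  local pkg="$1"
  if ! /usr/sbin/installer -verbose -pkg "$pkg" -target / ; then
    echo "installation of $pkg failed, see /var/log/install.log" >&2
    return 1
  fi
}

# usage (as root): install_pkg desktoppr-0.5-218.pkg
```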

Updates: Setup Manager and utiluti

Setup Manager 1.3

We have released Setup Manager 1.3 today. You can see the release notes and download the pkg installer here.

Most of the changes to Setup Manager in the update do not change the workflow directly. The focus for this update was to improve the logging and information provided for troubleshooting.

With the 1.3 update, Setup Manager provides richer logging information. You will find some entries in the Setup Manager log that were not initiated by the Setup Manager workflow, but are still very relevant to troubleshooting the enrollment workflow. You can see all installation packages that are installed during the enrollment, as well as network changes. This allows an admin to see when managed App Store installations or other installations initiated from the MDM or Jamf App Installers are happening in the enrollment workflow.

These can be very helpful to determine what might be delaying or interrupting certain other installations.

When we started building the “enrollment tool we wanted to use ourselves” more than two years ago, we chose to build a full application, rather than a script-based solution which remote controls some interface. One of the immediate benefits is that we could make the user interface richer and more specialized. Localizing the app into different languages was easier, too. Setup Manager 1.3 adds Polish localization, bringing the total number of languages to ten!

(We rely on the help of volunteers from the community to localize into other languages. If you want to help localize Setup Manager into your language, please contact me.)

There was another goal, which took a bit longer to realize.

Swift apps allow us to dive deeper into the capabilities and information available in the operating system. A full-blown app is also more capable of analyzing and displaying multiple sources of information at the same time. For example, Setup Manager will display a big warning when the battery level drops below a critical threshold.

These kinds of workflows and user interfaces would be nearly impossible or, at the very least, extremely complex to build and maintain with shell scripts. In this case, Setup Manager is monitoring and parsing other log files and summarizing them down to some important events in the background, while it is working through its main purpose of running through the action list from the profile.

This feature will not be seen by most users or even techs who are sitting in front of the Mac, waiting for the base installation to finish. But when you are troubleshooting problems in your enrollment workflow, these extra log entries can be very insightful. Even during testing, they revealed some surprises in our testing environments.

We hope you like the new features. But we are not done yet and have plenty more ideas planned for Setup Manager!

utiluti 1.2

Since we are talking updates, I have also released an update to my CLI tool to set default apps for urls and file types (uniform type identifiers/UTI). utiluti 1.2 adds a manage verb which can read a list of default app assignments from plist files or a configuration profile. You can see the documentation for the new manage verb here and download the latest pkg installer here.

This allows you to define lists of default apps and push them with your device management system. Then you can run utiluti from a script in the same management system. This should greatly simplify managing default apps.
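When you set a default app from a management script, it is a good idea to read the value back afterwards to confirm the change was actually applied, for example when the user dismissed a confirmation prompt. This is a rough sketch, assuming utiluti’s get verb reports the current default app as described in the ReadMe; the UTI and bundle identifier are just examples:

```shell
#!/bin/bash

# set a default app with utiluti, then read the value back to verify
# that the change was actually applied (a user may reject a prompt)
# assumption: 'utiluti type get' prints the current default app for a UTI;
# check the tool's ReadMe for the exact verbs and output format
set_default_type() {
  local uti="$1" bundleid="$2"
  utiluti type set "$uti" "$bundleid" || return 1
  utiluti type get "$uti" | grep -qi "$bundleid"
}

# usage: set_default_type public.plain-text com.barebones.bbedit
```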

Note that while you can set the default browser with utiluti, whether you are using the manage option or not, the system will prompt the user to confirm the new default browser. For this use case, you will want to run the utiluti command in a context where the user is prepared and ready for that extra dialog (such as a Self Service app). There are other tools, such as Graham Gilbert’s make-default CLI tool, which bypass the system dialog. In my experience, tools like this work well in a fairly clean setup but require a logout or reboot after the change. This might fit your workflow, but you need to test.

I hope utiluti will find a place in your MacAdmin’s toolbox!

Installomator v10.8

Further chipping away at the backlog of new and updated labels, with 200 PRs merged or closed.

The new PR templates and automations are proving to be a big help! Many thanks to Bart for working on these and to all the maintainers for staying on top of things.

This release brings Installomator to 1025 (!) labels!

Many thanks to all the contributors, this tool wouldn’t exist without you!

You can find the detailed release notes and the pkg on the repo!

New tool: utiluti sets default apps

A while back I wrote a post on the Jamf Tech Thoughts blog about managing the default browser on macOS. In that post I introduced a script using JXA to set the default application for a given url scheme (like http, mailto, ssh, etc.). The beauty of using JXA/osascript is that it doesn’t require the installation of an extra tool.

However, there was a follow-up comment asking about default apps for file types, i.e. which app will open PDF files or files with the .sh file extension. Unfortunately, Apple has not bridged those AppKit APIs to AppleScript/JXA, which means it is not possible to use them in a script without dependencies.

Back then, I started working on a command line tool which uses those APIs. I didn’t really plan to publish it, since there were established tools, like duti, cdef and SwiftDefaultApp that provided the functionality. It was a chance to experiment and learn more about Swift Argument Parser. Then life and work happened and other projects required more attention.

A recent discussion on the Mac Admins Slack reminded me of this. Also, none of the above-mentioned tools have been updated in recent years. As far as I can tell, none of them have been compiled for Apple silicon. They don’t provide installation pkgs either, which complicates their distribution in a managed deployment.

So, I dusted off the project, cleaned it up a bit, and added a ReadMe file and a signed and notarized installation pkg. The tool is called utiluti (I am a bit proud of that name).

You can use utiluti to set the default app for an url scheme:

$ utiluti url set mailto com.microsoft.Outlook
set com.microsoft.Outlook for mailto

or to set the default app to open a uniform type identifier (UTI):

$ utiluti type set public.plain-text com.barebones.bbedit
set com.barebones.bbedit for public.plain-text

There are a bunch of other options; you can read the details in the ReadMe or on the command line with utiluti help.

The functionality is quite basic, but please provide feedback if there are features you’d like to have added.

Installomator v10.7

Chipping away at the backlog of PRs and issues, we have released a new version of Installomator today.

The main focus was on releasing a whole bunch of new and updated labels. But the maintainer team has also started work on implementing templates for issues and PRs, and some automation for testing. This should help a lot with the effort to keep up with new issues and PRs going forward.

Many thanks to all the contributors and maintainers for the hard work that went into this!

You can find [the detailed release notes and the downloads on the repo!](https://github.com/Installomator/Installomator/releases/tag/v10.7)