This week’s MacAdmins.news has (more) thoughts on the MacBook Neo, 50 years of Apple, my experiences with LLM assisted coding, many great articles from Mac admins, and lots of updates.
Swift re-write of quickpkg and some thoughts on LLM-aided coding
I have created a new version of quickpkg; you can get it here.
If all you care about is the new version of quickpkg, go follow the link. There are also a few new features. Hope you like them.
Full disclosure: I used Claude Code to create this new version. I recently got access to Claude Code through work and I chose quickpkg as an experiment to understand where modern “agentic” coding tools are and how they fit in my workflows, coding and learning processes.
I have been and (spoiler) remain a skeptic of the modern “AI” hype and the companies whose business this is. I am not a skeptic in regards to there being useful aspects to Large Language Models (LLMs) and machine learning based solutions in general. For example, I have been living in countries where the main language is not my first for more than twenty years, and the recent progress of translation software (text, visual, and audio based) has massively simplified that experience.
I have been trying out various LLM based tools over the past years. I always got frustrated very quickly. I was told, a lot, that I was “holding them wrong,” but the frustration always seemed to outgrow the benefit in short order. None of the upsides outweighed my concerns about the social, economic, ecological, and ethical impact of the tech. (More on that later.) Certainly not enough to purchase any of the subscriptions which would give me access to the better models, which, I was repeatedly told, would be so much better.
I have always believed that I should know and understand the things I criticize, so it was time for an experiment.
Why quickpkg?
This seemed like the perfect experimental project to me. quickpkg addresses a very specific problem that I happen to know quite a bit about. It is simple, but not trivially simple. And it is a command line tool, which is far less complex than a tool with a graphical interface.
quickpkg was originally written in Python 2 and when the demise of that version of Python was evident, I put in minimal effort to make it work with Python 3. Re-building it with Swift to remove that dependency had been on my to-do list for a long time, but it never made it high enough on my priority list.
Converting code from one programming language to another is tedious for humans (part of the reason I procrastinated on this) but something that coding assistants are supposedly very good at. On the other hand, building macOS installer packages is woefully under-documented, so I expected a bit of a struggle there.
How it went: the translation
To prepare the project, I created a new branch on the existing repository and created a template Swift Package Manager folder structure for a ‘tool’ (a command line executable using the swift-argument-parser library). I set the Swift language version in the Package.swift to 6.0, expecting (and hoping) that this would make it use the latest Swift concurrency. I told the agent that I wanted to translate the Python code to Swift using swift-argument-parser and the new swift-subprocess package.
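For reference, a minimal Package.swift for such a ‘tool’ template might look something like this. This is a sketch, not the actual quickpkg manifest; the version requirements and the macOS platform floor are my assumptions:

```swift
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "quickpkg",
    platforms: [.macOS(.v13)],  // assumed deployment target
    dependencies: [
        .package(url: "https://github.com/apple/swift-argument-parser", from: "1.5.0"),
        .package(url: "https://github.com/swiftlang/swift-subprocess", from: "0.1.0"),
    ],
    targets: [
        .executableTarget(
            name: "quickpkg",
            dependencies: [
                .product(name: "ArgumentParser", package: "swift-argument-parser"),
                .product(name: "Subprocess", package: "swift-subprocess"),
            ],
            // opt the target into the Swift 6 language mode
            swiftSettings: [.swiftLanguageMode(.v6)]
        )
    ]
)
```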
The agent went off for a few minutes to analyze the existing project, created a Claude.md file with its findings, and presented me with a plan on how it would split the functionality contained in a single Python file into various Swift files. The plan looked reasonable to me, so I told it to go ahead, and it started its work. I could watch the code it generated, and it asked for a few confirmations.
I had to interrupt it at this point, since it apparently had no idea about the swiftlang/subprocess package I had asked it to use and kept choosing either an older, long-unmaintained subprocess repo hosted on the Apple GitHub or one from Jamf, which uses Foundation.Process for running shell commands. Then the agent even preferred building its own functions (also with Foundation.Process) instead of using the subprocess package I wanted. I had to explicitly add the swiftlang subprocess repo to the Package.swift myself and reference its documentation before the agent consistently used it over the alternatives.
Once I had overcome that problem, the rest of the translation went fairly smoothly. It took maybe 10-15 minutes, which is obviously far faster than I could have done it.
Towards the end of that process, I could watch the agent repeatedly compiling the command line tool and fixing the errors that occurred. This seemed a very human approach to me. When the compile succeeded, it started running the command line tool with a local app to test if it actually did something. The only outcome it tested for was whether a pkg file with the expected name existed, not whether it was a valid installer pkg file. It’s a good start, but there are obviously more things that would need to be tested.
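A slightly stronger automated check than “a file with that name exists” could ask pkgutil whether the result is actually a well-formed flat package. A sketch (the pkg file name is hypothetical):

```shell
# expanding only succeeds for a valid flat package
pkgutil --expand "Calculator-1.0.pkg" /tmp/expanded-pkg && echo "valid flat pkg"

# list the files the payload would install, as a further sanity check
pkgutil --payload-files "Calculator-1.0.pkg" | head
```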
It even ran the correct security command to determine a Developer ID certificate to test the --sign option. Then I realized I had documented the command in the ReadMe file for the Python tool, which gave me insight into where it got the information from.
The local application the agent chose to re-package was /System/Applications/Calculator.app, which is a poor example for many reasons, but works for generating the pkg file. The resulting pkg file is useless because that folder is part of the signed system volume. I wondered for a moment whether it had picked that up from the ReadMe, too, but I had used /Applications/Numbers.app in those examples. I had Numbers.app installed on the machine I was running this on, so why it didn’t respect that information from the documentation remains a mystery.
Once the agent told me it was ready, I did some more detailed tests, testing a few more input file types and several combinations of options. Since one of my main use cases for quickpkg is re-packaging Xcode, which is also the only real-world example of an app delivered in a xip archive, this took a while, even on an M4 MacBook Pro. Overall, about 90 minutes after giving the first set of instructions to Claude, I determined that the translation had worked.
Success?
Remember that Claude had a working python script to start out with. Nevertheless, aside from getting Claude to accept the (admittedly quite new) subprocess repository, this was a smooth process. I could and probably should have written up a list of commands and sample apps to use for testing and Claude would have done those for me, as well, saving some time in between as I invariably got distracted while larger packages built.
At this point, I could have stopped and called it a success. The code works. I can’t tell for sure how long the translation would have taken me manually (more on that later) but I am certain that I wouldn’t be able to do it in 90 minutes, let alone 15.
So, huge gain in efficiency, right?
Technical and cognitive debt
When I mentor people on scripting and coding, I always stress that “working” is the most fundamental success criterion, and everyone should be proud when they achieve that.
However, passing “it works” is only the first step along the way. If you plan to support, maintain, and possibly build on the code going forward, you need to take the time to clean up, refactor, and document it. Especially if you are planning to share the code.
Since the tool was working, I really wanted to publish and share it on my GitHub. But that means I will be responsible for supporting the tool and the code going forward. Regardless of how the code for the tool was created, it is now my responsibility. So, I have an obligation to review and understand the code. This is another reason I chose a small project with a limited scope: I anticipated that I wouldn’t have the time and energy to review and understand the code for a larger project that an agent could have generated in a fairly short time.
I actually started with the code review while I was testing whether the package build process was working as it should. As I said, some of those packages take a long time to build. Unfortunately, I started editing the generated code immediately, without creating a commit in the repo. I regret this now as I cannot link to these first changes.
Most of the code was good. There were a few cases of code repetition, as if a lazy programmer had copy/pasted certain code instead of abstracting it into a function or method. I have certainly been guilty of this a lot. But this is exactly what the “clean up” phase of a project is for.
There was one big four-way if-then clause in the ShellExecutor type that was partially redundant. It checked for a nil value on workingDirectory and used two different calls to Subprocess.run, even though that function already takes an optional value. Then it did the same check for input, resulting in a big, unwieldy if-then clause with four calls to Subprocess.run that were only slightly different. Not wrong. The code did the right thing, but it was very hard to read.
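Schematically, the problem looked something like this (a sketch of the shape of the code, not the actual ShellExecutor implementation):

```swift
// Before (schematic): one Subprocess.run call per combination of optionals
if let workingDirectory {
    if let input {
        result = try await Subprocess.run(tool, arguments: args,
                                          workingDirectory: workingDirectory, input: input)
    } else {
        result = try await Subprocess.run(tool, arguments: args,
                                          workingDirectory: workingDirectory)
    }
} else {
    // ... two more nearly identical calls without workingDirectory ...
}

// After: the parameters are optional anyway, so pass them straight through
result = try await Subprocess.run(tool, arguments: args,
                                  workingDirectory: workingDirectory, input: input)
```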
I actually think the entire ShellExecutor type is redundant and stems from the many projects that use Process to run shell commands and need a wrapper type. At that moment, I was happy to fix only the most egregious issues. (I have since refactored and removed the ShellExecutor type for the 2.0.1 release.)
Again, the code was working before. This is cleanup and refactoring to make the code more readable and understandable. I strongly believe more readable, clean code is easier to understand, maintain, and extend at a later time. I value putting in this extra effort, whether I have written the code myself, or get it from somewhere else. This process also forces me to understand the code, not just read over it and nod and feel “that’s good.”
Until this point, I was mostly editing the code myself. The connection from thinking about a code change to editing it myself in the editor is a long-trained habit for me. But then I remembered that I could tell Claude to do the refactoring. This worked surprisingly well. However, for small code changes, it felt slower and more complicated to phrase the change in ‘normal’ English, rather than just applying the change myself.
For example, I told the agent to create an extension on URL to wrap isFileURL and FileManager.default.fileExists(atPath:) to make all the checks for whether a file exists more readable. It did that and replaced all the uses of the less readable FileManager.default.fileExists(atPath:) method. But I needed three attempts to phrase the request correctly, and I feel I would have been faster just writing the extension myself and using find and replace.
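The extension itself is tiny. A sketch of what it could look like (the property name is my choice, not necessarily what ended up in quickpkg):

```swift
import Foundation

extension URL {
    /// True when this is a file URL that points at an existing item on disk.
    var existsOnDisk: Bool {
        isFileURL && FileManager.default.fileExists(atPath: path)
    }
}

// before: FileManager.default.fileExists(atPath: appURL.path)
// after:  appURL.existsOnDisk
```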
The run() function the agent originally generated was very long (again, something I have been guilty of a lot), so I asked it to refactor it into functions to make it more readable. The result was quite good, but I needed to review these changes again to understand them and be sure the code and functionality remained the same, and I feel that took at least as much time as doing it myself.
After a bit of refactoring and cleanup, I felt I understood the code that was generated. There was more cleanup to be done, which I put in the 2.0.1 update. But I was itching to add a few features that I wanted an updated version of quickpkg in 2026 to have.
- quarantine flags are removed from the payload before packaging
- minimum OS version is picked up from the app bundle and applied to the pkg
- pkgbuild’s compression option is set to ‘latest’, with a command line option to revert to ‘legacy’
- quickpkg now builds distribution packages/product archives by default
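These features map onto standard macOS command line tools. Roughly, the equivalent manual steps would look something like this (paths and version numbers are hypothetical):

```shell
# remove quarantine flags from the payload before packaging
xattr -dr com.apple.quarantine "/tmp/payload/Example.app"

# build a component pkg with modern compression and a minimum OS version
pkgbuild --component "/tmp/payload/Example.app" \
         --install-location /Applications \
         --compression latest \
         --min-os-version 13.0 \
         "Example-component.pkg"

# wrap the component pkg in a distribution package/product archive
productbuild --package "Example-component.pkg" "Example-1.0.pkg"
```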
These weren’t complicated additions and the agent did those just fine. I really appreciated that it often (but not always) would update the ReadMe file to match the new options. The inconsistency was a bit frustrating.
Packaging the tool
I did try to use Claude to build a script which compiles, packages and notarizes the command line tool, which quickly turned into a frustrating experience. If the LLM could feel frustration I am sure it would have been mutual. Building, signing, and notarizing are famously under-documented tasks, even though my articles on the subject have been around for a while.
I gave up on that and copied the pkgAndNotarize script from another project. I couldn’t let it be and asked Claude for suggestions on how to improve that script and it suggested checking whether the signature certificates and keychain profile entries actually existed, which I thought was a good idea.
However, it confabulated a notarytool store-credentials --list command to determine whether the keychain entry exists, and I didn’t catch that until later, when I actually tried to build the final pkg. That should teach me to trust the LLM at the edge of its competence.
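For the record, notarytool has no subcommand to list stored credentials. One way to sanity-check the prerequisites before building, assuming a hypothetical profile name of ‘my-profile’:

```shell
# check that a Developer ID Installer certificate is present
security find-identity -v -p basic | grep -q "Developer ID Installer" \
    || echo "no Developer ID Installer certificate found"

# there is no 'store-credentials --list'; running a harmless subcommand
# against the profile fails quickly when the keychain entry is missing
xcrun notarytool history --keychain-profile "my-profile" > /dev/null 2>&1 \
    || echo "keychain profile 'my-profile' not usable"
```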
Efficiency?
Compared to my earlier experiments with LLMs for coding, I was surprised how far the ‘agentic coding models’ have come. You can no longer argue that they are completely useless.
Translating working code from one language to another is an easier task than generating code from scratch, but still: the fifteen minutes or so it took to generate a working Swift version is impressively fast.
Human developers are generally quite bad at judging how long a task will take. They are also very bad at judging how long a task would have taken them with (or without) LLM support, compared to however they actually did it. There is research supporting this claim.
So, take my estimates with a grain of salt, but I estimated (before I started on the Claude project) that re-writing quickpkg and adding the new features would take me four to eight hours.
Now that I have seen and reviewed the generated code, I could re-create that much faster than my original estimate. Had I done the translation by hand before I put the agent to that task, the prompts would have been different and my review of the generated code would have been faster, because I would have had an idea of what to expect. So, either way, there is no fair control test.
Fifteen minutes compared to four to eight hours. I can see how someone might get excited at this point, call it a day and claim a huge efficiency gain.
There is a word for trusting the output of a coding agent without testing and verification: “vibe coding.” I consider it a horrendous lack of standards.
It took me more than an hour to verify that the generated code was actually doing what it was supposed to. I consider this really important, since it generates package installers that install files on potentially thousands of devices. I might have been able to save some time by giving the agent more detailed instructions on how to test. Automating tests is good, but it wouldn’t have been much faster, and defining the tests would have taken quite some time as well. Re-packaging Xcode simply takes a long time and is an essential test. Also, I would still have had to verify that the agent was performing and evaluating the tests properly.
Then it took me another three to four hours to understand, review, and clean up the code.
I would have had to test, review, and clean up the code if I had done the translation myself, but much of that would have happened during the re-writing, so it is part of my original estimate. And, of course, I understand and trust the code I wrote myself much better than code I get from elsewhere.
I do not dare to declare my code as always perfect, but neither is LLM generated code, so that’s a fair comparison. When I have to debug issues in the future though, I will be faster understanding the issue when it is my own code, or when I invested the time to review, understand, and clean up the code.
In the end, we have five and a half hours of time spent with Claude versus the four to eight hour estimate without. Much less exciting.
There’s a lot of discussion that could be had here. How good is my estimate? Would I be more efficient with an agent if I spent more time learning the tool and how to write proper prompts? Will future models or agents be much better? Is it necessary to review and understand the generated code, as long as it works?
A comparison
Indulge me for a moment. I will get back to the topic.
For my lunch break, I usually go for a walk. There is a shopping area nearby, with a supermarket and a bakery, so I usually pick up some groceries. Depending on how much time I have available, I walk either a 2km, 3km or 5km loop. This gets me out of the house for some scenery, sun (weather permitting), and fresh air, provides some exercise, allows—no, forces—me to disconnect for a while from whatever I am doing at the desk and screens. It keeps the groceries stocked and I also get something nice from the bakery for lunch.
I could go get the groceries with the car. It’d take less time, so if that is your metric, it would be “more efficient.”
Yet I have no desire at all to replace my walk with a car trip. Less time is not what I value for my lunch break.
A car trip would have several downsides. Instead of a relaxing walk through parks and backstreets, I’d have to focus on the road and traffic, bikes, and pedestrians while driving and looking for a parking spot. I wouldn’t get the exercise, little as it is. I wouldn’t get a mental break, which I know will reduce my focus and productivity in the afternoon and evening. I couldn’t enjoy the sun. (Or rain, as it may be.) A car trip would also use far more energy and be more of a burden on the environment.
If I really wanted to optimize my grocery shopping for time spent, I could go to the big supermarket once per week and not leave the house at all during the week.
It’s not that taking a walk or the car, or going to the big store once a week are “better” or “worse” solutions. Each is an optimization for a different goal. Each has a different metric, different values that it is more optimal for.
quickpkg is a simple project. This was an intentional choice for this experiment, since I didn’t want to spend too much time on it. The quickpkg rewrite was also the first time I used the new Subprocess package in one of my projects, so one of my goals was to learn how that worked. Had I let the agent use the old Process way of launching shell commands that it wanted to use initially, or had I not reviewed and cleaned up the generated code afterwards, I would have learned nothing about the new Subprocess package.
There are other code projects I am currently working on, which are far more complex than quickpkg. Yet I feel no desire to use the assistance of an agent on these projects. For these projects, my main goal is to have full ownership and understanding of the code and their workflows. I am learning a lot about how I can control aspects of the system with Swift code and the macOS native frameworks. A lot of this is new to me, or I am re-visiting things that I thought I knew from a different perspective and challenging my knowledge.
Obviously, a result has to be delivered eventually, but gathering knowledge about the system and how to code these particular problems and exploring the limits of what is possible and, more importantly, what is not possible, has been a goal of these projects from the very beginning. In the course of this project, we have already found some limitations we hadn’t anticipated, but also found solutions we had thought completely out of reach when we started planning.
If I didn’t challenge myself to explore the possibilities and craft the code and design the workflows, I believe the project would be far less useful than it is right now. I also believe I will be better at my profession and at implementing future projects because of these experiences.
Keep in mind that, as consultants in the Mac management and system administration space, we live very much on the edge of what is commonly documented. Since LLMs work on probabilistic data from large data sets, they get worse when there is less documentation. I could tell that Claude was fairly solid with common tasks, such as building a command line tool and refactoring code, but started confabulating with pkgbuild and notarytool. When your project is more within well-documented domains, you will have better results.
This is also the reason I don’t use LLMs for writing. For me, the process of writing is a fundamental part of sorting out, challenging, and clarifying the half formed ideas contained in my head. I also generally enjoy the process, or at least gain satisfaction from the finished text. I would not and could not ask another person to do this process for me. How could I ask a machine? Why would I ask a machine?
Why would I take a car trip for my lunch break?
The upside
However, I will admit that I have used the built-in Xcode LLM functionality on a few occasions and found it helpful.
The first situation was a gnarly SwiftUI layout problem that I couldn’t find a solution for on the web. When I asked the Xcode 26 ChatGPT integration, it built a solution that worked, even though it seemed quite elaborate. Just last week, I found a weird crash that would happen when the window was resized a certain way and I couldn’t understand why. I fed the crash log back into the ChatGPT assistant and it pointed to a recursion generated by the interaction of the generated layout code and a seemingly unrelated, different view object. The suggestions to fix the issue from the assistant turned out to be dead ends, but it would have taken me much longer to identify the problem without the agent’s analysis. (I was able to remove the problem by reviewing, refactoring and simplifying the code. At least I hope so…)
When you have ‘Coding Intelligence’ enabled in Xcode, there will be a “Generate Fix for this Issue” button next to the error, which can be very helpful in explaining obscure compiler errors. SwiftUI certainly generates a few of those. Even though I rarely use the suggested fixes, the explanations of the issues are usually very helpful.
I believe it says more about the sad state of modern IDEs, systems, and frameworks when you need a large language model built with thousands of GPUs and hundreds of billions of tokens to understand a crash log or compiler error, than it does about the supposed “intelligence” of the model. But I will admit that it has saved me a ton of time and frustration.
Should we focus on improving the frameworks, logs, and developer environments, rather than building monstrous data centers? Well, I guess that depends, like my lunch break walk, on what you are optimizing for…
Conclusion
I have been talking about efficiency and how we measure it, or don’t. I have not addressed all the other externalities that concern me with regards to LLMs and the general AI business these days.
My example illustrates that different solutions can be “best” when you value different outcomes. I think a lot of the discussion around coding agents and LLM help in general is based on a mismatch of values.
You may care more about “immediate time spent” with no concern for future ramifications and time you may have to spend later on improving the code. Technical and cognitive debt may not be part of your metrics. (They are difficult to measure.) You may not value the habit of building a tool as a means to learn about a particular topic. You may not care about the exploitative practices of the AI industry, which gathered and stole source material from wherever they could with no regard for ownership and licensing and now want to re-sell the digested slop back to us. You may not care about the unintentional—or sometimes fully intentional—political, ethnic, sexist, and countless other biases in the data models. You may not care about the impact on your personal learning and growth, and on education in general. You may not care how the next generation of experts is supposed to build their experience. You may not care about the ecological impact of the industry and the massive data centers they are planning to build. You may not care about the skewed and possibly fraudulent economics, as the infusion of absolutely insane amounts of venture capital is papering over the actual costs. You may be starting to care about the secondary economic impacts of the bubble, as prices for RAM and other components are skyrocketing.
You may disagree on some, or even all, of these points, which will change your evaluation of this technology.
The benefits you gain from this technology also depend very much on what you are using it for. The more data about a certain topic the LLM has ingested, the better the recommendations will be. When you ask it for code to build web solutions and related automations, the recommendations will be much better than when you ask it about building package installers for macOS, since there are orders of magnitude more data for the former, than the latter.
The agent was very prone to inventing options for pkgbuild, productbuild and notarytool, even after I had instructed it to consider the man pages. This is a very important warning for people using agents to write automations in the Mac Admins space. Also, for the same reason, LLMs are “weak” on recent developments, so you may get code that would have worked fine five years ago, but doesn’t take modern changes to macOS and Apple platform deployment into account.
I am glad I did this experiment. For the first time, working with the agent felt really useful. I am not sure I would have ever overcome the writer’s block inherent in the tedious process of translating code. Using the agent to overcome that block was freeing. I experienced the wonder of a fascinating new technology. I can see how that can overshadow the concerns.
I believe the technology has merit. There is undoubtedly a usefulness to it. But in the current form, I think it is irresponsible to focus solely on the technical features and ignore all the other negative side effects. The benefits, when put under scrutiny, are much smaller than they initially appear.
I have to hope that society will eventually find a way to build and use these tools in an effective, ethical, and responsible way. I don’t believe this is the case today. I don’t think the benefits outweigh the downsides. For now, I will continue to stay away.
Apple 26.3 Platform Updates – February 2026
macOS
- macOS Tahoe 26.3 (25D125): What’s new, Developer Release Notes, Security, Enterprise, IPSW, PKG installer
- macOS Sequoia 15.7.4 (24G517): What’s new, Security, PKG installer
- macOS Sonoma 14.8.4 (23J319): What’s new, Security, PKG installer
iOS and iPadOS
- iOS 26.3 About, Enterprise
- iPadOS 26.3: About, Enterprise
- iOS 18.7.5: About
- iPadOS 18.7.5: About
- iOS and iPadOS 26.2: Developer Release Notes
- Security: 26.3, 18.7.5
Other Platforms
- visionOS 26.3: About, Developer Release Notes, Security, Enterprise
- watchOS 26.3: About, Developer Release Notes, Security
- tvOS 26.3: About, Developer Release Notes, Security
- HomePod Software 26.3: About
Deployment Guides
- Apple Platform Deployment: Welcome, What’s new, Revision history
Applications
- Safari 26.3: WebKit Features, Developer Release Notes, Security
- Xcode 26.3 RC: Developer Downloads, Developer Release Notes
- apps formerly known as iWork: (no update notes)
Managing iWork in 2026, the Creator Studio update
The apps formerly known as iWork: Keynote, Pages, and Numbers received an update this week. Generally these updates aren’t that big of a deal, but this one is different, especially for macOS administrators.
The three apps formerly known as iWork have been available as standalone apps on macOS for a long time: first as standalone paid apps, then in a bundle, then in the App Store as a one-time purchase, and later for free. They also come pre-installed on all Macs out of the box, but not after a system wipe.
Now, these three apps are joining the Apple Creator Studio bundle which includes the “Pro” apps. The existing functionality will remain free, but there are new additional features that you can unlock by purchasing the Creator Studio subscription.
The new apps bundled with the Creator Studio have the version number 15.1. Why Apple didn’t change these to the ’26’ version numbering is a mystery, as is what happened to the 15.0 release.
The update from 14.x to 15.1
There are a lot of changes for the macOS versions of these three apps. These are especially relevant to Mac admins with managed app deployments, but they will also explain some issues you may be encountering on a personally managed Mac.
While this is a standard update for the iOS, iPadOS, and visionOS apps, it is actually an entirely new app for the macOS versions. To be even more precise, the upgrade process consists of an update plus a new, different app. Apple has published a support article explaining the process.
(The examples will be mostly for Keynote, but Pages and Numbers behave in the exact same way.)
If you had the latest version of Keynote before this Wednesday (14.4), you would see a 14.5 update in the Mac App Store with the release notes: “This update contains bug fixes and performance improvements.”
Not sure about the bug fixes and improvements, and Apple certainly doesn’t go into detail on these. Apple states that you should upgrade to 14.5 before installing and launching the 15.1 apps, so that saved passwords for protected documents are preserved correctly.
After updating to 14.5, there will also be a dialog stating that a “New Verision of Keynote Available” [sic] with a button that links to the new Keynote app in the App Store and a second button “Not Now” which allows you to ignore this for now, because, presumably you opened the app to do some work.
Fellow Mac Admin Neil Martin found a way to suppress this dialog with a configuration profile.
There is one big limitation with staying on the old version that I have encountered so far: you cannot collaborate on shared documents when one or more of the collaborators are using the 15.1 version, which is very likely when they are using iOS, iPadOS, or visionOS.
When you follow the button, you can download the new Creator Studio version (15.1) from the App Store for free. You will then see a second, new app with the new icon in the Applications folder.
Differences between the apps
In the Finder, the two apps look the same, except for the icon. But when you look at them in detail, there are two important differences. First, the new apps have a different name in the file system, which you can see in Terminal: /Applications/Keynote Creator Studio.app.
This may seem cosmetic, but it will lead to broken dock items when the old version is removed. A user might be confused why Keynote is suddenly a question mark in the dock. When you are a Mac Admin who manages or even just pre-sets the dock, you may want to consider updating users’ dock items. At the very least, you will need to update scripts or profiles that set the default dock at enrollment.
When you further inspect the application bundle by looking at the Info.plist or with a tool like Apparency, you will see the second difference: the bundle identifier of the new app is com.apple.Keynote vs. com.apple.iWork.Keynote for the old one. This also has some side effects. In their support article, Apple calls out that the “Open Recent” menu will not be populated. Other customizations, such as a customized toolbar, may not transfer either.
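You can verify the differing bundle identifiers in Terminal, assuming both versions are installed at the paths described above:

```shell
# old App Store version
/usr/libexec/PlistBuddy -c 'Print :CFBundleIdentifier' \
    "/Applications/Keynote.app/Contents/Info.plist"
# → com.apple.iWork.Keynote

# new Creator Studio version
/usr/libexec/PlistBuddy -c 'Print :CFBundleIdentifier' \
    "/Applications/Keynote Creator Studio.app/Contents/Info.plist"
# → com.apple.Keynote
```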
Why?
The main change is that these apps now appear as a single entry across all the App Stores. You “purchase” the app once and get the app for all platforms. This has already been true for the iOS, iPadOS, and visionOS apps and now the macOS versions join that. This might simplify some things going forward.
This does not mean each platform gets the same app bundle; the actual downloaded apps are still specific to and different for each platform. It is also important to point out that the 15.1 version still runs on Intel Macs.
To unify the apps into a single universal entry, Apple had to use the same bundle identifier across all platforms.
Well, either that, or Apple could have updated the App Store backend to be more flexible here. Apple always claims that its control over hardware, software, and services leads to a much better consumer experience. This would have been a chance to prove that.
I find it absolutely incomprehensible that Apple considers the App Store architecture, which is based on a store designed to sell songs for 99 cents more than 20 years ago, so inflexible that they would rather have their customers, administrators, and their own in-house developers jump through all these hoops.
But more on that later.
The Upsell
The apps are now part of the Creator Studio subscription bundle, so of course, there is upselling in the apps. In the dialog to create a new document, there are big areas advertising the new themes that only come with the subscription. In Keynote, there is a big blue notice to “Elevate Your Presentations” in the slide inspector sidebar, which thankfully does not appear in other inspectors. There are also purple toolbar items for the new features that are gated behind the subscription.
This enshittification of the app is annoying enough for consumers, but it is worse for managed deployments. Apple provides no means to purchase subscriptions for an organization. Even if you wanted to purchase the subscription for all your employees, you cannot. Instead, everyone is stuck with these garish ads and purple buttons.
Mac Admins have been asking for a way to purchase, manage, and deploy App Store subscriptions and in-App purchases for years. But again, the architecture of the App Store seems to be so inflexible, that Apple cannot provide this. Instead, they are “innovating” by placing more ads.
The Solution that misses
In their Apple Platform Deployment Guide, which was also updated this week, Apple mentions not one, but two solutions.
While the new versions of Keynote, Numbers, and Pages automatically hide these features when the Managed Apple Account appears in Settings, an additional Managed App configuration payload can be deployed using your device management service to provide the same experience for devices without a Managed Apple Account.
When the device has a Managed Apple Account signed in, the upsell ads should not show. While managed Apple Accounts (MAA) are certainly an interesting technology, their adoption among organisations has been slow, as they have very limited use cases, especially on macOS.
The other option is to add “an additional Managed App configuration payload” (also known as AppConfig) to the app deployment. This is actually a nice solution for iOS, but has one problem: while Apple’s MDM spec does allow for AppConfig when deploying Mac App Store apps to macOS, many device management systems do not implement this.
It is fair to ask why many device management services don’t offer this. Until this week, there were no important Mac apps that used AppConfig for their configuration.
On macOS, configuration profiles have been used to configure the system and apps for more than a decade. This is familiar to admins and (most) developers. Configuration profiles have the advantage that they work for apps distributed outside the App Store, as well as App Store apps, so implementing AppConfig for macOS didn’t seem necessary, since it would not benefit most (or any) of the apps an admin needs to manage.
Device management service developers are focusing their resources on implementing the new modern DDM specs. It is sad that features like iOS apps on macOS and AppConfig for Mac App Store apps aren’t implemented, but the reality was that Mac admins, their customers, weren’t asking for them. Until now, there was no need.
Apple ignores the reality on the ground. Instead of using an established and proven configuration method, they are using something that should work in theory. This is the intersection of managed deployments and the Mac platform, both regular blind spots for today’s Apple.
Where was the beta?
If Apple had done a beta phase for the Creator Studio in AppleSeed for IT, these issues would have come up, been discussed and some of them could have been fixed. Mac admins would have had some time to prepare for the systemic issues that couldn’t be fixed and prepare and test workflows, or at the very least have support documents and communication available for the end users and support techs.
AppleSeed for IT has been successful at this for years now, providing administrators and developers early access to platform upgrades and updates with a dedicated feedback channel. But apparently, the app development teams at Apple haven’t heard of it. Or, if I may hazard a guess, the architecture of the App Store doesn’t allow for a beta deployment test phase. (There is TestFlight, but you cannot perform or test managed deployments through TestFlight.)
What should Mac Admins do?
If you are deploying the apps formerly known as iWork (Keynote, Pages, Numbers), you need to do the following:
- ensure that the clients receive the 14.5 update, so that settings can be transferred properly when the new version is launched
- once all clients have the 14.5 version, disable the deployment of the old versions
- add licenses for the new apps to your volume purchasing in Apple Business or School Manager
- configure your device management service to deploy the new apps; if your management service allows it, you may have to search for the new apps using the full App Store link, as these are not Mac apps, but universal app entries in the App Store
- scope the deployment to those clients that have the 14.5 version
- if your device management service can add an AppConfig or Managed Application Configuration to a Mac App Store app deployment, add the suppressPrompts key with a value of true
- once the new version is installed, remove the old applications from the client
- if you are managing the dock, replace any items in the dock to match the new app file paths
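The removal step can be guarded in a script. This is a hedged sketch, not a tested deployment workflow: the app paths follow this post, the demo prefix keeps the example self-contained, and the destructive step is deliberately commented out:

```shell
# Hedged sketch: only remove the old app once the new one is present.
prefix="./demo"   # stand-in for / so the example is safe to run anywhere
new_app="${prefix}/Applications/Keynote Creator Studio.app"
old_app="${prefix}/Applications/Keynote.app"
mkdir -p "${new_app}" "${old_app}"   # simulate both apps being installed
if [ -d "${new_app}" ]; then
    echo "new app present, old app can be removed"
    # rm -rf "${old_app}"   # destructive, left commented out
fi
```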
Going forward
Most importantly, please file feedback through the AppleSeed channel and your AppleCare contacts on this, and what could have been done better. This is my list, but feel free to add more:
- earlier communication and a beta phase for App Store applications and bundles
- management of Creator Studio subscription nags with configuration profiles on macOS
- a better upgrade experience for macOS
- volume purchasing, deployment, and management for App Store subscriptions and in-App purchases
You also want to file feedback with your device management service; they may be able to update their interfaces and workflows to make these “updates” easier in the future.
Apple may have unified nearly all of their paid apps in one subscription, and believe they are done for now. No changes to the App Store needed. But third party developers may also want to unify their app offerings and are facing the same challenges. They will model their “upgrades” after the approach Apple has taken here.
Pro apps
There are more apps that are a part of the Creator Studio bundle. Apple acknowledges the problems by keeping the one-time purchase versions of (most of) the apps in the App Store. For the apps that have so far been free, Apple didn’t deem this necessary.
There may be issues regarding upgrading the Pro apps, as well. There may be issues with maintaining the one-time-purchase versions going forward. I have not yet had time to dive into these. I am sure other Mac Admins will share their experiences, and I will be sure to share their posts in my MacAdmins.news weekly summary!
Swift Argument Parser: Exiting and Errors
I have introduced the Swift Argument Parser package before. I have been using this package in many of my tools, such as swift-prefs, utiluti, emoji-list and, just recently, translate.
Argument Parser is a Swift package library which adds complex argument, option, and flag parsing to command line tools built with Swift. The documentation is generally quite thorough, but a few subjects are a little, well…, under-documented. This may be because the implementations are obvious to the developers and maintainers.
One subject which confused me, was how to exit early out of the tool’s logic, and how to do so in case of an error. Now that I understand how it works, I really think the solution is quite brilliant, but it is not immediately obvious.
Our example
Swift error handling uses values of the Error type that are thrown in the code. Surrounding code can catch the error or pass it on to higher level code. Argument Parser takes errors thrown in your code and exits your code early.
An example: in my 2024 MacSysAdmin presentation “Swift – The Eras Tour (Armin’s Version),” I built a command line tool to set the macOS wallpaper, live on stage. To replicate that, follow these steps:
In Terminal, cd to a location where you want to store the project and create a project folder:
$ mkdir wallpapr
$ cd wallpapr
Then use the swift package command to create a template command line tool project that uses Argument Parser:
$ swift package init --type tool
The project will look like this:
📝 Package.swift
📁 Sources
📁 wallpapr
📝 wallpapr.swift
Xcode can handle Swift Package Manager projects just fine. You can open the project in Xcode from the command line with:
$ xed .
Or you can open Sources/wallpapr/wallpapr.swift in your favored text editor.
Replace the default template code with this:
import Foundation
import ArgumentParser
import AppKit

@main
struct wallpapr: ParsableCommand {
    @Argument(help: "path to the wallpaper file")
    var path: String

    mutating func run() throws {
        let url = URL(fileURLWithPath: path)
        for screen in NSScreen.screens {
            try NSWorkspace.shared.setDesktopImageURL(url, for: screen)
        }
    }
}
This is even more simplified than what I showed in the presentation, but will do just fine for our purposes.
You can build and run this command with:
$ swift run wallpapr /System/Library/CoreServices/DefaultDesktop.heic
Building for debugging...
[1/1] Write swift-version--58304C5D6DBC2206.txt
Build of product 'wallpapr' complete! (0.16s)
(I’ll be cutting the SPM build information for brevity from now on.)
This is the path to the default wallpaper image file. You can of course point the tool to another image file path.
Just throw it
When you enter a file path that doesn’t exist, the following happens:
$ swift run wallpapr nosuchfile
Error: The file doesn’t exist.
This is great, but where does that error come from? The path argument is defined as a String. ArgumentParser will error when the argument is missing, but it does not really care about the contents.
The NSWorkspace.shared.setDesktopImageURL(_:for:) method, however, throws an NSError when it cannot set the wallpaper. That NSError has an errorDescription property, which ArgumentParser picks up and displays, prefixed with Error:.
This is useful. By just marking the run() function of ParsableCommand as throws and adding the try to functions and methods which throw, we get pretty decent error handling in our command line tool with no extra effort.
Custom errors and messages
Not all methods and functions will throw errors, though. And when they do, the error messages might not be helpful, or might be too generic for the context. In more complex tools (and, honestly, nearly everything will be more complex than this simple example) you want to provide custom messages and custom error handling, so you will need custom errors.
Since we have seen that ArgumentParser deals very nicely with thrown errors, let’s define our own.
Add this custom error enum to wallpapr.swift (anywhere, either right after the import statements or at the very end):
enum WallpaprError: Error {
    case fileNotFound
}
Then add this extra guard statement at the beginning of the run() function:
guard FileManager.default.fileExists(atPath: path) else {
    throw WallpaprError.fileNotFound
}
The code checks that the file path given in the path argument actually exists, instead of relying on the functionality in the setDesktopImageURL(_:for:) method. When you run this code, you get our custom error:
$ swift run wallpapr nosuchfile
Error: fileNotFound
This is nice, but while fileNotFound is a good name to use in the code, it is not very descriptive. We could add more detail with a print just before the throw statement, but we already saw that the NSError thrown by setDesktopImageURL(_:for:) had a detailed description. How do we add one of those to our custom error?
Turns out there are two ways: either the custom error conforms to LocalizedError and implements errorDescription (which is what NSError does), or it conforms to CustomStringConvertible and implements description (or both).
There are many good reasons to implement CustomStringConvertible on your types anyway, since description is also used in the debugger and the print statement. There are also situations where you might want a different message for the error description, so it is good to have options. For our example, we are just going to implement CustomStringConvertible. Change the code for the WallpaprError enum to:
enum WallpaprError: Error, CustomStringConvertible {
    case fileNotFound

    var description: String {
        switch self {
        case .fileNotFound: "no file exists at that path!"
        }
    }
}
And when you run again, you see the custom message:
$ swift run wallpapr nosuchfile
Error: no file exists at that path!
Note that the error message is written to standard error.
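You can rely on that in shell workflows; the following is a portable sketch with echo standing in for the tool (the wallpapr invocation in the comment is from this example, not a tested pipeline):

```shell
# ArgumentParser writes its "Error: …" messages to stderr, so a script
# can capture them separately, e.g.: wallpapr nosuchfile 2>errors.txt
# Portable demonstration:
( echo "normal output"; echo "Error: no file exists at that path!" >&2 ) 2>errors.txt
cat errors.txt
```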
Clean exits
In some workflows, you may want to exit the tool early, but without an error (an exit code of 0). When you try to use exit(0), you will get an error since ArgumentParser overloads that function. Instead, ArgumentParser provides a CleanExit error that you can throw:
throw CleanExit.message("set wallpaper to \(path)")
Generally it is best to just let the run() function complete for a successful exit, but there are situations where this comes in handy.
Custom Exit Codes
ArgumentParser generally does the right thing and returns a 0 exit code upon successful completion of the tool and an exit code of 1 (non-zero represents failure) when an error is thrown. It also returns an exit code of 64 when it cannot parse the arguments. According to the sysexits man page, this code (EX_USAGE) represents incorrect usage of options and arguments.
(You can customize your shell prompt to show the exit code of the previous command.)
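Without customizing your prompt, you can inspect the exit code manually. A small portable sketch, with `false` standing in for a failing tool run:

```shell
# $? holds the exit status of the most recent command; capture it right away.
# (ArgumentParser tools return 1 for thrown errors and 64 for bad arguments.)
status=0
false || status=$?
echo "exit code: ${status}"   # prints: exit code: 1
```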
In complex tools, you may want to return other exit codes mentioned in that man page, or custom errors for certain situations. ArgumentParser does have a built-in option: you can throw the ExitCode() error with a custom code. For example, we can replace our custom error with
throw ExitCode(EX_NOINPUT)
This will return an exit code of 66, but now we have lost the custom error message. This is a long-standing missing feature of ArgumentParser (see the discussion in this forum thread), but it is fairly easy to provide a workaround.
Add this extension to your tool:
extension ParsableCommand {
    func exit(_ message: String, code: Int32 = EXIT_FAILURE) throws -> Never {
        print(message)
        throw ExitCode(code)
    }
}
And then you can use this to get a custom message and a custom exit code:
try exit("no file exists at that path!", code: EX_NOINPUT)
Apple Platform Updates: 26.2
Unusual Friday night release of the Apple platform updates. Might be because Apple was pushing close to the 90-day deferral limit for 26.0, as I speculated in MacAdmins.news yesterday. Either that or Apple developers are just eager to go on their holiday breaks. (Can’t blame them…)
macOS
- macOS Tahoe 26.2 (25C56): What’s new, Developer Release Notes, Security, Enterprise, IPSW, PKG installer
- macOS Sequoia 15.7.3 (24G419): What’s new, Security, PKG installer
- macOS Sonoma 14.8.3 (23J220): What’s new, Security
- Safari 26.2: WebKit Features, Developer Release Notes, Security
- Xcode 26.2: Mac App Store, Developer Downloads, Developer Release Notes
iOS and iPadOS
- iOS 26.2 About, Enterprise
- iPadOS 26.2: About, Enterprise
- iOS 18.7.3: About
- iPadOS 18.7.3: About
- iOS and iPadOS 26.2: Developer Release Notes
- Security: 26.2, 18.7.3
Other Platforms
- visionOS 26.2: About, Developer Release Notes, Security, Enterprise
- watchOS 26.2: About, Developer Release Notes, Security
- tvOS 26.2: About, Developer Release Notes, Security
- HomePod Software 26.2: About
- AirPods: Firmware updates
Fifteen years of Scripting OS X
Fifteen years ago today I published the first post on this website. It turned out a bit lucky that the first post still holds up fairly well today, even though there was a somewhat recent update. The second post did not age so well.
I started out the weblog because I was reading a lot of other Mac admins’ web blogs and thought: “I can do that, too.” And it turns out that I could, even though it took quite a few years of infrequent and mostly ignored posts to find my voice, rhythm, and audience.
Much has changed in those fifteen years and even in the last five years since I last celebrated the anniversary.
Turns out, a few things led to a peak in traffic in 2020, mostly some really successful posts and projects, most of which aren’t really relevant any more. (The exceptions are quite exceptional, though.) The first year or two of the pandemic proved quite the catalyst for blog posts. Google’s ever changing algorithm probably caused some of the decline. (I never really cared about optimizing the site, still don’t.) I also split out the weekly newsletter to a new domain and service. Even though the weekly news summary has more than doubled its subscriptions since 2020, that traffic is now missing from Scripting OS X.
In the last two years, I also suspect that traffic is leaking towards LLMs. Rather than reading a post on a weblog, people prefer to get the pre-digested summary or solution from their favored LLM. This results in far less traffic to all websites.
Nevertheless, I do not consider the website as a failure. I still get plenty of feedback on relevant posts and content. It fills me with immense joy and pride when people come up to me at conferences and meetings to tell me they found something useful for their work and that is all the motivation I need to continue.
In the last few years, my “real” job, which I enjoy very much, has also required its fair share of time and attention and the remaining energy mostly goes towards MacAdmins.news. I plan to continue to write about things that interest me, when I find the time and energy to do so. There are some long term plans and I am very curious to see how they are going to turn out.
Thank you all so much for reading! On to the next fifteen years!
PS: Five years ago, I was hinting that the name of this weblog might change. I have owned (and been paying for) scripting.blog for quite a while now. Never actually pulled the lever, obviously. I feel that domain name is very close to Dave Winer’s scripting.com and I don’t want to even pretend that my small site is in any way comparable. I also have scriptingmacs.com and scriptingmacos.com but I am reluctant to use those, because I have both hope that Apple will allow more powerful automations on their other platforms and some fear that the Mac platform will lose relevance. (Probably not any time soon.)
Even though Apple has changed the name, I am going to stick with “Scripting OS X.” (For now.)
(In another five years, I might need to explain where the name comes from…)
Apple Platform updates for September 2025
macOS
- macOS Tahoe 26.0 (25G354): What’s new, Developer Release Notes, Security, Enterprise, App Store (not yet?), User Guide, IPSW, PKG installer, Compatibility
- macOS Sequoia 15.7 (24G222): What’s new, Security, PKG installer
- macOS Sonoma 14.8 (23J21): What’s new, Security, PKG installer
iOS and iPadOS
- iOS 26.0 About, Enterprise, User Guide, Compatibility
- iOS 26.0.1 for iPhone 17 and/or iPhone Air
- iPadOS 26.0: About, Enterprise, User Guide, Compatibility
- iOS and iPadOS 26.0: Developer Release Notes, Security
- Security updates: 18.7, 16.7.12, 15.8.5
Other Platforms
- visionOS 26.0: About, Developer Release Notes, Security, Enterprise, User Guide
- watchOS 26.0: About, Developer Release Notes, Security, User Guide
- watchOS 26.0.1 for Apple Watch Ultra 3
- tvOS 26: About, Developer Release Notes, Security, User Guide
- HomePod Software 26.0: About
- AirPods: Firmware updates (7E93 for AirPods Pro 2 and AirPods 4)
Applications
- Safari 26.0: WebKit Features, Developer Release Notes, Security, Guides: Mac, iPhone, iPad
- Xcode 26.0: Mac App Store, Developer Downloads, Developer Release Notes, Documentation
- Shortcuts: What’s new
Another Simple Package: Policy Banner
Previous articles in this series:
When you place a text file named PolicyBanner in the /Library/Security directory, macOS will display this file before the Login Window. The user will have to accept the banner before they can log in.
- Apple Support: About Policy Banners in macOS
The PolicyBanner file can be plain or rich text (txt, rtf, or rtfd file extensions). You can find a very simple PolicyBanner.rtf in the sample files, or create or provide your own.
The support article notes that the PolicyBanner file needs to be readable by every user in order to be displayed.
To build a package that installs your policy banner file, create a new project folder with a payload subdirectory:
> mkdir -p PolicyBanner/payload/Library/Security
> cd PolicyBanner
Then copy the PolicyBanner file to the payload directory, and ensure that the read mode is enabled:
> cp /path/to/PolicyBanner.rtf payload/Library/Security
> chmod 644 payload/Library/Security/PolicyBanner.rtf
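You can verify that the mode took effect. A quick sketch, using a stand-in file so it can run anywhere (in your project you would point it at payload/Library/Security/PolicyBanner.rtf):

```shell
# Create a stand-in banner file, set the mode, and show the permission bits
touch PolicyBanner.rtf
chmod 644 PolicyBanner.rtf
ls -l PolicyBanner.rtf | cut -c1-10   # prints: -rw-r--r--
```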
Then create a new buildPolicyBannerPkg.sh script file in your favored text editor:
#!/bin/sh
pkgname="PolicyBanner"
version="1.0"
install_location="/"
identifier="com.example.${pkgname}"
export PATH=/usr/bin:/bin:/usr/sbin:/sbin
projectfolder=$(dirname "$0")
payloadfolder="${projectfolder}/payload"
# recursively clear all extended attributes
xattr -cr "${payloadfolder}"
# ensure banner file is world readable
chmod 644 "${payloadfolder}/Library/Security/PolicyBanner.rtf"
# build the component
pkgbuild --root "${payloadfolder}" \
    --identifier "${identifier}" \
    --version "${version}" \
    --install-location "${install_location}" \
    "${projectfolder}/${pkgname}-${version}.pkg"
This script is very similar to the buildBoringWallpaperPkg.sh script from the previous post. You could easily copy that script and modify the pkgname variable, and add the lines that ensure the correct file mode.
Your folder structure should look like this:
📁 PolicyBanner-1.0
⚙️ buildPolicyBannerPkg.sh
📁 payload
📁 Library
📁 Security
📄 PolicyBanner.rtf
When you run the build script it will generate a package named PolicyBanner-1.0.pkg. Inspect the package with pkgutil or Suspicious Package and verify that it contains the PolicyBanner.rtf file as its payload with the correct install location.
You should always verify your self-built packages with an inspection tool after building and before the first test installation. This step can quickly catch several frequent errors.
Once you have inspected the pkg file to your satisfaction, you can install it on a test client. After running the installation, verify that you can find the PolicyBanner file in the /Library/Security folder and then log out to see if it works.
While the use cases for this kind of simple policy display are limited, this example demonstrates how system administrators use pkg installers to modify settings and behavior in macOS.
Uninstall Policy Banner
In the previous post, we said it makes sense to build an uninstall script alongside the package itself. To uninstall this pkg, you can use the following script:
#!/bin/sh
# uninstall Policy Banner
# reverts the installation of com.example.PolicyBanner
# check for root
if [ "$(whoami)" != "root" ]; then
    echo "requires root privileges..."
    exit 1
fi
# remove the file
rm -v "/Library/Security/PolicyBanner.rtf"
# forget the pkg receipt
pkgutil --forget com.example.PolicyBanner
A Simple Postinstall Script
Apple’s support article on policy banners mentions:
If the policy banner still doesn’t appear, update the Preboot volume:
diskutil apfs updatePreboot /
To be honest, I have never (so far) encountered this problem and had to apply this fix, but, for the sake of example, we will be extra paranoid… er… thorough and apply this command after installation, just to be sure.
macOS installation packages allow for scripts or binaries to run before or after the payload is laid down on the target volume. We will go into much more detail later. For now, we will create a postinstall script which runs the command above and add it to the package.
In the PolicyBanner project folder, create a new sub-directory called scripts on the same level as the payload directory.
> cd PolicyBanner
> mkdir scripts
Then create a script file named postinstall (no file extension!) in the scripts directory with the following code:
#!/bin/sh
## run update preboot
# extra paranoid interpretation of
# https://support.apple.com/en-us/119845
export PATH=/usr/bin:/bin:/usr/sbin:/sbin
# only run when installing on System Volume
if [ "$3" != "/" ]; then
    echo "Not installing on /, exiting"
    exit 0
fi
echo "running updatePreboot"
diskutil apfs updatePreboot /
After creating the file, ensure its executable bit is set:
> chmod +x scripts/postinstall
Your PolicyBanner project folder should look like this:
⚙️ buildPolicyBannerPkg.sh
📁 payload
📁 Library
📁 Security
📄 PolicyBanner.rtf
📁 scripts
⚙️ postinstall
⚙️ uninstallPolicyBanner.sh
The diskutil man page mentions that you might break login when running the updatePreboot command against a user database that does not match the system, so we are going to avoid doing that.
The script checks if the third argument $3 matches “/” and exits the script when it does not.
The installation system passes the target volume as the third argument $3, so this check ensures the postinstall will only run when the banner is installed on the current system volume.
Then, having passed that check, it will run the command. There are a few echo commands whose output will appear in the installation log. These are helpful to see what is going on.
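To see how the positional arguments reach the script, you can exercise the check outside the installer. This is a hedged sketch: the first two argument values are made-up placeholders, since only the third matters here:

```shell
# Recreate the postinstall's volume check and call it the way the
# installer would, passing the target volume as the third argument.
cat > check.sh <<'EOF'
#!/bin/sh
if [ "$3" != "/" ]; then
    echo "Not installing on /, exiting"
    exit 0
fi
echo "running updatePreboot"
EOF
chmod +x check.sh
./check.sh pkgpath target /Volumes/Other   # prints: Not installing on /, exiting
./check.sh pkgpath target /                # prints: running updatePreboot
```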
We still have to instruct pkgbuild to include the postinstall script in the package file. Open buildPolicyBannerPkg.sh and modify it like this:
#!/bin/sh
pkgname="PolicyBanner"
version="2.0"
install_location="/"
identifier="com.example.${pkgname}"
export PATH=/usr/bin:/bin:/usr/sbin:/sbin
projectfolder=$(dirname "$0")
payloadfolder="${projectfolder}/payload"
scriptsfolder="${projectfolder}/scripts"
# recursively clear all extended attributes
xattr -cr "${payloadfolder}"
xattr -cr "${scriptsfolder}"
# ensure banner file is world readable
chmod 644 "${payloadfolder}/Library/Security/PolicyBanner.rtf"
# ensure postinstall is executable
chmod 755 "${scriptsfolder}/postinstall"
# build the component
pkgbuild --root "${payloadfolder}" \
    --identifier "${identifier}" \
    --version "${version}" \
    --install-location "${install_location}" \
    --scripts "${scriptsfolder}" \
    "${projectfolder}/${pkgname}-${version}.pkg"
First, update the version of the package. You should update the package’s version every time you update its contents. This allows the installation system to distinguish a re-application of the same package from an installation of a different version.
Then we create a variable referencing the scripts folder, run the xattr command to clear extended attributes from its contents and ensure the executable bit is set on the postinstall.
Finally, we add a --scripts option referencing the scripts folder to the pkgbuild command. Take note of the trailing backslash \ in that line, which allows the command to continue on the next line. Without the backslash, the command will error.
Run the buildPolicyBannerPkg.sh script. This will create a pkg file named PolicyBanner-2.0.pkg file in the project folder. When you expand this package file with pkgutil, you will see a sub-directory named Scripts which contains the postinstall.
> pkgutil --expand PolicyBanner-2.0.pkg PolicyBanner-2.0-expanded
📁 PolicyBanner-2.0-expanded
📄 Bom
📄 PackageInfo
📄 Payload
📁 Scripts
⚙️ postinstall
Installation Log
Install the package file on a test Mac using the Installer.app. When the installation has completed successfully, choose “Installer Log” (command-L) from the “Window” menu and then choose “Show All Logs” (command-3) from the “Detail Level” popup in the log window.
The Installer log is always quite detailed, or even noisy. Since we know we are looking for log entries regarding the postinstall script, you can enter ‘postinstall’ in the search field of the log window. This filters the log down to the entries relevant to the postinstall script:
installd[690]: PackageKit (package_script_service): Preparing to execute script "./postinstall" in /private/tmp/PKInstallSandbox.1gFziD/Scripts/com.example.PolicyBanner.rQLtIr
package_script_service[1168]: PackageKit: Preparing to execute script "postinstall" in /tmp/PKInstallSandbox.1gFziD/Scripts/com.example.PolicyBanner.rQLtIr
package_script_service[1168]: Set responsibility to pid: 13061, responsible_path: /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer
package_script_service[1168]: PackageKit: Executing script "postinstall" in /tmp/PKInstallSandbox.1gFziD/Scripts/com.example.PolicyBanner.rQLtIr
package_script_service[1168]: ./postinstall: running updatePreboot
package_script_service[1168]: ./postinstall: Started APFS operation
package_script_service[1168]: ./postinstall: UpdatePreboot: Commencing operation to update the Preboot Volume for Target Volume disk3s1 (Macintosh HD)
package_script_service[1168]: ./postinstall: UpdatePreboot: Commanded forwarding to System-role regardless of target input = InhibitAutoGroupTarget = 0; ForwardingEnabled
(I have removed some columns and text for space and clarity. The process numbers will be different in your log.)
If you do not see entries for the postinstall script in the log, you have made an error configuring the package. The most likely errors are that you named the postinstall script wrong (usually by accidentally adding a .sh or .txt file extension) or did not set the executable bit correctly.
First, we see a few entries where the installer system is preparing the postinstall script to run, then we see a line:
./postinstall: running updatePreboot
This is the output of the echo command in our postinstall script. Here we can tell that the script passed the system volume check successfully and will run the diskutil command next.
Then we see a lot more lines which are the output from the diskutil command itself. The updatePreboot verb is very verbose, which can actually be helpful when diagnosing problems.
You can also find this output in /var/log/install.log. macOS will append all installations to this log file. That includes regular runs of the software update system, so the install.log will get quite big and noisy over time. When you are debugging package installation issues it is very useful to note the time of your installation, so that you can narrow down the area of the log file you need to inspect.
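Since install.log is plain text, you can filter it on the command line with grep as well. A sketch on a made-up log excerpt (the real file on macOS is /var/log/install.log):

```shell
# Sample lines standing in for /var/log/install.log content
cat > sample-install.log <<'EOF'
installd: PackageKit: Executing script "postinstall"
softwareupdated: SoftwareUpdate: unrelated noise
installd: ./postinstall: running updatePreboot
EOF
grep postinstall sample-install.log   # prints only the two matching lines
```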
This has been a very simple example for an installation script. We will re-visit this topic in more detail in a later post.
Books Update — 2025
It’s been a while since I wrote about my books. Life has been tugging me in different directions (in a good way). Things are going well, overall, but there was always this nagging feeling that I really should do something about the books. They were getting a bit… well… old…
If you follow this blog, you may have noticed a few posts about packaging recently:
- Installing Packages
- Inspecting Packages
- Building Simple Component Packages
- Another Simple Package: Policy Banner
If you are a proud owner of my book “Packaging for Apple Administrators” (thank you very much!) these posts should seem somewhat familiar. It has been nearly nine years since I first published “Packaging” and even though it really held up well, it was in desperate need of some updates. More than merely updates, really. Many of the examples are not available online anymore. Seriously, some of the examples have you inspect the iTunes and Silverlight installer pkgs…
Surprisingly little has changed in the process of actually building packages, so those sections of the book hold up pretty well. But the environment in which packages are used and deployed on macOS has changed. Quite a lot. Gatekeeper and Notarization were new and optional just a few years ago, but are now a core part of Apple’s security strategy on macOS. Bundle package installers, which I covered in a “legacy” appendix in “Packaging,” were completely disabled in macOS Sequoia 15. Imaging Macs with NetInstall was still a thing when I originally wrote the book, and how to use and prepare installer packages for those workflows took up some space.
Distribution packages were only required in edge cases for Mac admins. Now they are often (but not always) required to work with device management servers.
On the other hand, back then I did not have any experience with the developer side of packaging. Since then I have written about building tools and apps and integrating the packaging (and signing and notarization) workflows into Xcode and Swift Package Manager. These are workflows that are useful to developers, but less so for Mac admins.
So, I am happy to announce that I have started the work of updating “Packaging.” It’s a work in progress, and I do not want to commit to any timeline yet. However, I plan to continue to share the progress by posting sections on this blog as I update them.
What will happen to the old, outdated “Packaging for Apple Administrators,” you might ask? Well, I am going to remove the book from Apple Books in two weeks or so. If you really want to own a copy of this old version, this will be your last chance to purchase it. I didn’t want to remove the book without warning. But, honestly, most of you really don’t have to buy the old version anymore, since I will be posting parts as I update and rewrite them.
(If you want to buy a copy to support me, don’t do that on Apple Books. The standard 30% of that revenue will go straight to Apple and honestly, they have enough money. There is now a better way, but more on that later.)
In two or three weeks’ time, I will remove all books, except “macOS Terminal and Shell,” from Apple Books.
If you have purchased the book, it should remain available for you in your library, but maybe make a backup to be sure.
I have spent a few days updating “macOS Terminal and Shell” for the current state of macOS. Since this is my latest book and, well, the command line situation hasn’t changed very much since Apple switched the default shell to zsh, there wasn’t much to change. I will keep that book on Apple Books and update it as soon as the new version passes Apple’s review. If you have already purchased “macOS Terminal and Shell” you should get the updated version as soon as I have uploaded it. You should then see a notification in the Books app.
I am also starting a new experiment: you can now also purchase “macOS Terminal and Shell” on Ko-Fi (which might be more familiar to you as “Buy Me a Coffee”). This is an experiment and new to me, so apologies if there are some rough edges. This should work if you do not want to or cannot purchase on Apple Books.
Also, I get a larger share of the proceeds. And, should you desire to, you can even pay more than the suggested price. (though, really, no-one has to)
As I said, this is a test run, and I am very curious how it goes. I am excited that this should expand the audience for whom the book is available. (Apple Books is not available in many regions, like India and China.) If the experiment works out for this update of “Terminal and Shell,” then I will definitely consider this for “Packaging 2.0” and future books, as well. (I have plenty of ideas, but so little time.)
