During and after WWDC, I wanted to see if I could build a SwiftUI app. I thought that building a user interface for this task would be a nice practice project.
Ironically, since I want the app to work on Big Sur, I could not use any of the new Swift and SwiftUI features Apple introduced this year. Even so, since I had not used SwiftUI to build a Big Sur application, most of the features Apple introduced last year were still new to me.
It was often unexpected to me which parts turned out to be challenging and which parts were really easy to implement. For example, implementing a preferences window turned out to be super easy, but it took me two false starts to find the correct approach. Communicating with the preferences system of macOS is also very easy, but so poorly documented that you are always second-guessing whether what you are doing is right.
Apple’s documentation for Swift and SwiftUI on this has definite highlights, but is very sparse overall. I am still not sure if some of the decisions I made while putting this together were “good” choices.
The reason work has progressed quite significantly, even though I was distracted, is that Søren Theilgaard and Isaac Ordonez have joined the project as contributors. All of the work from 0.4 to 0.5 was from one of them. We have some great plans to move this tool forward, as well.
Many of these new app labels have been provided by others, either through GitHub issues, pull requests, or comments in the #installomator channel on MacAdmins Slack. Thanks to all who contributed.
What’s new in v0.5:
Major update and now with help from @Theile and @Isaac
Added additional NOTIFY=all. Useful if used in Self Service, as the user will be notified before download, before install, as well as when it is done.
Added variable LOGO for icons in dialogs, use LOGO=appstore (or jamf or mosyleb or mosylem or addigy). It’s also possible to set it to a direct path to a specific icon. Default is appstore.
Added variable INSTALL that can be set to INSTALL=force if software needs to be installed even though latest version is already installed (it will be a reinstall).
Version control now included. The variable appNewVersion in a label can be used to tell what the latest version from the web is. If this is not given, version checking is done after download.
For a label that only installs a pkg without an app in it, a variable packageID can be used for version checking.
Labels are now sorted alphabetically, except for the Microsoft ones (which are at the end of the list). A bunch of new labels have been added, and lots of them have either been changed or improved (with appNewVersion or packageID).
If an app is asked to be closed down, it will now be opened again after the update.
If your MDM cannot call a script with parameters, the label can be set at the top of the script.
If your MDM is not Jamf Pro, and you need the script to be installed locally on your managed machines, then take a look at Theile’s fork. This fork can be called from the MDM using a small script.
The script buildCaseStatement.sh, which helps with creating labels, has been improved.
Fixed a bug in a variable name that prevented updateTool from being used.
Added a type variable with the value "updateronly" for labels that should only run an updater tool.
And if you are counting, there are now more than 260 application labels in Installomator. However, that number is a bit inflated, because several vendors have multiple downloads for Intel and Apple Silicon apps.
Get the script and find the instructions on the GitHub repo.
If you have any feedback or questions, please join us in the #installomator channel on MacAdmins Slack.
This update adds no features. It does provide support for Apple silicon Macs with a Universal binary and installer pkg.
In my initial testing desktoppr v0.3 worked fine on Apple Silicon Macs even without re-compiling, so I didn’t feel pressure to build and provide a universal binary.
Having a universal binary and a properly configured installer pkg will be helpful in either case. If you have to support Apple silicon Macs, be sure to use desktoppr v0.4.
Mac users and admins find themselves in yet another major platform transition. For the duration of the transition, developers and admins will have to deal with and support software and hardware for both Intel and Apple silicon Macs. With Universal applications and Rosetta 2, Apple is providing very efficient tools to dramatically reduce the friction and problems involved.
This post was inspired by comments from Josh Wisenbaker on MacAdmins Slack and Twitter. Thank you!
For most end-user level tasks, these tools will provide a seamless experience. Universal applications will run natively on either platform, and Rosetta 2 will translate applications compiled for the legacy platform (Intel) so they can run on the new Apple silicon chips. There are only a few situations where these tools don’t work: virtualization solutions and kernel extensions.
In most cases these tools will “just work.” But for MacAdmins there is one major issue that may throw a wrench in your well-oiled deployment workflows: Rosetta is not pre-installed on a fresh macOS installation.
We can only speculate why Apple chose to deliver Rosetta this way. In “normal” unmanaged installations, this is not a big deal. The first time a user installs or launches a solution that requires Rosetta, they will be prompted to install it, and upon approval, the system will download and install Rosetta.
As a MacAdmin, however, you want your deployments to be uninterrupted by such dialogs. Not only are they confusing to end users, but the user might cancel out of them which will result in your workflow failing partially.
There are two solutions. The first is to install Rosetta as early as possible in the deployment process. Apple provides a new option for the softwareupdate command to initiate the installation. Graham Gilbert and Rich Trouton have already published scripts around this. Have a script like this run early in your deployment workflow on Apple silicon Macs, and subsequent apps and tools that require Rosetta should be fine.
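At its core, the installation is a single softwareupdate command, run with root privileges (the scripts linked above add architecture and error checks around something like this):

> softwareupdate --install-rosetta --agree-to-license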
The other solution is to avoid requiring Rosetta and thus the prompt for Rosetta.
I mentioned earlier that we can only speculate as to why Apple has made Rosetta 2 an optional installation. One possible explanation is that Apple believes Rosetta will not be a necessary installation for very long. An extra dialog and installation will make users and developers more aware of software that “needs an update” and motivate developers to provide Universal applications faster.
When a user opens an application that requires Rosetta for the first time, before Rosetta is installed, the system prompts to install. The same thing can happen with an installer package. The system might prompt to install Rosetta before a certain package is installed. However, not all packages trigger the dialog. I was curious what is required in the package to trigger or to avoid the prompt.
Aside from legacy formats, there are two types of packages. The first are “plain” packages, which are also called component packages. These packages have a payload and can have pre- and postinstall scripts, but other than that, there is little metadata you can add to influence the installation workflow.
This is where “distribution packages” come in. Distribution packages do not have a payload or installation scripts of their own, but contain one or more component packages. In addition, distribution packages can contain metadata that influences the installation workflow, such as customization of the Installer.app interface, system version checks, prompts for the user to quit running applications before installation, software requirements, and a few more.
Note: learn more about the detailed differences between component and distribution packages in my book: “Packaging for Apple Administrators“
You can build a distribution package from a component package with the productbuild command:
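# wrap an existing component pkg in a distribution pkg
> productbuild --package component.pkg distribution.pkg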
Since most of the extra features of distribution packages are only effective when the installation package is launched manually in the Installer application, MacAdmins usually just build component pkgs.
The confusing part here is that both component pkgs and distribution pkgs have the same file extension. They are hard to distinguish even from the command line. To tell them apart, you can expand a pkg with the pkgutil command and look at the files in the expanded folder. Component pkgs have (among other files) a PackageInfo file and distribution pkgs have a Distribution file:
# component pkg
> pkgutil --expand component.pkg expanded_component_pkg
> ls expanded_component_pkg
Bom
Payload
Scripts
PackageInfo
# distribution pkg
> pkgutil --expand distribution.pkg expanded_distribution_pkg
> ls expanded_distribution_pkg
component.pkg
Distribution
For distribution pkgs, the Distribution file is an XML file which contains the configuration data for the package. One tag in this XML is the options tag, which can have a hostArchitectures attribute. According to Apple’s documentation on this attribute, hostArchitectures is a “comma-separated list of supported architecture codes.”
Apple’s documentation is a bit aged, so it gives i386, x86_64, and ppc as possible values. However, when you read the productbuild man page on macOS Big Sur, you will see that arm64 is a new valid value. We also find this extremely helpful note:
NOTE: On Apple Silicon, the macOS Installer will evaluate the product’s distribution under Rosetta 2 unless the arch key includes the arm64 architecture specifier. Some distribution properties may be evaluated differently between Rosetta 2 and native execution, such as the predicate specified by the sysctl-requirements key. If the distribution is evaluated under Rosetta 2, any package scripts inside of product will be executed with Rosetta 2 at install time.
When a distribution pkg has this attribute and it contains the value arm64, then the installation process on an Apple silicon Mac will not check if Rosetta is installed. When arm64 is missing from hostArchitectures, or the attribute or tag is missing entirely, the installation process on an Apple silicon Mac will assume the pkg requires Rosetta and prompt to install it when necessary.
There is more good news in the next note in the man page:
NOTE: Starting on macOS 11.0 (Big Sur), productbuild will automatically specify support for both arm64 and x86_64 unless a custom value for arch is provided.
When you use productbuild to create a distribution pkg on Big Sur (Intel and Apple silicon) both arm64 and x86_64 will be added to the configuration by default.
But when you use productbuild on Catalina or earlier, the attribute will be lacking, which means that when someone installs that pkg on an Apple silicon Mac, it will assume the pkg requires Rosetta and prompt for installation.
Adding both architectures by default is a useful default. But can we set the value explicitly when we build the distribution pkg? And can we do so on Catalina?
Yes, you can, of course. There are even two solutions. First, instead of letting productbuild generate the Distribution XML, you can build and provide a complete Distribution XML file with the --distribution option. That will give you full, fine-grained control over all the options.
The second solution is a bit easier. You can create a requirements.plist property plist file in the form:
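<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- minimal sketch: the arch key lists the architectures the product supports -->
    <key>arch</key>
    <array>
        <string>x86_64</string>
        <string>arm64</string>
    </array>
</dict>
</plist>

As I read the productbuild man page, you then pass this file with the --product option when building the distribution pkg:

> productbuild --package component.pkg --product requirements.plist distribution.pkg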
This way, productbuild still generates the Distribution XML and merges in your choices from the requirements.plist. There are other options you can add, which are documented in the productbuild man page.
Both of these approaches will work on Catalina as well. This way you can explicitly tell the installer system which architectures your packages will run with and not leave anything to chance.
As far as I can tell, when you install a component pkg, no checks for Rosetta are performed. Nevertheless, this is not something I would rely on. For packages that are crucial to the deployment workflow, I would recommend going the extra step and creating a distribution pkg from the component pkg with the proper flags set. This way you can ensure proper behavior.
Of course, if your package installer contains any form of Intel-only, non-universal binary, you should not abuse this just to skip the annoying Rosetta dialog, as it might lead to problems later. But when the software you are installing is universal, you should use this to tell the system which platforms your package supports.
When you want to provide automated workflows to upgrade to or erase-install macOS Big Sur, you can use the startosinstall tool. You can find this tool inside the “Install macOS Big Sur” application at:
/Applications/Install macOS Big Sur.app/Contents/Resources/startosinstall
Note: Apple calls the “Install macOS *” application “InstallAssistant.” I find this a useful shorthand and will use it.
Before you can use startosinstall, you need to somehow deploy the InstallAssistant on the client system. And since the “Install macOS Big Sur” application is huge (>12GB), it poses its own set of challenges.
Different management systems have different means of deploying software. If you are using Munki (or one of the management systems that has integrated Munki, like SimpleMDM or Workspace One) you can wrap the application in a dmg. Unfortunately, even though “app in a dmg” has been a means of distributing software on macOS for nearly 20 years, most management systems cannot deal with this and expect an installer package (pkg).
You can use pkgbuild to build an installer package from an application, like this:
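# example with an older (Catalina) installer; the app name is just an illustration
% pkgbuild --component "/Applications/Install macOS Catalina.app" InstallCatalina.pkg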
This works for all InstallAssistants up to and including Catalina. With a Big Sur installer application, this command will start running, but then fail:
% pkgbuild --component "/Applications/Install macOS Big Sur.app/" InstallBigSur20B29.pkg
pkgbuild: Adding component at /Applications/Install macOS Big Sur.app/
pkgbuild: Inferred install-location of /Applications
pkgbuild: error: Cannot write package to "InstallBigSur20B29.pkg". (The operation couldn’t be completed. File too large)
The reason for this failure is that the Big Sur installer application contains a single file, Contents/SharedSupport/SharedSupport.dmg, which is larger than 8GB. While a pkg file can be larger than 8GB, there are limitations in the installer package format which preclude individual files in the pkg payload from being larger than that.
When you want to distribute the “Install macOS Big Sur” application to the clients in your fleet, either to upgrade or for an erase-and-install workflow, this limitation introduces some challenges.
There are a number of solutions, each with their own advantages and downsides, some supported and recommended by Apple and some… less so. Different management and deployment styles will require different solutions and approaches.
App Deployment with MDM/VPP
When you have your MDM hooked up to Apple Business Manager or Apple School Manager, you can push applications “purchased” in the “Apps and Books” area with MDM commands. This was formerly known as “VPP” (Volume Purchase Program), and I will continue to use that name, because “deploy with Apps and Books from Apple Business Manager or Apple School Manager” is just unwieldy and I don’t care what Apple Marketing wants us to call it.
Since the “Install macOS Big Sur” application is available for free on the Mac App Store, you can use VPP to push it to a client from your MDM/management system.
When you do this, the client will not get the full InstallAssistant application, but a ‘stub’ InstallAssistant. This stub is small in size (20-40MB).
The additional resources required for the actual system upgrade or installation, which are gigabytes worth of data, will be loaded when they are needed. It doesn’t matter whether the process is triggered by the user after opening the application or by using the startosinstall or createinstallmedia tool. Either workflow will trigger the download of the additional resources.
This has the advantage of a fast initial installation of the InstallAssistant, but the actual upgrade or re-installation process will take much longer, because of the large extra download before the actual installation can even begin. For certain deployment workflows, this is an acceptable or maybe even desirable trade-off.
The extra download will use a Caching Server. This approach is recommended and supported by Apple.
Mac App Store and/or System Preferences
For some user-driven deployment styles, having the user download the InstallAssistant themselves can be part of the workflow. This way, the user can control the timing of the large download and make sure they are on a “good” network and the download will not interfere with video conferences or other work.
You can also use a link that leads a user directly to the Software Update pane in System Preferences and prompts the user to start the download:
# Big Sur
x-apple.systempreferences:com.apple.preferences.softwareupdate?client=bau&installMajorOSBundle=com.apple.InstallAssistant.macOSBigSur
# Catalina
x-apple.systempreferences:com.apple.preferences.softwareupdate?client=bau&installMajorOSBundle=com.apple.InstallAssistant.Catalina
When the InstallAssistant is already installed, this link will open the application. When the Mac is already running a newer version of macOS or doesn’t support the version given, it will display an error.
You can use these links from a script with the open command:
open 'x-apple.systempreferences:com.apple.preferences.softwareupdate?client=bau&installMajorOSBundle=com.apple.InstallAssistant.macOSBigSur'
The downloads initiated this way will use a Caching Server. Linking to the Mac App Store is supported and recommended by Apple. The x-apple.systempreferences links are undocumented.
softwareupdate command
Catalina introduced the --fetch-full-installer option for the softwareupdate command. You can add the --full-installer-version option to get a specific version of the installer, for example 10.15.7.
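For example (the version number is just an illustration):

% softwareupdate --fetch-full-installer --full-installer-version 10.15.7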
You can run this command from a managed script on the clients to install the application. The download will use a Caching Server.
This would be a really useful method to automate deployment of the InstallAssistant on a client, if it were reliable. However, in my experience and that of many MacAdmins, this command is very fragile and will fail in many circumstances. As of this writing, I have not been able to reliably download a Big Sur InstallAssistant with this command. Most of the time I get:
Install failed with error: Update not found
This approach is often recommended by Apple employees; however, it will have to be much more reliable before I join their recommendation.
Please, use Feedback Assistant, preferably with an AppleSeed for IT account, to communicate your experience with this tool with Apple. If this command were reliable, then it would be my recommended solution for nearly all kinds of deployments.
InstallAssistant pkg
With these solutions so far, we have actually avoided creating an installer package, because we moved the download of the InstallAssistant to the client. A caching server can help with the network load. Nevertheless, for some styles of deployments, like schools and universities, using the local management infrastructure (like repositories or distribution points) has great advantages. For this, we need a package installer for the InstallAssistant.
A “magic” download link has been shared frequently in the MacAdmins Slack that downloads an installation package from an Apple URL which installs the Big Sur InstallAssistant.
This pkg from Apple avoids the file size limit for the package payload by not having the big file in the payload and instead moving it into place in the postinstall script. Smart hack… er… solution!
The URL is a download link from a software update catalog. You can easily find the link for the current version with the SUS Inspector tool.
But it would be really tedious to do this on every update. You, the regular reader, know that “tedious” is a trigger word for me to write a script. In this case it was less writing a script than looting one. Greg Neagle’s installinstallmacos.py had most of the pieces needed to find the InstallAssistant.pkg in the software update catalog and download it. I merely had to put the pieces together somewhat differently.
Nevertheless, I “made” a script that downloads the latest InstallAssistant.pkg for macOS Big Sur. You can then upload this pkg to your management system and distribute it like any other installation package.
When you start the script, it will download a lot of data into a content folder in the current working directory, parse through it, and determine the Big Sur installers in the catalog. When it finds more than one installer, it will list them and you can choose one. When it finds only one installer, it will start downloading it immediately.
You can add the --help option for some extra options (all inherited from installinstallmacos.py).
We will have to wait for the 11.1 release to be sure this actually works as expected, but I am confident we can make it work.
This approach is very likely not supported by Apple. But neither was re-packaging the InstallAssistant from disk in Catalina. This deployment method is likely closer to the supported deployment workflows than some common existing methods.
The download does not use a Caching Server, but since the goal is to obtain a pkg that you can upload to your management server, this is not a big downside.
Big Sur signature verification check
You may have noticed that when you launch the Big Sur InstallAssistant on Big Sur for the first time, it will take a long time to “think” before it actually launches. This is due to a new security feature in Big Sur that verifies the application signature and integrity on first launch. Since this is a “big” application, this check takes a while. Unfortunately, Big Sur shows no progress bar or other indication. This check occurs when the user double-clicks the app to open it and when you start an upgrade or installation with the startosinstall command.
There does not seem to be a way to skip or bypass this check. You can run startosinstall --usage from a script right after installing the InstallAssistant. This does nothing really, but it forces the check to happen. Subsequent launches, either from Finder or with startosinstall, will be immediate.
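For example, using the path from above:

% "/Applications/Install macOS Big Sur.app/Contents/Resources/startosinstall" --usage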
AppleScript on macOS is a useful tool for pro users and administrators alike. Even though it probably is not (and shouldn’t be) the first tool of choice for many tasks, there are some tasks that AppleScript makes very simple. Because of this it should be a part of your ‘MacAdmin Toolbelt.’
AppleScript’s strength lies in inter-application communication. With AppleEvents (or AppleScript commands) you can often retrieve valuable information from other applications that would be difficult, or even impossible, to get any other way. With AppleScript, you may even be able to create and change data in the target applications.
If you are in any way security and privacy minded, this should raise your hackles. Up to macOS 10.13 High Sierra, any non-sandboxed app could use AppleScript and AppleEvents to gather all kinds of personal and private data from various script-enabled apps and services. It could even use script-enabled apps like Mail to create and send email in your name.
Since macOS Mojave, the Security and Privacy controls restrict sending and receiving AppleEvents. A given process can only send events to a different process with user approval. Users can manage the inter-application approvals in the Privacy tab of the Security & Privacy preference pane.
MacAdmins have the option of pre-approving inter-application events with a PPPC (Privacy Preferences Policy Control) configuration profile that is pushed from a DEP-enrolled or user-approved MDM.
Privacy approval
You can trigger the security approval from Terminal when you send an event from the shell to another process with osascript:
> osascript -e 'tell application "Finder" to get POSIX path of ((target of Finder window 1) as alias)'
When you run this command from Terminal for the first time, you will likely see a prompt asking whether Terminal is allowed to control the Finder.
You will not get this prompt when you have approved or rejected the Terminal app to send events to this particular target application before. You can check the permissions granted by the user in the Automation section of Privacy tab in the Security & Privacy pane of System Preferences.
For any given source/target application combination, the prompt will only be shown once. When the user approves the privilege (“OK” button), future events will just be allowed.
When the user rejects the connection (“Don’t Allow” button), this event and future events will be rejected without further prompts. The osascript will fail and the AppleScript will return error -1743.
> osascript -e 'tell application "Finder" to get POSIX path of ((target of Finder window 1) as alias)'
79:84: execution error: Not authorized to send Apple events to Finder. (-1743)
If you want to get the approval dialogs again, you can reset the state of the source application (Terminal) with the tccutil command:
> tccutil reset AppleEvents com.apple.Terminal
This will remove the Terminal application and all target applications for it from the Automation (AppleEvents) area in the Privacy pane and show dialogs for every new request going forward. This can be very useful during testing.
Dealing with rejection
You should write your code in a way that it fails gracefully when access is not granted. In this case osascript will return an error:
if ! osascript -e 'tell app "Finder" to return POSIX path of ((target of Finder window 1) as alias)'
then
    echo "osascript encountered an error"
    exit 1
fi
However, osascript will return errors for all kinds of failures, with no easy way to distinguish between them. As an example, the above will also fail when there are no Finder windows open.
If you want to distinguish AppleScript errors, you need to do so in the AppleScript code:
if ! osascript -s o <<EndOfScript
tell application "Finder"
    try
        set c to (count of Finder windows)
    on error message number -1743
        error "Privacy settings prevent access to Finder"
    end try
    if c is 0 then
        return POSIX path of (desktop as alias)
    else
        return POSIX path of ((target of Finder window 1) as alias)
    end if
end tell
EndOfScript
then
    echo "osascript failed"
fi
Note: the -s o option of osascript makes it print AppleScript errors to standard out rather than standard error, which can be useful to find the errors in logs of management systems.
Note 2: when you are running osascript from management and installation scripts (which run as the root user) you need to run them as the current user to avoid problems.
Avoiding Privacy prompts
So, we know of one way to deal with the privacy prompts. Ideally, you would want to avoid them entirely. While this is not always possible, there are a few strategies that can work.
Don’t send to other Processes
In past versions of Mac OS X (I use this name intentionally, it’s that long ago.), scripts that showed dialogs might not display on the highest window layer. In other words, the dialog was lost behind the currently active windows. To avoid “lost” dialogs, it became best practice to send the display dialog command (and similar) to a process that had just received an activate command as well:
tell application "Finder"
    activate
    display dialog "Hello, World!"
end tell
As an alternative to the Finder, the System Events process is often used as well. Jamf MacAdmins often used “Self Service.” This had the added bonus that the dialog looks as if it comes from the Finder or Self Service, including the bouncing dock icon.
Over time, even though the underlying problem with hidden dialogs has been fixed, this practice has persisted. You often even see AppleScript code use this with commands other than user interaction, where it wouldn’t have made sense in the first place. With the privacy restrictions in macOS Mojave, this practice has become actively troublesome, as you are sending the display dialog (or other) command to a separate process. The process running this script will require approval to send events to “System Events.”
In current versions of macOS, you can just use display dialog and many other commands without an enclosing tell block. Since your AppleScript code isn’t sending events to another process, no privacy approval is required.
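For example, this one-liner has the same effect as the tell block above, but does not trigger an approval request:

> osascript -e 'display dialog "Hello, World!"'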
To determine whether an AppleScript command requires a tell block, you have to check where it is coming from. Many AppleScript commands that are useful to MacAdmins are contained in the ‘StandardAdditions’ scripting addition. Scripting additions, as the name implies, extend the functionality of AppleScript without requiring their own process.
The useful commands in the Standard Additions scripting addition include:
user interaction: choose file/folder/from list, display dialog/alert/notification
file commands: mount volume
clipboard commands: get the clipboard, set the clipboard to
sound control: set volume, get volume settings
system info
When your script uses only these commands, make sure they are not contained in tell blocks. This will avoid unnecessary prompts for access approval.
Exempt AppleScript commands
Some AppleScript commands are treated differently and will not trigger privacy approval:
activate: launch application and/or bring to front
open: open a file
open location: open a URL
quit: quit the application
For example, this will work without requiring approval:
osascript <<EndOfScript
tell application "Firefox"
open location "https://scriptingosx.com"
end
EndOfScript
Use non-AppleScript alternatives
Sometimes, similar effects to an AppleScript can be achieved through other means. This can be difficult to figure out and implement.
As an example, I used this AppleScript command frequently for setup before Mojave:
tell application "Finder" to set desktop picture to POSIX file "/Library/Desktop Pictures/BoringBlueDesktop.png"
While Mojave was in the beta and it wasn’t really clear if or how the PPPC exemptions could be managed, I looked for a different means. I discovered Cocoa functions to read and change the desktop picture without triggering PPPC, and built a small command line tool out of that: desktoppr.
The downside of this approach is that you now have to install and/or manage a command line tool on the clients where you want to use it. There are different strategies for this, but it is extra effort compared to “just” running an AppleScript.
Build PPPC profiles to pre-approve AppleEvents
Even after you have considered the above options to avoid sending AppleEvents to another process, there will still be several situations where it is necessary. For situations where a MacAdmin needs to run a script on several dozens, hundreds, or even thousands of Macs, user-approval is simply not a feasible option.
MacAdmins can pre-approve AppleEvents (and most other privacy areas) between certain processes with a Privacy Preferences Policy Control (PPPC) configuration profile. PPPC profiles can only be managed when pushed from a user-approved or automatically enrolled MDM.
You can build such a profile manually, but it is much easier to use a tool, such as the open source PPPC Utility, to build these.
Your MDM solution might have a specific tool or web interface for this; consult the documentation or ask your vendor.
There is one big requirement here, though: only applications and tools that are signed with a valid Apple Developer ID can be pre-approved this way, as the signature is used to identify and verify the binary.
Determining the process that needs approval
While you can sign shell scripts and other scripts, this is often not necessary. As we have seen earlier, when we ran our script from Terminal, it wasn’t the script that requested approval but the Terminal application. When your scripts run from a management system or another tool, it may not be easy to determine which process exactly needs approval.
The most practical approach to determine this is to log the output of the “Transparency, Consent, and Control” system (tcc) and see which process is sending the requests.
First, either use a clean test system, or reset the approvals for the processes that you suspect may be involved with tccutil.
Then open a separate Terminal window and run a log stream command that shows the log entries from the tcc process.
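A filter on the tcc subsystem works; the exact predicate here is my suggestion, refine it as needed:

> sudo log stream --debug --predicate 'subsystem == "com.apple.TCC"'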
Then run the script in question, the way you are planning to run it during deployment. If you are planning to run the script from a management system, then do that right now. You will get a lot of output in the stream above.
Even when you don’t have a good idea what the parent process is going to be, you can filter the output for osascript since this is usually the intermediary tool used.
In my example I found several entries similar to this:
The important information here is the responsible path, which gives me the binary and the enclosing application that tcc considers ‘responsible.’ This is the application you need to approve.
When you are running your scripts from a management system, your MDM vendor/provider should already have documentation for this, to save you all this hassle.
With all this information, you can build the PPPC profile with one of the above tools, upload it to your MDM and push it to the clients before the deployment scripts run.
Conclusion
While the added privacy around AppleEvents is welcome, it does add several hurdles to automated administration workflows.
There are some strategies you can use to avoid AppleScripts triggering the privacy controls. When these are not sufficient, you have to build a PPPC profile to pre-approve the parent process.
This post is an update to an older post on the same topic. macOS has changed and I had a few things to add. Rather than keep modifying the older post, I decided to make this new one.
As MacAdmins, most of the scripts we write will use tools that require administrator or super user/root privileges. The good news here is that many of the management tools we can use to run scripts on clients already run with root privileges. The pre- and postinstall scripts in installation packages (pkgs), the agent for your management system, and scripts executed as LaunchDaemons all run with root privileges.
However, some commands need to be run not as root, but as the user.
For example, the defaults command can be used to read or set a specific setting for a user. When your script, executed by your management system, is running as root and contains this command:
defaults write com.apple.dock orientation left
Then it will write this preference into root’s home directory in /var/root/Library/Preferences/com.apple.dock.plist. This is probably not what you intended to do.
Get the Current User
To get the correct behavior, you need to run the command as a user. The question then is which user you want to run it as. In many cases the answer is the user that is currently logged in.
I have written a few posts about how to determine the currently logged in user from shell scripts and will use the solution from those:
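# a sketch of the scutil approach from those posts
currentUser=$( echo "show State:/Users/ConsoleUser" | scutil | awk '/Name :/ { print $3 }' )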
This will return the currently logged in user, or loginwindow when there is none. This is POSIX sh compatible syntax, which will also run with bash or zsh.
Running as User
There are two ways to run a command as the current user. The first is with sudo:
sudo -u "$currentUser" defaults write com.apple.dock orientation left
The launchctl command uses the numerical user ID instead of the user’s short name, so we need to generate that first.
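For example (a sketch, mirroring the LaunchAgent example used further below):

uid=$(id -u "$currentUser")
launchctl asuser "$uid" launchctl load com.example.agent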
It used to be that the sudo solution would not work in all contexts, but the launchctl asuser solution would. This changed at some point during the Mojave release time.
Now, the launchctl asuser solution works and is required when you want to load and unload LaunchAgents (which run as the user), but it does not seem to work in other contexts any more.
So, for most use cases you want to use the sudo solution, but in some you need the launchctl form. The good news here is that you can play it safe and use both at the same time:
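For example, combining both forms for the Dock setting from earlier:

launchctl asuser "$uid" sudo -u "$currentUser" defaults write com.apple.dock orientation left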
This works for all commands in all contexts. This is, however, a lot to type and memorize. I built a small shell function that I use in many of my scripts. Paste this at the beginning of your scripts:
# convenience function to run a command as the current user
# usage:
# runAsUser command arguments...
runAsUser() {
    if [ "$currentUser" != "loginwindow" ]; then
        launchctl asuser "$uid" sudo -u "$currentUser" "$@"
    else
        echo "no user logged in"
        # uncomment the exit command
        # to make the function exit with an error when no user is logged in
        # exit 1
    fi
}
and then you can use the function like this:
runAsUser defaults write com.apple.dock orientation left
runAsUser launchctl load com.example.agent
Note: the function, as written above, will simply do nothing when the Mac is sitting at the login window with no user logged in. You can uncomment the exit 1 line to make the script exit with an error in that case. In your script, you should generally check whether a user is logged in and handle that situation before you use the runAsUser function. For example you could use:
if [ -z "$currentUser" -o "$currentUser" = "loginwindow" ]; then
    echo "no user logged in, cannot proceed"
    exit 1
fi
Insert this at the beginning of your code (but after the declaration of the currentUser variable) and you can assume that a user is logged in and safely use the $currentUser variable and the runAsUser function afterwards. The exact details of when and how you should check for a logged-in user depend on the workflow of your script. In general, earlier is better.
When to Run as User
Generally, you should run as the user when the command interacts with the user interface, user processes and applications, or user data. As MacAdmins, these are common commands you should run as the user:
defaults, when reading or changing a user’s preferences
osascript
open
launchctl load|unload for Launch Agents (not Launch Daemons)
This is not a complete list. Third party configuration scripts may need to be run as root or user. You will need to refer to documentation or, in many cases, just determine the correct action by trial and error.
Sample Script
I have put together a script that combines the above code into a working example.
Last week at WWDC, Apple had two big announcements for the Mac platform.
The first one was a new user interface design, much closer to iPadOS and iOS. Apple considers this the “biggest design upgrade since the introduction of Mac OS X.” Because of this, Apple also gives this version of macOS the long-withheld ‘11’ as the major version number.
You can take a look at the new UI on Apple’s Big Sur preview page or you can download the beta from your AppleSeed for IT or Developer account. It shares many elements, styles and icons with iOS or iPadOS.
The other major announcement is that the Mac platform will transition from Intel CPUs to ‘Apple silicon’ chips built by Apple themselves, just like the iPhone and the iPad. The Developer Transition Kit for testing purposes is powered by the A12Z chip that powers the iPad Pro, but Apple was insistent that future, production Macs would have chips designed specifically for Macs and not be using iPad or iPhone chips.
These are big announcements, for sure. But what do they mean for the macOS platform? And for MacAdmins in particular?
Apple’s commitment to Mac
There was a time, not so long ago, when you got the impression that the Mac platform was merely an afterthought for Apple. I think it started after the release of the ‘trashcan’ Mac Pro. During those years, I think there was legitimate concern that Apple would lock down macOS as tightly as they did iOS, breaking what makes the Mac special.
Some of the recent additions to macOS, such as the increased privacy controls with their incessant prompts for approval, the deprecation of built-in scripting runtimes like Python and Ruby, and even the deprecation of bash in favor of zsh, have made some ‘Pro’ users nervous and afraid that Apple wants to turn macOS into iOS.
Now the unification of the user interface can add to those concerns: will macOS turn into iOS and iPadOS in more than just look and feel?
On the other hand, Apple has been more vocal and open about their plans for the Mac. This started when Apple announced they were working on a new Mac Pro in April 2017.
In Mojave (2018), and then Catalina (2019), Apple introduced several technologies unique to macOS:
System and Network Extensions
File Providers
DriverKit
Notarization
zsh as the new default shell, and dash
These technologies exist because Apple wants (or needs) to increase the security of macOS. Kernel extensions, which provide unfettered access to all parts of the system, are replaced with System and Network Extensions and DriverKit. Notarization allows Apple to check and certify software delivered and installed outside of the Mac App Store. zsh allows Apple and their users to move forward from a 13-year-old bash version.
But, if Apple wanted to lock down macOS as completely as iOS and iPadOS, they wouldn’t have to introduce these new technologies to macOS. Instead, they are introducing new technologies to allow certain characteristics of macOS to continue, even with increased security. This is a lot of effort from Apple, which convinces me that Apple sees a purpose for macOS for years to come.
What are these characteristics that Apple thinks are special for the macOS? Apple told us in the Platforms State of the Union session this year. Starting at 15:10 Andreas Wendker says:
“Macs will stay Macs the way you know and love them. They will run the same powerful Pro apps. They will offer the same developer APIs Macs have today. They will let users create multiple volumes on disks with different operating system versions and they will let users boot from external drives. They will support drivers for peripherals and they will be amazing UNIX machines for developers and the scientific community that can run any software they like.”
This short section makes a lot of promises:
Pro Apps: including third-party pro apps like Affinity Photo, Cinema 4D, and Photoshop, shown previously, as well as Microsoft Office and Maya, which were shown in the Keynote
Developer APIs: no reduced feature set
Disk and OS management: multiple volumes, external storage and boot, multiple versions of macOS on one device
Peripheral ports with custom drivers
UNIX machines for developer and science tools (this includes Terminal, Craig Federighi confirmed this in John Gruber’s interview)
‘any software you like’
‘flexibility and configurability’ (earlier in the presentation)
Apple wants to assure us that they understand what the macOS platform is used for. Remember that Apple uses macOS themselves for many of these tasks and it is unlikely they would want to switch to Windows or Linux based PCs for their work.
With all these assurances you can consider the UI changes to go merely ‘skin deep.’ Whether you like the new UI or not, the wonderfully complex innards of macOS should still be there for you to explore and (ab)use.
Mac Transition
When Apple announced the transition to Apple Silicon in the keynote, it felt like a repeat of the 2006 Keynote where Steve Jobs announced the Intel transition. Apple is even re-using the names for the technologies ‘Universal’ and ‘Rosetta,’ albeit with version ‘2’ attached. This is of course entirely intentional. Apple wants to assure us that they have done this before and it worked out well.
How well this will really work will depend not only on Apple, but also on the third-party developers. While Rosetta worked surprisingly well during the Intel transition, there was noticeable lag in some cases, and the software couldn’t really unlock all of the hardware until there was a re-compiled version. I remember that every developer would proudly announce the availability of a universal binary.
Some solutions never made the jump. Some software solutions got lost when Apple finally turned off Rosetta in Mac OS X 10.7 Lion, the same way some solutions did not make the jump to 64-bit and are ‘lost’ unless you hold on to Mojave.
It is fair to blame the software developer for the lack of maintenance. Not all developers have the time to put in the effort to continually update a product, or they moved on to other companies or projects. Not all software products generate enough revenue to warrant any maintenance effort. From the user’s perspective, software that they paid for has an arbitrary expiration date; the software vendor blames Apple, and Apple blames the vendor. This is understandably frustrating.
Apple and macOS are certainly in a different place in the market than they were in 2007, but we will have to see how well the third-party developers and vendors take to the transition this time.
macOS 11 for MacAdmins
Enterprises, schools, universities, and organizations and their users are also in a different place these days. The addition of mobile devices (phones and tablets) as essential tools for employees has forced many organizations to change their management and access strategies to be more flexible. The massive requirement to work remotely because of the Coronavirus pandemic has accelerated this shift.
But once you have reworked your deployment and management strategies to work with one different platform, adding a third or fourth platform to the mix will be less of a barrier. It will still be a significant effort, but it will not be as daunting and impossible as that first change. The changing infrastructure requirements have worked in favor of Apple platforms for the past years, led by iOS, but pulling macOS behind them. But Apple has not yet had enough time to lock in these kinds of deployments.
In education, Chromebooks are gaining ground, mainly because of the price point, but also because of a powerful management framework. Dual booting your Mac to Windows with Boot Camp will not be possible on Apple silicon. Additional problems stemming from the transition might just be enough to push users and organizations ‘over the edge’ to switch platforms.
Apple must have considered all this and believes the benefits from building their own chips for the Mac platform outweigh the downsides. Less heat and better battery life are obvious, quick wins. Apple’s A-series chips have a dedicated Neural engine for machine learning processes, which was already demonstrated.
Apple has brought some of the security benefits from iOS to the Mac platform with the T1 and T2 chips. These provide Touch ID, a secure enclave for certificates, and encrypted internal storage. By removing the Intel chipset, Apple can tighten the security even more. The new Apple silicon-based systems will have new startup options and more flexible secure boot settings. External boot will not only still be possible, but will no longer be disabled by default, which will simplify many workflows for techs and admins. When you have multiple macOS systems on a drive, you will be able to disable security features per system, so you can have a ‘less secure system’ for experimentation or development, while keeping all security features enabled for the system with your personal data.
Device Management
There wasn’t much news about MDM at WWDC itself. The changes that were shown are refinements to existing workflows rather than big changes. With all the other changes, stability in MDM and management will be helpful.
We have finally been promised a true zero-touch deployment for Macs with “Auto Advance for Mac,” but are still lacking details about the exact implementation.
But there are still some huge gaps in the MDM strategy. Application deployment (VPP) is still unreliable. There is no way for organizations to purchase and manage in-App purchases and subscriptions in quantity. Many essential settings and features of macOS still cannot be set or controlled with configuration profiles or MDM commands. MDM still has no solution for installing and managing software from outside the App Store. PPPC settings are still changing and complicated to manage for admins.
Apple considers the ability to run iOS and iPadOS apps on macOS a huge bonus. How useful this will be in reality, outside of games, remains to be seen. But it will certainly make managing apps from the Mac App Store more essential than it is now.
The changes MacAdmins got for device management are useful and necessary, but evolutionary in nature. (There is nothing wrong with that.) The Fleetsmith deal shows the possibility of more and larger changes to Apple’s device management strategy in the future. It might take years before we will see the implications of this.
Versioning is always influenced by marketing. The switch from version 10 to version 11 is more than just the end of an odd versioning convention. The time where Mac OS X stands apart from the other Apple platforms is over. Apple is promising a family of devices where the user interface, hardware, and software will be unified, while preserving the special characteristics of each platform.
Apple has explained why and how they want to distinguish macOS from the other Apple platforms. They will have to live up to these promises over the next few years. There is a balance to be kept between implementing beneficial features from the other Apple platforms and maintaining the ‘flexibility and configurability’ of macOS. There is also the possibility that some of these Mac characteristics will make their way to other Apple platforms. (Multi-boot, virtualization, or custom device drivers on iPadOS?)
Not everyone follows the WWDC announcements closely. As MacAdmins, we will get many questions about the news from last week that did surface. We have to inform our organizations and our fellow employees what these changes mean for them and their workflows, and help them make an informed decision on which platform (Apple or other systems) matches their requirements.
There are bound to be issues with Apple’s plans. We will need to watch Apple’s strategy, give feedback on missteps and requirements. It is certainly a frustrating process, but Apple has changed features because of feedback from the MacAdmin community in the past.
If you haven’t enrolled in AppleSeed for IT yet, now is the time! Download the beta, start testing, and provide feedback!
Since then, it has gotten lots of feedback from others and many contributions. As the changes, fixes and additional apps have accumulated, I have created a 0.2 release to get a stable new version. If you like living on the edge you can also use the dev branch for the latest update.
Changes in this version:
many fixes for broken URLs and other bugs
pkgInDmg and pkgInZip now search for the first pkg file in the archive in case the file name varies with the version
notification on successful installation can be suppressed with the NOTIFY variable
Apple signed installers and apps that don’t have a Team ID are verified correctly now
improved logging
several new applications: count increased from 62 in v0.1 to 87 in v0.2
Since I built the script, you’d think I’d have a pretty good idea of how it should be deployed. But then Mischa van der Bent showed me a better way of using Installomator with Jamf Pro, and I asked him to write it up for a blog post. Since he doesn’t have a blog of his own (yet), he has allowed me to post his instructions here.
Note: Installomator is designed so it can work with other management systems, too. If you have implemented Installomator with a different management system, let me know!
Everything that follows is from Mischa:
Preparation
After you have downloaded or cloned Installomator from GitHub, you can run Installomator.sh from the command line or from your management system:
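# for example, run with root privileges and a label as the single argument
sudo ./Installomator.sh googlechrome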
The first thing we need to do is create a new Script in Jamf by going to Settings > Computer Management > Scripts.
In the General section you can give the Script a Display Name. I called mine Installomator. Assign a category and add the link to the GitHub repository to the notes as a reminder of the source of this script.
In the Script section, paste the entire code from the Installomator.sh file.
Important: Change the DEBUG variable from 1 to 0 when using Installomator in production, otherwise it will not actually install the new software.
The script requires a single argument and is designed to use argument 4 from Jamf when present.
We can set the Parameter Label of parameter 4 to “Application name” in the Options section. This is going to be a reminder that we need to fill in the argument when we are creating a policy. You can leave the labels for the other parameters empty or fill in “DONT-USE” because the script does not use the other arguments.
We are done here and you can save the Script.
Scoping
To make sure that we are targeting the right devices (those with an older release version), we need to create a couple of things.
I’m going to use Jamf Patch Management to determine the latest release version of Google Chrome. Jamf will check the version before publishing it into Patch Management. If the software title is not in Jamf’s default Patch Management list, you can create your own Patch Management source and add it to Jamf Pro. You can also join the community patch server.
Go to Patch Management under Computers > Content Management and create a New Software Title. We are going to use Jamf Repository. Scroll down the list and select Google Chrome.
The only thing we need to set here is the Software Title Settings and assign a Category. You can select the Jamf Pro Notification option to get emails when an update is posted.
Jamf Patch Management will query the inventory and list the clients where Google Chrome is installed, along with their versions. We now have all the information we need!
Two Smart Computer Groups
Go to Smart Computer Groups and create a new one. I called this “Google Chrome not installed or out of date”
In the ‘Criteria’ section I add two criteria:
Patch Reporting Software Title: after choosing this select the right report; for our example select “Patch Reporting: Google Chrome”
change the ‘Operator’ to “Less than” with the ‘Value’ “Latest Version.”
add a second line, change the AND/OR to “or,” and for the second criterion use “Application Title”
change the ‘Operator’ to “does not have” with the ‘Value’ “Google Chrome.app”
This Smart Group will contain the clients where the application is not installed or is not up to date.
Unfortunately, we cannot use this smart group with a Policy. When you try you will get this error ‘Policy scope cannot be based on a smart computer group that uses the “latest version” criteria.’
But there is a workaround:
create a second Smart Group; I called this one “Member of Google Chrome not installed or out of date”
in the ‘Criteria’ section, add the criteria “Computer Group,” change the ‘Operator’ to “member of,” and set the ‘Value’ to “Google Chrome not installed or out of date”
The result is the same as the Smart Computer Group “Google Chrome not installed or out of date” but we can use this in a policy.
Policy
Let’s put all the bits and pieces together and create one policy that will install or update to the latest release version of Google Chrome. We also want to promote this in Self Service and we want to push this out as a mandatory update with a deferral duration of 7 days.
go to Policies and create a new one. I called this policy “Google Chrome”
use “Recurring Check-in” as the trigger, and set the custom event value to “googlechrome.” With the custom trigger name, we can use this policy in a script or test with the terminal command sudo jamf policy -event googlechrome -verbose
set the ‘Execution Frequency’ to On-Going.
add the Installomator script to the payload
the Priority doesn’t matter, because there is no package, so leave it at the default ‘After’
in the Parameter values you see that the first one is ‘Application name’ (which we set earlier). Set “googlechrome” as the value.
I removed the payload “Restart Options” because we don’t need to restart after we install Google Chrome. You could leave it there, but I like to keep my policies clean.
We need to report back to the Jamf Pro Server that we just installed the latest version so we are going to add the payload “Maintenance” and enable “Update Inventory” (this should be enabled by default).
We are done with the payload and need to set the Scope:
under target we add the Smart Computer Group: “Member of Google Chrome not installed or out of date”
Self Service
enable “Make the policy available in Self Service”
leave the Display Name the same as the Policy name.
Button Name Before Installation: use “Install”
Button Name After Installation: use “Update”
give a Description to display for the policy in Self Service like “Install or Update to the latest release of Google Chrome”
upload or select the Google Chrome icon for making the Self Service pretty (you can use the macOS Icon Generator app)
under User Interaction we change the Deferral Type to “Duration” and use 7 days.
we don’t need to set a Start or Complete Message (Installomator can notify on success)
Now, we can save and test the policy.
Testing
I tested this Policy with a couple of scenarios:
The first scenario is: no Google Chrome installed. Second: an old version of Google Chrome installed, notification for an update, end-user deferral, and later installation from Self Service. Third: Google Chrome Beta is installed.
The first scenario is easy: after running the policy, the latest version gets installed.
In the second scenario, I got prompted with the deferral message, and I chose to defer for 1 hour.
I could not install the update before the hour had passed, because I got this message in the jamf log: “Policy ‘Google Chrome’ will not be executed because it was deferred by the user.”
In the last scenario, I installed the Google Chrome Beta version 84.0.4147.30; the latest version in Patch Management (at this moment) is 83.0.4103.61. This beta version registers as an “Unknown Version” and it will not fall into scope.
I can use this policy with the Installomator script to install the latest version on a clean machine, and I can push out a mandatory update (with a deferral time) in a polite way 😉
Because Installomator is checking the Developer Team ID of Google directly, I can be confident that it is the real installer from Google. So, we get security with less effort.