This article is for Day 12 of Mercari Advent Calendar 2021, brought to you by Thi Doan from the Mercari Tools & Infrastructure team.
This article will describe how to configure your Bazel builds for iOS to take advantage of an Apple silicon remote execution build farm.
Introduction
Bazel is one of the build tools that we use to build our internal iOS apps and frameworks. By default, Bazel runs builds and tests on your local machine. A Bazel build with remote execution (also called remote build execution or RBE for short) allows you to distribute builds and tests across an unlimited number of remote workers, which may significantly improve your build and test times given a highly modular project.
Supporting Bazel builds on Apple silicon with remote execution brings up several challenges. While we want to make use of the fantastic Apple silicon performance where we can, there are existing systems we may not replace right away. This article will also describe some strategies and trade-offs you may want to consider to get the most out of the performance while ensuring the transition to Apple silicon is as smooth as possible.
Bazel on Apple Silicon
Bazel has supported building on Apple silicon hardware since version 4.2 and building for the iOS Simulator on Macs with Apple silicon since version 5.0 (which is, at the time of writing, still in the release candidate phase). macOS on Apple silicon is technically a different platform: it uses a different architecture than macOS running on Mac computers with Intel processors. In a Bazel build, this is denoted as darwin_arm64, as opposed to darwin_x86_64 on Intel-based macOS. If you only care about building, there is nothing you need to do differently across platforms. Bazel will do the right thing, from auto-detecting the host platform to compiling tools for the architecture the tools need to run on. This, however, results in the host tools being compiled for darwin_x86_64 on Intel-based Mac computers and darwin_arm64 on Macs with Apple silicon. This behavior doesn't cause any problem for the build itself but does cause problems for remote caching and remote execution:
- For remote caching, if you populate the remote cache from Intel-based Mac computers, people who use a Mac with Apple silicon would not get any cache hits and would have to build everything from scratch, and vice versa.
- For remote execution, if you use Mac computers with Intel processors as the remote executors, remote actions would still be able to fall back to running on local Macs with Apple silicon thanks to the Rosetta translation environment, but build performance would degrade. If you go the other way and deploy a remote execution cluster with all Apple silicon, the build would no longer be able to fall back to running on local Intel-based Macs, because there is no translation environment for the reverse direction. The ability to fall back to local builds is crucial: it keeps builds resilient, for example, when the remote workers go down because of technical issues or when you want to disable remote execution in an environment with a poor network connection.
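For illustration, a remote execution setup with local fallback can be expressed with a few flags in a .bazelrc; this is a minimal sketch, and grpcs://remote.example.com is a placeholder for your own RBE backend endpoint:

```
# .bazelrc (sketch) -- grpcs://remote.example.com is a placeholder endpoint
build:remote --remote_executor=grpcs://remote.example.com
build:remote --remote_local_fallback

# Drop --config=remote to build fully locally, e.g. on a poor network connection.
```

With --remote_local_fallback set, actions that cannot run remotely are retried on the local machine, which is exactly the path that breaks if local and remote architectures are incompatible.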
Universal Binaries
To get the most out of the native performance, Apple has always recommended creating universal macOS binaries—binaries that run natively on both Apple silicon and Intel-based Mac computers. Although this comes with a cost of larger binaries and slower compilation times, in practice, the pros and the cons depend a lot on the nature of your codebase. Let’s take a look at a list of the tools that a typical iOS build with Bazel may rely on:
- Xcode command-line tools: Starting with Xcode 12, the command-line tools bundled in Xcode are released as universal binaries. Regardless of whether your build runs on an Intel-based Mac or on Apple silicon, if you're using the same Xcode version, you will be using the same command-line tools, so the build won't diverge here.
- Tools compiled during the repository rules' initialization process: These include all the tool wrappers in the auto-configured C++ toolchain (a.k.a. the local_config_cc repository). These tools are inputs to all C/C++/Objective-C compilation and linking actions. Starting with Bazel 5.0, they are compiled as universal binaries regardless of the host machine's architecture.
- The Swift persistent worker: This tool wraps all swiftc invocations (even if you don't build with the worker strategy); therefore, it is an input to every Swift compilation action. If you build it as a single-architecture binary, the inputs of every Swift compilation action will change when building on different macOS platforms.
- Tools used in the Bazel rules: All tools used in the Apple rules are written in Bash and Python, which means they will continue to work without any modification. In the Swift rules, if you depend on swift_grpc_library and/or swift_proto_library, you will also need to handle the universal conversion for the @com_google_protobuf//:protoc, @com_github_apple_swift_protobuf//:ProtoCompilerPlugin, and @com_github_grpc_grpc_swift//:protoc-gen-swiftgrpc targets.
- Tools you may have in your custom rules, if any.
With a codebase mainly written in Swift, the cost of duplicating every Swift invocation for two different architectures outweighs the cost of compiling the worker and several other tools into universal binaries. These tools are compiled at an early stage of the build and rarely need to be rebuilt unless you change the C++ toolchain or update Xcode, Bazel, or the rulesets.
There is no flag that tells Bazel to build all your tools as universal binaries. To create them, you will need to migrate all of your rules that compile tools from source. The following section will describe how to achieve that.
Configuration transition
Bazel has a concept of "transition" that allows rules to modify command-line options on the fly. A simple example: building an ios_application target implies building for the iOS platform type, so passing --apple_platform_type=ios on the command line is unnecessary; the apple_rule_transition handles that for you automatically. By leveraging transitions, we can force tools to always compile into universal binaries. The recently introduced universal_binary rule uses a split transition to "split" any macOS CPU into the list of all macOS CPUs, then combines the built binaries into a single universal binary using the lipo tool. By using this rule as a wrapper for your binary targets (e.g., cc_binary, swift_binary), you can produce universal binaries that run natively on both macOS platforms.
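Conceptually, the split transition behind universal_binary maps whatever macOS CPU you're building with to the full list of macOS CPUs. The following is a simplified sketch of such a transition; the names are illustrative and this is not the actual apple_support implementation:

```starlark
# Illustrative sketch of a macOS split transition, not the real implementation.
def _macos_universal_transition_impl(settings, attr):
    # Returning multiple settings dicts makes this a split transition:
    # the wrapped binary is built once per dict, and the rule then
    # merges the per-architecture outputs with lipo.
    return [
        {"//command_line_option:cpu": "darwin_x86_64"},
        {"//command_line_option:cpu": "darwin_arm64"},
    ]

macos_universal_transition = transition(
    implementation = _macos_universal_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:cpu"],
)
```

Because the transition always emits both CPUs, the resulting binary is identical no matter which architecture initiated the build, which is what makes the action cacheable across platforms.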
Custom Swift toolchain
Among the tools used in the Swift rules, the Swift persistent worker has the most impact because it is an input to every Swift compilation action. At the time of writing, the Swift rules don't yet support configuring the worker to be built as a universal binary, so you will need a custom Swift toolchain. This section describes how to create one based on the default toolchain.
- Create a universal binary wrapper for the Swift worker in your workspace.
# toolchains/swift_worker/BUILD
load("@build_bazel_apple_support//rules:universal_binary.bzl", "universal_binary")

universal_binary(
    name = "swift_worker",
    binary = "@build_bazel_rules_swift//tools/worker",
    visibility = ["//visibility:public"],
)
- Copy the xcode_swift_toolchain.bzl file to your workspace, for example, into the toolchains directory. This file is the default Swift toolchain file. We will modify it to use our universal worker.
- Replace all the relative labels in the load statements in that toolchain file with absolute labels. Below is an example diff:
diff --git a/swift/internal/xcode_swift_toolchain.bzl b/swift/internal/xcode_swift_toolchain.bzl
index e88ef98..c771514 100644
--- a/swift/internal/xcode_swift_toolchain.bzl
+++ b/swift/internal/xcode_swift_toolchain.bzl
@@ -23,11 +23,11 @@ load("@bazel_skylib//lib:dicts.bzl", "dicts")
 load("@bazel_skylib//lib:partial.bzl", "partial")
 load("@bazel_skylib//lib:paths.bzl", "paths")
 load("@bazel_tools//tools/cpp:toolchain_utils.bzl", "find_cpp_toolchain")
-load(":actions.bzl", "swift_action_names")
-load(":attrs.bzl", "swift_toolchain_driver_attrs")
-load(":compiling.bzl", "compile_action_configs", "features_from_swiftcopts")
+load("@build_bazel_rules_swift//swift/internal:actions.bzl", "swift_action_names")
+load("@build_bazel_rules_swift//swift/internal:attrs.bzl", "swift_toolchain_driver_attrs")
+load("@build_bazel_rules_swift//swift/internal:compiling.bzl", "compile_action_configs", "features_from_swiftcopts")
 load(
-    ":feature_names.bzl",
+    "@build_bazel_rules_swift//swift/internal:feature_names.bzl",
     "SWIFT_FEATURE_BITCODE_EMBEDDED",
     "SWIFT_FEATURE_BITCODE_EMBEDDED_MARKERS",
     "SWIFT_FEATURE_BUNDLED_XCTESTS",
@@ -44,16 +44,16 @@ load(
     "SWIFT_FEATURE_SUPPORTS_SYSTEM_MODULE_FLAG",
     "SWIFT_FEATURE_USE_RESPONSE_FILES",
 )
-load(":features.bzl", "features_for_build_modes")
-load(":toolchain_config.bzl", "swift_toolchain_config")
+load("@build_bazel_rules_swift//swift/internal:features.bzl", "features_for_build_modes")
+load("@build_bazel_rules_swift//swift/internal:toolchain_config.bzl", "swift_toolchain_config")
 load(
-    ":providers.bzl",
+    "@build_bazel_rules_swift//swift/internal:providers.bzl",
     "SwiftFeatureAllowlistInfo",
     "SwiftInfo",
     "SwiftToolchainInfo",
 )
 load(
-    ":utils.bzl",
+    "@build_bazel_rules_swift//swift/internal:utils.bzl",
     "collect_implicit_deps_providers",
     "compact",
     "get_swift_executable_for_toolchain",
- Replace the _worker attribute's default value with your own worker, e.g. @//toolchains/swift_worker.
diff --git a/swift/internal/xcode_swift_toolchain.bzl b/swift/internal/xcode_swift_toolchain.bzl
index c771514..abe1b3c 100644
--- a/swift/internal/xcode_swift_toolchain.bzl
+++ b/swift/internal/xcode_swift_toolchain.bzl
@@ -847,7 +847,7 @@ toolchain (such as `clang`) will be retrieved.
             cfg = "exec",
             allow_files = True,
             default = Label(
-                "@build_bazel_rules_swift//tools/worker",
+                "@//toolchains/swift_worker",
             ),
             doc = """\
 An executable that wraps Swift compiler invocations and also provides support
- Create a repository rule that defines this toolchain.
# toolchains/swift_autoconfiguration.bzl
def _xcode_swift_toolchain_impl(repository_ctx):
    repository_ctx.file(
        "BUILD",
        """\
load("@//toolchains:xcode_swift_toolchain.bzl", "xcode_swift_toolchain")

xcode_swift_toolchain(
    name = "toolchain",
    features = ["swift.module_map_no_private_headers"],
    visibility = ["//visibility:public"],
)
""",
    )

xcode_swift_toolchain = repository_rule(
    implementation = _xcode_swift_toolchain_impl,
)
- Load it in your WORKSPACE.
load("//toolchains:swift_autoconfiguration.bzl", "xcode_swift_toolchain")
xcode_swift_toolchain(name = "build_bazel_rules_swift_local_config")
Note that you may not use register_toolchains() to register this toolchain, because the Swift rules are configured to always use a target labeled @build_bazel_rules_swift_local_config//:toolchain as their toolchain. Because of this, also note that this declaration has to appear above the load("@build_bazel_rules_swift//swift:repositories.bzl", "swift_rules_dependencies") statement in your WORKSPACE file to override the default toolchain.
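Putting it together, the relevant part of the WORKSPACE would be ordered roughly like this (other repository declarations elided):

```starlark
# WORKSPACE (sketch): the custom toolchain repository must be declared
# before swift_rules_dependencies() so it overrides the default one.
load("//toolchains:swift_autoconfiguration.bzl", "xcode_swift_toolchain")

xcode_swift_toolchain(name = "build_bazel_rules_swift_local_config")

load("@build_bazel_rules_swift//swift:repositories.bzl", "swift_rules_dependencies")

swift_rules_dependencies()
```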
Remote Execution on Apple Silicon
Bazel understands three types of platforms in your build:
- Host – the platform where Bazel itself runs; in this case, it may be Intel-based macOS or macOS on Apple silicon.
- Execution – a platform where Bazel runs your tools to produce build outputs; in this case, it may also be Intel-based macOS or macOS on Apple silicon.
- Target – the platform that Bazel produces build outputs for; in this case, the iOS Simulator on macOS or iOS on a physical device.
Host platform
The host platform is auto-detected by Bazel when you start a build. Bazel initializes a local repository called local_config_platform that defines the host platform Bazel runs on. Here is how it looks when your build is initiated from an Intel-based Mac:
platform(
    name = "host",
    constraint_values = [
        "@platforms//cpu:x86_64",
        "@platforms//os:macos",
    ],
)
and from Apple silicon:
platform(
    name = "host",
    constraint_values = [
        "@platforms//cpu:arm64",
        "@platforms//os:macos",
    ],
)
(The actual cpu value generated by Bazel for the Apple silicon host platform is aarch64, but since one is only an alias of the other, this article uses arm64 when referring to this architecture for simplicity. Similarly, we use macos instead of osx for the os value.)
Execution platform
The host is one of the execution platforms. In fact, it is the only execution platform if you run your build on a single machine. At this point, even if you build all of your tools as universal binaries, the action cache still splits between the two platforms, because platform properties are part of every action's cache key. In the next step, we will define an extra execution platform using the native platform rule and register it from the WORKSPACE file via register_execution_platforms. You may also register a new execution platform from the command line via the --extra_execution_platforms flag. This example demonstrates the former.
# WORKSPACE
register_execution_platforms("//platforms:macos_x86_64")

# platforms/BUILD
platform(
    name = "macos_x86_64",
    constraint_values = [
        "@platforms//cpu:x86_64",
        "@platforms//os:macos",
    ],
    exec_properties = {
        "Arch": "arm64",
        "OSFamily": "darwin",
    },
)
The exec_properties values are specific to your remote execution implementation or provider. If your RBE backend cannot handle multiple machine types in the same executor pool, you may want to configure these properties so that build and test actions are sent to remote M1 executors. Even though the remote executors are M1 Macs, this execution platform is Intel-based macOS from the Bazel perspective. Since we build tools as universal binaries, they will always run natively.
The Apple rules are one of the rulesets that do not yet support toolchain selection via platforms. Until this is supported, you may use a platform_mappings file placed at the workspace root to map this extra execution platform to a set of command-line options. For example:
platforms:
  //platforms:macos_x86_64
    --cpu=darwin_x86_64

flags:
  --cpu=darwin_x86_64
  --apple_platform_type=macos
    //platforms:macos_x86_64
Here we use a platform target with a @platforms//cpu:x86_64 constraint value (and a corresponding --cpu flag in the platform mapping), but the choice doesn't matter much now that we force tools to be built universal. The CPU constraint may be either @platforms//cpu:x86_64 or @platforms//cpu:arm64, but setting it to @platforms//cpu:x86_64 has a minor advantage: if you have tools that you can't yet migrate to universal binaries, you may defer that work and still have them run under Rosetta.
Now we have two execution platforms: the auto-detected host platform and the extra execution platform we just added. The host platform still diverges when you initiate the build from different macOS platforms, but it is no longer in use unless some tools in your rules are still configured with the "host" transition. We recommend auditing all of your rules to ensure that every tool uses the proper "exec" transition.
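As a hypothetical example of what such an audit looks for, a tool attribute in a custom rule should request the exec configuration; the rule name, implementation function, and label below are illustrative:

```starlark
# Sketch of a custom rule whose private tool is built for the execution
# platform ("exec") rather than the legacy "host" configuration.
my_codegen = rule(
    implementation = _my_codegen_impl,  # hypothetical implementation function
    attrs = {
        "_tool": attr.label(
            cfg = "exec",  # was: cfg = "host"
            executable = True,
            default = Label("//tools/codegen"),  # hypothetical label
        ),
    },
)
```

With cfg = "exec", the tool is built for whichever execution platform runs the action, so remote and local executions resolve it consistently.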
With all that set up, you now have a remote execution cluster compatible with both Intel-based and Apple silicon Mac computers. No matter which type of Mac initiates the build, and whether remote execution is on or off, the build will always produce the same intermediate and final outputs for everything. Happy reproducible builds!
Tomorrow’s article will be by @Rakesh_kumar. Looking forward to it!
P.S. We are hiring!