Hi, I’m @Zuma. I’ve been with the Web Platform Team for three months, and I’m excited to share my internship project: Turborepo remote cache.
Note: This article was originally written in March 2025; the implementation details and team names reflect the organization at that time.
Introduction
In web development, speed and efficiency are critical. A slow Continuous Integration (CI) pipeline can become a major bottleneck, hindering our ability to iterate quickly and get feedback promptly. In short, slow CI pipelines make it hard to truly “Move Fast,” one of our group values. In many web repositories, build time is the primary factor slowing down CI pipelines.
Turborepo has emerged as a powerful tool for managing monorepos, offering efficient task parallelization and caching capabilities. Ideally, Turborepo should speed up local development as well as CI pipelines. However, there is a catch: the local Turborepo cache cannot be reused across workflow runs, because most CI runners, including our self-hosted GitHub Actions runners, are ephemeral. This limitation calls for a caching strategy tailored to our needs, beyond the capabilities of a typical cache action, which doesn’t account for task dependencies.
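For contrast, a typical cache action step looks roughly like the following: one monolithic cache entry keyed on file hashes, with no knowledge of which Turborepo tasks actually changed (the lockfile name in the key is a placeholder):

```yaml
# A typical cache action: one monolithic key, no task-level granularity.
- uses: actions/cache@v4
  with:
    path: .turbo
    key: turbo-${{ runner.os }}-${{ hashFiles('pnpm-lock.yaml') }}
```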
To overcome this challenge, we implemented Turborepo remote caching, enabling a single Turborepo cache to be shared across multiple CI pipelines. This approach avoids redundant work throughout the CI workflow, significantly reducing build times and accelerating the overall CI process. Vercel offers remote caching as a fully managed feature, but since we do not use Vercel, a self-hosted implementation of the Turborepo remote cache was essential.
This blog post will go over the implementation of a Turborepo remote cache, covering the proposed architecture, performance results, future considerations, and key takeaways from the project.
Proposed Architecture
The Turborepo remote cache consists of two main components: a remote cache server and storage for saving cached artifacts. Several community implementations of remote cache servers are available, so we adopted one of them for the initial implementation.
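To make the moving parts concrete, here is a minimal sketch of such a server in TypeScript. The `/v8/artifacts/:hash` endpoint shape follows Turborepo's remote cache API; the in-memory `Map` (standing in for real object storage) and the omission of authentication and team scoping are simplifications for illustration.

```typescript
// Minimal sketch of a Turborepo remote cache server:
// PUT /v8/artifacts/:hash stores a build artifact, GET retrieves it.
// An in-memory Map stands in for real artifact storage (e.g. GCS);
// auth and team scoping are omitted for brevity.
import { createServer } from "node:http";

const store = new Map<string, Buffer>();

const server = createServer((req, res) => {
  const match = req.url?.match(/^\/v8\/artifacts\/([^/?]+)/);
  if (!match) {
    res.writeHead(404).end();
    return;
  }
  const hash = match[1];
  if (req.method === "PUT") {
    // Buffer the uploaded artifact body, then store it under its hash.
    const chunks: Buffer[] = [];
    req.on("data", (chunk: Buffer) => chunks.push(chunk));
    req.on("end", () => {
      store.set(hash, Buffer.concat(chunks));
      res.writeHead(202).end();
    });
  } else if (req.method === "GET") {
    const artifact = store.get(hash);
    if (artifact === undefined) {
      res.writeHead(404).end();
      return;
    }
    res.writeHead(200, { "Content-Type": "application/octet-stream" });
    res.end(artifact);
  } else {
    res.writeHead(405).end();
  }
});
```

A production server would additionally validate the `Authorization` header and persist artifacts to durable storage; the community implementations we evaluated follow this same endpoint shape.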
When considering the architecture for the remote cache server, we evaluated three main approaches. The following sections detail our findings for each option.
1. Deploying a Microservice on GKE
The first approach we considered was deploying the cache server as a microservice on Google Kubernetes Engine (GKE), which aligns with our standard company practices. However, this strategy introduces significant challenges regarding latency, cost, and isolation.
Our CI cluster is located in the US, whereas our primary GKE cluster is hosted in Japan. This geographical separation results in increased latency as well as prohibitive data transfer costs of $0.08/GiB (as of this writing). After a rough cost estimate, we concluded this expense was too high, especially given the ephemeral nature of CI pods. Additionally, sharing a single cache server across all repositories raises concerns about cache pollution and permission management.
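As a hypothetical back-of-the-envelope illustration of how cross-region egress adds up: only the $0.08/GiB rate comes from our estimate; the traffic volumes below are invented for illustration, not our actual numbers.

```typescript
// Hypothetical monthly egress cost for cross-region cache traffic.
// Only the $0.08/GiB rate is from the article; volumes are invented.
const egressPerGiBUsd = 0.08;
const buildsPerDay = 200;      // assumed CI builds per day
const artifactGiBPerBuild = 2; // assumed cache traffic per build
const daysPerMonth = 30;

const monthlyCostUsd =
  egressPerGiBUsd * buildsPerDay * artifactGiBPerBuild * daysPerMonth;
console.log(monthlyCostUsd); // roughly $960/month under these assumptions
```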
2. Serverless Deployment on Cloud Run
Running the cache server on Cloud Run is a popular solution in the community. Deploying Cloud Run in the US would minimize data transfer costs, and integration would be relatively straightforward with a unified TURBO_API URL.
However, each repository requires its own isolated cache to prevent cache pollution and ensure security. Similar to the GKE approach, using a single Cloud Run instance for all repositories would lead to cache pollution and complex permission management. Therefore, achieving strict separation of artifacts between repositories would require numerous Cloud Run instances, which would drastically increase computational costs.
3. Custom GitHub Action (Adopted)
Finally, we explored leveraging GitHub Actions to reduce latency and utilize existing Workload Identity on our self-hosted runners. Since our team already provides many custom GitHub Actions for web development, creating a dedicated remote cache action was a reasonable choice.
Although using GitHub Actions to run the cache server as a background job is unconventional, this approach proved to be the most cost-efficient and high-performance solution.
After comparing cost and performance, we chose the third approach and created two custom GitHub Actions to provide self-service caching. To support different CI workflows, these actions implement two distinct patterns.
3-1. Background Process for Standard Builds

For standard tasks executed directly on the runner (e.g., turbo run build), we developed a custom JavaScript action. This action initializes the remote cache server as a background Node.js process.
Our custom action abstracts away the setup complexity: it handles server startup and automatically configures Workload Identity and the necessary environment variables (such as TURBO_API, TURBO_TOKEN, and TURBO_TEAM).
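Conceptually, the environment the action exports for subsequent steps looks roughly like this (the port and values below are placeholders, not the action's actual output):

```yaml
# Roughly the environment exported for later steps (placeholder values):
env:
  TURBO_API: "http://127.0.0.1:8080" # local cache server started by the action
  TURBO_TOKEN: "***"                 # token accepted by the cache server
  TURBO_TEAM: "team_example"         # team/namespace for cached artifacts
```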
Users only need to add a single step to their workflow:
```diff
 build:
   runs-on: self-hosted
   steps:
     - uses: actions/checkout
     - uses: org/platform/actions/auth
+    - uses: org/web-platform/packages/turborepo-remote-cache
     - run: turbo run build
```
3-2. Sidecar Container for Docker Builds

For builds running inside Docker containers, accessing a process running on the runner container (as described in the previous section) is restricted due to network isolation. To solve this, we enhanced an existing custom action to launch the cache server as a sidecar container sharing the same network namespace.
We deliberately chose not to use GitHub Actions’ built-in service containers for this setup. Service containers are initialized at the start of a job, but we needed the server to start after explicitly obtaining Google Cloud credentials via Workload Identity in a preceding step.
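Conceptually, the sidecar shares the build container's network namespace so the build can reach the cache server on localhost. With plain Docker this looks roughly like the following sketch (the container and image names are placeholders):

```shell
# Start the cache server attached to the build container's network
# namespace (names below are placeholders, not our actual images).
docker run -d \
  --name turbo-cache \
  --network "container:build-container" \
  turborepo-remote-cache-server
```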
Users can enable this feature simply by specifying an input parameter, as shown below:
```diff
 build:
   runs-on: self-hosted
   steps:
     - uses: actions/checkout
     - uses: org/platform/actions/auth
     - uses: org/web-platform/packages/nextjs-build
       with:
         dockerfile-path: Dockerfile
+        remote-cache-enabled: true
```
Performance Results
The Turborepo remote cache was first tested on our team’s monorepo. The observed improvements were minimal because our build times were already quite fast, although we did achieve very high cache-hit rates thanks to the repository’s many packages.
However, extending this to another large-scale, well-modularized repository yielded significant improvements. We achieved approximately a 50% reduction in Turbo task duration and a 30% reduction in total job duration by adjusting the CI workflow and integrating the remote cache using our custom GitHub Actions.
These figures represent the results from a workflow job building a large application on a pull request.


It’s important to note that these improvements are highly dependent on the number of applications or internal packages changed in a given commit. In fact, with a large number of changes, some cases may exhibit slower performance. This is primarily because the current remote cache server has a startup time of approximately 10 seconds. This cold start delay is particularly problematic for shorter tasks, where the startup time can negate the benefits of caching. To address this, we are considering developing a custom lightweight remote cache server to minimize startup latency and enhance efficiency, especially for shorter tasks.
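The cold-start trade-off can be made concrete with a rough break-even calculation: the ~10-second startup figure is from our observations, while the task durations below are invented for illustration.

```typescript
// Remote caching only pays off when the time saved by cache hits exceeds
// the server's ~10 s startup cost. Task durations here are invented.
const startupSeconds = 10;

// Net time saved for a task of `durationSeconds` that is fully restored
// from cache (ignoring artifact download time).
function netSavingSeconds(durationSeconds: number): number {
  return durationSeconds - startupSeconds;
}

console.log(netSavingSeconds(120)); // long build: caching clearly wins
console.log(netSavingSeconds(8));   // short task: cold start negates the benefit
```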
Overall, despite some caveats, this resulted in a substantial reduction in the overall CI pipeline time.
On the other hand, we encountered difficulties with another repository that contains a large application lacking dependencies on internal packages. As a result, the proof-of-concept (PoC) on their pull requests did not show meaningful gains. However, this result could serve as an incentive to further modularize such repositories.
Conclusion
The Turborepo remote cache project has yielded a self-service tool that can significantly reduce CI time, enabling teams to “Move Fast”. Even with the remote cache, effective modularization remains crucial for achieving optimal speed improvements.
Through my intern project, I learned the importance of collaboration between product and platform teams. We built a remote cache solution that’s now available as a self-service tool. However, simply providing the tool isn’t sufficient. By working closely with product teams, we were able to iterate based on real user feedback.
Also, I would like to thank my mentor, azrsh, and the members of the Web Platform Team. Thanks to their feedback, especially regarding key architectural decisions, I was able to make decisions without regrets.




