When Speed Wasn’t About Coding Faster: Our Journey to ‘One Person One Release’

This post is for Day 23 of the Mercari Advent Calendar 2025.

Introduction

Hi, I’m Jieqiong Yu, an Engineering Manager working with the Shops & Ads Mobile Enabling team at Mercari.

Over the past six months, my team has been supporting cross-platform feature development on iOS and Android. On paper, we had everything right: strong engineers, solid foundations, and a clear roadmap. Yet we started noticing something subtle: delivery felt slow.

Not "heavy" in terms of coding speed—our engineers could generate code quickly enough—but heavy in the coordination required before a single line of code is written.

This is the story of our experiment with the ‘One Person, One Release’ philosophy – why we tried it, how it worked in practice on iOS and Android, and what it taught us about ownership, coordination, and growing engineering capability with AI as an enabler.

The Hidden "Coordination Tax"

At some point, we realized something uncomfortable: the bottleneck wasn’t implementation speed anymore.

Engineers across iOS, Android, Web, and Backend were fully capable of building complex features within their own domains. When work was clearly defined, teams moved fast.

The friction lived in the "in-between."

Specifications arrived incomplete and needed to be shaped through discussion. Even with a shared Figma design, multiple engineers across platforms had to align on the same problem statement, clarify expected behavior, and agree on edge cases. API contracts became a frequent coordination point. Defining request and response structures, naming fields, and agreeing on domain vocabulary required repeated conversations across mobile, web, and backend. Each platform brought its own conventions, and aligning those conventions took time.

By the time implementation began, a significant amount of energy had already been spent just reaching a shared understanding of what we were building and how the pieces fit together.

We still shipped. But delivery felt heavier every cycle.

What we were losing wasn’t engineering capability – it was shared understanding, clear ownership, and the space for engineers to move fast and grow beyond a single platform without paying constant coordination costs.

The ‘One Person One Release’ philosophy emerged as an experiment to restore our momentum. We asked ourselves a fundamental question:

Can a single engineer lead a feature from design to delivery across various tech stacks (iOS, Android, Web and backend)?

Our hypothesis was simple: delivery slows down not because engineers move slower, but because too many people need to move together.

Starting Small: The First Experiment

Our first opportunity came when we began migrating one of our web view screens to native code on iOS and Android (Shops Item Detail migration), but we deliberately started small. Instead of tackling a large migration or a complex surface, we chose a contained user story: showing the last purchased date on the item detail page. The scope was simple enough to experiment safely, yet real enough to reflect how we build production features.

Rather than splitting the work by platform – Android here, iOS there – we asked a single engineer to deliver the feature end-to-end across iOS and Android.

We didn’t change the definition of done. We didn’t relax code review standards.

What we changed was how the work was owned.

The engineer started on the platform they were most familiar with. Before moving to the other platform, they spent time learning the basics: understanding the code architecture, setting up the environment, figuring out how to build, test, and debug. This wasn’t something an AI agent could do magically on its own.

To make that learning curve manageable, we leaned heavily on pair programming sessions. Engineers walked each other through platform-specific patterns, common pitfalls, and project conventions. This human knowledge transfer was essential.

Once that foundation was in place, AI agents became a powerful enabler.

Engineers used AI agents to help translate pull requests from one platform to the other, generate boilerplate code, and surface relevant platform APIs. Instead of starting from a blank file, they could focus on validating behavior, adapting logic idiomatically, and ensuring quality. Reviewers stepped in where deeper platform expertise was needed – not to take over, but to guide.
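To make that concrete, here is a minimal Kotlin sketch of the kind of contained logic involved in this first story – roughly the shape of the unit an AI-assisted translation produces on the second platform and an engineer then validates and adapts idiomatically. The names, formatting rules, and layering below are illustrative only, not our actual implementation.

```kotlin
import java.time.Instant
import java.time.ZoneId
import java.time.format.DateTimeFormatter

// Illustrative only: a hypothetical formatter for the "last purchased date"
// label on the item detail page. It exists to show the size of the unit
// being ported, not how our production code is structured.
object LastPurchasedDateFormatter {

    private val displayFormat = DateTimeFormatter.ofPattern("yyyy/MM/dd")

    // Returns a display string, or null when the item has never been
    // purchased (the screen then hides the label entirely).
    fun format(lastPurchasedAt: Instant?, zone: ZoneId = ZoneId.of("Asia/Tokyo")): String? {
        val date = lastPurchasedAt?.atZone(zone)?.toLocalDate() ?: return null
        return "Last purchased on ${displayFormat.format(date)}"
    }
}

fun main() {
    println(LastPurchasedDateFormatter.format(Instant.parse("2025-11-30T09:00:00Z")))
    println(LastPurchasedDateFormatter.format(null)) // no purchase yet -> null
}
```

The interesting work happens after translation: checking that the counterpart on the other platform handles the same null case and uses that platform’s own date APIs idiomatically, rather than transliterating this code line by line.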

The result was eye-opening.

The feature shipped faster than expected, with fewer inconsistencies and far less back-and-forth alignment. More importantly, the engineers gained confidence that they could deliver beyond their primary platform – without sacrificing quality.

That small win gave us the signal we needed. We realized it was a workflow worth scaling.

Turning an Experiment into a Habit

As we applied new ways of working to more features, patterns started to emerge – not as principles on a slide, but as friction we could feel day to day.

The first thing we noticed was how much the experience depended on the foundation underneath. When core pieces like networking, navigation, or analytics behaved similarly on iOS and Android, engineers could move between codebases with confidence. When they didn’t, progress slowed immediately. Even small inconsistencies forced engineers to stop and re-orient themselves, breaking the flow the approach was meant to create.

Naming conventions proved far more critical than we anticipated. Over time, independent naming patterns for screens, data models, and component boundaries had drifted apart, creating significant cognitive load. When a single engineer was responsible for developing on both platforms, those differences surfaced instantly. Aligning conventions didn’t just make the code easier to read – it made it easier to think about the system as a whole, and it made AI-assisted translation far more effective.
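As a small, hypothetical illustration (the names below are made up, not taken from our codebase), aligned conventions mean a UI model on Android and its Swift counterpart can read almost identically – which is exactly what makes both human review and AI-assisted translation cheaper:

```kotlin
// Hypothetical example of shared naming across platforms. The Swift
// counterpart would use the same identifiers (ItemDetailUiState,
// CouponBanner, lastPurchasedAtLabel), so an engineer moving between
// codebases maps concepts one-to-one instead of re-learning vocabulary.
data class CouponBanner(
    val couponId: String,
    val discountText: String,
)

data class ItemDetailUiState(
    val itemId: String,
    val lastPurchasedAtLabel: String?, // null = label hidden
    val couponBanner: CouponBanner?,   // null = no active coupon
)
```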

The role of code reviewers shifted fundamentally to support the ‘One Person One Release’ philosophy. Platform specialists moved away from being final gatekeepers to becoming early-stage guides. The most effective reviews focused on identifying platform-specific nuances and sharing best practices. This helped engineers course-correct before small issues became structural ones. That shift required trust on both sides, but it paid off quickly.

AI played a critical role – but not in the way we first imagined. It didn’t magically produce correct cross-platform implementations. Engineers still had to invest time learning the basics of the other platform: how the code was architected, how the files were structured, how to build and test, and how state flowed through the app. Pair programming sessions were essential here. Once that understanding was in place, AI became a real accelerator – translating pull requests, generating boilerplate, and reducing the cost of repetitive work – while engineers remained firmly in control of correctness and quality.

Over time, the ‘One Person One Release’ philosophy stopped feeling like an experiment we were “trying out”. It became a lens that exposed where our systems were easy to work with – and where they weren’t. And that, more than speed alone, turned out to be its real value.

Applying the ‘One Person One Release’ Philosophy to Real, Complex Features

The earlier experiments taught us an important lesson: the approach of implementing on one platform and then using AI agents to translate that work to the other platform was effective – but only within clear limits.

For small, contained user stories, it worked surprisingly well. An engineer would build on the platform they knew best, and the AI agent could help carry that logic across to the other platform. With stable foundations, consistent conventions, and careful reviews, we could move fast without drifting.

Those limits became obvious when we tried something bigger.

When we moved to more complex work – for example, implementing the coupon features on the Shops item detail page – the approach of using AI to translate a PR from one platform to the other started to fail. The scope was wider, dependencies were heavier, and the behavior had more edge cases. Translating after the fact became noisy: the generated code needed too much correction, and the feedback loop got slower instead of faster.

That pushed us to try a second approach.

By then, engineers had already spent enough time working across platforms that they weren’t just “visiting” the other codebase anymore. They could navigate it, understand low-level implementations, and reason about platform-specific trade-offs. That gave us a new foundation to build on.

Instead of translating a PR from one platform to another, we started generating both iOS and Android code from the same prompt. We built a set of cross-platform prompts that embedded what we had learned: our architecture choices, best practices, and constraints for each platform. Engineers would feed in the same spec, generate both implementations, and then debug and refine them directly on each platform until they were correct and shippable.

In practice, this felt very different. The “source of truth” stopped being a PR on one platform. The source of truth became the shared spec plus the shared prompt structure – and engineers validated the output by running, testing, and reviewing it on both iOS and Android.
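To give a feel for the structure (the wording and constraints below are a simplified, hypothetical sketch, not our actual prompts), each prompt was assembled from the same shared spec plus per-platform guardrails:

```kotlin
// Illustrative only: the shared spec is written once, and each target
// platform appends its own architectural constraints before generation.
val sharedSpec = """
    Feature: coupon section on the Shops item detail screen.
    Behavior: load coupons for the item; hide the section when there are none;
    show an error row with a retry action when loading fails.
""".trimIndent()

val androidConstraints = """
    Target: Android. Follow the existing layering: a view model exposing an
    immutable UI state, the existing UI toolkit for the section, and the
    repository layer for data access.
""".trimIndent()

val iosConstraints = """
    Target: iOS. Follow the existing layering: an observable view model,
    the existing UI toolkit for the section view, and the shared networking
    layer for data access.
""".trimIndent()

// One prompt per platform, both built from the same source of truth.
fun buildPrompt(spec: String, constraints: String): String =
    "$spec\n\n$constraints\n\nGenerate the implementation and unit tests."

fun main() {
    println(buildPrompt(sharedSpec, androidConstraints))
    println(buildPrompt(sharedSpec, iosConstraints))
}
```

Engineers then treated the generated code on each platform as a starting point to run, test, and refine, rather than as something to merge as-is.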

What We Learned About Engineering Through ‘One Person One Release’

The ‘One Person One Release’ philosophy wasn’t only about speed. It taught us lessons about architecture, quality, and engineering culture.

  • It exposed the cracks in our foundation – When a single engineer drives features across all platforms, inconsistencies surface immediately. We uncovered areas where naming conventions drifted, common patterns diverged, and design mismatches forced unnecessary rework. Fixing these issues didn’t just help the immediate release—it hardened the entire codebase.
  • It cultivated systems thinking – Working across boundaries forced engineers to broaden their perspective. This led to richer design discussions and a better ability to anticipate the downstream consequences of technical decisions.
  • It proved that AI demands structure – We learned that AI-native engineering thrives on predictability. Without consistent architecture and naming, AI tools generate noise. But with strong guardrails, they transform from simple assistants into true force multipliers.

And finally, the ‘One Person One Release’ philosophy clarified that engineering velocity isn’t only about “writing code faster” – it’s about reducing friction in the entire development loop.

Moving Forward

We’re still learning.

The ‘One Person One Release’ philosophy is not a magic solution, and it’s not something we expect to apply everywhere. There are still areas – especially deeply platform-specific surfaces – where starting with platform experts is simply the right choice. The philosophy acknowledges this constraint rather than attempting to replace it.

What it has given us, though, is another option.

We’ve found it particularly effective in situations where specifications are evolving quickly, where teams need fast feedback to make progress, or where a feature spans multiple platforms with largely shared structure. In those moments, reducing coordination overhead and clarifying ownership early makes a noticeable difference.

As we continue building features and strengthening our engineering foundations, the ‘One Person One Release’ philosophy has become one of the tools we reach for when we need both speed and consistency. It pushes us to think more holistically about the system, to design architecture that’s easier to move across, and to treat AI not as a shortcut, but as part of a broader development model that still depends on solid engineering judgment.

Looking back, this initiative has been one of the most meaningful engineering experiences for me this half year – not because it changed how we write code, but because it changed how we think about building together.

We’re still exploring. Still experimenting. Still refining what works and what doesn’t. And that feels exactly right for where we are headed next!

Tomorrow’s article will be by @kiko and @aisaka. Look forward to it!
