Hello! My name is Gary and I am a member of the Cross Border (XB) Client Core team. Our team is working to provide the core functionality of our global applications with the aim to enable developers to be able to quickly develop features across multiple regions.
This article is part of the series discussing how we developed a new global service and covers some of the architectural decisions made for the web application and where these decisions were rooted.
If you haven’t already, I would suggest checking out deeeeeet’s article for an overview of the project.
First, let me give some context on where we stood with web when the project first started:
History
At Mercari we have a number of web applications, with our main customer-facing offerings being our Japan Marketplace web service (https://jp.mercari.com) and US web service (https://mercari.com).
In 2024, with the growth of our proxy partner purchases, we recognized a growing appetite for Japanese goods from the global market and decided to begin work on allowing international users to purchase from our sellers. Given we were a relatively small team with an already feature-rich application, we decided not to create a new web application and instead reuse the existing Japan Marketplace web service.
Users residing in Taiwan then had the ability to register for a new account, view and discover items, and purchase directly from our service through our proxy partner Buyee. From the technical perspective this was the simplest path. We already supported internationalization so adding additional languages was relatively straightforward, and features other than purchase (e.g. search) needed few changes to support international users.
In this first phase we were able to quickly roll out to production, and we saw good indications of growth in users and usage. We had the green light to continue on this path to open our Japanese inventory to foreign users.
Following on from this we rolled out the service to Hong Kong users and added additional features, such as a cart to consolidate shipping of multiple items into a single package, to reduce shipping costs. These again had good results, but development was grueling and it became clear that continuing to extend our existing Japan website was not scalable in the long term.
Building better
Here are a few of the main issues we ran into with our existing service, and how we worked to make the global service better.
Engineers speaking different languages
Ironically, working at an international tech company like Mercari, the biggest communication problem I encounter is not engineers’ preferred spoken language but rather how our frontend applications communicate with the backend. Backend engineers, quite rightly, think in terms of resources and entities, whereas frontend engineers think in terms of view models. Our flea market website uses a microservice architecture, and it’s not uncommon for a new feature to require 100+ lines of frontend code just to manipulate the data returned from a microservice, even when that microservice is brand new. Client and backend engineers speak different languages.
With the global service and addition of native applications as well as web, this is something we very much wanted to avoid. Doing this orchestration and data manipulation on the web alone is painful; doing it on all three client platforms slows us down and is a recipe for bugs.
We therefore decided to adopt the backend-for-frontend pattern, and have an interface layer responsible for converting backend resource-oriented structs to view models that we can use on the clients. Since we currently have very similar product specifications for all three platforms we decided to have a single shared BFF.
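To make this concrete, here is a minimal sketch of the kind of resource-to-view-model conversion a BFF performs. The actual BFF is written in Go; this TypeScript version, with hypothetical type and field names, just illustrates the pattern.

```typescript
// Hypothetical backend resource shape (field names are illustrative,
// not Mercari's actual API).
interface ItemResource {
  id: string;
  price: number; // numeric price in the display currency
  status: 'on_sale' | 'sold_out';
  updated: string; // ISO 8601 timestamp
}

// View model the clients render directly, with no further manipulation.
interface ItemViewModel {
  id: string;
  displayPrice: string; // pre-formatted for the user's region
  isPurchasable: boolean;
}

// The BFF converts resources to view models in one place, so all three
// client platforms receive render-ready data.
function toItemViewModel(
  item: ItemResource,
  locale: string,
  currency: string,
): ItemViewModel {
  return {
    id: item.id,
    displayPrice: new Intl.NumberFormat(locale, {
      style: 'currency',
      currency,
    }).format(item.price),
    isPurchasable: item.status === 'on_sale',
  };
}

const vm = toItemViewModel(
  { id: 'm123', price: 1500, status: 'on_sale', updated: '2025-01-01T00:00:00Z' },
  'en-HK',
  'HKD',
);
console.log(vm.isPurchasable); // true
```

Because the conversion lives in one place, a change to a backend resource shape touches only the BFF, not three separate client codebases.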
We first considered GraphQL but decided instead to use Protobuf Definition files to define the API, and to keep the transport mechanism the same as between backend modules, namely ConnectRPC (our chosen Remote Procedure Call framework). This helped minimize the number of technologies we used across the stack, and made it easier for all engineers to contribute.
The BFF layer is built for the clients but resides on the backend. We therefore pioneered a joint ownership model, and although the backend is written in Go we are working to create utilities and guidance to allow both client and backend engineers to easily contribute.
This removes a lot of the complexity from the client applications and allows us to more easily maintain feature parity. Our previous rewrite of the Japan Marketplace web application took around eighteen months, whereas for this project we completed feature development in just six. A large part of this can be attributed to the orchestration and business logic residing in the BFF layer.
For details of how we have configured data fetching for the global website, check back later in the series for VB’s post.
Performance issues
The Japan Marketplace web application was originally constructed as a JAMstack application built with Gatsby.js, using dynamic pre-rendering for SEO. Spinning up a headless browser to pre-render each request proved expensive, however, and a few years ago we migrated to Next.js with some server-side rendered pages. The application nevertheless remains heavily influenced by that initial client-side-centric approach: using Next.js’s Pages Router, we essentially have a large client-side application into which we inject data for server-side rendering of SEO-critical pages. With time and the addition of new features, the amount of JavaScript has grown. Moreover, given the backend architecture, rendering even a relatively simple page like the Item Details page requires over 20 separate fetch calls, some of them cascading and requiring multiple round trips to our API gateway. That is to say, performance isn’t great.
With the new global web service we are targeting a wide variety of users, and whereas most users in Japan have modern smartphones and a 5G connection, that isn’t a guarantee in all regions.
Web development has been through a lot in the past 30 years. We have seen simple PHP server-side rendered pages with little client-side interactivity evolve into (somewhat bloated) client-side rendered single-page applications, and now, in the last couple of years, the emergence of a hybrid era of web applications.
Through the introduction of React Server Components and the client-server boundary it has now become much simpler to get all the initial speed and performance benefits of rendering on the server without having to ship your entire React application code to the browser to hydrate the application.
React Server Components render to a simple string requiring no additional JavaScript, minimizing network transfer and removing the need for scripts to run in the browser, improving performance.
We therefore decided to provision a new Next.js application using App Router, and thankfully our Frontend Enabling team has created a Web Bootstrap tool which simplifies this setup. By running an npm script, teams can quickly generate a boilerplate Next.js application alongside corresponding PRs that provision the required infrastructure using Terraform and Kubernetes manifest files.
React Server Components (RSC) were new to most of the team, and in the early days of the project we were a little surprised that, although RSCs have been stable for some time, the ecosystem and tooling around them are still immature. In particular, for testing we had to move from Jest and React Testing Library for UI tests to Storybook with a custom wrapper that enables nested async components, and we run these in CI with Vitest.
Web development has evolved. Whereas before we would spend most of our time optimizing effects and re-renders, now we need to think about where we want a component to render and how we interface with that environment, whether through Web APIs, Node.js, or something else.
Thankfully React and Next.js hide a lot of that complexity but nonetheless it’s a huge paradigm shift. On the browser side, data is propagated through re-renders and effects, whereas on the server we use promises and suspense boundaries.
Despite the scale of this paradigm shift, we are already seeing good indications of its potential. Looking at real user metrics for our application, even before any optimization or caching work, we see a notable improvement in performance compared to the Japan Marketplace application.

Beyond performance for the end-user, our inherited architecture also created significant problems for search engine optimization, which was critical for our global growth.
Domain Strategy with Middleware
When we initially reused the Japan Marketplace service, for common features like Item Details we decided to serve the same page regardless of whether a user was visiting from Taiwan, Hong Kong, or Japan. This had its benefits in that we immediately got all the existing functionality out of the box. However, it created a few issues:
- Some page contents, such as currency, depended on where the user was visiting from (inferred from their IP address). This meant the page content was no longer deterministic for a given URL: a request made from Taiwan returned different HTML from a request made from Japan. Given bot requests typically originate from the US rather than from each target region, this made per-region SEO optimization essentially impossible.
- Similarly, testing variations of each page required building dev tooling to select the region, or configuring a VPN with exit nodes in each region, to see what real users would see. This created a big barrier to entry for dogfooding and QA.
- Feature development of the Japan Marketplace web service for Japanese users is still active. As teams added new features to shared pages, issues frequently arose when functionality was intended solely for the Japanese market.
For the new global web service we looked to avoid these issues and additionally build a better foundation for SEO and growth.
Domain name plays a big role in SEO and a number of options exist when localizing a web service:
- Country top-level domain, a.k.a. cTLD (e.g. .co.uk)
- ⭐ Global top-level domain with regional sub-domain (e.g. uk.example.com)
- Single domain with regional folders (e.g. example.com/uk)
Looking around at other e-commerce websites you’ll see all of the above in use. They each have their pros and cons but for us, global top-level domain with regional sub-domain made the most sense:
- It is a strong indicator to bots that pages under this domain are intended for users of that region.
- We already employ the strategy for the Japan Marketplace web service (https://jp.mercari.com).
- From a network management perspective it is easier than having to acquire and manage multiple cTLDs.
- Unlike regional folders, routing traffic to the user’s closest server can be done via DNS entries resulting in improved performance.
- If we want to create a more bespoke variation of the service for a specific region in the future (due to highly divergent product requirements) it is easier to migrate to a separate service.
Next.js App Router provides an elegant mechanism for structuring applications and organizing UI through nested layouts, error and loading pages, and so on. However, it is completely un-opinionated when it comes to internationalizing and localizing a service. To address this we needed to determine how to achieve a global top-level domain with regional sub-domains, given that Next.js App Router works with paths (e.g. /account/user-info/address) and has no understanding of the domain.
Middleware to the rescue
The way we achieved this was through Next.js middleware that rewrites each request to an “internal” path that includes the region. Given a request to hk.mercari.com/en, our middleware rewrites the path from /en to /hk/en, where hk is the region and en is the language to render the page in.
A simplified example:
```typescript
// middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

const REGIONS = ['hk', 'tw']

export function middleware(req: NextRequest) {
  const url = req.nextUrl

  // The sub-domain identifies the region, e.g. "hk" in hk.mercari.com
  const host = req.headers.get('host') ?? ''
  const [region] = host.split('.')
  if (!REGIONS.includes(region)) {
    throw new Error('unsupported region')
  }

  // Example: hk.mercari.com/en → /hk/en
  // pathname always starts with "/", so the first split element is empty
  const [, locale, ...rest] = url.pathname.split('/')
  // validate locale etc.

  const restPath = rest.length > 0 ? `/${rest.join('/')}` : ''
  url.pathname = `/${region}/${locale}${restPath}`
  return NextResponse.rewrite(url)
}

export const config = {
  matcher: ['/((?!_next|favicon.ico|robots.txt|sitemap.xml).*)'],
}
```
Because this middleware runs before every request, the application code itself has a very simple setup, analogous to what we would have for a single domain with regional folders.
Our folder directory relies on Next.js’s dynamic segments using the [folder] nomenclature and looks something like this:

```
app/
  [region]/
    [locale]/
      layout.tsx
      page.tsx
      // routes
      account/
        page.tsx
        layout.tsx // per-region layout if needed
```

These route parameters are then exposed by utility functions to developers and passed to the BFF to allow for easy localization of features. For example, in Hong Kong we want to display all prices in Hong Kong Dollars.
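As a sketch, such a utility might resolve the [region] route segment into region-specific configuration like this. The helper name and the region/currency values are illustrative, not the actual implementation.

```typescript
// Hypothetical per-region configuration keyed by the [region] route segment.
interface RegionConfig {
  currency: string
  defaultLocale: string
}

const REGION_CONFIGS: Record<string, RegionConfig> = {
  hk: { currency: 'HKD', defaultLocale: 'en' },
  tw: { currency: 'TWD', defaultLocale: 'zh-hant' },
}

// Resolve the region route parameter into its configuration, failing
// loudly for anything the middleware shouldn't have let through.
function getRegionConfig(region: string): RegionConfig {
  const config = REGION_CONFIGS[region]
  if (!config) throw new Error(`unsupported region: ${region}`)
  return config
}

// e.g. in a page, after reading the region from the route params:
console.log(getRegionConfig('hk').currency) // HKD
```

Centralizing this lookup means a page only needs its route parameters to render correctly localized content, whether on the server or the client.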
Note: when developing locally we can also rely on any sub-domain of localhost resolving to 127.0.0.1, meaning we have no environment-specific logic and a simple setup, e.g. tw.localhost:port/en.
Linking regions
Finally, given we now have multiple domains, we need to be careful to avoid internal competition, with pages from multiple domains being indexed and competing against each other. We do this by adding metadata to each page’s HTML that bots can parse to infer that the page has multiple localized variants. First, we set the lang attribute on the html element to the combined region and language:
<html lang="en-TW" dir="ltr">
Through this, bots understand that the content is intended for users residing in Taiwan who prefer English. When you type in a search, your search engine will typically use a combination of factors, such as your IP address and preferred browser language, to present the best possible results.
Additionally we add alternate links for all other variations of the page:
<link rel="alternate" href="https://tw.mercari.com/en" hreflang="en-TW"/>
<link rel="alternate" href="https://tw.mercari.com/zh-hant" hreflang="zh-Hant-TW"/>
<link rel="alternate" href="https://hk.mercari.com/en" hreflang="en-HK"/>
<link rel="alternate" href="https://hk.mercari.com/zh-hant" hreflang="zh-Hant-HK"/>
This further signals to the bot that each regional sub-domain is intended for users of that specific region and prevents, for example, Taiwanese pages showing up for Hong Kong users when searching etc.
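As a sketch, these alternate links could be generated from a list of supported variants. The variant list mirrors the example above; the function name is hypothetical.

```typescript
// Supported region/language variants, mirroring the hreflang example above.
const VARIANTS = [
  { region: 'tw', locale: 'en', hreflang: 'en-TW' },
  { region: 'tw', locale: 'zh-hant', hreflang: 'zh-Hant-TW' },
  { region: 'hk', locale: 'en', hreflang: 'en-HK' },
  { region: 'hk', locale: 'zh-hant', hreflang: 'zh-Hant-HK' },
]

// Build one <link rel="alternate"> tag per variant so bots can associate
// the localized versions of a page with each other.
function alternateLinks(path: string): string[] {
  return VARIANTS.map(
    ({ region, locale, hreflang }) =>
      `<link rel="alternate" href="https://${region}.mercari.com/${locale}${path}" hreflang="${hreflang}"/>`,
  )
}

console.log(alternateLinks('').join('\n'))
```

With Next.js App Router, the same information can also be expressed declaratively per page through the metadata API’s `alternates.languages` field, rather than by emitting link tags by hand.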
Moving forward
We have created a solid base, and in the coming months we will work on closing the remaining feature gap with the Japan Marketplace application to ensure optimal UX for our users, in addition to optimizing the application for improved performance ahead of rollout to additional regions next year.
If you’re interested in web technologies please check back later in this series where I will be discussing how we have used modularization to enable greater shareability of our frontend code and specifically how we developed a new library to stitch the i18n resources of these modules together for an application.
Thanks for reading, and please check back tomorrow for Ryuyama’s article!