30 Languages, Zero Translations: Why We Generate, Not Translate (Building merchi.ai, Chapter 8)
The Death of the “Translated” Description
In the e-commerce landscape of 2026, the old method of “Translate from English” is effectively dead. For decades, retailers followed a predictable, albeit flawed, workflow: write a product description in English, then run it through a translation engine or an expensive agency to produce versions for international markets. The result was often syntactically correct but culturally hollow. In an era defined by Hyper-Localization and Agentic Commerce, where AI shopping agents scan for cultural relevance as much as for technical specs, “translated” content is a liability. At merchi.ai, we realised that to help our customers scale, we had to move beyond translation entirely.
We don’t translate product descriptions. We generate them natively in 30+ languages. There is a profound difference between the two. When you translate, you are porting the structure, idioms, and cultural biases of the source language into a new context where they may not belong. When you generate natively, the AI acts as a local copywriter in Paris, Tokyo, or Berlin. It uses the specific vocabulary, styling conventions, and emotional triggers that resonate with a native consumer in that specific market. This is the only way to maintain a brand’s “soul” while crossing international borders.
As a small team, building this capability into merchi.ai required a fundamental shift in our automation logic. We had to move away from a linear “Source -> Target” pipeline to a parallel “Asset -> Multi-Native” architecture. By leveraging the multimodal capabilities of our AI engine, we can take a single ZIP file of images and produce high-converting, brand-aligned content in eight languages simultaneously. This isn’t just a speed improvement; it’s a qualitative leap in how international commerce functions.
In this chapter, we explore the technical and philosophical reasons why native generation is the only sustainable path for global merchandising in 2026. We will dive into our locale-aware prompt systems, the orchestration of parallel AI calls, and the internal discipline required to maintain the merchi.ai platform itself in eight different languages. This is the story of how we built a system that speaks to the world without ever needing a dictionary.
The Cultural Nuance of Native Generation
Why does native generation win? Consider a high-end fashion item being sold in both the UK and Japan. A British description might focus on “understated elegance” and “versatility for the weekend,” using a tone that is confident and direct. However, the Japanese market often values a higher level of technical detail regarding fabric weave, precise measurements in centimetres, and a more humble, service-oriented tone of voice. If you simply translate the British copy, you miss these subtle but vital cultural expectations.
By using native generation, the merchi.ai engine applies specific Writing Knowledge rules for the Japanese locale. It understands that the “Styling Advice” block should be framed differently for a consumer in Tokyo than for one in London. This cultural adaptation is baked into the very atoms of the generation process. We aren’t just changing the words; we are changing the perspective. This ensures that the product feels like a “local” item in every market, significantly increasing conversion rates and reducing the bounce rate associated with “clunky” translations.
This approach also handles the linguistic complexity of technical attributes. In industries like electronics or industrial hardware, the way a “hexagonal shank” or a “lithium-ion battery” is described involves specific industry jargon that often doesn’t have a 1:1 translation. A native AI generator, informed by local technical taxonomies, uses the exact terminology a professional tradesman in Germany or Italy would expect. This precision is essential for searchability; if you use the wrong technical term in a translated description, your product becomes invisible to local search engines and shopping agents.
Furthermore, native generation allows for “market-specific highlights.” Perhaps a product is marketed as “eco-friendly” in the Nordic markets but focuses on “durability and cost-saving” in North America. Our schema-driven system allows retailers to define these regional priorities. Instead of a one-size-fits-all description that is translated and distributed, the AI generates eight distinct pieces of content that each highlight the most relevant value proposition for that specific audience. This is the level of sophistication required to compete in a globalised 2026 economy.
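As a sketch of how such regional priorities might be represented, consider the shape below. The field names, locales, and values are illustrative assumptions for this chapter, not merchi.ai's actual schema:

```typescript
// Hypothetical per-market value-proposition config; a retailer declares
// which angle each locale's generation should foreground.
type MarketHighlight = {
  locale: string;
  emphasis: string[]; // value propositions to foreground in the copy
  tone: "direct" | "service-oriented" | "technical";
};

const highlightConfig: MarketHighlight[] = [
  { locale: "sv", emphasis: ["eco-friendly", "sustainable materials"], tone: "direct" },
  { locale: "en-US", emphasis: ["durability", "cost-saving"], tone: "direct" },
  { locale: "ja", emphasis: ["fabric detail", "precise measurements"], tone: "service-oriented" },
];

// Resolve the emphasis list for a locale, with a neutral fallback.
function emphasisFor(locale: string): string[] {
  const match = highlightConfig.find((h) => h.locale === locale);
  return match ? match.emphasis : ["quality"];
}
```

At generation time, the resolved `emphasis` list would be injected into the locale's prompt alongside the brand's Writing Knowledge, so each market's description argues from its own value proposition rather than a translated one.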
Orchestrating the 30x Multiplier: Technical Logic
From a technical perspective, generating content in many languages simultaneously is an exercise in high-scale orchestration. When a user initiates a “Run” for 10,000 SKUs in eight languages, we aren’t just making 10,000 AI calls; we are making 80,000, and at full 30-plus-language coverage that figure climbs past 300,000. This is where our integration with Trigger.dev and OpenRouter becomes critical. We have designed our worker architecture to absorb this multiplier, up to 30x at full language coverage, without bottlenecks, ensuring that the entire catalogue is enriched in a matter of minutes.
Each language generation is treated as a distinct, atomic task in our pipeline. When the parent “Run” starts, it fans out parallel tasks for each enabled locale: en, de, es, fr, it, ja, zh, and zh-TW. Each of these tasks receives a locale-specific context. Our Prompt Assembly engine (discussed in Chapter 4) injects the target language, cultural instructions, and the brand’s specific localized Writing Knowledge. Because these are separate AI calls, a failure in the Japanese generation (due to a transient API error) doesn’t stop the French or German versions from completing.
/**
 * merchi.ai Multi-Locale Orchestration
 * Trigger.dev fans out parallel child tasks per language
 */
import { task } from "@trigger.dev/sdk/v3";

export const processMultilingualSKU = task({
  id: "process-multilingual-sku",
  run: async (payload: { skuId: string; locales: string[] }) => {
    // Fan out one generation per enabled locale; batchTriggerAndWait
    // runs the child tasks in parallel and waits for all of them,
    // so a transient failure in one locale doesn't block the others.
    const batch = await generateNativeContent.batchTriggerAndWait(
      payload.locales.map((locale) => ({
        payload: {
          skuId: payload.skuId,
          locale,
          // Cultural context is injected at the worker level
          context: getLocaleContext(locale),
        },
      }))
    );
    // Consolidate all language variants; per-run status tells us
    // which locales (if any) need a retry.
    return consolidateResults(batch.runs);
  },
});
This parallel architecture also allows us to use different models for different languages. As we noted in our OpenRouter deep-dive (Chapter 3), some models have a higher degree of fluency and cultural nuance in Japanese or Chinese than others. Our Model Registry allows us to route the ja task to a model that excels in Kanji and polite Keigo forms, while routing the fr task to a model known for its sophisticated literary tone. This “best-in-class” routing is impossible in a traditional translation workflow.
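A locale-to-model routing table of this kind might look like the sketch below. The registry shape and the model identifiers are placeholders, not our actual Model Registry or real OpenRouter model IDs:

```typescript
// Illustrative per-locale model routing: each locale maps to the model
// that performs best for that language, with a general-purpose fallback.
const modelRegistry: Record<string, string> = {
  ja: "provider-a/model-strong-in-japanese", // kanji fluency, polite keigo
  fr: "provider-b/model-literary-french",    // sophisticated literary tone
  default: "provider-c/general-model",
};

// Pick the model for a generation task, falling back to the default.
function modelForLocale(locale: string): string {
  return modelRegistry[locale] ?? modelRegistry["default"];
}
```

Because each locale's task carries its own model choice, swapping the Japanese model for a better one is a one-line registry change rather than a pipeline rewrite.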
Finally, our Review UI is built to handle this multi-native reality. Instead of showing a list of translations, we provide a “Language Switcher” that allows merchandisers to jump between native versions of the product. They can see the Japanese description alongside the German one, with the UI highlighting how the “Styling Advice” or “Technical Specs” have adapted to the local requirements. This transparency gives the user total control over their global brand presence, ensuring that nothing is lost in “generation.”
The i18n Challenge: Maintaining the merchi.ai Platform
It wasn’t enough for our AI output to be multilingual; the merchi.ai platform itself had to be a native experience for our global users. Building a SaaS tool for an international audience in 2026 means more than just a “Language” dropdown. It requires a disciplined commitment to internationalisation (i18n) across the entire stack. We currently support eight languages for the UI: English, German, Spanish, French, Italian, Japanese, and both Simplified and Traditional Chinese.
We use i18next as our primary framework, which means every single string in our Next.js frontend—from the “Book a Demo” buttons to the complex error messages in our Trigger.dev logs—lives in eight separate JSON locale files. For a small team, the discipline required to maintain these files is immense. Every time we build a new feature, such as our Web Scraping tool or our Schema Builder, we must ensure that every label, tooltip, and placeholder is translated into all eight languages before the code is merged.
This “i18n-first” development cycle is a hard constraint. If we add a new “Processing Status” to our database, that status must have a corresponding key in every locale file. We use strict TypeScript keys to ensure that we never accidentally reference a missing translation. If a developer (which is usually just one of us) tries to use a string that doesn’t exist in the German or Japanese locale file, the build will fail. This prevents the “half-translated” UI experience that plagues many rapidly growing SaaS platforms.
// locales/ja/common.json
{
  "dashboard": {
    "welcome": "merchi.aiへようこそ",
    "status": {
      "processing": "処理中...",
      "completed": "完了しました",
      "failed": "エラーが発生しました"
    }
  }
}
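A build-time guard like the one described above can be sketched as a small check that compares each locale file's keys against the base locale. This is a minimal illustration, not our actual CI script; the helper names are made up:

```typescript
// Locale files are nested JSON trees of strings.
type LocaleTree = { [key: string]: string | LocaleTree };

// Flatten a tree into dot-separated keys like "dashboard.status.processing".
function flattenKeys(tree: LocaleTree, prefix = ""): string[] {
  return Object.entries(tree).flatMap(([key, value]) => {
    const path = prefix ? `${prefix}.${key}` : key;
    return typeof value === "string" ? [path] : flattenKeys(value, path);
  });
}

// Report keys present in the base locale but missing from a target locale.
// A non-empty result would fail the build.
function missingKeys(base: LocaleTree, target: LocaleTree): string[] {
  const have = new Set(flattenKeys(target));
  return flattenKeys(base).filter((key) => !have.has(key));
}
```

Running a check like this over all eight locale files on every merge is what keeps a "half-translated" UI from ever shipping.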
Maintaining this at scale is a significant logistical effort. We don’t personally speak all eight languages fluently, so we rely on a combination of professional review and “Evaluator LLMs” to check the quality of our interface translations. We treat our own UI strings with the same rigour as our customers’ product data. If a Japanese user finds a grammatical error in our dashboard, it undermines our credibility as an AI merchandising expert. Therefore, every tooltip and error message is vetted to ensure it meets the professional standards of each market.
The Economics and Quality of Global Scale
The “30x Multiplier” isn’t just a technical challenge; it’s a financial one. In the AI economy of 2026, every token has a cost. Generating native content in eight languages means our token consumption is eight times higher than a single-language platform. As a small team, we have had to be incredibly thoughtful about the Unit Economics of this approach. We don’t just “throw tokens” at the problem; we use our Writing Knowledge system to ensure each generation is efficient and concise.
To balance quality and cost, we use a tiered generation strategy. For high-value “Hero” products, we might use the most expensive, high-reasoning models to ensure the native copy is flawless. For thousands of “long-tail” SKUs, we might use faster, more cost-effective models that still maintain high linguistic accuracy but at a fraction of the cost. Our Usage Tracking system (Chapter 6) allows us to monitor these costs in real-time, ensuring that we can offer competitive pricing to our customers while maintaining healthy margins.
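The tiering decision itself can be as simple as a lookup. In this sketch the tier names, model identifiers, and token budgets are illustrative assumptions, not our actual pricing configuration:

```typescript
// Each SKU is classified into a tier that determines which model
// (and how large an output budget) its generations receive.
type Tier = "hero" | "long-tail";

interface SKU {
  id: string;
  tier: Tier;
}

const tierModels: Record<Tier, { model: string; maxOutputTokens: number }> = {
  // Hypothetical model names: high-reasoning for hero products,
  // fast and cheap for the long tail.
  hero: { model: "high-reasoning-model", maxOutputTokens: 2048 },
  "long-tail": { model: "cost-effective-model", maxOutputTokens: 768 },
};

// Resolve the generation settings for a SKU before dispatching its tasks.
function generationConfig(sku: SKU) {
  return tierModels[sku.tier];
}
```

Because the tier is resolved per SKU before fan-out, a single Run can mix flagship-quality hero copy with economical long-tail copy without any change to the orchestration layer.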
Quality control in eight languages is the final piece of the puzzle. How do we know the Japanese generation is actually good? We use a “Cross-Check” pattern where a second, independent LLM (often from a different provider) acts as a native-speaking editor. This evaluator model reviews the generated copy against the original brand tone and product attributes, providing a “Quality Score.” If the score falls below a certain threshold, the system automatically triggers a re-generation. This automated “sanity check” is what allows us to process 125 years of human labour in a day without sacrificing the native quality of the content.
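The cross-check loop described above can be sketched as follows. The threshold, retry count, and the shape of the evaluator's response are illustrative assumptions; in practice both `generate` and `evaluate` would be calls to independent LLM providers:

```typescript
// An evaluator LLM returns a quality score for a piece of generated copy.
interface Evaluation {
  score: number; // e.g. 0-100
}

// Generate copy, score it with an independent evaluator, and re-generate
// while the score sits below the threshold. Falls back to the best
// attempt so a human reviewer always has something to work with.
async function generateWithCrossCheck(
  generate: () => Promise<string>,
  evaluate: (copy: string) => Promise<Evaluation>,
  threshold = 80,
  maxAttempts = 3
): Promise<string> {
  let best = "";
  let bestScore = -1;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const copy = await generate();
    const { score } = await evaluate(copy);
    if (score >= threshold) return copy; // passes the sanity check
    if (score > bestScore) {
      best = copy;
      bestScore = score;
    }
  }
  return best; // below threshold after all attempts; flag for review upstream
}
```

Keeping the evaluator on a different provider than the generator is the important design choice here: a model is a poor judge of its own fluency, and an independent "native-speaking editor" catches the failure modes the generator cannot see.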
This commitment to quality is what separates merchi.ai from the sea of “AI translation” wrappers. We aren’t just giving our customers a way to translate text; we are giving them a way to launch a native presence in eight global markets simultaneously. By choosing generation over translation, we have built a platform that respects the complexity of human language and the necessity of cultural context. We have turned internationalisation from a bottleneck into a growth engine.
Conclusion: The World is Your Market
The decision to generate rather than translate was a leap of faith for merchi.ai, but it has become the cornerstone of our global strategy. By moving away from the “English-centric” model of e-commerce, we have empowered retailers to speak to their customers as locals, no matter where they are in the world. Our parallel AI orchestration, locale-aware prompt systems, and disciplined platform i18n ensure that merchi.ai is ready for the truly globalised retail environment of 2026.
We have now explored the infrastructure, the intelligence, the scale, and the language of merchi.ai. In the next chapter, we move into the most exciting phase of any building-in-public journey: The Launch. We will discuss the transition from “It Works on My Machine” to having real customers paying real money, and the challenges of scaling a team of two while maintaining the breakneck speed of our development cycle.
Ready to launch your brand natively in 8 languages today? Book a Demo or Start Automating for FREE with merchi.ai.
