Choosing a Tech Stack When You’re a Team of Two: Building merchi.ai Chapter 2


    Merchi Team

    The Social Contract of a Two-Person Stack

    In the first chapter of this series, we talked about the “invisible friction” of manual merchandising: the unsexy, grinding labor that powers modern e-commerce. But once you’ve identified a problem that requires processing a million products per day, you’re no longer just “building an app.” You’re building a factory. When we sat down to define the merchi.ai tech stack in early 2025, we had to acknowledge a fundamental truth: we aren’t a solo act. We are a team of two.

    Being a team of two is a unique constraint. You have twice the brainpower of a solo founder, but you also have twice the potential for communication overhead. In 2025, the competitive advantage isn’t just having a great AI model; it’s having a stack that acts as a “source of truth” between two engineers. We didn’t just need tools that worked; we needed a stack that allowed us to “divide and conquer” without stepping on each other’s toes or spending six hours a day in sync meetings.

    The theme for this chapter is Reliability in the Wild. In the current “Agentic Era” of 2025, where AI agents are increasingly responsible for the structured data that powers global commerce, “moving fast and breaking things” is a liability. If our system fails to generate accurate attributes for a technical hardware SKU, an autonomous shopping agent might ignore it entirely, resulting in real lost revenue for our customers. Our stack had to be a fortress of type-safety, observability, and asynchronous resilience.

    The Spine: TypeScript Strict and the Next.js Contract

    Every line of code in merchi.ai is written in TypeScript strict mode. For a team of two, this isn’t just a technical preference; it’s a social contract. When one of us refactors the core “Writing Knowledge” configuration (the logic that defines a brand’s unique voice and taxonomy), the compiler acts as a real-time auditor for the other person. It ensures that a change in the backend data structure doesn’t silently break a front-end component three layers deep.
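    To make that “social contract” concrete, here is a minimal sketch of the pattern. The type and field names below are illustrative, not merchi.ai’s actual schema; the point is that under `"strict": true`, adding a new variant to a union breaks every unhandled `switch` at compile time rather than in production.

```typescript
// Hypothetical shape of a brand's "Writing Knowledge" configuration.
type Tone = "technical" | "luxury" | "playful";

interface WritingKnowledge {
  brandName: string;
  tone: Tone;
  bannedTerms: string[];
}

// Strict mode forces every Tone to be handled. If one of us adds a
// fourth tone to the union, this function fails to compile until the
// other person's code accounts for it.
function toneInstruction(tone: Tone): string {
  switch (tone) {
    case "technical":
      return "Prefer precise specifications over adjectives.";
    case "luxury":
      return "Favour evocative, sensory language.";
    case "playful":
      return "Keep sentences short and informal.";
    default: {
      // Exhaustiveness check: unreachable unless a Tone case is missing.
      const unreachable: never = tone;
      throw new Error(`Unhandled tone: ${unreachable}`);
    }
  }
}

const config: WritingKnowledge = {
  brandName: "Acme Tools",
  tone: "technical",
  bannedTerms: ["world-class", "best-in-class"],
};

console.log(toneInstruction(config.tone));
```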

    We chose Next.js 14+ with the App Router for its aggressive move toward server-side logic. In 2025, the trend of “Server-First AI” means we want as much of our LLM orchestration happening as close to the data as possible. By using Server Components, we keep the client bundle light, ensuring our dashboard remains snappy even when a user is managing a catalog of 10,000 items. We rely on React Query to manage server state, providing the “optimistic updates” that make a complex data-heavy application feel like a local tool.
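    The “optimistic update” pattern mentioned above is what React Query automates for us. Stripped of the framework, the idea fits in a few lines: apply the change to the local cache immediately, then roll back to a snapshot if the server rejects it. This is a framework-agnostic sketch with invented names, not our production code.

```typescript
interface Product {
  id: string;
  title: string;
}

// Rename a product optimistically: the UI sees the new title instantly,
// and only reverts if persistence fails. Returns true on success.
async function optimisticRename(
  cache: Map<string, Product>,
  id: string,
  newTitle: string,
  persist: (p: Product) => Promise<void>,
): Promise<boolean> {
  const previous = cache.get(id);
  if (!previous) return false;

  // 1. Update the cache before the network round-trip ("optimistic").
  cache.set(id, { ...previous, title: newTitle });
  try {
    await persist(cache.get(id)!);
    return true;
  } catch {
    // 2. Roll back to the snapshot on failure.
    cache.set(id, previous);
    return false;
  }
}
```

    React Query adds the hard parts on top of this (cache invalidation, refetching, concurrent mutations), which is exactly why we didn’t build it ourselves.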

    The combination of Tailwind CSS and shadcn/ui was a strategic choice to eliminate “design debt.” As two engineers, we don’t have time to bicker over CSS variables. Shadcn/ui gives us a high-fidelity, professional design system that we can extend without losing consistency. This allows us to focus our “innovation tokens” on the complex parts of the platform, like how we ingest and sanitise data from ZIP files or web scraping, rather than reinventing the button.

    The Backend Fortress: Supabase and Row Level Security

    When handling multi-tenant SaaS data, the nightmare scenario is a data leak between customers. For merchi.ai, we bypassed the need to build a custom, complex auth and permissions layer by choosing Supabase. It provides us with a managed PostgreSQL database, an integrated Auth system, and a robust Storage solution for the thousands of product images our users upload daily.

    The “killer feature” for our two-person team is Row Level Security (RLS). By defining security policies directly at the database level, we ensure that the “merchandising logic” is protected regardless of how it’s accessed. Whether it’s a request coming from our main web app or a background job triggered via our API, the database itself enforces that a user only sees their own “Customer Data” and “Output Data”. This centralised security model drastically reduces the cognitive load on both of us during development.
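    For readers unfamiliar with RLS, a policy looks something like the following. The table and column names are hypothetical, not merchi.ai’s actual schema; `auth.uid()` is Supabase’s helper for the authenticated user’s ID.

```sql
-- Illustrative Supabase RLS policy; schema names are invented.
alter table products enable row level security;

create policy "Users can read their own products"
  on products for select
  using (owner_id = auth.uid());
```

    Once a policy like this is in place, every query path, whether from the web app, the API workspace, or a background job, is filtered by the database itself.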

    However, we chose a dual-workspace architecture, splitting our high-traffic web-app from our heavy-duty API. While Supabase Edge Functions (built on Deno) are excellent for low-latency tasks, they sometimes feel restrictive compared to a full Node.js environment. By maintaining a separate API workspace, we can utilise specialised libraries for image processing and technical attribute extraction that Deno doesn’t yet support natively. This separation of concerns allows one of us to optimise the user experience while the other scales the processing pipeline.

    Orchestrating the Async Chaos with Trigger.dev

    The core value proposition of merchi.ai is high-volume automation: a human would take 125 years to do what our system can do in a day. But processing thousands of product images asynchronously is a recipe for chaos if you don’t have proper orchestration. This is where Trigger.dev comes in.

    In 2025, general-purpose serverless functions (like Vercel’s) are often too ephemeral for complex AI workflows. Generating a multilingual SEO description for a complex technical item might take 30 seconds of model “thinking” time, which risks timing out on traditional serverless platforms. Trigger.dev allows us to define “long-running” jobs that can survive these timeouts. It handles retries, provides deep observability, and manages the flow of data between our “Writing Knowledge” rules and the final output.

    When a user uploads a large CSV of 5,000 SKUs, Trigger.dev breaks that into thousands of atomic tasks. If a specific model call via OpenRouter fails due to a rate limit or a transient network error, the system doesn’t crash. It simply retries that specific task with exponential backoff. This “Reliability in the Wild” is what allows a two-person team to sleep at night while the system processes massive batches of data for global retailers.
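    The two ideas above, splitting a batch into atomic tasks and retrying each one with exponential backoff, can be sketched in a few self-contained lines. Trigger.dev provides both out of the box; this is just the shape of the behaviour, with invented helper names.

```typescript
// Split a batch of SKUs into fixed-size chunks, each processed as an
// atomic task that can fail and retry independently.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Retry a flaky async operation with exponential backoff:
// delays of base, 2*base, 4*base, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

    In production the orchestrator also persists task state, so a retry can survive a process restart, which is something this in-memory sketch deliberately omits.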

    The Intelligence Layer: OpenRouter and AI Gateway Logic

    We don’t believe in “Model Lock-in.” In the volatile AI landscape of 2025, the leading model today might be obsolete by Tuesday. To remain agile, we use OpenRouter as our unified AI gateway. This provides us with a single API to access models from OpenAI, Anthropic, Google, and open-source providers, allowing us to swap models based on cost, speed, or accuracy without rewriting our core generation logic.
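    The practical upshot of a unified gateway is that swapping providers means changing one string. OpenRouter exposes an OpenAI-compatible chat completions endpoint, so the request shape stays fixed while the `model` field varies. The model ID below is an example; check OpenRouter’s catalog for current identifiers.

```typescript
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Build a request for OpenRouter's OpenAI-compatible endpoint.
// Swapping models only ever touches the `model` string.
function buildCompletionRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    body: { model, messages },
  };
}

const req = buildCompletionRequest("openai/gpt-4o", [
  { role: "system", content: "You are a product copywriter." },
  { role: "user", content: "Describe SKU 123: a titanium hex key set." },
]);
// Sending it is a plain POST with an Authorization header (key elided).
```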

    Our proprietary Writing Knowledge configuration acts as the middleware between the raw model and the user. It takes the raw output, sanitises it against the brand’s specific taxonomy, and ensures it follows the user-defined tone. Because we use a unified gateway, we can run “A/B tests” on different models to see which one performs best for specific categories, like technical hardware versus luxury fashion, ensuring the highest quality for the end customer.

    This modularity is key for our Multi-language generation feature. Different models have different strengths in specific languages. By using a gateway, we can route a Japanese translation to the model most culturally nuanced in that region while using a different model for technical English descriptions. It’s this level of granular control that separates a “wrapper” from a professional merchandising engine.
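    Per-language routing reduces to a lookup table in front of the gateway. The mapping below is entirely hypothetical; in practice it would be driven by the A/B test results described earlier.

```typescript
// Hypothetical routing table: language code -> gateway model ID.
const MODEL_BY_LANGUAGE: Record<string, string> = {
  ja: "example/model-strong-in-japanese",
  de: "example/model-strong-in-german",
};

const DEFAULT_MODEL = "example/general-purpose-model";

// Pick the model for a generation request; fall back to the default
// for languages without a dedicated entry.
function modelForLanguage(lang: string): string {
  return MODEL_BY_LANGUAGE[lang] ?? DEFAULT_MODEL;
}
```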

    Trade-offs, Regrets, and the Path Forward

    No stack is perfect, and we’ve had our share of friction. The decision to use Supabase Edge Functions meant learning the nuances of Deno’s permission model, which felt like a “speed bump” early on. Additionally, the dual-workspace architecture (web-app + api) adds complexity to our CI/CD pipeline on Vercel. Every time we push code, we are managing two different environments that need to stay in perfect sync.

    However, the “Boring Technology” principle has served us well. By using PostgreSQL, TypeScript, and Next.js, we’ve built on foundations that are stable enough to support our rapid innovation in the AI space. We didn’t waste time building a custom queue system or a bespoke auth provider. We used high-leverage tools that allowed us to focus on the actual problem: the 10,000+ SKUs that need high-quality data to survive the 2026 e-commerce landscape.

    Looking back, we wouldn’t change the “Strict” nature of our stack. It has saved us from dozens of potential production outages. As we move into Chapter 3, we’ll dive into how this stack supports our most ambitious feature yet: the “Brain” of merchi.ai, where we turn raw image pixels into structured, brand-aligned marketing gold.

    Ready to stop managing spreadsheets and start managing scale? Book a Demo or Start Automating with merchi.ai.