When LLMs try to be helpful
LLMs are great at abstraction. That’s exactly what makes them frustrating in technical workflows.
I’m currently rolling out multiple international language versions of a website for an affiliate project. It’s moving fast and the results are genuinely impressive.
One thing I keep running into when working with LLMs in this context is their tendency to simplify. Even when you give them a structured file, like a JSON file that needs to be translated, they often return something that is cleaner, more readable, and easier to digest. From their perspective, that's helpful.
From a technical perspective, it’s often disastrous.
They strip away structure. They remove fields they deem unimportant. They optimize for meaning, not for integrity. And suddenly the file that comes back can’t actually be used to render the page it was meant for.
The fix is simple, but not intuitive: you have to be extremely explicit. You need to tell the model to return the full, raw file, unchanged in structure, and only translate specific elements. Without that clarity, you end up in back-and-forth loops that burn time and patience.
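As a sketch of what "extremely explicit" can look like in practice (the wording and the key names here are illustrative, not a fixed recipe):

```python
import json

def build_translation_prompt(payload: dict, target_lang: str,
                             translatable_keys: list[str]) -> str:
    """Build an explicit, structure-preserving translation instruction.

    The rule list is illustrative; the point is to leave the model
    no room to "helpfully" reshape the file.
    """
    return (
        f"Translate the JSON below into {target_lang}.\n"
        "Rules:\n"
        "1. Return the full, raw JSON, identical in structure.\n"
        "2. Keep every key, every level of nesting, and the key order.\n"
        f"3. Translate ONLY the string values of these keys: "
        f"{', '.join(translatable_keys)}.\n"
        "4. Do not add commentary, markdown fences, or extra fields.\n\n"
        + json.dumps(payload, ensure_ascii=False, indent=2)
    )
```

Spelling out what must *not* change is as important as saying what should: the default failure mode is silent restructuring, not mistranslation.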
This is where the limits of chat-based interfaces show. LLMs are powerful, but chatting your way through technical workflows is often the least efficient path. At some point, it’s better to formalize the prompt, move it into an automated flow, and run it via APIs or scripts instead of conversational UI.
Ironically, I still start many of these workflows in chat. It’s useful for a first pass. But the experience is often underwhelming enough that by the time I get what I need, the motivation to fully automate it is gone.
That’s the trap.
The public-facing tools are optimized for accessibility, not for professional execution. Learning to work with LLMs properly also means learning when not to rely on the apps — and when to move into more structured, technical setups.
Once that’s in place, the leverage is enormous. Today, we can localize hundreds of pages across a dozen languages in a single day and ship something that’s good enough to rank and iterate on almost immediately.
That simply wasn’t possible a year (or two) ago.
This is mostly a note to self: don’t give up at the frustrating part. The payoff is usually just on the other side of it.
Frequently Asked Questions
1. Why do LLMs tend to "simplify" technical files like JSON during translation? Most LLMs are fine-tuned for conversational helpfulness. Their default setting is to be concise and readable for humans. When they see a complex data structure, they often interpret the technical metadata as "noise" and strip it away to deliver what they think is a "cleaner" answer. In a professional workflow, this "helpfulness" becomes a technical error.
2. How do you prevent a model from altering the structure of a file? The key is shifting from a "request" to a "specification." You must be explicit: "Return the exact JSON structure provided, maintaining all keys and nesting. Translate only the values associated with [specific keys]. Do not add commentary or formatting markdown." Providing a "One-Shot" example (showing one correctly translated object) also drastically improves reliability.
3. When should a business move from a Chat UI to an API-based workflow? The moment a task becomes repetitive or requires high structural integrity, the Chat UI becomes a liability. If you find yourself correcting the model’s formatting more than once, it’s time to move the prompt into a script or an automated flow via API. This removes the "conversational" variability and ensures a predictable output.
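If you do send a whole file through a model, the script wrapping the API call can at least refuse malformed output instead of relying on you to spot it. One simple check (an assumption about your pipeline, not the only one worth running) is to compare the set of key paths before and after:

```python
def key_paths(node, prefix=""):
    """Collect every key path in a JSON-like structure, so a response
    with stripped or added fields can be rejected automatically."""
    paths = set()
    if isinstance(node, dict):
        for k, v in node.items():
            path = f"{prefix}.{k}" if prefix else k
            paths.add(path)
            paths |= key_paths(v, path)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            paths |= key_paths(item, f"{prefix}[{i}]")
    return paths

def same_structure(original, returned) -> bool:
    """True only if no key was dropped, renamed, or invented."""
    return key_paths(original) == key_paths(returned)
```

Gating the pipeline on a check like this turns the model's occasional "cleanup" from a silent page-breaking bug into a retryable error.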
4. What are the risks of "good enough" localized content for affiliate projects? The risk is never zero, but the speed-to-market often outweighs the initial perfection. By localizing quickly, you can gather real-world data on which languages or regions are gaining traction. Once you see ranking signals, you can go back and perform human-in-the-loop (HITL) reviews on the high-performing pages, rather than over-investing in a dozen languages before testing the market.
5. How has this changed your approach as a Fractional Marketing Executive? It has fundamentally shifted the timeline of what is possible. Projects that used to take six months and a massive localization budget can now be prototyped in a weekend. My role is no longer just about managing the creative; it’s about building the "leverage" systems that allow a small team to produce the output of a global agency.