Language is sometimes treated like mathematics: the result is either correct or incorrect.
If we look only at grammar, spelling, and sentence structure, it’s easy to see where this idea comes from. One word plus another word equals right or wrong. End of story.
But anyone who works with writing, translation, or localisation knows that language is not that simple.
If it were, the fastest and cheapest option would be to feed everything into an AI engine and let it run the show. Yet most organisations don’t want that, at least not all the time. What they are looking for is not just technical correctness, but something harder to define.
AI can deliver a technically correct translation, and it can still be rejected by stakeholders.
Why?
Because there is rarely just one “correct” way to express something.
Word choice, tone, cultural nuance, audience expectations, and brand voice all influence how a translation is perceived. Even a grammatically perfect sentence can still come across as wrong: something about the feeling of the translation just isn't right.
In other words, language rarely produces a single definitive outcome.
Why language has more than one “correct” answer
Even if we remove cultural references, metaphors, and idioms from the equation, translation still allows for multiple valid outcomes.
AI and machine translation work precisely because language can be analysed, to some degree, through patterns and probabilities. But algorithms still face the same fundamental reality as human translators: in many cases, there are several technically correct options.
Choosing between them is a matter of opinion.
When translating a text, translators must constantly decide which elements to prioritise. Should the focus be literal meaning? Tone? Readability? Brand voice?
No matter which path the translator or post-editor chooses, they will have to sacrifice something.
Most importantly, does that choice match what the receiver would have expected or chosen?
In that sense, translation is less like solving an equation where 2 + 2 = 4, and more like choosing between several good options, each of which works slightly differently depending on the situation.
How do you measure quality when personal opinion is a major factor?
That’s exactly the challenge Senior Quality Excellence Specialist Victoria Samuelsson works with every day.
She helps global organisations turn subjective language quality into something structured, measurable, and actionable. In her work with LanguageWire customers, she focuses on one key question: how to define and improve translation quality.
Her answer is a four-step approach.
1. What does “good quality” mean to you?
Before measuring anything, you first need to define what “good” looks like. At LanguageWire, the Quality Excellence Team and the Account Team help you arrive at this definition – whatever it may look like for you.
“Two cars from different brands may both be excellent cars,” Victoria says. “But depending on who you ask, one brand will often be considered superior. That doesn’t mean the other car is bad – it just means people value different things.”
The same principle applies to translation. If you want to measure quality, you first need to define what matters most to your stakeholders.
In other words: what does your version of a “good car” look like? Only then can quality and impact be measured in a meaningful way.
2. Put quality into practice
Once your goals are clear, you need to make them tangible. Linguistic assets are the tools to make this happen:
Termbases define approved terminology
Style guides describe tone, voice, target audiences, and writing conventions
Translation memories store approved translations for future reuse
AI Terminology applies your termbase to improve machine translation output
These assets help create consistency across content and teams, so translators and localisation experts can make decisions that align with expectations.
What does this look like in real life?
Victoria shares an example:
“If you have a one-liner or catchphrase that you want translated in a specific way, or not translated at all, you can add this to the translation memory to be reused in future translation projects.”
Think of Volkswagen's global slogan “Das Auto”. It stays the same in every language.
Formalising these decisions builds trust that the quality you have defined is reproduced across every project.
3. Choose workflows that capture nuance
Once the foundation is in place, you can start handling nuance and shifting style preferences.
In-country review (validation) is perhaps the most important step. It allows you to review the translation against the source text, using the termbase and translation memory for context.
If the reviewer has input on terminology, style, or contextual nuance, they can add their changes and preferences directly. This helps localisation experts become even more in tune with the customer’s preferences.
As Victoria puts it: “Rather than trying to reduce language to a simple right-or-wrong score, this workflow allows organisations to evaluate translation performance in context.”
4. Build feedback loops for continuous improvement
Next, Victoria recommends keeping a structured record of feedback. By collecting and analysing feedback over time, patterns begin to emerge.
At LanguageWire, this data is shared with customers to help identify recurring issues, understand root causes, and prioritise improvements. Over time, this makes it possible to move closer to the quality targets that organisations have defined.
LanguageWire has a defined process for feedback, so you do not have to invent one yourself. It is based on human assessment of quality feedback. This is what it looks like:
1: Assess the feedback
Assess the feedback and determine whether it points to a root cause. This allows you to take the right action to prevent the issue from recurring.
2: Establish a root cause
Feedback is the alert that points you to the problem. The root cause analysis (RCA) uncovers the conditions that produced the problem. In other words: feedback tells you what went wrong, while root cause analysis tells you why. It is, in essence, a cause-and-effect analysis.
3: Build an action plan
A common framework used in quality management is CAPA, which stands for Corrective and Preventive Action. CAPA is a combined action plan to correct current issues based on root causes and prevent them from recurring. The plan is essentially a response to feedback and includes corrective and/or preventive action. Corrective actions address issues that have already occurred. Preventive actions focus on identifying and eliminating the underlying causes before they create repeat problems. In localisation, this might mean updating a style guide after recurring tone issues, improving terminology management to avoid inconsistent wording, or adjusting workflows to reduce review bottlenecks.
4: Agree on insights and solution
There is no standard CAPA solution that fits everyone. What works for you might not work for the next organisation, or even the next project. CAPA is always discussed and agreed between you and LanguageWire on a case-by-case basis to ensure alignment of expectations. As we review the list of CAPA together, your input is key. If something doesn’t align with your organisational processes or needs, we can adjust it. LanguageWire’s goal is to make sure the actions we take are effective and meaningful for you, not just in resolving this issue, but in preventing it from happening again.
5: Implement CAPA solution
All feedback is moved through this methodology. Every piece of feedback you provide is used to identify corrective or preventive actions. Some of these actions may require your input. When that's the case, we involve you directly to make sure expectations are clearly aligned and that the actions we take truly support the way you want to work. For example, your input may be needed to improve terminology so it reflects how your organisation prefers to communicate. If your input is not needed, LanguageWire will implement the necessary CAPA and inform you once everything is completed. You will always know what we did and why.
Continuous improvement is an ongoing partnership, not a one‑time action.
Your CAPA action plan should be tested and validated in real projects. For every new project, record the feedback to make sure the solution truly works and adds value.
The reason this method is so strong is that it allows processes to evolve over time. This in turn reduces risk and strengthens collaboration.
No news is... no news: Quality means choosing your preferred option
Ultimately, quality means choosing the option closest to the one you would have chosen yourself. That is what makes AI translation feel right.
If this can be defined, even at the most basic level, then we have found a pathway to great translations.
Victoria’s advice is clear: Let your provider know when you see those “right” options. This feedback is what paves the road to the desired outcome.
"The saying may be 'no news is good news'. But we believe that 'no news is no news'.”
Reaching the right level of quality means defining and communicating your preferences directly, and then refining them through tools, feedback, and processes.
If you are ready to bring structure to your localisation setup, we're here to support you. We can help you define what “good” looks like, put the right technology in place, and build workflows that continuously improve quality at scale.
Let's talk it through

This article is the April 2026 edition of Lost & found in translation, a monthly newsletter sharing insights, opinions, and reflections by real people who work in localisation. Subscribe to get notified when the next edition comes out.