Mon 12 May 2025

Software Localisation

American websites format the date as MM/DD/YYYY and this can be confusing for Europeans. If I see the date 05/03/2025, I can't be sure if we're dealing with March or May.

Localisation extends further than just the format of dates. There are many things that require localisation; the most obvious is language. If your site does not support the dominant language within a geography, you're creating a language barrier between you and your customers. In Typeform's case, their customer's customers.

Translating and localising your software opens your business to new markets where relying on English won't cut it. Providing your system in a locale that's familiar to the user allows your system to feel natural and trustworthy. Luckily for us, the internet has been around for more than 40 years, so this is sort of an old problem, and a good deal of effort has gone into enabling multilingual support.

ID or string?

When translating software, the first thing you'll need to determine is how to identify text that requires translation.

There are two ways you can do this:

  1. Mapping a key to the text. This key will be used to lookup the correct message given the user's preferred language. Something like this:

    message_key: "MISSING_NAME_TEXT"

  2. Alternatively, provide the text as is and use it as the message key:

    message: "You're missing your first name"

Systems have been written using both styles, so there's no consensus on which one you should pick (I wish you luck in driving consensus in your own place of work). Here are some things to consider.

Subtle punctuation can change the whole meaning of a sentence, which is why systems tend to favour using the entire sentence as the translation key. Updating the sentence, even if you're just adding punctuation, should invalidate the translation, or at least flag it so that it can be double-checked.

It's also useful to keep the full text within the context where it's being used, so the developer or engineer can determine for themselves whether it makes sense. It's harder to tell whether you're using the correct message when relying on keys like "missing.name_text" and "missing.text_name"; the full text gives a clearer indication of the output.

Scaling message keys can also be tricky, as you'll need to avoid name clashes. The best approach is some form of namespacing, e.g. "signup.error.missing_name", and redefining the key for every use-case even if the full text ends up being the same; this allows you to change each text independently.
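To make the two styles concrete, here's a minimal sketch of a catalogue lookup. The catalogue contents, locale code and translate helper are all hypothetical, and the Norwegian text is purely illustrative:

```python
# Hypothetical in-memory catalogue illustrating both keying styles.
CATALOGS = {
    "nb": {
        # Style 1: a namespaced, opaque key.
        "signup.error.missing_name": "Du mangler fornavnet ditt",
        # Style 2: the full source text acts as the key.
        "You're missing your first name": "Du mangler fornavnet ditt",
    },
}

def translate(key, locale):
    # Fall back to the key itself; with style 2 this degrades
    # gracefully to the source-language text.
    return CATALOGS.get(locale, {}).get(key, key)

print(translate("You're missing your first name", "nb"))
print(translate("You're missing your first name", "de"))  # no catalogue: falls back
```

Note how style 2 gives you a sane fallback for free: an untranslated locale sees the original English rather than a bare key like "signup.error.missing_name".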

Localisation built in

For those of us gifted enough to be using a Unix-based system, you might have access to gettext and xgettext on the command line. These are tools used to translate "natural language messages into the user's language, by looking up the translation in a message catalog".1

Python has built-in libraries for managing internationalisation and localisation, yet unless you've dealt with localisation before, you're unlikely to be aware that gettext exists.

Localization for Python Applications

The Python gettext library provides an interface that lets you write your program in a core language and use a separate message catalog to look up message translations. As an example, we can mark a message that requires localisation like so:

from gettext import gettext as _

_("Welcome!")

Using xgettext we can construct a .pot file, which will be used as a template for our language catalogues.

xgettext -o messages.pot --language=Python src/*.py

The .pot file should look like this after running xgettext:

#: main.py:3
msgid "Welcome!"
msgstr ""

It's pretty neat that it records the file name and line number for each piece of text; this is more useful in larger codebases, where we can use it to track down redundant translation strings. You'll also notice that it uses the full string as the msgid instead of assigning a code or number.

From this we create .po files (unrelated to the Teletubby)2. These are the concrete versions of the .pot file which contain the translations; a .po file for Norwegian would look like:

#: main.py:4
msgid "Welcome!"
msgstr "Velkomst"

Now that we have a localised form of our language catalogue we can use msgfmt to compile a binary version of our .po file, like so:

msgfmt -o messages.mo no_NO/messages.po

This command takes our no_NO (Norwegian) messages, compiles a precomputed hash table of msgid -> msgstr, and outputs it to the .mo file. These files are binary, so they're not human readable, but they're efficient to load into the application at start-up.

When you start accumulating a lot of these catalogues they might require their own system to manage. These systems also give the person providing the translation a useful interface; PoEditor and Lokalise are two examples.

People that work in localisation and translation will be familiar with .po files since they're often the file format used with translation software.

Translation within context

If our app supports more than one localisation we have to indicate which localisation should be returned to a user. For an API we can set the user's locale within the context of a request.

Flask offers a library called Flask-Babel which allows you to set this locale. So if a Norwegian user were to hit our API, the client would set a header on the request: Content-Language: no_NO3; on returning the response, all the strings instantiated with gettext will be translated into Norwegian.

There are some cases where you'll need to switch the locale context mid request or mid process, for example: a Norwegian user triggers an alert to an English user. flask-babel provides a context manager that will translate the strings to a specified locale:

from flask_babel import force_locale

def handler():
    # <norwegian scope>
    with force_locale(to_user.locale):
        # <english scope>
        send_email(to_user)
    # <norwegian scope>

Plural Forms

Language is weird and there's an edge-case for nearly everything. One case that gettext supports is defining rules for plural forms. In English we might say "one apple" and "two apples", but in a language like Hebrew the plural form used for two apples can't be used for three, so to account for this gettext provides ngettext, which is used like so:

from gettext import ngettext as n_

n_("%(num)d apple", "%(num)d apples", 3) % {"num": 3}

This allows gettext to pull the correct plural form for the integer 3 and then format the returned string, replacing %(num)d with 3.
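Which plural form gets picked for a given number is driven by the Plural-Forms header in the .po catalogue. For English, the standard header declares two forms:

```
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
```

Languages with dual or more complex forms declare a larger nplurals and a more involved plural expression; gettext evaluates that expression against the number to choose which msgstr index to return.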

Lazy strings

If you're reusing the same string across your application and defining it at module level, the string will be translated as soon as the module is imported. The module will always fall back to your app's default locale and your strings will not be translated for the user. To get around this we use something called lazy_gettext, which lets us define the string once and reuse it across the application; lazy_gettext keeps a reference to the msgid and defers translation until the text is needed.

You can see support for lazy_gettext in the Django documentation.
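The mechanism can be sketched in plain Python. This LazyString class is a hypothetical stand-in for what lazy_gettext does: hold on to the msgid and defer the catalogue lookup until the string is actually rendered:

```python
class LazyString:
    """Store the msgid and defer translation until the string is used."""

    def __init__(self, lookup, msgid):
        self._lookup = lookup
        self._msgid = msgid

    def __str__(self):
        # Translation happens here, at render time, not at import time.
        return self._lookup(self._msgid)

# Pretend this dict is swapped per-request to match the user's locale.
catalog = {"Welcome!": "Velkomst"}

greeting = LazyString(lambda msgid: catalog.get(msgid, msgid), "Welcome!")

catalog["Welcome!"] = "Velkommen!"  # the locale changes after "import"
print(str(greeting))  # renders against the current catalogue, not the import-time one
```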

Wikipedia

Wikipedia manages content in over 300 languages. Numerous volunteers help translate the wiki into other languages through an interface called translatewiki.net, and there's an entire team managing the infra and tools used in localisation.

Similar to Wikipedia, I've seen interfaces used to manage and update translation files, alongside a process, triggered automatically or on a schedule, that updates the .mo files a service references. After updating the .mo files you can automatically roll out a deployment, and the new deployment will load the new .mo files into memory when the service starts.

You don't need to be working on a system the scale of Wikipedia to include translations. You can rely on a user's system locale to translate CLI tools. My fiancée's system is set to Norwegian; if I ever write a CLI for her I think it would be fun to provide a Norwegian interface.


  1. $ man gettext 

  2. I'd like to draw attention to the fact that I stole this joke from myself. I don't want to draw attention to the poorly performed lightning talk I did. It was my first time and I tried to fit this entire post into 5 minutes. Link for posterity 

  3. Mozilla: Content-Language 

S Williams-Wynn at 12:03

Mon 05 May 2025

A Piecemeal Approach

The "Technical Debt" series:

      1: (here) A Piecemeal Approach

The piecemeal engineer knows, like Socrates, how little he knows

Karl Popper (1944)

Karl Popper's reflections on totalitarianism have had one of the largest impacts on my approach to software engineering.

Utopias exist in business, engineering and societal contexts. There are always fervent believers with a do-or-die attitude to process, and this often gets in the way of pragmatism.

Popper

In his reflections, Popper argues that if we are to progress as a society we should not attempt large-scale shifts of policy in pursuit of largely frivolous utopias. There exist people with high levels of confidence in their ability to understand how the world and society work; we should be wary of those who are uncompromising on their ideals.

Utopian views are often stated as goals in startups and engineering, partially due to the need to sell the dream or idea before it's realised when presenting to potential investors and stakeholders. "I can solve all your problems with my solution" sounds more valuable than "I can solve half of an existing problem, maybe we can think about solving the rest later?"

Within a totalitarian regime, similar promises are made to the governed by painting a picture of a utopian society, a dream world envisioned by a leader sold at the price of handing over control and power. In these cases the marketing strategy is to stoke fear and shift blame.

I have no problem with utopias if they form part of ideation or they're used as a perspective from which to view a problem. It's when they're used as justification to keep heading down a failing path that I find them dangerous.

If you hear leadership or a colleague making unfalsifiable claims, or saying "well, it doesn't apply in this situation" instead of conceding that perhaps they were wrong, you've found the charlatan. A fear of being wrong and an aversion to pivoting lead projects and businesses into failure. If something isn't going to work, the sooner you know and respond the better.

Sometimes, the best thing you can do is just say "I don't know".

Software Engineering at Google (pg. 40).

Ceteris paribus

The business world runs on pragmatism. If it were plagued with "too much unscientific thought"1 it would be brought down by complexity and mess. Dijkstra attempted to rein in software complexity in business by advocating for systems that allow an engineer to focus on a single concern at a time, lest they be overwhelmed by all the moving pieces.

Similar to Popper, this is a focus on changing one thing at a time in order to determine the effect of that change. Modern-day vampire Bryan Johnson, founder of Braintree, attempts to live forever by running hundreds of tests on himself. One of the largest criticisms of his approach is how his doctors can measure causality when he consumes ~106 pills every morning.

Startups and businesses that aim to solve everything risk not being able to measure what's working and what's failing. They also risk avoiding their core business issues until it's too late and they're out of runway. Startups have limited time, so finding and tackling the areas of highest value to the business should be a priority; this is also known as finding product-market fit.

Many successful businesses started out by focusing on a niche market. Targeting the small user base that struggles most with an issue lets you focus on a core problem and refine your product without being distracted by the myriad needs of different people. PayPal targeted people with thousands of transactions on eBay in order to refine making payments online. Revolut focused on problems that travellers faced, starting specifically with currency exchange. Nintendo got its start selling playing cards in 1889; at that point I can't imagine the founder envisioned an Italian plumber eating mushrooms and rescuing princesses. The key is to move one step at a time, and gaining some initial traction gets your ear to the ground.

Don't be perfect

Utopias are a constant threat to getting us into better positions. If my team is flying a burning plane and we need to land ASAP, I understand landing 10 metres from the office or your home might be ideal, but right now landing anywhere will do.

Perfect is the enemy of good. If we are constantly striving for a form of perfection, we should acknowledge that we are delaying or forgoing positions that are good enough. And since utopias are often unrelated to anyone's lived experience, there's no proof that this vision of perfect is indeed a great place to be, which is why we need some semblance of validation at each step of the process.

There are many successful companies, and between them they run numerous processes and styles of business. You can find support for every methodology; if the self-help expert says that eating carrots makes you see in the dark, try it, but if it doesn't work, ditch it. If you're a team of one, daily stand-ups will look different than they do for a team of six.

Don't let the utopian process get in the way of driving value.

Lastly

Be wary of anyone that speaks with confidence and doesn't read.


  1. Dijkstra in EWD-447 (1974) 

S Williams-Wynn at 12:05

Mon 28 April 2025

Engineering Vibe

Like it or not, vibe coders are the next software engineers.

3 years ago I made a prediction that triggered a mixed response:

Within our lifetime. We will see a YouTuber or streamer becoming head of a state.

Me (March 4, 2022)

Whilst I don't believe this prediction has come true, there's been progress. In June 2024 a Cypriot YouTuber was voted in as a member of the European Parliament; he earned 19.4% of the vote overall and 40% of votes from the 18-24 age group.1

The interesting thing about my prediction is that it seems to have gone the other way: more politicians are becoming YouTubers and streamers.

Could the same thing happen with vibe coders? Perhaps software engineers are the next vibe coders.

We like to bash

We see software engineers being dismissive of the content aimed at vibe coders. There's a new wave of people being introduced to coding and to managing complexity, so most of the content covers the basics: write tests, compartmentalise, and plan things out before you dive into the code.

This wave of programmers hasn't had the time to digest The Mythical Man-Month to learn that upfront planning in software leads to a huge reduction in downstream costs. They are, however, learning the hard way, by hitting these challenges head on. (For better or worse.)

How did you get here?

It's all a journey and we're at different stages of the process. A large overhead to programming is building up the vocabulary; this is the struggle for both early-stage developers and vibe coders.2

Experienced programmers have been exposed to more language and can therefore provide more specificity when commanding the computer; vibe coders will get there. Perhaps this specificity makes the experienced programmer a better vibe coder. Maybe it's their keyboard.

No one was born with the knowledge of how a computer works; there are hurdles to overcome. It was only a decade ago we were cringing at someone stating they're a full-time YouTuber or an Instagram influencer, and look, they've still got you glued to your screen.


  1. Cypriot Fidias Panayiotou 

  2. What exactly is the difference between "an early stage developer" and a "vibe coder"? This sums up my point. 

S Williams-Wynn at 12:08

Mon 21 April 2025

Gray Code

A modern laptop can run ~3.8 billion cycles per second. The cycle is set by the oscillation frequency of the electrical signal that drives the CPU, and contemporary CPUs manage synchronisation using all sorts of error-correction tricks.

In mechanical systems, such as those used in medical equipment and robotics, the binary numbers that we are most familiar with can cause errors if they're read during state transition.

Decimal and Binary

We are most familiar with decimals, this is a base 10 counting notation where each position in the number represents a different power of ten. E.g. 10, 100, 1000.

The computer relies on binary as this takes advantage of the fundamental on/off state within an electronic circuit. Binary is base 2, so each position represents a power of 2. E.g. 2, 4, 8, 16.

Reading States

Binary numbers can cause errors if they're read during transitions. The more positions that require toggling while switching between numbers, the higher the chance we introduce errors into the system. This shows up clearly in the transition between 3 and 4, which requires changing three bit positions: 011 -> 100.

[Animation: binary bit positions while counting]

If these bits aren't switched instantly we can read any of the following numbers 1, 2, 5, 6 or 7 instead of 3 or 4. Not great if you're working with a critical system and need precision.

Gray Code

To get around this we use an alternative ordering of the binary system in which successive numbers differ by a single bit. Incrementing a number then relies on switching only one position, removing the chance of reading a wrong intermediate number during state transitions.

This ordering is called Gray code, and an animation of the bit positions, for an incrementing number, is shown below:

[Animation: Gray code bit positions while counting]

Decimal  Binary  Gray
0        0000    0000
1        0001    0001
2        0010    0011
3        0011    0010
4        0100    0110
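The binary-reflected Gray code in the table can be computed with a single XOR; a quick Python sketch:

```python
def to_gray(n):
    """Binary-reflected Gray code: XOR the number with itself shifted right."""
    return n ^ (n >> 1)

# Reproduce the table above: decimal, binary, Gray.
for n in range(5):
    print(f"{n}  {n:04b}  {to_gray(n):04b}")
```

Every increment of n changes exactly one bit of to_gray(n), which is the property the mechanical encoders below rely on.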

The Application

In addition to reducing read errors, relying on a single toggle to move up the number scale consumes less energy than traditional binary, due to the fewer toggled bits.

Some systems require testing every position of multiple switches or toggles, and Gray code can improve the efficiency of these tests. If we had to iterate through all 16 combinations of 4 switches, ordinary binary would need 1, 2, 1, and then 3 toggle flips as we count from 0 up to 4, while Gray code only ever needs a single flip to eventually test all switch combinations.

One of the most common uses of Gray code is in rotary encoders, also known as knobs. These convert angular position into an analog or digital signal. If we relied on a normal binary scale, rotating the knob could end up sending the intermediary numbers between each angle, which would make it pretty useless.

S Williams-Wynn at 12:03

Mon 14 April 2025

Engineering for Resilience

Engineering velocity and delivery is strongly tied to how code is deployed to production. Having a certain level of safety and automation can enable teams to deliver and learn faster.

Engineers that avoid failure don't learn, and won't ever put anything significant into production. The quickest way to learn is to fail, yet some teams aim to avoid failure instead of optimising recovery from it. Forget about trying to avoid failure: failure is inevitable. Like it or not, there will be a system failure, and knowing how to thrive in this space will separate you from the average developer.

Shorter feedback cycles and high confidence will distinguish your engineering team from any other; focus on a resilient system in production and a short recovery time. Breaking things should become the norm, as long as the repercussions are minimised.

Compartmentalisation

Stopping the ship from sinking. Bulkheads are used in the naval industry to keep a ship from sinking: by compartmentalising the hull you allow it to sustain some level of damage before it goes down.

The Titanic had 16 bulkheads. It could stay afloat with 3 flooded, and in some cases it could survive 4 flooded bulkheads; 5 or more would make it meet its demise. When it sank, it had 6 compromised.

We do this with software systems too: we build in levels of redundancy. If one of our servers decides that today is the day it kicks the bucket, we have more than one server available to fill in and pick up the slack.

Keeping a tight ship

The military also practises compartmentalisation in the form of modularity. Information is given out on a need-to-know basis: you don't want the entire army carrying state secrets, and ideally you make it difficult for information that may compromise soldiers to leak.

It's also useful in hindsight for pinpointing where a leak occurred. If the information was privy to 4 individuals, you can blacklist them, and your overhead in discovering the snake is a lot smaller than if you'd provided the entire army with this knowledge.

Software runs on a similar structure called the principle of least privilege. In a large system with multiple services, you grant each service the minimum level of access it needs to perform its job. If it has write access to the production database but only ever needs to read from it, then we should restrict its permissions down to read-only. In the event that the service is compromised, your attack surface is decreased; you're much less vulnerable than in a situation where the attacker had permission to do everything.

He'll be long remembered

We've taken practices from 1907. Canaries were used in coal mines because they're more sensitive than miners to the toxic gases found underground. Carbon monoxide is odourless, colourless and tasteless, so as you'd imagine it's tough to detect; because these birds were bricking it at the first hint of these gases, they acted as early warning signals. If the canary drops dead, you'd better get yourself out of there.

High-velocity engineering teams that deploy multiple times a day at scale need their own canaries, and luckily no one is going to die (industry dependent). Because we've got multiple servers for redundancy, we can spin up a new server to receive a small percentage of the traffic and keep a close eye on its behaviour. If we notice errors or a reduction in performance, we have an early signal that we've introduced something faulty in the new deployment, and we can avoid rolling it out to the entire fleet.

We can contrast this with the alternative, sometimes called a big bang deployment: switch all the traffic over to the new code and hope (fingers crossed) that nothing bad happens. In a big bang deployment you're committing 100% of your traffic to new code; if things go bad, you're far more exposed to the downside of failure.

Automating these canary deployments brings a higher level of confidence to an engineering team, as haywire metrics can automatically stop traffic to the wonky canary, greatly reducing your overall exposure to negative effects.

Cutting the wires

A surge in electricity can damage your home appliances. To prevent this, homes commonly have a switchboard full of circuit breakers that trip when the current gets too high.

We implement these in engineering too: dynamic feature flags that prevent a user from hammering a broken system, and in some cases prevent showing the feature at all. The user might not even notice we've hidden the feature, and if they don't notice, we don't have a problem.

We can programmatically trip these flags on new features so that we can fail reliably over the weekend without much impact on our customers, and engineers can follow up during work hours to understand what caused the system to fail.

These are typically used alongside new features which we'd like to turn off at the first sign of something not working as intended.
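A software circuit breaker can be sketched as a counter that trips open after too many consecutive failures. The class and threshold below are hypothetical, a minimal illustration rather than a production pattern:

```python
class CircuitBreaker:
    """Trips open after `threshold` consecutive failures."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def is_open(self):
        return self.failures >= self.threshold

    def record_success(self):
        self.failures = 0  # healthy again: reset the count

    def record_failure(self):
        self.failures += 1

breaker = CircuitBreaker(threshold=3)

def show_feature():
    # Once the breaker is open we stop showing (and hammering) the feature.
    return not breaker.is_open

for _ in range(3):
    breaker.record_failure()

print(show_feature())  # False: the feature stays hidden until someone investigates
```

Real implementations usually add a "half-open" state that lets a trickle of traffic through to test whether the system has recovered.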

Can you hear me? How about now?.. And now?

Enterprise software is always going to rely on external systems. These systems are out of our control yet we are still responsible for designing around failure. These systems might be from another company or they might be from another team within our business.

The more moving parts in our system, the higher the likelihood of something failing. It's the same reason going on a trip with a large group of friends ends up being an exercise in co-ordination and patience: the more things you bring into a system, the higher the chance something fails, someone in the group doesn't want to eat at a particular restaurant, or wants to wake up slightly later than the rest.

Unlike friends, if a server doesn't want to respond to your request you can kill it. If you don't have the ability to kill it, you can try again 50ms later. Retrying requests is very common because of the many ways things can go wrong with a network. We also need to consider that sharks have a habit of chewing our undersea cables.1

If a retried request fails we can keep trying, but the server might be failing because it's overloaded, so continually retrying isn't the best use of the network's time; plus we know it's failing, and perhaps nothing has changed since the last retry. So we introduce exponential backoff. Simply put, it's a growing delay between each retry: if it doesn't work now, try in 50ms; if that doesn't work, try again in 100ms, then 200ms, 400ms, and so on. Eventually we can give up, flag it, and let an engineer inspect it on Monday.

Retrying requests can be quite dangerous, especially if you've got a lot of clients all retrying at the same time. This explosion of requests can cause the server to burn out while it's already trying its hardest to recover.

To avoid a herd of requests arriving at the same time, we introduce what is called jitter: pick a random number and add it to the retry delay. If a number of clients attempt to retry after 50ms, they'll each be offset by some random number of milliseconds, which helps space out the requests.
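The retry schedule described above can be sketched in a few lines. The base delay, attempt count and jitter range are illustrative numbers, not recommendations:

```python
import random

def retry_delays(attempts=5, base_ms=50, jitter_ms=25, rng=random):
    """Exponential backoff (50ms, 100ms, 200ms, ...) plus a random jitter
    so a herd of clients doesn't retry in lockstep."""
    return [base_ms * 2 ** i + rng.uniform(0, jitter_ms) for i in range(attempts)]

print(retry_delays())
```

Passing in the rng makes the schedule reproducible in tests, while production clients each get their own random offsets.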

Elements of resilient software

Retried requests aren't a silver bullet, and they come with some considerations. In any kind of transactional environment, banking for example, if you're deducting money from an account and the request fails because the connection to the server was lost, your phone or client won't know whether the transaction succeeded. Retrying the request might cause a double payment.

The solution is to introduce idempotent endpoints. Implementations often rely on a header carrying an idempotency key: when you retry the request, the server checks whether it has handled this key before. If it has, it returns the original response, no matter how many times you send the key; if the key is new, the server assumes the request is new and creates a new transaction. With an idempotency key we can safely retry bank transactions in spotty environments.
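The idea can be sketched with an in-memory store; the PaymentServer class, its charge endpoint and the key format are all hypothetical:

```python
class PaymentServer:
    """Toy server that replays the stored response for a repeated key."""

    def __init__(self, balance=100):
        self.balance = balance
        self._responses = {}  # idempotency key -> first response

    def charge(self, idempotency_key, amount):
        if idempotency_key in self._responses:
            # Seen this key before: replay the response, don't deduct again.
            return self._responses[idempotency_key]
        self.balance -= amount
        response = {"status": "ok", "balance": self.balance}
        self._responses[idempotency_key] = response
        return response

server = PaymentServer()
first = server.charge("abc-123", 10)
retry = server.charge("abc-123", 10)  # client retried after a timeout
print(first == retry, server.balance)  # True 90: charged exactly once
```

A real implementation would persist the key-to-response store so that replays survive a server restart.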

So why are we doing this again

A feature stuck in development doesn't face reality until it's deployed. If we want to learn fast we should deploy fast: how can we build a system that gives developers high confidence that they're not going to collapse the business when they make a deployment?

There are patterns in engineering that enable high confidence; without them we are stuck with slower deployment cycles, when the true learning comes from releasing software. You can theorise as much as you'd like about the impact you will have, but until your code is in front of users and used, you don't have a benchmark to grow or improve against.

Not having a robust system for handling failures is often the anxiety that slows down development. Slower development cycles worsen the problem: as the code stuck in development grows, your certainty about how it behaves in production drops, which lowers your confidence in actually shipping.

Developing in an environment with high resilience leads to higher confidence and higher velocity. Instead of focusing on avoiding failure, focus on how you can grow from failure.

S Williams-Wynn at 12:18