Tue 28 April 2026
Software Design Playbook
A PR should never be rejected over architectural decisions. By the time code is written, the overall system should already have been planned and agreed upon.
There are two common types of software documents: the architectural decision record (ADR) and the software design document (SDD). Companies can combine both or get by with only an SDD. There's no one-size-fits-all, and not all software needs much thought.
Planning software is a recurring theme on my blog and this article attempts to provide the structure to how I plan and design.
ADRs
Architectural Decision Records capture decisions along with the context and knowledge available at the time. Martin Fowler covers the topic of ADRs better than I can.
SDDs
A Software Design Document is a specification detailing a system and its intent before it is built, in order to encourage feedback and catch plot holes early in the development cycle.
Beyond letting us gather feedback before committing, they are a useful practice for the following reasons:
- Discovering knowledge gaps. The practice of planning the system lets us make an initial pass at realising an idea. It can bring to light areas we hadn't noticed and reveal that our logic doesn't make sense on paper.
- Reflection. We learn over the course of a project, and a document that captures our assumptions at the start gives us something meaningful to reflect on when identifying why we were initially wrong.
- Posterity. Systems are often forgotten. A document that outlines the project's initial stated aims can provide an answer in the future when we ask "why did we do this?" or someone new to the company asks "why did you build it like this?".
- Getting buy-in. It is tough to convince others of an idea in your head. As soon as it's on paper and there's clear logic to your conviction, others will be more willing to support it. Projects that stand up well to criticism are also more likely to be convincing and picked up.
Template
These documents depend on the problem they aim to address and typically include the following sections:
Context
Tell the audience the problem you aim to solve and why it is important to solve it. Generally I define the current state of the system and expose its gaps.
E.g. users want double the portion of donuts; we only have one donut machine; if we buy another donut machine we will be able to serve twice as many donuts and double our profit.
Scope
Make the limitations clear upfront. Tell the reader what the solution is not aiming to solve. Perfect software doesn't exist, and if we tried to build it we would run out of time and runway. Setting boundaries allows us to focus on the core problem we aim to solve.
E.g. The donut machine will not address the users asking for apple juice.
Proposal
Start at a high level and break it down into its components in the subsequent sections. When communicating systems we must understand that individuals consume information differently, some are more visual than others. Knowing this we can enrich the document by providing more than a single representation of the point we are trying to make.
| Representation | Good For |
|---|---|
| Diagram | flows and relationships |
| List | overviews and sequences |
| Prose | nuance |
| Table | comparisons |
Even if a decision is minor, the trade-offs should be tabulated. The choice may be obvious, but the practice of assessing an alternative can reveal better solutions and helps us avoid falling into a Cargo Cult trap.1
Rejected Options
Avoid designing only once; by including the rejected options we make this practice explicit. Our first idea will not always be our best idea2. Understanding the weaknesses of our rejected options can also provide answers to the curious reader.
If we need to improve the system in the future, remembering why we rejected the other options may let us avoid them when we pivot. And if something has changed, we may understand those options better by then and they'll become opportunities.
Optional Sections
These sections don't apply to all software designs: you might use all of them in a large document, while smaller changes to a system can disregard them.
Cost
Money and time.
Provide cost and time estimates; these can give other departments a head start when considering pricing or capacity. Start with rough estimates or vendor price pages. Something is better than nothing when informing stakeholders of a go/no-go decision.
Cost is not only found in raw compute. There's also the ongoing cost of upkeep: keeping the system up to date. Some systems require a member of support staff or an analyst for input on a regular basis; flag these as operational cost.
Risk
Address what can go wrong and how likely it is to happen. On large projects, some risks might not have a mitigation, but we can still describe how to measure them and minimise their impact by catching them as early as possible.
Determining a project's risks comes from experience. They range from technical risks, such as shortcuts and their downsides, to softer risks such as friction hurting product adoption or a design decision affecting public perception.
Goals
Every system should have goals; otherwise it is pointless. Without a clear purpose, the system is debt before it gets off the ground. Clearly defined goals also help to alleviate scope creep, as they allow us to cut any work that does not achieve the original stated purpose of the system.
Milestones
In my experience, high-level milestones can be useful; however, if they are inflexible or overly detailed they hold a project back.
The benefit of high level milestones is that it indicates a larger vision for the project and the potential long term impact it may have. We tend to learn over the course of the project so these should be flexible as we are bound to discover things up until the release of the first milestone that will impact our initial plan.
Getting feedback from the first users of your system can throw your roadmap and milestones in the bin.
Deep Dives
As a company scales, its systems become more complex. Complexity can also be introduced by regulation, and bringing on third-party providers needs some consideration. To deal with this I keep a shortlist of risks to assess when designing software.
Certifications may be required if you operate in specific industries such as healthcare. It's better to know upfront that you need to integrate with a HIPAA-compliant party than to find out after the integration is complete; finding out too late can be a huge waste of time.
Regionally locked data might be a requirement, GDPR, for example, requires explicit consent from an individual if their data is going to be transferred across regional boundaries. If you are integrating with a 3rd party you may need to confirm what regions they are able to operate in.
Authentication and permissions are commonly needed if we have to limit access to specific users or to subsets of users. Determining how permissions are granted and how users authenticate can impact the design of our systems.
Localisation is needed by most medium-sized software businesses, as they'll operate in more than a single language or currency. Serving multiple geographies requires these considerations.
Security audits and assessments can be required before being able to use a provider's service. We should ensure the software we use is trustworthy if we are likely to send sensitive data.
These topics may be brought up under our section on risk, or during the discussion of trade-offs between solutions.
FAQ
Some questions are asked more frequently than others, and it's always helpful to include them, even the offhand questions brought up in passing. A good question can be more interesting than its answer.
Feedback
Feedback is often an opinion, and it can come from someone with less context but more experience. Figure out the quickest way to test an opinion. Don't get caught up addressing everything, but do de-risk concerns: have a plan for if a concern becomes reality, make it easy to roll back, or use circuit breakers and feature flags.
For example, a conversation around "this might not scale" could be countered with "this might not get a lot of traffic". Systems don't need to be implemented with scale in mind, but they should have an answer for scaling if it becomes a concern, unless there is concrete evidence it won't be. For instance: we don't think this will be used often, but if it is, we have a way to limit its usage, and we have isolated it from the rest of the system so that a spike in attention won't affect the performance of the whole system.
Do we always need an SDD?
One thing that has held true in software is that the earlier a problem is found in the development process, the easier it is to correct course. Software design documents are a way to map out the unknowns and identify blind spots in ideas. Not all ideas are complete; sometimes ideas come without a solution, and we don't want to find ourselves a month into development only to realise we don't know what the solution is actually meant to look like.
Not all changes require deep thought or a detailed specification. There should be room to explore and learn by hacking something together, but the exploration needs the bigger picture and a goal. Without purpose we are wasting time.
This article is longer than many of the design documents I've written.
Tue 21 April 2026
Probabilistic Data Structures
Keeping track of a constant set of items is fairly straightforward; however, when the number of items grows beyond the capacity of a single machine, things get expensive. There's a way around this: rely on approximations instead of concrete numbers. There are two probabilistic data structures I'd like to cover in this post: Bloom filters and the Count-Min sketch.
Generally these are applied when space is a constraint and you need a predictable, consistent size. If you're counting or caching at scale, there's a chance your database is relying on probability instead of certainty.
Hash Tables
A fundamental data structure in computer science is the hash table, useful in caching and counting, as well as representing objects in software.
A large hash table is often useful when you need a key-value store, such as one mapping user IDs to profile pictures, so that every request for a profile picture is speedy: hash tables operate in O(1) for lookups.
This is done by having a number of buckets and a function that consistently converts a key to the index of a bucket. This is called the hash function. A cache or a key value store only requires computing the hash to locate the data.
If we were to hash the key "fox" (hash("fox")) and the resulting output was 5, we would know our data is in bucket 5.

Hash functions won't compute unique hashes, and occasionally they collide with existing keys stored in the hash table. So both hash("fox") and hash("cat") might end up pointing to bucket 5. We can reduce the chance of this happening by increasing the number of buckets: having 10 keys and 1 million buckets makes the chance of a collision extremely small.
In practice, hash tables store linked lists at the bucket locations; when a collision occurs, they iterate through the list until the key is found. When storing a new key, it is appended to the end of the list if it's not there already. Redis uses this technique, and additionally resizes the number of buckets dynamically in order to keep these lists short.
We can use hash tables to determine if we've seen something before by storing keys as we see them; if the key exists in the hash table, we know this isn't the first time we've seen the item.
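The chaining scheme above can be sketched in a few lines of Python. The class name and API here are hypothetical, and a real table (like Redis's) would also resize its buckets; this sketch keeps a fixed bucket count to show the mechanics:

```python
class ChainedHashTable:
    """Fixed-size hash table with separate chaining (a list per bucket)."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # The hash function maps any key to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key already stored: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))   # new key (or collision): append

    def get(self, key, default=None):
        # On a collision, walk the chain until the key matches.
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def seen(self, key):
        # "Have we seen this before?" is just a membership check.
        return any(k == key for k, _ in self._bucket(key))
```

Constructing the table with very few buckets forces collisions, which makes it easy to see that chaining still returns the right values.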
Bloom Filters
When we start dealing with data streams or billions of users, storing everything in memory can be expensive. Instead we can reduce the total memory consumed by using a probabilistic data structure; the bloom filter.
Bloom filters rely on approximate set inclusion. Instead of "yes, this item is in the set" or "no, it isn't", we get one of the following outcomes:
- This item is not in the set.
- This item might be in the set.
This is done by having a fixed number of buckets and using more than one hash function. As the illustration below shows, the key is hashed three times and each resulting bucket is set to 1.

When an item is queried, we know for sure that we haven't seen it before if any of the buckets returns a 0. However, if all the buckets collide with set bits, then we might have seen it before.
Both the number of buckets and the number of hashes can be configured, which lets us trade more space for a lower probability of false positives (reporting that we might have seen an item when we actually haven't).
The Bloom filter is applied in situations where space is limited and keeping track of every element isn't an option. If we wish to avoid making expensive queries for data that doesn't exist, a Bloom filter can reduce the number of those queries.
Browsers have used Bloom filters in the past, shipping a preset filter of malicious URLs. When we visit a URL and it's not in the filter, we can proceed; if it might be in the filter, we query a server to determine whether it's safe. Since most URLs are safe, we avoid the query in the majority of cases.
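A minimal sketch of the filter described above, in Python. Production implementations use fast independent hash functions (murmur, xxHash, and friends); here, purely for illustration, the k hash functions are derived by salting a single SHA-256 hash:

```python
import hashlib

class BloomFilter:
    """Answers 'definitely not in the set' or 'might be in the set'."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.bits = [0] * num_bits
        self.num_hashes = num_hashes

    def _positions(self, item):
        # Derive k hash functions by salting one strong hash
        # (an illustrative shortcut, not a production choice).
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % len(self.bits)

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # A single zero bit proves the item was never added.
        return all(self.bits[pos] == 1 for pos in self._positions(item))
```

With 1024 bits and only a handful of items added, queries for unseen items come back "definitely not" almost every time, which is exactly the property the browser example relies on.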
Count-Min Sketch
The last probabilistic data structure I'd like to cover is the Count-Min sketch. Like a Bloom filter it has multiple hash functions; unlike the Bloom filter, it tracks the number of times a key lands in each bucket.
When queried it hashes the key and returns the minimum from the counts stored in the corresponding buckets. This allows us to determine an upper bound estimate for the number of times we've seen a key.

The Count-Min sketch is useful in large-scale data processing. For example, if we are interested in tracking the top-k searches, we can normally do this with a heap. If we have size restrictions and need constant space instead of O(n) space, we can put the sketch in front of the heap.

Heap inserts take O(log n) time, which we can avoid if we know the item shouldn't be in the heap. Items that appear infrequently are then discarded before ever reaching the heap. We do this by querying the sketch for an upper bound on the new item's count; if the sketch returns 3, for example, and the kth item in our heap has 12 appearances, then we can avoid adding the new item to the heap.
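The counting mechanics can be sketched in Python as follows. As with the Bloom filter example, the salted-SHA-256 rows are an illustrative stand-in for the independent hash functions a real implementation would use:

```python
import hashlib

class CountMinSketch:
    """Fixed-size frequency estimator; estimates are upper bounds."""

    def __init__(self, width=256, depth=4):
        self.width = width
        self.rows = [[0] * width for _ in range(depth)]

    def _columns(self, item):
        # One salted hash per row (illustrative, not production-grade).
        for salt in range(len(self.rows)):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for row, col in zip(self.rows, self._columns(item)):
            row[col] += count

    def estimate(self, item):
        # Collisions only ever inflate counters, so the minimum
        # across rows is the tightest upper bound available.
        return min(row[col] for row, col in zip(self.rows, self._columns(item)))
```

For the top-k use case, a caller would check `estimate(item)` against the count of the kth heap element and skip the heap insert when the upper bound is already too small.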
I find it interesting that at scale we can use probability to optimise our systems, though it also requires an understanding of how the data is distributed. These structures work well on long-tail distributions; when all items are equally frequent, they become less useful. It would be interesting to explore how systems map to different distributions of data and how these structures are tuned to the problems at hand.
Mon 13 April 2026
Setting up success
I have a frightening memory of joining a company where my first Pull Request required me to SSH into a compute instance and do a git pull in order to release my change into production. On that occasion I found that I was also deploying more than just my change, as the branch was not entirely up to date.
There's a battle of trade-offs in software where things that sound good on paper are often placed in the backlog and forgotten about. Something about the process being good enough, or being the way things have always been done, kicks valid improvements down the priority list. It is with luck that some are granted the privilege of determining what they get to work on, and companies will give that chance to people who have agency and a strong conviction that they know better.
How it began
The other pattern I noticed at that company was a lack of dev and prod environments. Everyone with access to the project had access to everything. At this stage the company was onboarding more engineers and data scientists, and each of them was being granted full access to this monolithic environment.
It was even more exciting to find legacy projects without owners, and services deployed into production using the developer's container orchestrator of choice: Docker Compose, Mesos, self-hosted Kubernetes, or GKE.
The company was growing quickly, and it felt like with every new developer there was a new way to release changes into production. Every fire I was pulled into appeared to be running on its own tech stack and required learning how things operated from the ground up. Nothing seemed transferable from one project to the next.
These were the problems I had determined to solve.
Taking on the problem
Over a week I mustered a team to tackle this complexity. The first step was a meeting with the CTO to propose setting up two new project environments: one for dev and one for prod. This was an easy request, as the CTO was horrified to learn that things were being deployed straight to production.
The next thing to set up was a CI/CD pipeline generic enough to be used across any service. This meant that if you wanted to use our new dev/prod environments, your servers had to be deployed through our automated pipelines. To help the other teams we wrote a service template and Helm chart that played nicely with CI/CD. This also restricted deployments to a single hosted container orchestrator and let us consolidate all the different styles of deployment. As a consequence, we were able to help more teams when they ran into issues, because we became more familiar with Kubernetes and no longer had to understand an entirely new workflow or orchestrator each time.
The Fun Didn't Stop
We had an outage at around 2pm when the company lost connection to all the servers in the production environment. At this point we had around 5 teams deploying code on our stack. An urgent message in one of the Slack channels asking if anyone had changed something revealed that someone had assigned their service a new IP range, which masked our network bridge to the rest of the company.
This is when we introduced Terraform. Any changes to the network or infrastructure could now be reviewed, reverted and audited, since they were defined in code. If something went wrong we could investigate the changes, and our infra was committed to version control. We saw the introduction of Terraform improve the adoption of our stack, as new engineers were frightened to work in the other cowboy projects that ran on infra configured by hand through the web UI.
We started to notice that other teams were adopting our tech stack as they no longer needed to spend time on defining the release process as we had a template that they could clone and get started almost immediately. Now they could tackle business problems instead of weighing up the trade-offs of each container orchestrator.
Our Helm chart also helped: we got rid of the mountains of bespoke YAML used for several k8s deployments. We could also serve engineers with features such as specifying cron jobs with very little configuration in their service spec. Most teams were data-intensive and relied on scheduled batch processes, so in some cases they adopted only these cron jobs.
We also developed a library for new services that set up logging, Sentry integration, trace IDs and database connections, and introduced a standard for running database migrations in the service template. At this point it was quite important to encourage shared ownership of these codebases. The more open we were to changes from other teams, the more likely they were to use the libraries, and their changes benefited other teams in turn. This broke down the silos that had existed previously: improvements were no longer isolated to a single service but could be utilised across the company. It also meant these libraries improved without needing someone on my team working on them full-time.
Clear Sailing
My team of 5 was serving 80 engineers and data scientists across 12 teams. On reflection, not everything went smoothly. There were people who wanted to maintain control over their entire stack; perhaps we were a bit stretched and couldn't focus on the features they required at the time. They might also have disagreed with choices we had made in the service template.
There were also features we developed that didn't get adopted, which makes sense to me now. These were things that generally slowed people down for very little benefit. Contract testing is an example. On paper, having clear contracts between services and a means to define and test those contracts sounds like a great idea. But since the services at the time were cron jobs or had at most two endpoints, their interfaces weren't growing in complexity, and introducing this step into the build process wasn't a big enough bang for the buck.
Service-to-service authentication was another feature we didn't need at that time. Our services were internal and in a VPC. Granted, if we were compromised, blocking requests to the servers in that network would be valuable, but I think it would have been better to sink time into features that sped up the adoption of our workflow rather than features that added friction. Not that we never needed this auth layer; we could simply have addressed it at a later point.
Improvements
There are improvements to the workflows I would have enjoyed introducing. For example, using a CI/CD server that didn't rely on defining our workflows in Kotlin; I feel Kotlin added an unnecessary barrier to contributions. Other CI/CD tools like gocd.org and concourse-ci.org offer an easier way of defining workflows, although nowadays we can get a lot done with GitHub workflows and reduce the reliance on having a CI server at all.
We attempted introducing Istio; at the time this was an aspirational feature. Had we succeeded, our teams could have run canary deployments, diverting a small amount of traffic to a new version of a service so that if anything went wrong, the broken version wouldn't roll out to all customers.
I still come across companies that only give leadership the ability to deploy to production, with changes going out once mid-week. When this is the practice, many changes accumulate and are released together as a big-bang deployment. When something breaks in these scenarios, it's often harder to pinpoint what went wrong. Smaller, frequent deployments grant autonomy and shorten the feedback loop, which speeds up a developer's discovery of what went wrong; releases also tend to be less disruptive, and engineers have higher confidence in the changes they release into the world.
The biggest takeaway from my experience leading an infra team is that we learn by making mistakes, so we shouldn't try to reduce the number of mistakes we make. We must instead focus on reducing the cost of each mistake through incremental change, rollbacks and observability. Many companies and teams try to make no mistakes at all, and in doing so they cost themselves growth.
Mon 16 March 2026
Learning from Sales
It is not only the job of sales to find customers with problems that we have solved, but also to find the problems that we can potentially solve. The role is one part customer relations and one part business development.
The challenge with this role is that features fall on a spectrum from nice-to-have to critical-for-business, and being able to tell the difference is a key skill for anyone in sales. If we filter all the nice-to-have functionality out of the developers' backlog, what remains is the high-value, impactful work that allows us to sign more deals.
Engineering teams often miss this business context and lack the skill to differentiate these features. Part of the problem stems from being a few steps removed from the customer; nice-to-have features can appear more exciting to work on but don't end up creating value.
Engineers that wish to be highly autonomous and are charged with seeking impactful work need to exercise their inner salesperson. Those that don't embrace these skills are at risk of wasting their time and working on the wrong thing.
Buy Signals
Those in sales and business development look for hints that indicate a project's potential to reach the deal-signing stage. Focusing on projects with high potential lets us avoid chasing cases that won't move forward or are likely to end without closing.
If more than one customer is asking for a feature, that's an obvious indication that focusing on it can provide more impact. If you're working on internal software infrastructure, are you getting a feature request from more than one team, or just from the esoteric data scientist with their own bespoke setup?
Sales rely on buy signals. A buy signal can indicate when a client is excited enough by a solution that they are willing to consider a purchase. The most obvious way they signal this is by giving you money. If they're trying to give you money before the solution is built, you'll know the solution is somewhat important to them.
We don't have to rely on the client putting down money for us to recognise a buy signal. Buy signals can appear from any skin a customer is willing to risk for a solution. If a customer is willing to vouch for the product to a superior this is a political stake. This allows us to move up the decision chain and provides a positive signal that a deal can make it to close. We also use this as a way of making progress in conversations without explicitly asking for money.
Software engineers should want to get closer to the projects with the largest impact. We can use buy signals to determine which projects have the most buy-in from leadership and create the most business value.
Discovery Questions
Developers can find themselves working on the right project but can end up developing the wrong thing without an accurate idea of the pain-point. We need to peel back the layers that surround a customer's pain to determine how we may best deliver.
Sales do this with discovery sessions, hearing it directly from the customer. Their goal is to avoid making incorrect assumptions about the problem, which may lead to building the wrong solution. Software engineers would also benefit from this skill: given a spec, an engineer should ensure they're delivering the appropriate fix, otherwise they risk missing the target and having to do revisions or start again from scratch.
Sales does this with open-ended questions; the more space they give a customer to explore the problem, the higher the chance of exposing insight. Getting customers to dig deeper on specific points can surface issues they might be having, and asking how a solution would affect their business helps sales qualify the lead.
A classic technique of getting your customer to talk is called Mirroring covered in "Never Split the Difference" by Chris Voss.
Are your developers asking enough questions to expose misaligned assumptions?
Touch Points
Progress updates with customers show that you are keeping them in mind and that their requests haven't been forgotten. They also provide a heads-up when new features are near release. They build trust and let the customer feel like they are helping guide the process.
If your wins are also their wins they will feel like they have more skin in the project's success and become an internal champion across the industry and in their own business.
How often do you hand a task to a developer and only hear about it three months later, when there are just two weeks until the deadline?
Regular touch points don't just serve the relationship you are building with your customers but also allow you to validate the project closer to real-time. Ensuring that you are headed in the correct direction is critical for success. If you're heading down the wrong path you'll want to know as soon as possible.
It's a shame that developers are stereotyped as introverts lacking people skills who wish to be isolated from any call with the customer. The industry leans into this idea by adding barriers between the client and the developer. The truth is that the companies that keep their developers closest to users and customers are the ones that succeed, and the best way for an engineer to level up is to care more about the customer.
Determine Timelines
Software can rely on external processes and teams. Figure out what you're building and predict where you might need approvals or contracts; this lets you get the ball rolling in those departments. Sales already do this on their discovery calls by probing whether the customer will need to involve other departments; nudging them now can mean less delay later.
Within a company, we might require a third-party tool for the new feature we're developing, so the sooner we loop in finance or the security team, the better. They can work on their specifics in the background while we work on the feature. We can flag to our PM before we start the project that we might need to loop in these other teams.
We should also determine when something will be needed and understand the timeline. The customer could be aiming to deploy their MVP in a month, which affects how we prioritise building the solution. If the deadline is tight, perhaps only a subset of features is required for the MVP, and we can avoid sinking time into the features expected later.
Alignment
Developers can level themselves up by becoming more aligned with the business, not less. One way of doing this is moving closer to the customer or learning how other roles operate. Software will continue to be a competitive industry, and engineers can't afford to waste time on things that don't delight their users or provide impact.
Mon 12 January 2026
Seating Charts
We have two large tables and 100 guests coming to our wedding and we have to figure out how they will be seated. Defining the seating chart tends to be the most enjoyable part of wedding planning1.
After drawing circles for all the seats and guests in excalidraw.com, I began connecting the circles with colourful lines to map out specific relationships. As an example, a couple at the wedding should be seated together.
It dawned on me that this is a constraint satisfaction problem (CSP); and modeling CSPs is something I've been doing for the last year.
Constraint Satisfaction Problems
CSPs are a family of problems where you must find values for a number of variables subject to certain constraints on those variables. Such problems occur in many areas; one is packing optimisation.
There are software engineering roles that rely on solving these types of problems; typically these roles include the term "Formal Methods" in their list of responsibilities.
To solve a CSP, we begin by modelling the problem mathematically. There are a few notations for defining the variables and constraints of these problems; the one I am familiar with is SMT-LIB.
Here is a simple example where we wish to find valid values for x and y:
(declare-const x Int)
(declare-const y Int)
(assert (> x 0))
(assert (< y 10))
(assert (= (+ x y) 15))
(check-sat)
(get-model)
You may have noticed that this uses prefix notation, i.e.
(> x 0) which is equivalent to (x > 0) in the familiar
infix notation.
To find valid solutions we need a solver such as z3. If we provide z3 with the file above it will give us a solution for x and y that fits the constraints, e.g. x = 15 and y = 0. There may be more than one valid solution. Sometimes there is no solution, and the solver will return unsat, short for unsatisfiable.
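To get a feel for what a satisfying model is, here is a brute-force equivalent of that example in Python. This is only an illustration of my own; z3 uses far smarter decision procedures than enumeration:

```python
# Brute-force search over a small integer range for an assignment
# satisfying: x > 0, y < 10, x + y = 15.
def solve():
    for x in range(1, 100):           # candidates with x > 0
        for y in range(-100, 10):     # candidates with y < 10
            if x + y == 15:
                return x, y           # a satisfying model
    return None                      # no model in this range: "unsat"

print(solve())  # (6, 9) is the first model this search finds
```

Like z3, the search stops at the first satisfying assignment it finds; any pair meeting all three constraints would be an equally valid answer.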
Knapsack problems
We can use z3 and smt to model and solve a classic knapsack problem. Given the following products:
- Product A has a size of 3 and value of 4.
- Product B has a size of 5 and a value of 7.
Pack a bag with a capacity of 16, maximising the total value of the bag.
(declare-const productA_count Int)
(declare-const productB_count Int)
(declare-const total_value Int)
(assert (>= productA_count 0))
(assert (>= productB_count 0))
; Product A: size=3, value=4
; Product B: size=5, value=7
; Bag capacity: 16
(assert (<= (+ (* 3 productA_count) (* 5 productB_count)) 16))
(assert (= total_value (+ (* 4 productA_count) (* 7 productB_count))))
(maximize total_value)
(check-sat)
(get-model)
Solving this tells us that packing two of each product maximises the total value of the bag, which comes to 22.
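To sanity-check the optimum, this specific instance is small enough to brute-force in Python (a sketch for this one problem, not a general knapsack solver):

```python
# Product A: size 3, value 4; Product B: size 5, value 7; capacity 16.
# Enumerate all feasible counts and keep the best total value.
CAPACITY = 16
best = (0, 0, 0)  # (total_value, count_a, count_b)
for a in range(CAPACITY // 3 + 1):
    for b in range(CAPACITY // 5 + 1):
        if 3 * a + 5 * b <= CAPACITY:
            value = 4 * a + 7 * b
            if value > best[0]:
                best = (value, a, b)

print(best)  # (22, 2, 2): two of each product for a total value of 22
```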
Seating
Getting back to the seating chart: I wrote (vibed) a program that allowed me to draw different connections between my guests. These connections capture the different constraints we wanted to place on the guests. So if A and B are a couple, ensuring they sit next to each other is a constraint.
Here is a complete list of the constraints that were modeled:
- Sit next to each other.
- Sit opposite each other, or satisfy constraint 1.
- Sit diagonally across from each other, or satisfy constraint 2.
- Not next to, not opposite and not diagonal from each other.
- A should be next to B OR C.
After drawing all these constraints between our guests this is how the wedding looks:

Green is constraint 1, yellow 2, orange 3, dashed red 4 and dashed purple is constraint 5.
We are seating everyone at two large tables, as would have been done in a classic Viking longhouse. In order to model the guests at these tables, each guest will need a variable for their position, their side and their table.
The groom will have an assigned seat, in the middle of the first table facing inward, towards the guests. As guest number 1 he would have the following variables:
table_1: int = 0
pos_1: int = 12
side_1: bool = true
This would seat me in the middle of the table looking out across the room. We then define these variables for all 100 guests. We also need to include a constraint that all the table_{n} variables must be 0 or 1, since there are only two tables. Additionally, the pos_{n} variables have to be in the range [0, 25), since there are only 25 seats on each side of the tables.
Constraints
Now we model each constraint listed above. The first constraint states that guest A and B must be next to each other. If we take guests 11 and 12 this would look like the following:
(assert (= table_11 table_12))
(assert (= side_11 side_12))
(assert (or (= pos_11 (+ pos_12 1)) (= pos_12 (+ pos_11 1))))
I.e. Same table, same side but their positions are offset by 1 or -1.
The next constraint allows two guests either to sit next to each other or, as an alternative, to sit opposite each other. To model this we name the above constraint same_side_const, define a new constraint named opposite_const, and then require that either of them be satisfied: (assert (or same_side_const opposite_const)).
Here's the definition of opposite_const:
(assert (= table_11 table_12))
(assert (distinct side_11 side_12))
(assert (= pos_11 pos_12))
I.e. Same table, different sides but same position.
For the third constraint we merge constraints 1 and 2 together: B needs to be on a different side to A, and the positions need to be offset by 1 or -1.
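The relations can be summarised as plain predicates. Here is a Python sketch of the same logic, assuming each guest's assignment is a (table, side, pos) tuple; the function names are my own, not from the real script:

```python
def next_to(a, b):
    """Constraint 1: same table, same side, positions offset by 1."""
    (ta, sa, pa), (tb, sb, pb) = a, b
    return ta == tb and sa == sb and abs(pa - pb) == 1

def opposite(a, b):
    """Same table, different sides, same position."""
    (ta, sa, pa), (tb, sb, pb) = a, b
    return ta == tb and sa != sb and pa == pb

def diagonal(a, b):
    """Same table, different sides, positions offset by 1."""
    (ta, sa, pa), (tb, sb, pb) = a, b
    return ta == tb and sa != sb and abs(pa - pb) == 1

def constraint_2(a, b):
    """Sit opposite each other, or satisfy constraint 1."""
    return opposite(a, b) or next_to(a, b)

def constraint_3(a, b):
    """Sit diagonally across, or satisfy constraint 2."""
    return diagonal(a, b) or constraint_2(a, b)
```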
Using the tool above to define the graph constraints, I could export them and pass them into a script that generates the SMT file for z3. z3 then provided a solution that I could import back into the same tool to see a visual representation of how all the guests would be seated. You can see that below:

As you'll notice from the colour of each line, all the constraints are satisfied.2
Results
So after 5 hours was this seating chart useful? Not really...
The guests on the edge of each cluster weren't really people we wanted to sit together. However, the arrangement within each cluster was great.
Was this fun? Totally!
I'm not going to do this again, because I'm not going to be married again, but in the event that I have to create a seating chart for 100 people in the future, there are certain things that might produce a better outcome.
Perhaps it would be better to rely on more flexible constraints. For example, instead of saying A and B need to be in close vicinity, I would say A should sit next to at least one of B, C or D, giving more flexibility to who A can be sat next to.
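That flexible version is just a wider disjunction over the next-to relation. Sketched in Python, with guest assignments again as (table, side, pos) tuples (my own illustration, not code from the project):

```python
def next_to(a, b):
    """Same table, same side, positions offset by 1."""
    (ta, sa, pa), (tb, sb, pb) = a, b
    return ta == tb and sa == sb and abs(pa - pb) == 1

def next_to_any(a, friends):
    """The relaxed constraint: A sits next to at least one of `friends`."""
    return any(next_to(a, f) for f in friends)
```

In SMT-LIB terms it is a single or over the individual next-to formulae, in the same way the earlier (assert (or same_side_const opposite_const)) combined two alternatives.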
After presenting this analysis to my future wife she showed me how she had laid out the tables. You can see this below.

Since the constraints are already linking the guests, I was able to spot some improvements to the chart and get some value from this effort.
There was another improvement to the modelling I could have made when looking at the final chart.
If we wanted A to sit next to C, and A was in a couple with B, I had not considered that it was probably fine for B to sit next to C instead of A. If C would get on with A, there's a chance they would also get on with A's partner B. So I could have relaxed this constraint and had C sit next to either A or B.
Creating a seating chart is not a recurring problem of mine, so sadly there's no need for me to refine this solution further.
Guests forgetting to tell me that they're coming has also caused me to pull some hair out, as happens with most maths when it attempts to model the real world.