An Ethical Decision-Making Framework for Software Engineers from Nicholas Thompson, CEO of The Atlantic
Nicholas Thompson is CEO of The Atlantic, former editor-in-chief at Wired and the American record-holder in the 50K. We were thrilled to host Nicholas for the latest in our online event series Lohika Re:think.
Nicholas presented an ethical decision-making framework for software engineers. In this post, I’ll summarize his framework.
Moore’s Law and the power of software
To set the stage, Nicholas highlighted the power of software. According to Nicholas, “Some of the most consequential decisions are being made by software engineers and product managers. The decisions being made today inside of tech companies will affect the world my children live in and the world that their children live in.”
The power of software is magnified by advances in hardware. Nicholas cited Moore’s Law, which in its popular form holds that processing power roughly doubles every year and a half. To illustrate the magnitude of that compounding, Nicholas noted that in 1961, the cost of a gigaflops of computing power was twice the GDP (gross domestic product).
Today, the cost to create a gigaflop of computing is a nickel.
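The doubling arithmetic behind that drop can be sketched in a few lines of Python. The starting cost and time span below are round-number assumptions for illustration, not figures from the talk:

```python
# Back-of-envelope Moore's Law arithmetic. The starting cost and time
# span are illustrative assumptions, not figures from the talk.
def cost_after(start_cost, years, doubling_period=1.5):
    """Cost of a fixed amount of computing after `years`, assuming the
    price halves every `doubling_period` years."""
    doublings = years / doubling_period
    return start_cost / (2 ** doublings)

# If a gigaflops cost on the order of $1 trillion in 1961, then 60 years
# of halving every 18 months (40 doublings) shrinks the price about a
# trillionfold -- to roughly a dollar, the same ballpark as a nickel.
print(f"${cost_after(1e12, 60):.2f}")  # prints $0.91
```

The point is not the exact figure but the compounding: forty doublings is a factor of about 10^12, which is how a national-budget-scale cost becomes pocket change.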
Nicholas stated that in every industry, software is beginning to recreate much of what we do. In Nicholas’ industry of media, artificial intelligence (AI) will very quickly take over the basic forms of journalism and storytelling, massively changing the way reporters work.
Software can make us happy, but also make us unhappy
Nicholas said that software engineers sometimes get it right, but other times get it wrong. He showed a chart from the Center for Humane Technology.
On the left, the chart shows apps that make people happy, along with the average number of minutes the apps are used. The top three are Calm, Google Calendar and Headspace. On the right, apps are listed by how much they make people unhappy.
The scary thing is the average number of minutes people spend on these apps despite the fact that the apps make them unhappy (e.g., 59 minutes on Facebook and 97 minutes on WeChat).
According to Nicholas, we develop these incredible devices (e.g., smartphones), yet as a result of the decisions made and the business models behind them, they make us miserable.
Would things have turned out better if we had had an ethical and technological framework when smartphones were created? That’s the inspiration for Nicholas’ presentation.
Nicholas’ framework is based on six independent ideas for engineers to think about when they’re building software.
Idea One: Change is Good
“Change is good” was one of the mantras of WIRED magazine when it was founded 25 years ago in the San Francisco counterculture. The founders of WIRED generally believed that most things developed in software were net positive. Getting society “wired,” they thought, would let people connect with one another, and that would be good for the world.
Social media platforms are a modern-day parallel. Originally, it was thought that by connecting the world, social media platforms served the common good. But then things got complicated, as social media algorithms have been shown to amplify hate and spread disinformation.
Nicholas referenced the leaked memo from Facebook executive Andrew Bosworth who wrote:
“Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good.”
For Nicholas, “Change is Good” has significant limitations.
Idea Two: First, Do No Harm
When you’re building technology, think about the worst consequences that could result from it. Generally, engineers have good intentions when designing technology. They want to create something that people will find useful.
However, they don’t always consider or realize the negative outcomes that can arise. Nicholas gave the example of induced pluripotent stem cells (iPSCs), which enable scientists to potentially create human sperm and eggs using ordinary cells.
The technology works by taking ordinary cells and turning them into stem cells, the kind of cells in early embryos that can grow into every tissue type in the body. I found details in “Reproduction revolution: how our skin cells might be turned into sperm and eggs” from The Guardian.
While this technology seems groundbreaking, guess what happens if it becomes a reality?
We no longer need men.
Ultimately, Nicholas said that Idea Two is insufficient. You can’t harden systems to protect against all the worst consequences because to create something great, you need to take risks.
Idea Three: Consequentialism & Utilitarianism
We add up the good and the bad that software creates: we look at the outcomes and ask whether the total amount of good exceeds the total amount of bad. (Consequentialism judges actions by their outcomes; utilitarianism, its best-known form, weighs the aggregate good against the aggregate bad.) In announcing her resignation as COO of Meta, Sheryl Sandberg noted that at Facebook, the good outweighed the bad during her time there.
Now let’s consider the scenario of AI taking away someone’s job. How would this be weighed? If the AI removes the grunt work and lets the person focus on more creative tasks, that’s good. But what if the grunt work was the person’s job? Now you’re taking away their livelihood, which would be bad.
According to Nicholas, these decisions have too many unknowns and are too hard to calculate.
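To make the calculus concrete, here is a deliberately crude Python sketch of a utilitarian tally. The outcomes and weights are invented for illustration, and the arbitrariness of those numbers is exactly the difficulty Nicholas is pointing at:

```python
# A crude utilitarian tally: score each outcome, sum, compare to zero.
# The outcomes and weights below are invented for illustration; choosing
# them is the hard (and arguably impossible) part.
outcomes = {
    "frees people from grunt work for creative tasks": +2,
    "connects distant friends and family": +3,
    "eliminates jobs people relied on": -3,
    "amplifies disinformation": -2,
}

net = sum(outcomes.values())
verdict = "net good" if net > 0 else "net bad" if net < 0 else "too close to call"
print(net, verdict)  # prints: 0 too close to call
```

Nudge any single weight and the verdict flips, which is the critique in miniature: the ledger only looks objective once you have already made the contested judgments.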
Idea Four: Fairness
In Idea Four, software engineers ask the question:
“Does my software really help the least-advantaged person in society or will it only help the advantaged?”
Nicholas noted that this is particularly relevant when designing AI algorithms. If you train a criminal sentencing algorithm using a data set that is racist, then you create an algorithm that is racist.
However, Nicholas noted that you don’t want to eliminate all bias. For example, what if a loan algorithm were more likely to give loans to women because they default less often? If the bias runs against a group that’s relatively advantaged, maybe that’s OK, said Nicholas.
Idea Five: Principles & Codes
Instead of thinking about outcomes, Idea Five asks us to think about principles and values. Nicholas said that while outcomes are hard to measure, it’s easy to understand inputs (e.g., principles and values).
Nearly 100 years ago, engineers set out to build a bridge across the Saint Lawrence River in Canada. The bridge collapsed twice during construction, killing many workers. On the third, successful attempt to build the bridge, the engineers made rings said to be forged from the steel of the collapsed bridge.
They wore the rings to remind them of the people who lost their lives. They each took this pledge:
“My time I will not refuse; my thought I will not grudge; my care I will not deny towards the honor, use, stability and perfection of any works to which I may be called to set my hand.”
Bringing this closer to software engineering, Nicholas shared two examples of modern pledges.
Nicholas calls Idea Five imperfect, however. While moral pledges are good and engineers should consider signing them, good intentions can lead to bad outcomes. You also have to look at the consequences of your software, said Nicholas.
Idea Six: The Child Test
Nicholas once had two job offers and had to choose between a career in tech and a career in journalism. He could have built charts with different factors and weights. Instead, in discussing the choice with his wife, he decided that they should settle on a single question to determine the outcome.
The question they arrived at was:
“Will my children, when they fully understand this choice, respect it?”
When building software, engineers could ask:
“Would I let my kids use this?”
Nicholas noted that some engineers at Instagram don’t let their children use the app. Another example is Steve Jobs, who famously limited how much his kids could use devices like the iPad.
Building on the first question, Nicholas shared another question that he said was even better:
“Will my child be proud of me that I built this?”
Nicholas closed his talk by saying that the Child Test is the most effective framework:
“My general sense about this moral framework is that they’re all kinds of interesting ideas, they’re all kinds of incomplete ideas, but ultimately, the best way to answer the question is this one. Will my child be proud that I built this?”