Thursday, April 20, 2023

Software Development Ethics

Computers can do great things for people, but like any other tool, they can also be used for evil.

So we, as software developers, need an ethical code.

Basically, it is a commitment that we will not build evil things for evil people. That is, when we sit down to build some software, we always do so with ethical standards. If someone asks us to violate those standards, we refuse or we walk away.

For this to work, the ethics have to be simple and clear.

First, no matter how many instructions an automated process executes, the code must always be triggered by humans. Not just someone, or anyone, but the actual people involved.

So in-house batch jobs are started and stopped by operations personnel. Someone also has to explicitly schedule them to run at a given frequency. Those people are the ones who ‘operate’ the software and provide it as a service for the users. They are on the hook for it, so they need to be able to control it.

Commercial web app sign-ups can’t be automatic either. You can’t add someone to an email list, for example, without their explicit consent. And they must always have a say, and a way to get themselves off the list. You can’t track where people go after they leave your site.
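A minimal sketch of what consent-driven list management might look like in code (all class and method names here are hypothetical, not from any particular library): nobody is added without an explicit opt-in, and anyone can remove themselves at any time.

```python
class MailingList:
    """Toy model of an email list that respects explicit consent."""

    def __init__(self):
        self._members = set()

    def subscribe(self, email: str, consented: bool) -> bool:
        """Add a user only if they explicitly opted in; refuse silent sign-ups."""
        if not consented:
            return False
        self._members.add(email)
        return True

    def unsubscribe(self, email: str) -> None:
        """Users must always have a way off the list."""
        self._members.discard(email)

    def is_member(self, email: str) -> bool:
        return email in self._members
```

The point of the sketch is that consent is a required input to the sign-up path, not an afterthought bolted on later.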

That even applies to issues like single sign-on. You log into a workstation, and that workstation passes on the credentials. That is fine, as you triggered it by logging in. But you can’t just quietly log them in some other way with different credentials. That would not be okay.

Second, a computer should never lie to or trick the users. People need to be able to trust the machines, which means they need to be able to trust any software running on the machines, all of it. For web sign-ups, you can’t trick people into signing up. You can’t hold them hostage, and you can’t actively coerce or manipulate them.

For big websites, you can’t throw up an unreadable EULA and then use it as an excuse to sell the data out from under the users. If you want to monetize by reselling the things people type in, you need to explicitly tell them, not try to hide it. You need to make sure that they understand.

A big part of this is that the data presented to the users must be correct, at all times. So, it must be named correctly, modeled correctly, stored correctly, cleaned correctly, and presented correctly. Stale stuff in a cache would be an ethics violation. Badly modeled data that is broken is one as well.
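One concrete version of the “stale cache” point can be sketched as a cache that refuses to serve data past its freshness window (the class name and the default TTL are illustrative assumptions, not a real library API):

```python
import time

class FreshCache:
    """Toy cache that evicts stale entries rather than presenting them."""

    def __init__(self, ttl_seconds: float = 60.0):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        """Return the value only if it is still fresh; otherwise treat it as missing."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self._ttl:
            del self._store[key]  # evict rather than show stale data
            return None
        return value
```

The design choice is that staleness is handled at read time: a caller can never receive an expired value, even if nothing has swept the cache in the meantime.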

This applies to operations personnel and other developers. Writing a script called add_user that actually deletes all of the users is unethical. Reusing an old field in the database for some other type of data is unethical.

The third part is to not enable chaos, strife, and discord. This is the hardest of the three.

If you write a library that makes it easy for people to violate rules, then you are culpable. You did something that enabled them.

The exception is when your library stands in opposition to some other oppression. But for that to be the case, you have to know it, have researched it, and have found the best ways to oppose that oppression without enabling wanton evil. Your library isn’t an accident; it is a personal statement, and your mitigations are enough to restrict any obviously bad usage. You fully understand the costs of your opposition. If you don’t, you can’t ethically release the library.

If you write something innocent and later find out that it is being used for evil, you are now obligated to do something about that. You can’t ignore it or say you didn't know. You are now stuck and have to do your best to make that situation better.

We see a similar problem with social networks. We value freedom of speech, but we also must not make it easy for lies and hate speech to propagate. When you build social networks, of any kind, both of these requirements must be in your design, and both shape the solution. You can’t pick one and ignore the other.

In that sense, for anything right on the ‘line’ in ethics, you have to know that it is on the line and have to actively try everything to ensure that it doesn’t cross over. If you don’t bother, you are enabling evil, so it is unethical. If you try, there may still be some unhappy occurrences, but as you find them, you must do something about them.

Getting too close to the line is a huge pain, but it was your choice, so now you have to do everything you can to ensure that it stays tilted toward the ethical side. If you don’t like it, walk away.

The physical analogy is power tools. They are very useful but can be dangerous, so people spend a lot of time adding safeties, guardrails, and other protections. You can build the tool, but you cannot ignore its usage. If you are aware of a problem, you have to act on that awareness. If you ignore it or exploit it, you are being unethical.

If you are ethical, you can no longer use the excuse that ‘they’ told you to do it. That is not acceptable. If ‘they’ ask for something bad, you refuse or walk away.

In the practical sense, ‘they’ can order you to do the work, and it may take some time before you can walk away, so while you are stuck you restrict yourself to the bare minimum. For example, you tell them it is unethical, but they force you to write some dubious code anyway, and since you need to eat and the labor market sucks right now, you are trapped. You do the work, as minimally as you can, but you can’t ethically damage or sabotage it. You certainly don’t have to do anything to help it get released, either. And you actively try to find another job in the meantime.

You’ve let them know, and you have not gone above or beyond in your role, and you are in the process of walking away. That is the best that you can do in this type of situation. It is being as ethical as you can get, without being self-destructive.

It is very hard to be an ethical software developer these days. That is why we see so many unethical circumstances. But as software eats the world, we have to live in the mess we have helped create, so we should do everything we can to make it as good as possible.


  1. really well written

  2. My sprinkler system runs software when it rains. Does this violate "... no matter how many instructions are executed for any type of automation, the code must always be triggered by humans. Not someone, or anyone, but the actual people involved."?

  3. "If you write a library that makes it easy for people to violate rules, then you are culpable"
    So a library like youtube-dl, bittorrent and similar software would not be ethical according to this proposal?

  4. How does this apply to engineers whose job it is to design weapons? Is it ethical to design systems which are explicitly meant to kill people? Does it matter, from an ethical perspective, who you're working for -- the US military, the police, Raytheon, Israel, China, North Korea, some rebel group? Does it matter if you're a citizen of the nation you're designing weapons for -- meaning are the ethics different if you're an American designing weapons for the US vs. for China?

    1. Yes. It is unethical in the extreme, and you are a murderer. Walk away and get another job...


Thanks for the Feedback!