#52: The Programmer's Oath - I will not produce harmful code

In this episode I continue to look at professionalism in software development.

I take the first oath from the Programmer's Oath by Uncle Bob Martin, introduced in the last episode, to explore further:

I Promise that, to the best of my ability and judgement: I will not produce harmful code.


Or listen at:

Published: Wed, 19 Aug 2020 15:46:12 GMT

Links

The Programmer's Oath

Transcript

[00:00:34] Hello and welcome.

[00:00:37] In this episode, I want to carry on the conversation about professionalism within software development. I will continue to look at it through the lens of the Programmer's Oath by "Uncle" Bob Martin that I introduced in the last episode.

[00:00:51] In this episode, I want to look at that first oath:

[00:00:55] "I promise that to the best of my ability and judgment: I will not produce harmful code. "

[00:01:03] Now, it's quite obvious that Bob has taken that "harmful code" wording from the Hippocratic Oath, an oath taken by medical professionals to not cause harm.

[00:01:16] But when we talk about harm, who are we talking about?

[00:01:20] Within software development, there are a number of people and organizations that we could possibly cause harm to. We have the end user of the software. We have the team that we work with, and we have the organization that is actually paying for the software in the first place.

[00:01:40] So if we talk about the end user.

[00:01:43] Take, for example, the VW emissions scandal of 2015. There was software in the engine system that would make the engine behave differently in a laboratory testing situation than in real life. This appears to have been done explicitly to get around certain emissions testing. Now, that has caused harm to the end customer: the vehicles they bought have actually dropped in value because of that scandal. And the wider community potentially suffers because the emissions are higher than one would expect.
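
To make that mechanism concrete, here is a minimal, entirely hypothetical sketch of what "defeat device" style logic looks like in principle. The function names and the test-detection heuristic are my own illustration, not VW's actual code:

```python
# Hypothetical sketch of "defeat device" logic: harmful code that changes
# behaviour when it believes it is being observed. Names and heuristics
# are illustrative only.

def looks_like_dyno_test(wheel_speed_kmh: float, steering_angle_deg: float) -> bool:
    # On a dynamometer the wheels turn but the steering wheel barely moves:
    # one (hypothetical) signature of a laboratory test environment.
    return wheel_speed_kmh > 0 and abs(steering_angle_deg) < 0.5

def select_emissions_mode(wheel_speed_kmh: float, steering_angle_deg: float) -> str:
    if looks_like_dyno_test(wheel_speed_kmh, steering_angle_deg):
        return "full_emissions_control"   # clean, compliant behaviour for the lab
    return "performance_mode"             # dirtier real-world behaviour

print(select_emissions_mode(50.0, 0.0))   # lab conditions -> compliant
print(select_emissions_mode(50.0, 12.0))  # real driving -> non-compliant
```

The point of the sketch is that the harm here is not a bug: the code does exactly what it was written to do.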

[00:02:26] When I first started in development, a senior developer talked to me about a report that he'd been asked to produce. On the face of it, it seemed like a relatively simple report: he'd been asked to identify customers who were below average spend and who had never contacted customer services. This may seem a fairly innocuous request, but what he later found out was that the company intended to use it to charge those customers one pound per month for a previously free newsletter. The belief being that they weren't very high-paying customers and had no propensity to contact customer services, thus the company could potentially get away with it. That has to be questionable from an ethics point of view.
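
As a rough illustration of just how innocuous that report looks in code (the field names and figures are my own assumptions, not the actual system):

```python
# Hypothetical version of the "innocuous" report: nothing in this code
# hints at how the results will later be used.
from statistics import mean

customers = [
    {"id": 1, "monthly_spend": 12.50, "service_contacts": 0},
    {"id": 2, "monthly_spend": 55.00, "service_contacts": 3},
    {"id": 3, "monthly_spend": 9.99,  "service_contacts": 0},
]

average_spend = mean(c["monthly_spend"] for c in customers)

# Below-average spenders who have never contacted customer services.
report = [c for c in customers
          if c["monthly_spend"] < average_spend and c["service_contacts"] == 0]

print([c["id"] for c in report])  # [1, 3]
```

The ethics live entirely in the intent behind the query, which is exactly why the developer writing it needs to ask what it's for.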

[00:03:17] There have been a variety of stories in the news recently about artificial intelligence and mistakes being made in how it's trained and how it's set up in the first place.

[00:03:28] Most artificial intelligence is built based on examples. So if you feed certain examples in, it will by nature have a certain predisposition based on that data. There have been instances where, because the data put in was such a narrow subset, the system itself effectively struggles to work with certain types of people.

[00:03:54] Most often this manifests as racial bias. If the team that built the system are all of one specific race, and maybe even one gender, then when it's used out in the wider community with a broader population, it doesn't recognize people. Skin tone is one of the obvious ones: many companies have fallen foul of not building their artificial intelligence systems with enough variation.
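
A minimal sketch of the mechanism: a model fitted only to a narrow slice of a population will confidently mishandle anything outside that slice. The numbers below are invented purely to show the effect:

```python
# Toy illustration of narrow training data: a 1-nearest-neighbour "model"
# fitted to one narrow band of a feature fails on everything outside it.

training_data = [(0.10, "face"), (0.15, "face"), (0.20, "face")]  # narrow subset

def classify(feature: float) -> str:
    # Predict the label of the nearest training example.
    nearest = min(training_data, key=lambda ex: abs(ex[0] - feature))
    distance = abs(nearest[0] - feature)
    # Far from anything it has ever seen, the model effectively fails.
    return nearest[1] if distance < 0.1 else "not recognised"

print(classify(0.18))  # "face" -- inside the training distribution
print(classify(0.80))  # "not recognised" -- a person the system never saw
```

No line of that code is malicious; the harm comes entirely from what was, and wasn't, in the training set.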

[00:04:17] There have also been situations where police have used historic data, and because one area in particular had historic crimes, the police are constantly sent back to that same area. It becomes a self-fulfilling prophecy: they go looking for crime, they find it, and that finding reinforces the learning that keeps sending them back. Now, I can't think that any of those artificial intelligence failures were intended. I would very much expect they were accidental, but again, it comes down to thinking through what harm you could be causing to the end user.
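
That feedback loop is easy to demonstrate in a few lines of simulation. The figures are invented; the dynamic is the point:

```python
# Toy simulation of the predictive-policing feedback loop: patrols go where
# past crimes were recorded, more crime is recorded where patrols go, and
# the system's own output becomes its future training data.

recorded_crimes = {"area_a": 10, "area_b": 10}  # identical real crime rates

for year in range(5):
    # Send the patrol to the area with the most *recorded* crime.
    patrolled = max(recorded_crimes, key=recorded_crimes.get)
    # More police presence means more crimes observed and recorded there.
    recorded_crimes[patrolled] += 5
    # The unpatrolled area only gets crimes reported by the public.
    recorded_crimes["area_a" if patrolled == "area_b" else "area_b"] += 1

print(recorded_crimes)  # the area patrolled first ends up looking far "worse"
```

Two areas that start identical diverge purely because the system keeps acting on its own earlier outputs.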

[00:04:57] Of course, there is a spectrum here. At one end, you have accidentally caused harm; at the other end is the question of whether you feel it would be appropriate for you as a developer, or for your organization, to be involved in some form of military system, a system that could potentially be used to harm people on a massive scale.

[00:05:21] The next place I want to talk about harm is the team itself.

[00:05:27] When you work as part of a team, you need to be thinking about the actions you're performing and how they will actually affect the rest of the team. If, as a developer, you choose to do something that maybe works for you but actually causes everyone else more work, you're causing them harm.

[00:05:46] Anything where you make it harder for them to work, without an explicit understanding and everyone consciously agreeing that it's the right decision, is potentially putting harm onto them.

[00:05:57] And these might be accidental decisions, or they might be purely selfish ones. A developer might act as an individual and decide they want to change the system to reflect their own beliefs of how it should work. They might want to add a new technology.

[00:06:14] They might want to explore the latest technology so they can put it on their CV. They may, by all appearances, look as if they are working really hard, putting in weekends and evenings to implement this new technology for the betterment of the system. But have they then made it more complicated for the rest of the team to work on the system? Have they made it more harmful: more expensive, more difficult for those people to continue to do their work?

[00:06:45] It may seem logical to that individual to take this new, shiny, exciting thing and go, "oh, this is going to be brilliant, the team are going to love this". But sometimes you just need to step back and have that conversation with the wider team.

[00:07:02] The same can be said for simply doing things quickly and easily. If, as an individual developer, you take a shortcut and make things a little bit harder to maintain, you're causing harm to the team. You make it more difficult to maintain; you make it more difficult to work with.

[00:07:24] And that quickly spills over into the employer, the organization that has actually paid you to produce this in the first place. If an individual developer is doing things that make it difficult to maintain the system over the longer term, then they're causing harm to their employer, that organization, because it costs them more to maintain it. Maybe they're introducing changes in such a way that bugs will occur later down the line, or they're implementing changes with known bugs that they choose to ignore. Again, what they're doing is causing harm: potentially for the team, potentially for the customer, but definitely for the organization as a whole.

[00:08:09] This very much brings in Technical Debt, a concept I've introduced in the past: an analogy to financial debt, where shortcuts are taken for short-term wins, but a long-term debt has to be paid off to cover that short-term win. The equivalent of taking out a loan. And unfortunately, if an individual is not careful, if they decide to make changes without the full understanding of the team and the organization, then they can be incurring technical debt, which of course causes that organization harm in the longer term.
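
The loan analogy can even be put into rough numbers. These figures are invented to show the shape of the trade-off, not measured from any real project:

```python
# Invented figures: a shortcut saves 5 days now but adds friction to every
# subsequent change, like interest accruing on a loan.

days_saved_now = 5
extra_days_per_change = 0.5   # ongoing "interest" from the messier code
changes_per_year = 40

interest_per_year = extra_days_per_change * changes_per_year  # 20 days/year
break_even_years = days_saved_now / interest_per_year         # 0.25 years

print(f"Shortcut stops paying for itself after {break_even_years:.2f} years")
# After that point, every change costs the organization more than the
# shortcut ever saved.
```

With these made-up numbers the shortcut is a net loss within three months, which is exactly how unmanaged technical debt harms the organization paying for the system.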

[00:08:50] Some of these things will come down to personal ethics. Certainly what type of systems you're prepared to work on and what type of organizations you're prepared to work with. And I personally don't think it would be correct to tell people exactly what is or isn't ethically correct. I'm sure we could debate certain situations, and most people would fall on one side of the line or the other. But there will always be that greyness.

[00:09:17] There will always be certain projects, certain companies, certain behaviors that aren't clear cut. So it does depend on an individual's own sense of ethics.

[00:09:31] Hopefully, I've given you an idea of what "I will not produce harmful code" means and how it can be interpreted by a software developer; how it can feed into their thinking about the level of ethics they want to bring to their professional relationship with their team, the end customer and the organization, and indeed their professionalism, full stop.

[00:09:54] In the next episode, I want to move on to the second item on the Programmer's Oath.

[00:10:00] "I promised that to the best of my ability and judgment, the code that I produce will always be my best work. I will not knowingly allow code that is defective, even behavior or structure to accumulate."

[00:10:16] Thank you for listening and I'll speak to you again next week.