I think it’s safe to say I’ll be talking about AI for the next few weeks. Although I’m doing plenty of training around agile and agile-supporting practices, what is capturing my attention is how to coach and train people to work with machines as part of their team. I want to promote a way of working that improves our professional work and, from there, our society.
I think AI has great potential to be a force for good.
Living through the Industrial Revolution must have been challenging and exciting. As work moved into cities and we started working alongside (and inside) dangerous machines, people had to bend to fit the new tools, techniques, and processes. Ultimately, though, the Industrial Revolution led to a utopia. A few hundred years later, we can see that humanity has greater choice, more freedoms, and the ability to develop bigger and better technologies.
AI could be as disruptive an arrival in the workplace as the steam engine was. I think it most likely will be, but I also don’t want anyone quoting this back to me in a decade, so I’ll slightly hedge my bet! Given this, I feel it is our responsibility to consider what good looks like. There are plenty of people out there talking about how bad it will be, and I worry that as these models are trained on more data, if they only learn of our dystopian futures, we could accidentally think one of them into existence. So, I’m going to attempt to describe what a utopia looks like instead. Or at least a future I’d happily live in.
It seems inevitable that humans and various kinds of AI will work in teams as a normal practice in the not-too-distant future. Agile coaches and team leaders are in a position to make some interesting and impactful changes: we work in knowledge work, we support teams to be more effective and cooperative, and we can influence how AI is integrated into those teams.
I want to consider some ground rules for engaging with AI at work. In agile, we start with base values and principles (even if they get largely ignored) because they convey a spirit we want to work towards. This means that each individual working towards the goal of agility, on encountering a novel situation, can decide which of the options before them is best, based on a philosophical underpinning of how and what they should do.
I am currently considering these two ideas:
1. Augmentation, not replacement
AI should improve human work rather than make human effort redundant. It will be tricky to determine what is beneficial and what is not when swapping humans for machines, and we will have to correct where we’ve overshot. Erring on the side of using AI to augment human abilities should help ensure we’re not replacing the roles that should have empathy at their core.
An AI cannot take responsibility or accountability for its output. That might change one day, but I’m definitely not thinking about a future where machines are androids. Therefore, AI needs to be augmented by us as much as we might be augmented by AI. We need to verify everything that comes from an AI and ensure that mistakes that would impact individuals, communities, and societies are avoided. This means we still need humans to be experts. Experts can interpret the output of an AI, determine whether it is sensible, and take accountability for the use of that output. The virtuous use of AI sits at a point between sloth and pride, where one outsources as much of one’s labour to a machine as one can while still delivering a piece of work one is proud of.
2. Empathy is irremovable
Anywhere a human is vulnerable, AI should play a minor supporting role, not be the star of the show. I don’t view the biggest risk as machines becoming sentient; I fear we’ll forget how to be. We already see the impact social media and the wider Internet have had on a large portion of a generation raised in front of a screen instead of being taught how to be civilised, well-mannered and integrated members of society. (I’m pretty sure someone said this about millennial children in the ’90s and the number of TV channels we had access to.)
I recently came across a compilation video of young people on TikTok talking about how they had to stop using AI to generate their flirty conversations, text-message arguments, and other textual exchanges with (potential) lovers. They all noticed that their ability to communicate and reason about social situations was disappearing quickly, and that AI was making it harder for them to form the very connections they had turned to it to help create. (As always, the first thing we use a new technology for is sex.)
When humans make decisions, we can weigh their impact with a sense of what it would be like to be on the other end of them.
AI cannot imagine, “There, but for the grace of God, go I.”
There are places where AI needs more oversight and greater restriction: the government, the courts, and healthcare, to name a few. This is true for everything that interacts with these human systems, though. We expect greater regulation for these systems because their impact is so significant that we need to be as certain as we can be that we are making the best decisions possible. As such, introducing a machine that makes decisions should cause us to pause and create rigorous tests.
Currently, AI companies are lobbying governments to choose innovation over regulation. I would encourage everyone to write to their democratic representatives to say that we want regulation prioritised to keep us safe. Well, boohoo if a few millionaires and billionaires can’t suck more money out of the economy and make the rest of us poorer faster.
When I use AI, I ask: Is this helping me be more human? If it isn’t, and it’s replacing my own thoughts with predicted ones or a human connection with generic content, I know it’s a problem.
I’m still learning and thinking about AI, and considering its use from a more philosophical and anthropological perspective. I’d love to hear your thoughts on the proto-ideas above.