I interviewed a colleague this week about how they’re using AI in their workflow. I’m interested in what’s changing and what that looks like day to day. As most of my readers will know by now, I habitually pick up on throwaway comments that have a larger impact than perhaps the speaker realises. I asked my colleague to explain why they keep using ChatGPT, and they said, “It’s like having another person in the room to talk things through with.”
They went on to describe how a custom GPT they’ve written allows them to ensure that only the sources they want are referenced, that the output is formatted exactly as they’ve prescribed, and that they spend less time reading because they’ve outsourced that reading to the GPT.
When I was a programmer, I loved pair programming. I know many programmers hate the idea of it, but if you can find someone on your wavelength, at a skill level similar to yours, and whose strengths complement your weaknesses, then the work flows freely. If you can achieve flow with another person while working on a complex problem, you’ll create a solution together that neither of you could create on your own, and it feels amazing. I’ve always struggled to concentrate on a problem for more than an hour when working alone, but when pairing, I would have to set timers to make sure I popped out of the hyperfocus.
Listening to my colleague talk about their custom GPT, I realised I was listening to someone describing pairing.
As an agile coach, I’ve spent a decade trying to get teams to collaborate. I’ve succeeded in supporting teams to become cooperative, but collaboration has nearly always been a magical and elusive space that my colleagues haven’t believed in enough to attempt to visit. I love to see minds think together, creating something neither could have built alone, but unfortunately, most teams never get there.
And in that moment, I found a person who had that exact experience, not with their team, not even with another human.
With ChatGPT.
I’m both excited and concerned about this emerging behaviour.
Before computers, tools were a natural extension of the human body. We started using computers before we could build them in a shape that extended our minds. Instead, we were happy to bend our minds to fit the needs of the tool because the trade-off still benefited us greatly. Our ability to build electronics has now become sophisticated enough to create tools tailored to us, rather than the other way around.
I’m excited about the AI future because this has the potential to become the future I grew up watching on TV: one where people can speak to their computer system in natural language, perhaps correct it a few times for nuance, and then achieve exactly what they expected. A future where the machines understand us, so we don’t have to downplay our humanity to achieve our goals. I’ve spent my whole career trying to make technology more accessible to more people, and I feel we’re closer to achieving technology for all than I ever imagined we would be in my lifetime.
But I’m also concerned, because this vision of a perfectly augmented human, highly productive and deeply self-sufficient, leans into an ideal that I don’t believe in: that independence is the goal. The neoliberal dream of total self-reliance lives on; it just has a better UI now.
We can already see what’s coming. Team sizes will shrink as one person oversees AIs that can do the work of five, ten, maybe more. Just as manufacturing plants needed fewer manual labourers because automation took over the majority of the work and needed minimal human oversight, so will knowledge teams dwindle. It will be even easier this time because knowledge workers aren’t typically unionised.
I find a twisted little nugget of irony in the middle of this transformation. The augmented team member will not only be the most effective but will also make the rest of their team redundant.
If collaboration between humans remains a stated value in our organisations, if we still believe in the cognitive diversity and creative tension of many minds working together, then we can’t pretend that AI adoption is neutral. Tools change power dynamics.
This kind of pairing relationship, half computer, half human, is seductive. No interruptions, no emotional labour, no meeting fatigue, no conflicting opinions, no friction. But it’s also lonely and deeply fragile.
Yet my recent experiences with GPT have taught me that there is still a need to collaborate with humans. There have been moments when I’ve worked alone with a GPT and completely failed at the task at hand, finding myself with no one around to ask for help or bounce ideas off.
So, how do we keep collaboration between humans alive with the competition of such an alluring partner for both the worker and the employer?
It might mean reframing collaboration as a source of growth and resilience, so humans don’t end up going crazy in their cubicle with no other cubicles nearby. We’ll probably need to continue to teach the value of creative tension and encourage hiring for humility and curiosity, rather than fluency in prompt engineering. Of course, many of us agilists have already tried all of this, often with frustratingly little success in the golden years of agile adoption, when everyone started hiring for the endless line of certifications rather than a deep understanding of the work philosophies.
When I pair with ChatGPT, I hope I’m not losing the beauty of my real, messy, independent thoughts. I’m concerned that the steps between a weird idea and the first draft I outsource to a machine might take away my ability to understand what I believe. I hope I’m not being anchored to a thought that wasn’t mine, as though it was.
I hope that we can articulate why we want more human minds over computer minds. We’ve spent decades, centuries, trying to increase the diversity in our organisations because we understand the benefits delivered by a range of lived experiences. We know the training data has instilled into LLMs the biases we have exhibited as humans, but hopefully we’ve also instilled an understanding that we want to be and do better.
After writing this, I saw this video on YouTube, in which the commentator talks about using AI to have productive conversations.