This week, during a regular team meeting with our stakeholders, one of our senior managers casually dropped a line that caught my attention: we need to develop ways of working where humans and custom GPTs collaborate in parallel.
I don’t think anyone else in the room really clocked the weight of what was said.
But I did.
I use GenAI every day in my work to draft, reflect, summarise, and research. You name it, I’ve probably tried it. I’m a techie at heart, and I’m excited by the promise I can see. But I’m also a psychologist and sociologist, so I also see the threat that unregulated algorithms pose in the hands of the average human.
I want to run with this project. In fact, I think I’m the best one to run with it. It’s not the shiny, experimental part that draws me in; it’s the deep, systemic questions it raises that make me want to be involved, if only to stop us from doing something short-sighted and irreversible, as we have done several times over the past couple of decades.
Let me explain myself better.
The organisation I’m currently working with advises other companies on using GenAI. It will be very easy for the narrative to become: “Twice the productivity with half the people.”
That is how we end up doing the same thing we’ve done before: something meant to support humans eliminates humanity.
We’ve all seen organisations adopt agile to “move faster” while gutting the teams that were supposed to be empowered. We’ve seen values like “individuals and interactions over processes and tools” conveniently forgotten because a process is easier to implement than a person is to develop.
Look at how social media has impacted our social and political lives. Young people struggle to socialise and blame other groups, rather than recognising that they haven’t spent enough time with real people to develop social skills. Politics has grown polarised as we interact mostly online, and mostly with people who (the algorithm tells us) share our values and beliefs.
GenAI will be no different, unless we choose differently and work hard to shape how people think of and interact with it.
As coaches and leaders, we are in a position to shape the narrative, especially with our peers in middle and senior management.
When we introduce GenAI to teams, are we telling them:
“This will help you focus on what humans do best.”

or

“This will let us do more with fewer humans.”
Same tool. Totally different futures.
This isn’t about being anti-AI. Quite the opposite. I think GenAI can be a powerful tool. But like any tool, it needs intention behind it. Clear values. Guardrails. A vision of success that includes humans at the centre of the system, not at the edges.
I’ll be working with department heads over the coming months, supporting their teams as they experiment with GPTs and rethink workflows. And I’m going to be loud about this one point:
This is about increasing capacity, not cutting headcount.
This is about better work, not fewer workers.
I’ve started collecting stories and use cases demonstrating how AI can reduce cognitive load, not organisational size. And I’ll be repeating one line a lot in meetings:
“Let’s use GPTs to free up our people, not to replace them.”
If GenAI is arriving at your organisation (and let’s face it, it is), then this question will find you soon:
What story are you telling about AI at work?
And more importantly:
Is it one you’d be proud to hear repeated by someone whose job is on the line?
Let’s be intentional. Let’s be principled. Let’s stop repeating history with digital technologies.
An important post, Georgina. Love the with, with, with subhead.
"Let’s use GPTs to free up our people, not to replace them." Yes. That.
I remember conversations about test automation back in the 90s and 00s. Testers were afraid they would be replaced by scripts, and rightly so, as management thought the same, hoped for it even. Automation == fewer humans, they imagined. Fewer people to pay, fewer people to cause problems. I remember using almost that exact same line: “Let’s use automation to free up our testers, not to replace them.” The wise ones understood. Most did not.