So the US has decided that states won’t be allowed to regulate artificial intelligence for the next ten years. If only this were a surprise. In the Trump government, the tech billionaires get whatever they want, and mostly, they want to be left alone to do whatever they want. In one interview, I heard an AI ‘expert’ claim that regulation should be different for AI in medicine because of the potential for good; he (of course he) argued it is better to deliver results now than in ten or twenty years. I would agree that delivering results now is better, but only if those results are good. We have regulations precisely because there is no guarantee that they will be.
AI is another “once in a generation” technology to land on millennials. If there’s one thing my generation understands, it’s that a “once in a generation” technology now arrives every five to ten years. Each one comes with a big, shiny promise to disrupt, transform, innovate, and improve efficiency. And, like all big, shiny promises, they attract big, shiny opportunists. People who are faster at rebranding than thinking. People who see systems as things to be exploited, not stewarded. People who have fully bought into individualism, consumerism, and capitalism, and are out to win the game regardless of their impact on individuals and broader society.
Sound familiar?
Agile was once the new shiny.
When agile started to enter the mainstream, it was exciting. It was a better way of working that focused on the team and the customer. People wanted to adopt it, but they were unsure of what to do because, let’s face it, most people don’t want to take a philosophical idea and work out how to align with it in every decision they make at work.
Rightly or wrongly, coaches seemed like a great way to help organisations overcome the alignment problem. Instead of having everyone ponder deep philosophical questions about the nature of work, we’d just hire one in ten people to do so. That left a new problem: recognising who had actually learned and thought about this stuff so we could hire them. Certifications were sold as the solution.
Certifications are great; I have plenty, and they have helped get my CV past the computers, recruiters, and HR when applying for work. Having collected so many certificates, I also understand how little they have to do with the philosophy of working in alignment with some bold and incomplete ideas. Every course taught me processes that, done well, would help align ways of working with those bold ideas, but the trainers either struggled to anchor the two together or just didn’t bother to try.
So the people who took those courses to jump into some of the highest-paying, least technical jobs in the tech sector left believing they were good enough to do the job, and so did the organisations hiring them. They ran “transformations” that rebranded waterfall as scrum and handed delivery leads a new vocabulary without any new thinking. Agile became a buzzword, then a checkbox, and then, for many, a disappointment.
I’ve seen it up close, both as a developer and as a scrum master/coach: the harm done when people with little understanding of a philosophy are given the power to enact it anyway. The real transformations never got a chance because the early ones went so badly. Countless teams now flinch when they hear the word “sprint.”
And now we might be doing it again, this time with a tool whose impact could be far greater: AI.
The people who will be most affected by AI, and by the decisions AI makes, are not the ones deciding how, where, and why we use it. The working and middle classes aren’t getting a seat at the table. We are at least being told there’s a table this time, so that’s something, I guess…
In the last few months, we have seen a person whose various companies are under investigation by many branches of the government handed astonishing powers to dismantle those very branches. Musk has cost the American taxpayer more than he’s “saved” with DOGE, but the savings were never really the point. His team has stolen vast amounts of the American people’s personal data, disrupted the normal operations of agencies that exist to keep people safe, and lied about it the whole way. Musk was also a co-founder of OpenAI; although he played only a minor part in the organisation’s early days, it illustrates who holds influence over this technology.
I know most of my readership is European, so you’ll have to forgive me for looking to the States so much in this issue. However, I think it’s worth noting what’s happening there, not only because the British (and the rest of the West) follow America’s lead, but also because arbitrary geographical boundaries will not contain this tool’s impact.
In Reid Hoffman’s latest book, Superagency (basically propaganda), he says:
[While we’re] not unconditionally opposed to government regulation, [we] believe that the fastest and most effective way to develop safer, more equitable, and more useful AI tools is to make them accessible to a diverse range of users with different values and intentions.
We will have to hope this is true, given the American government’s declaration that it has no intention of regulating. I’m sure history has never repeated itself…
If we don’t have guardrails, we have exploitation. If we don’t have understanding, we have chaos. If we don’t work intentionally, we get whatever the market feels like giving us. And that, historically, doesn’t turn out well.
As agile coaches, we’ve lived through charlatans affecting our industry before. We’ve seen what happens when a powerful idea is handed to people who want results without responsibility. We’ve coached teams back from the brink of agile theatre. We’ve sat with leaders and helped them clean up the mess left by well-meaning but misguided consultants.
So maybe we’re better placed than most to notice the signs, ask the awkward questions, and model what thoughtful, ethical, intentional change actually looks like.
Frameworks don’t protect people; people protect people.
We can’t regulate AI ourselves (though we should campaign for it). But we can influence the way it’s introduced in our organisations. We can talk about data ethics. We can advocate for design processes that include the people who’ll be most affected. We can keep insisting that values matter more than velocity. And we can draw a very clear line between innovation and exploitation.
I’m not worried about AI itself; in fact, I’m quite excited about it and can see some interesting futures ahead of us. I’m worried about the people in charge of the technology.