This week, I used DeepResearch in ChatGPT for the first time. Taking advantage of its longer compute time, I created two outputs.
The first was a training deck, which would usually take me up to two weeks to make, maybe longer if I procrastinated, got distracted, or was given more work. It took me under an hour. The second was a business proposal. I’ve never written one before, and I would have needed to sit down with colleagues for hours (or days) to even begin to tackle it. AI got me to a near-finished version. I made a few contextual tweaks, then sent it off.
I created both of them in one day.
Neither of these was a junior-level output, either. The training deck was to the standard I would normally produce. The proposal is what blew me away most. I couldn’t have written it without spending a lot of time researching, learning, writing and rewriting. And even then, to anyone experienced, the result would have read as the work of a novice. Instead, I’ve produced a quality document that reads as though it’s not my first attempt. There’s room for improvement, I’m sure, but a higher-quality document lends me greater credibility.
I’m neurodivergent. One of the disabilities I struggle with is executive dysfunction, which is a difficulty initiating tasks, organising thoughts, and following through, even when I know what I need to do. It’s not a lack of motivation or knowledge. It’s like really wanting to cycle uphill, but your bike doesn’t have low gears.
This week, AI gave me a few extra gears. It got me over the blank page, and using it allowed me to deliver something closer to the work I feel capable of.
If you’re not using AI already, just start. Even if only to structure your thoughts.
As coaches and leaders, we have an opportunity to help teams integrate AI into their workflows in ways that are ethical, empowering, and effective. Someone will be needed to help those around us understand the scale of what’s coming.
This is about understanding what happens when the barrier between “idea” and “execution” gets thinner than paper. If things go well, AI fluency will be the defining skillset for work in the coming decade, so you need to get yourself on the right side of that line before organisations start throwing AI at everything in the same way they did with agile. If things go poorly, you’ll need to understand the beast, for whatever good that may do.
Knowledge no longer lives solely in people; it now exists in IT systems, too. Conversations with LLMs increasingly feel like talking to an intellectual mirror. The answers are only as good as the prompts we write, which means the LLM meets each and every one of us at our own level.
I don’t think this is just another software fad.
When listening to cross-disciplinary professors talk about consciousness, they suggest that it’s not just the thought processes but the vessel that contains, maintains, and runs them. In the case of a human, the body is what is conscious. In the case of an AI, it would be the machine: the hardware, the circuitry. Not conscious yet, perhaps, but we are on a path that attempts to make a data centre conscious.
We’re rolling the dice again as a species, and the stakes may be of biblical proportions. I genuinely can’t sense what the future looks like in this field. It is so bleeding edge that even the experts aren’t experts. I’ve watched so many interviews and lectures that I can recite the arguments and analogies, because I’ve heard them so many times. There may be some hot-hand fallacy at work when the experts say we’re only a few years away from AGI, superintelligence, and an existential crisis bigger than climate change. In the last five or so years, things have moved extremely quickly; some things they thought would take 20 or 30 years have taken three to five. If we continue at this pace, and if we already have the technology capable of achieving these goals, then perhaps we will be in a very different world in ten years.
The reality is no one knows.