A fractional CAIO and CTO, Jeff Watkins holds an MSc in Cybersecurity, a second Master’s in AI, and co-hosts the multi-award-winning podcast Compromising Positions. His thought leadership has appeared in Wired, Forbes, IT Pro, and Raconteur, among others.
Jeff recently launched his AI consultancy, NorthStar Intelligence. In his own words:
Over the last few years, I’ve had the same kinds of conversations on the topic of AI with many leaders and teams: ‘We know that AI matters, our board wants it, but we’re not sure where to start, and we don’t want to make a mess of it.’
That’s precisely the gap NorthStar Intelligence is built to fill. Practical, responsible AI support that meets organisations where they are, from the ‘tyranny of the blank page’ to real adoption and delivery.
Jeff and I are both based in the North of England, which is producing what he calls “a huge pool of AI talent”. We got into where beginners should start with AI, why UK businesses can’t keep waiting, and why research already shows that LLMs are better at persuasion than humans are.
Kane: You wrote your first lines of code at six years old and now you’re a fractional CAIO & CTO leading AI strategy. What’s been the biggest “wow, the world has changed” moment for you in tech?
Jeff Watkins: I’ve been lucky enough to be around for the growth of the web (and the dot-com bubble bursting), the cloud, the smartphone, the blockchain and of course Generative AI. Rather predictably, I’d say the launch of ChatGPT and OpenAI’s APIs that let people build using the technology.
Suddenly, competitors emerged; for example, Google had been sitting on its model for some time, worried about the safety and ethical issues. The web and smartphones have had a greater impact on our world, but that has been over decades, whereas GenAI has all happened in three years – and it doesn’t look like it’s slowing down. So my biggest “wow” is that the pace of innovation seems to be increasing, and these launches are getting closer together, with greater societal impact.
You’ve said that modest AI training can lead to major time savings for business teams. What does “modest” actually look like in practice? And where should a complete beginner start?
Jeff: I like to think of it as a layered topic, and it’s for everybody – this isn’t purely for technologists. You can build basic AI literacy in a couple of hours. I think some of the Google introductory courses, which are mostly free, are brilliant. Google’s Introduction to Generative AI takes around 30 minutes to complete. I’d argue that basic AI literacy should be a bare minimum, in the same vein as cybersecurity awareness and GDPR training.
On top of basic literacy, doing some basic “general” Generative AI skills building pays off very quickly. It’s also worth keeping up to date with these, as the field is moving so quickly, but around four hours would pay huge dividends. Then it may be worth investing a few hours in more specific AI tooling training.
Overall, with a two-day time budget you could go a long way and see almost immediate benefits. So, in summary: first, basic AI literacy to put a base coat down. Second, generalised GenAI usage to get people using tools like Gemini and Copilot effectively. And third, specific AI tooling training that helps people do their particular jobs better. Don’t overthink it, and don’t pay through the nose for it.
If you’re committed to learning over a year, I’d also suggest getting a Coursera subscription when it’s on sale, but there’s plenty of really good content on YouTube nowadays.
If you could give one piece of advice to a business leader who keeps hearing “you need an AI strategy” but has no idea where to begin, what would that be?
Jeff: Start by getting your bearings, and definitely not by buying tools. I’d suggest engaging expert consulting advice for this process to run a structured assessment of your starting point. You’ll likely have a lot of Shadow AI if you don’t have a robust strategy by now. Build some AI literacy at the top, i.e. the board and the exec. Set your principles and red lines, alongside a governance and accountability pathway. Then you can start looking at opportunities within your organisation and create a scoring method to rank them.
At this point, you can start talking about a strategy (preferably a simple one that can fit on a page as an executive summary), but it needs to consider not just the strategy for rolling out AI, but also for actual, meaningful, and measurable adoption.
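As a rough illustration of the scoring method Jeff mentions for ranking opportunities – a hedged sketch with made-up criteria, weights, and examples, not anything from NorthStar Intelligence – a simple weighted model might look like this:

```python
from dataclasses import dataclass

# Illustrative criteria and weights -- every organisation would set its own,
# tied to its principles, red lines, and governance pathway.
WEIGHTS = {"value": 0.4, "feasibility": 0.3, "risk_fit": 0.3}

@dataclass
class Opportunity:
    name: str
    scores: dict  # criterion -> 1-5 rating from the structured assessment

def weighted_score(opp: Opportunity) -> float:
    """Combine per-criterion ratings into a single ranking score."""
    return sum(WEIGHTS[c] * opp.scores[c] for c in WEIGHTS)

backlog = [
    Opportunity("Support-ticket triage", {"value": 4, "feasibility": 5, "risk_fit": 4}),
    Opportunity("Contract drafting", {"value": 5, "feasibility": 2, "risk_fit": 2}),
]

# Rank highest-scoring opportunities first.
for opp in sorted(backlog, key=weighted_score, reverse=True):
    print(f"{opp.name}: {weighted_score(opp):.2f}")
```

The point isn’t the arithmetic – it’s that the criteria and weights are agreed up front, so the ranking reflects the organisation’s principles rather than whichever tool shouted loudest.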
You’ve got an MSc in Cybersecurity and a second Master’s in AI. How worried should everyday people be about AI and security – deepfakes, scams, that kind of thing?
Jeff: In a word, very. There are many novel kinds of attack – both attacks by bad actors using AI, and attacks against AI-based systems themselves. Sadly, Generative AI security is an immature field, and some novel attack types are difficult to defend against. This hasn’t been too problematic so far, as for the most part GenAI has been stuck in the chat window. However, as we begin adopting Agentic AI, our agents will have access to file systems, APIs, and browsers.
They can take actions, they can collaborate, they can be tricked, and they can go beyond their remit if they haven’t been adequately constrained. Generative AI-based manipulation using anthropomorphic LLM techniques, alongside psychological tricks, is an incredibly troubling problem that’s coming very soon.
Research has shown that LLMs are better at using persuasive language than humans, and also, in some Turing Test-style experiments, passed for humans more reliably than the human participants did, which blows my mind. Put simply, we have the mechanisms for enormous-scale, high-quality, micro-targeted manipulations at almost no cost to the attacker. This could be used for scams, to destabilise democracy, and even for intelligence and cyber warfare.
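To make the “adequately constrained” point concrete, here is a hedged sketch – purely illustrative, not from any real agent framework – of the kind of guardrail Jeff is describing: an explicit allowlist of tools plus an audit log, rather than open access to files and APIs:

```python
# Illustrative only: all names here are hypothetical, not a real agent API.

class ToolNotPermitted(Exception):
    pass

class ConstrainedAgent:
    def __init__(self, allowed_tools, audit_log):
        self.allowed_tools = allowed_tools  # tool name -> callable
        self.audit_log = audit_log          # list of (tool, args) records

    def act(self, tool_name, *args):
        # Refuse anything outside the agreed remit, and record everything,
        # so a tricked or manipulated agent can't quietly exceed its role.
        if tool_name not in self.allowed_tools:
            raise ToolNotPermitted(f"{tool_name} is outside this agent's remit")
        self.audit_log.append((tool_name, args))
        return self.allowed_tools[tool_name](*args)

log = []
agent = ConstrainedAgent({"summarise": lambda text: text[:40]}, log)
agent.act("summarise", "Quarterly report ...")  # permitted, and logged
# agent.act("delete_file", "/etc/passwd")       # would raise ToolNotPermitted
```

Real agentic deployments need far more than this, but the principle scales: deny by default, log everything, and keep the remit explicit.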
You co-host a podcast called Compromising Positions where you bring in psychologists and anthropologists to talk about security. What’s something from outside the tech world that’s changed how you think about AI?
Jeff: Behavioural science, no doubt about it. I need to be clear on this: launching user-facing conversational AI is closer to a relationship than to old-fashioned user interface design, and it really needs a behavioural science lens to do it well. The potential psychological issues that can stem from unethical or poorly implemented AI-based interfaces cannot be overstated.
The possibility of dark patterns and manipulation being used by organisations that seek only to maximise profits rather than take a “shared benefit” approach to AI is as scary as some cyber threats. Anthropomorphisation alone is an incredible rabbit hole to go down.
Ascribing human-like properties to objects or animals is something we naturally do as humans, which is on us, but we can now use LLMs to induce really strong anthropomorphic effects, and this needs to be handled with extreme caution, as it can bypass people’s reasoning and build up too much trust.
You’ve talked about the idea of a “fractional CTO” – businesses bringing in senior tech leadership part-time instead of hiring a full-time CTO. With AI changing so fast, do you think more companies should be doing that? Why?
Jeff: I think many more organisations, especially SMEs in the UK, need a fractional technology leader in their midst. Right now, I’d suggest a fractional CAIO (Chief AI Officer) or a fractional CTO with strong AI skills would be the most beneficial. This is a volatile time, and the pace of change is still accelerating.
For example, Intercom now only sets 18-month strategies, rather than the five-year ones it set pre-2023. Until this all settles down, I think it’s crucial for organisations that can’t afford full-time technical leadership to have some expert input. Not every organisation needs to be a “tech company”, but almost all organisations need to embrace the technologies required to adapt to the AI age. Those who don’t risk becoming irrelevant.
We’re both based in Yorkshire, and Leeds is now outpacing London for AI job growth. You’ve warned the UK risks a “Blockbuster moment” if it doesn’t act on AI – is Yorkshire actually leading the charge here, or are we still playing catch-up?
Jeff: Sadly, although there’s significant AI job growth in Leeds, the North of England is poorly represented in AI investment and research. The Leeds tech jobs market is picking up in some areas but remains depressed. The majority of postings I see are in London, which is frustrating because the North of England is producing a huge pool of AI talent.
The AI Growth Zones initiative should bring infrastructure and jobs to Yorkshire, but we also need to ensure that innovation and research are happening here, too. The main thing for the UK will be figuring out the sovereignty problem, though, as we’re far too dependent on US technologies.
It’s 2026. What’s one AI prediction you’re willing to put your name to for the next two years?
Jeff: The Agentic Web will begin to take shape. I think we’re seeing the first hints of it with OpenClaw and MoltBook, even though the hype around them has already burned out.
We have the standards in place (even if they will be replaced), such as MCP and A2A. It’s inevitable that we will have another web of agentic AI bots collaborating and participating in a marketplace of skills. Think Fiverr, but for AI, at much larger scale and a much quicker pace. This is one of the key, life-changing technologies that will shape how we use the internet over the next few years.
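To give a flavour of that “marketplace of skills” – a hedged toy sketch, emphatically not the MCP or A2A wire format, with every name invented for illustration – agents advertising capabilities and requesters choosing among offers might look like this:

```python
# Hypothetical sketch of an agentic skill marketplace: agents advertise
# what they can do, and a requester picks a provider for a task.

registry = {}  # skill name -> list of (agent_name, price_per_call) offers

def advertise(agent_name, skill, price):
    """An agent publishes a capability and its asking price."""
    registry.setdefault(skill, []).append((agent_name, price))

def hire(skill):
    """Pick the cheapest agent offering the requested skill (None if nobody does)."""
    offers = registry.get(skill, [])
    if not offers:
        return None
    return min(offers, key=lambda offer: offer[1])

advertise("translator-bot", "translate", price=0.02)
advertise("polyglot-bot", "translate", price=0.01)

print(hire("translate"))  # the cheapest offer wins
```

A real agentic web would negotiate over protocols like MCP and A2A rather than a shared dictionary, but the dynamic is the same: capabilities advertised, discovered, and hired at machine speed.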