From wedding photographer to digital anthropologist to Head of SecOps, Lianne Potter’s career has had more plot twists than most.
A proud Leeds Digital ambassador and co-host of the podcast Compromising Positions, she’s as comfortable unpacking online culture as she is knocking up a script or hacking into a system. Now focused on AI, Lianne is researching the people who marry their AI companions and asking what it means to be human when machines can offer convincing simulations of culture and companionship.
We spoke about the digital divide that shaped her career, why cybersecurity teams foster mistrust that’s making us more insecure, and what it really means to retain our humanity in the age of AI. As she puts it, “we lose that, we lose everything.”
Kane: You went from wedding photographer to Head of SecOps, and now you’re working in the AI safety space. What happened in between?
Lianne Potter: I often joke that my CV has more plot twists than an M. Night Shyamalan film. Every job change has contributed to my success in the next endeavour and, while the jump from wedding photographer to a career in tech might seem unlikely, I wouldn’t change it for the world; I’m a better technologist for it. But one element has been the thread throughout, and that is anthropology (the study of culture).
While I was a very successful wedding photographer (and loved it as a job), I started to think about what the next chapter would be. I ultimately decided to go back to university and study applied anthropology. There, my focus was digital anthropology – much to the chagrin of my professors, who couldn’t quite understand the academic draw of what, at the time, seemed an ephemeral topic.

Understanding how online cultures formed and developed was a very new area of research. I spent my time watching people on the platform Second Life recreate the capitalist structures Marx described in a virtual space. I also studied the aesthetics of fakery on Instagram and what would shape my life going forward – the digital divide.
At this time I was not a ‘techie’, but I could see that if you had any mastery of tech, you had power – and that a huge divide was growing between the ‘info-haves’ and the ‘info-have-nots’. It led me to study the consequences of a life lived offline and the social disparity that would bring.

It wasn’t all theory; I managed to secure a job as a digital anthropologist and put what I learned into practice for a local Leeds charity that was given a pot of funding and the brief “solve destitution in Leeds”. There, I saw the digital divide in action. At the time, the UK benefits system had moved most of its services online.
Traditionally, claimants of work benefits (Jobseeker’s Allowance, for example) were able to go to their local job centre and ‘sign on’ in person. However, people coming through our service hadn’t eaten in days, were in debt and often faced eviction. Now, while there were myriad reasons why people might be in destitution, one kept coming up time and time again – the most vulnerable people in society had neither the skills nor the means to access online services.
When they showed up at the job centre, instead of being supported by a civil servant, they were pointed to a bank of computers to claim their benefits. But not everyone who came into our service had the skills needed to use a computer, and most had never even laid a hand on a keyboard. As a result, most just left the job centre empty-handed and frustrated; some even felt ashamed.

I wrote reports with my findings – not to stop progress, but to highlight that the government had neglected to put provisions in place. The consequences were devastating. The way technology could transform an individual’s circumstances became a driving force for me to retrain for a tech career.
I felt that the only way I could bring about digital equality was from the inside, keeping anthropology front and centre of everything I do. And I have. I’ve never stopped being an anthropologist; I just think of myself as one who happens to also be able to knock up a script or hack into a system. That’s the best of both worlds – the highly technical and the highly human.
You describe yourself as a “digital (or cyber) anthropologist.” For people who have no idea what that means, what does an anthropologist bring to AI and cybersecurity that a traditional technologist might not?
A recognition that technology follows rules and can be coded to behave exactly as intended; humans, by nature, cannot (although the tech companies are very much trying their best to crack this). Tech fails when you rely on the human to follow a path, whether a happy one or not. I see my tech career as being the anthropologist who keeps the human firmly in the loop of the projects I’m involved in.
When I was a software developer working for the NHS, my role was to remind my fellow developers that our service needed to accommodate every single person in the UK – that’s a lot of culture. In cybersecurity, I used anthropology to understand why people engage in risky behaviours, why the (often technical) controls we put in place don’t work, and how hackers use what makes us human against us.

In AI, my focus has now shifted to what it means to be human when AI can offer rather convincing simulations of culture and companionship. I feel my time as an anthropologist has been building up to this moment – from watching cultures on Second Life to researching the people who marry their AI companion on platforms like Replika.
Now is the best time to be an anthropologist in tech. My job now is to walk the tightrope that enables us to thrive with this tech while keeping the tech companies in check, so we can retain our humanity. We lose that, we lose everything.
In a recent research paper, you argued that terms like “artificial intelligence” and “hallucination” can actually be dangerous because they mislead people about what AI really is. Can you unpack that argument for us?
Naming isn’t descriptive, it’s constructive – but we’re really bad at naming things in tech. The paper I wrote, Naming is Framing: How Cybersecurity’s Language Problems are Repeating in AI Governance, came about because I’ve long campaigned in cybersecurity that the language we use is causing harm – and actually pushing people towards riskier behaviour.
For example, in cyber, we call people ‘the weakest link’ or label them ‘insider threats’. That’s not exactly the type of language that is going to get people on side, nor is it going to give them the sense of agency they need to actually protect themselves.
As I became more involved in AI, I saw the same poor language choices happening in AI discourse, very reminiscent of cyber. Cyber has been growing and developing as a discipline for decades, and so have some language choices that are hard to shake; but for most people outside of tech, AI is a new frontier – one that is changing rapidly.

My paper argues that language doesn’t just describe technologies, it actively frames them – and that can be for good or for ill. Right now, terms like ‘artificial intelligence’ or ‘hallucinations’ are technically accurate in a narrow sense, but they smuggle in assumptions that quietly govern how people think and act. Those assumptions matter because they shape where responsibility lands.
Calling systems ‘artificial intelligence’ makes them sound like thinking entities, which encourages people to trust them too much – and is feeding the rise in so-called ‘AI psychosis’. ‘Hallucination’ makes the errors these systems produce sound like quirky, human-like glitches, instead of predictable failures of how the system works. Language in tech isn’t harmless and, in the case of AI, it shapes (and will continue to shape) how people use these systems. This is quietly shaping how we govern AI, often in unhelpful ways.
When we talk about AI as intelligent, autonomous, or ‘hallucinating,’ those metaphors end up baked into policy and practice, leading to over-trust, blurred responsibility and brittle systems. If we want better AI governance, we need better words that keep human power and accountability front and centre.
You have described cybersecurity as having a “horrific PR problem” – often seen as a blocker, with employees resenting the sense of being mistrusted. Is the rise of AI improving that perception, or making it worse?
Cybersecurity has a PR problem because, historically, a good chunk of teams blame the victim rather than the perpetrator. Think about it: if someone broke into your house, people would blame the burglar. In cybersecurity, if you click on a convincing phishing link, you’re branded a fool. Even our so-called training and awareness methods (phishing simulations and mandatory training with a test at the end) are really veiled blame mechanisms.
Either that, or they’re pure arse-covering, as security awareness training often exists to meet a compliance requirement rather than out of a real drive to change security culture.
With this mindset, if you’re in a company and you happen to click on something, you’re either lucky and work somewhere you can safely report it without losing your job, or you might not report it at all because you’re scared of what might happen. The mistrust cybersecurity teams foster is making us more insecure because we don’t leverage our best assets: the other humans around us.

Now, is AI improving that perception? Well, consider this: when I speak at security conferences and ask the security professionals in the room, ‘Are you upskilling in AI?’, I’m often met with silence. So right now, AI isn’t fixing cybersecurity’s PR problem; in fact, it risks amplifying it.
If security teams don’t understand AI well themselves, it tends to get used as another blunt instrument: more surveillance, more automated judgment, more ways to catch people out after the fact. That deepens the sense of mistrust and reinforces the idea that security exists to police employees, not support them.
I think AI could be a turning point, but only if the mindset changes. Used well, it can reduce cognitive load, catch threats earlier and help design systems that don’t rely on people being perfect. Used badly, it just scales blame faster. The uncomfortable truth is that cybersecurity won’t fix its PR problem with new tools; it will fix it by rebuilding trust. AI will either help with that or make the damage worse, depending on how human-centred we’re willing to be.
There’s been growing concern about attackers using AI to make scams more convincing than ever. Does this mean we need to fundamentally rethink how we treat what we read, see, and hear online?
Yes, and the uncomfortable answer is that our old mental shortcuts are no longer reliable. I recently had to do my annual cybersecurity awareness training, and the advice was exactly the same as it’s been for years: look out for bad spelling and punctuation. But now everyone – scammers included – runs their text through a large language model, so perfect grammar tells you nothing. That advice is obsolete, and we haven’t really replaced it with anything meaningful.
The hard truth is that I don’t know what simple advice we can give people anymore, and that’s a serious problem. Cybersecurity has always been an arms race: for every new defence, attackers adapt. AI just accelerates that dynamic. It means we can’t keep putting the burden on individuals to “spot the scam”. We have to rethink systems, reporting cultures and protections so that getting tricked isn’t treated as a personal failure, but as an expected risk we design around.
As co-host of the podcast “Compromising Positions,” you regularly bring in psychologists, behavioural scientists and philosophers to discuss technology. What’s one idea from outside the tech world that has most changed the way you think about AI?
While humans are delightfully complex, messy even (and I wouldn’t change this for the world), we are also delightfully inquisitive and adaptable, and our strongest asset is our capacity for empathy. That will be our sword and shield – one that doesn’t need to be wielded against the AI itself, but against the people in power building these tools, to make sure they enhance humanity for the better.

The experts in the humanities that I’ve had on the show have consistently demonstrated that AI isn’t a technical challenge to be ‘solved’ – it’s a cultural shift to be negotiated. Our capacity for empathy allows us to see the person behind the prompt and the community behind the code. If we lose that, we aren’t just compromising our technology, we’re compromising our humanity. I just hope those who wield this great power still have enough humanity left to make good decisions for the rest of us.
Leeds has become one of the UK’s fastest-growing tech hubs, particularly around AI. What’s it like building a career here, and how would you describe the AI community in the city?
If you’re wondering what it’s like building a career in Leeds, I’ll tell you straight: you’re missing a trick if you’re not here. Leeds has become one of the UK’s fastest-growing tech hubs, particularly around AI, and it’s done so by being proper no-nonsense about it. I strongly believe that if AI had got its start in Yorkshire and not Silicon Valley, we wouldn’t have to put up with all that sycophantic fluff; it’d just give you some proper tough love and tell you straight if your idea was a bit crap.
I owe everything to the Leeds tech scene; it changed my life, and I’m a firm believer that it can do the same for others. A huge part of that is down to how welcoming we are up North. When I was a career changer pivoting into tech, I was welcomed with open arms and felt right at ease in the community.
I’ve spent time in other tech scenes (and I’ve even been tempted down to ‘that there London’ for the odd event), but I can honestly say the vibe is worlds apart. In Leeds, we’re grafters who don’t take ourselves too seriously. You can’t be too up your own arse in Yorkshire – you wouldn’t be allowed to get away with it. That keeps us grounded, and it means our tech and our technologists are focused on what actually works.
We don’t suffer fools gladly, and we’re world-class at keeping things on budget because of that innate Yorkshire tightness. We also have the best of both worlds; I can be in the heart of a big city or stood in a field within 15 minutes of my front door.
In Leeds, we don’t just build in the cloud; we can actually go outside and see the clouds – and that bit of fresh air is exactly what keeps our code clean and our heads screwed on right. For anyone wanting to set up a “tech house,” that’s a winning combination you won’t find anywhere else.
As a Leeds Digital ambassador, I spend a lot of my time waxing lyrical about the virtues of our scene. We host the Leeds Digital Festival (one of the biggest tech events in Europe), and my favourite pastime is being asked to speak at yet another conference in London, just so I can bat it back and say, “Nah mate, come to Leeds!” To speak the words of my people: if you’re not building it in Leeds, you’re bloody daft.
If you could change one thing about how the world talks about AI, what would it be?
I don’t think the world needs to change how it talks about AI; the people building it do. I use what I call the “Uber test” to keep myself grounded. The next time you’re in a taxi, ask your driver what they think about AI or ask anyone who isn’t in tech. The answers usually fall somewhere between blissful unawareness and quiet panic.
I once had a driver ask, “Will AI replace my job as a taxi driver?” and I had to explain that replacing drivers is essentially the business model of the company he was driving for. He had no idea. That moment stuck with me.
So no, I don’t think we need to change the conversation about AI. We just need to keep having it and not only within our own tech bubble. We should be talking to the people who can’t influence the decisions we make but will feel the consequences first. If we don’t bring them into the conversation, they won’t see what’s coming until it’s already here.