A Little Bit of Background
I’ve spent 25 years working in medical engineering where systems have to be safe - mistakes kill people. My career has been in high-stakes medical devices, where the mantra isn't "move fast and break things," but "move slowly and don't kill people." I hold a PhD in a field intimately tied to AI, including neural networks and signal processing, and I've been building and understanding complex systems since the 90s. This isn't my first experience of the technology hype cycle.
I found a safety issue - what next?
In medical devices in the UK and Europe there are dedicated safety agencies you can report a problem to, and you will get feedback. Medical device safety is rightly taken very seriously: products are recalled, safety notices are issued, and all equipment has to be designed to stringent regulations. But what about LLMs, which are now influencing the world around us? It’s simply the Wild West, a shoot-out between big companies.
A Discovery I Didn’t Want to Make
I discovered this when I found a potentially serious safety issue through some basic but highly technical experiments on LLMs. As a result, I felt it was important to tell the AI industry. So for several months I’ve tried to share my results and contribute to AI and LLM safety. I feel I’ve done everything I could and was supposed to do, especially as I felt a moral responsibility to put my work into the hands of the big companies: I wrote an explainer book on the subject, authored a research paper, built a dedicated website with example code, published multiple articles on my Substack, and repeated my warnings.
Then I started reaching out. I systematically contacted the major AI companies like OpenAI, Google, Anthropic, as well as leading researchers. But despite the glazing, the beautiful shop windows of the AI and AI Safety websites and personal biography pages, there was nobody behind them. There was often no email address or contact point, and if I did manage to send something, the return on my efforts? Nil. Nada. Nothing.
Not a single meaningful response. Not a debate, not a challenge, not even a "you're wrong and here's why." That would have been great: at least then I wouldn’t have to worry. But no, there was just a total, utter, deafening silence, or, as an LLM would say, ‘crickets’.
This raises the question: if someone with my background and direct experience has spotted a technical issue and can't get a single reply, then the grand narrative of “AI Safety” isn't just flawed. It's an illusion. There is no AI safety, because nobody who matters is listening, or perhaps they don't want to listen.
The Black Hole of Corporate Indifference
My attempts to engage with the very companies leading this revolution were met with a wall of automated indifference. My messages to OpenAI were met with the digital equivalent of a shrug: a chatbot told me to ask ChatGPT4. The same company that positions itself as a leader in AI safety has built a digital wall against actual engagement, treating a professional inquiry like a forgotten password request.
Emails to other major AI labs vanished into the ether. LinkedIn posts received no comments or responses. I have found no public, accessible pathway for qualified, external input, even when that input comes from someone who deeply understands the technical underpinnings and real-world implications of their creations. For me, it seems the industry doesn't want scrutiny; it simply wants the appearance of safety without the inconvenience of being held to account.
The Irony of LessWrong: "Too Much AI"
After being systematically ignored by corporations, I thought, “Surely the rationalist community, the home of Eliezer Yudkowsky’s AI risk obsession, will engage seriously.” So I posted a detailed safety warning on LessWrong, a forum dedicated to logical discourse and existential risk.
Their response? The moderators rejected my article for “having too much LLM content.”
Let that sink in: the rejection said nothing about the content, the serious risk, or the meaning implicit in the post. Just try again. A community that spends thousands of hours debating the minutiae of Paperclip Maximizers dismissed a human expert’s alarm because it resembled the very AI they fear. This isn't just peak irony; it's a profound systemic failure. When subcultural signalling and an almost paranoid filtering mechanism (ironically, one that mimics AI's own biases) matter more than substance, “safety” becomes a game of status, not survival.
This single anecdote, more than anything, reveals the very real dysfunction at the heart of the AI safety movement. Is it just an echo chamber designed to filter out dissenting voices, even when those voices are human and armed with concrete evidence?
The Real Problem: Performative Safety
My experience has shown me that AI safety is an empty promise. It's a PR performance in which companies are eager to issue charters, establish ethics boards, and host conferences, all while actively stonewalling external, critical input. At the moment, safety seems to be a marketing tool, not a core engineering principle. In practice, the feedback loop is broken. If experts with decades of high-stakes experience can't get feedback on their warnings or collaborate on solutions, how robust are any internal processes? It really does seem like the industry is flying blind, and not letting anyone else into the cockpit.
Finally, there seems to be a deep misallocation of concern. While a highly vocal segment of the "safety" community fixates on speculative, 'sci-fi' doomsday scenarios, real, immediate harms, like the technical issues I have raised, algorithmic bias, and psychological effects, appear to be ignored. My attempts to engage were precisely about these concrete, engineering-level safety issues, yet they were met with the same wall of silence. This focus on abstract risk allows tech elites to avoid accountability for the damage their products are doing right now.
What Now? Beyond the Myth of Safety
For me, the conclusion is inescapable: the AI industry's current approach to safety is unfit for purpose. If the industry and its gatekeepers won't listen, then polite engagement may not be enough. We need to move beyond voluntary guidelines. I feel we need real safety processes with independent, external audits, similar to how aviation or medical devices are regulated. This requires more than weak promises: clear, enforceable laws with severe penalties for non-compliance. History tells us that the stakes are too high for anything less.
System to Report and Register Safety Issues
Above all, there needs to be a legal responsibility for AI companies to respond to safety issues flagged by the public and experts alike. We need a system where both the public and experts can report and register safety issues. Without this feedback, without the public and outside experts genuinely in the loop, AI safety is a sham.
My experience leads me to an uncomfortable truth: I am shouting into the wind. I’m not giving up, not yet. If you are in AI and truly care about safety, prove me wrong. Engage with me, because right now the silence is deafening. And, for me, the silence is frightening.
AI safety is not about some imagined Machiavellian AI taking over the world; it’s about the real world. It’s about real risks, in the same way that asbestos, X-rays, Thalidomide and smoking all had unintended consequences that resulted in horrendous injuries and deaths. AI has great potential - but we need to understand the fine details, and we need a system to report safety issues so they can be understood. LLMs are being used for health advice and personal advice, and some people have formed personal relationships with these systems. Both the public and the industry need to be able to explain the systems and their risks, and to look very closely at the unintended consequences.
Related Posts
Finite Tractus: The Hidden Geometry of Language and Thought
AI Emergency Safety Issue: Is Anyone Listening
JPEG Compression of LLM Input Embeddings
Nonlinear Phase-Space Embedding in Transformer Architectures
A Big Non-linear Dynamical System Beginning
Copyright © Kevin R. Haylett 2025