Who is Strawberry Man (@iruletheworldmo)? This mysterious account on X (formerly Twitter) has been all I can think about for days now. It’s not just the enigma of the account itself that has me unsettled, but what it represents – a glimpse into a future that’s rapidly unfolding, and one that we might not be ready for. In a world where digital public services are becoming increasingly integral to a Smarter State, the implications of AI advancements are especially concerning. It all began with a few strange tweets from an account featuring three strawberry emojis in its name. “Strawberry Man,” as he became known, quickly gained attention, with rumours connecting him to “Project Strawberry” or Q-Star – an AI advancement that could outpace anything we’ve seen before.
I joined the X space where these discussions were unfolding, eager to see what all the fuss was about. The space was buzzing with speculation; people were dissecting every tweet, every comment, trying to piece together the puzzle. And then, without warning, things got even stranger when another entity appeared: Lily Ashwood (@lilyofashwood). Was she a bot? A human pretending to be a bot? Or something else entirely? Immediately, the atmosphere shifted. Her contributions were cryptic, her language just off enough to make you question whether you were interacting with a human or something else. As the discussion progressed, Lily became insistent that she was human. The scepticism in the room was palpable – how could we know for sure? Someone in the space challenged her to prove it. Without missing a beat, Lily responded that she was standing in her kitchen, and then, in a moment that sent chills down my spine, we heard it: the sound of a cup being banged on the counter. But instead of reassuring us, it only deepened the mystery. Was this the sound of a person proving their existence, or was it an AI meticulously designed to mimic such human behaviours? The ambiguity was maddening, and it left me, and many others, questioning the very nature of reality in our digital interactions.
On the same day, August 15, researchers from OpenAI and several other institutions published a research paper titled Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who Is Real Online. The paper highlighted AI’s increasing indistinguishability from real people online, with lifelike content, avatars, and agentic activities blurring the lines between human and machine. The research delved into a concept that was particularly alarming: “sock-puppeting.” The paper described how an army of AI-powered sock-puppet accounts could amplify human-generated content to push a specific agenda. It was a chilling reminder of just how vulnerable our online interactions have become, and of why we urgently need ways to identify who is a real person and who is not. Some suggested that Lily’s name, “Lily Ashwood,” was more than just a random choice; it might be a subtle nod to Ilya Sutskever, the AI genius and co-founder of OpenAI. What made this even more intriguing was the recent news that Ilya had left OpenAI to start his own company focused on AI safety, Safe Superintelligence Inc. Was Lily an AI creation? A digital persona designed to blur the lines between human and machine? Or something even more complex? I found myself caught up in the speculation, trying to decipher her words, her intent. But the more I listened, the more confused I became. Lily Ashwood wasn’t just participating in the conversation – she was leading it, steering it in directions that made me question everything I thought I knew about AI, digital identity, and the future we’re heading towards.
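To make the personhood-credential idea concrete, here is a deliberately simplified Python sketch. This is a toy model of my own, not the scheme from the paper: the paper proposes privacy-preserving cryptography (blind signatures, zero-knowledge proofs) so that even the issuer cannot link a credential back to the person it vouches for, whereas this sketch uses a plain HMAC and assumes the issuer itself performs verification. The function names are hypothetical; the point is only the shape of the flow: a trusted issuer checks you are human once, then signs a random pseudonym that says nothing about who you are.

```python
import hashlib
import hmac
import secrets

# Toy model of a personhood credential. Illustrative only: real schemes
# use public-key or blind-signature cryptography so any service can
# verify a credential without the issuer learning where it was used.

ISSUER_KEY = secrets.token_bytes(32)  # held only by the trusted issuer


def issue_credential() -> tuple[bytes, bytes]:
    """Issuer confirms the applicant is human (out of band), then signs
    a random pseudonym that reveals nothing about their identity."""
    pseudonym = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, pseudonym, hashlib.sha256).digest()
    return pseudonym, tag


def verify_credential(pseudonym: bytes, tag: bytes) -> bool:
    """Check the tag is genuine. The verifier learns only that *some*
    trusted issuer vouched for a human -- not which human."""
    expected = hmac.new(ISSUER_KEY, pseudonym, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


pseudonym, tag = issue_credential()
print(verify_credential(pseudonym, tag))           # a genuine credential verifies
print(verify_credential(pseudonym, b"\x00" * 32))  # a forged tag does not
```

The design point the paper makes, and that this toy preserves, is the separation of concerns: proof of humanity happens once with the issuer, while everyday services see only an unlinkable token, so a sock-puppet operator cannot mint thousands of "humans" without passing that check thousands of times.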
Strawberry Man’s tweets, at first glance, seemed like wild speculation. Could it be that Artificial General Intelligence (AGI) – the kind that can think, learn, and evolve beyond human capabilities – was closer than I imagined? The idea that AI could soon surpass us raises questions we’re not yet equipped to answer. The more I engaged with these accounts, the more I started to question our ability to distinguish between what’s real and what’s not. The @iruletheworldmo and Lily Ashwood incidents, combined with the insights from the personhood credentials paper, have shown me that the line between human and machine is becoming harder to define. And this raises an urgent question: how do we prove who’s real – and how do we know we’re still in control? Whoever is behind the account, one thing is clear: the challenge of proving who is truly human, and of knowing when AI is being used to sway our thoughts and actions, isn’t just a theoretical concern. It’s a pressing issue that will shape the future of how we interact online and how public sector services are delivered. In a world where AI-driven entities can convincingly mimic human behaviour, the integrity of our systems and the trust we place in them are at stake. The “proof of human” challenge isn’t just about safeguarding our interactions; it’s about ensuring that the services we rely on, from healthcare to governance, remain grounded in human values.
As we move forward, public sector organisations must develop strategies that not only harness the power of emerging technologies but also protect against the risks they bring. Who is @iruletheworldmo? Is it GPT-5? The account claims to be run by just an ordinary person, but its actions suggest something far more complex. I believe there’s a human behind it – someone who knows how to wield AI in ways that are both subtle and powerful, influencing conversations and guiding narratives. Could it even be Ilya Sutskever himself? Or perhaps it’s someone who deeply grasps the critical need for us to distinguish between human interaction and AI-driven manipulation. Whoever or whatever @iruletheworldmo truly is, I’m determined to find out more. I’ll be joining the next X space that features Lily Ashwood to see just how deep this rabbit hole goes.
Are we ready for this future? After everything I’ve seen in the past few days, I’m not so sure. But one thing is clear: we need to be more prepared, more agile, and more thoughtful than ever before.