
The Problem with Tilly Norwood

Like any tool, AI is as sinister as the people who use it. Can we really be trusted with it?
by Bill Coffin


In September, I attended Ethisphere’s Whole Company Alignment (WCA), a companywide, in-person strategic meeting. Given that Ethisphere is a fully remote organization, the Whole Company Alignment is a significant event for us, since it gives us the chance to connect face-to-face with colleagues. I love it.

At our last WCA, the topic of generative AI was just starting to blow up, and I spoke with special enthusiasm about the risks I felt it posed. Namely, that as a technology, it relies on content I consider to be stolen. My outlook was in stark contrast to a colleague of mine—with whom I disagreed publicly—who was very pragmatic about the technology. He saw what it could do. Whether we embraced it or not, those around us—including our clients—would. So how will we contend with that as an enterprise?

My colleague, of course, was right. And when I bumped into him during our recent WCA, I told him that, and admitted that my thinking had been too narrow. AI had changed dramatically between the two WCAs with regard to the technology’s sophistication, society’s collective agreement over the legality of LLMs, how we govern AI’s use, and how ethics and compliance teams can use AI to give themselves a radical boost in productivity.

My own skepticism around AI has shifted, as well. For the work that I do, the power of AI is undeniable, and there are many things I’ve been able to do thanks to embracing this tool. And seeing where it excels and where it doesn’t has further enabled me to develop my own skills in ways I had not previously imagined.

Case in point: I had the pleasure of working on Ethisphere’s new report, AI in Ethics & Compliance: Risk to Manage, Tool to Leverage, which features an overview of AI regulatory trends, AI governance best practices, and use cases from E&C leaders at Cargill, Palo Alto Networks, and Verisk who are integrating AI into their team’s daily work.

For one of the sections, we decided to include an audio version of the text. As a matter of due diligence, I recorded my own narration of a segment, produced the same segment through AI, and asked my colleagues to decide which they preferred. They went with me, and so I recorded the whole section, which was pretty cool. But I was fully prepared to accept that the AI version might be better. And who knows? A year from now, it might very well be. Such is the swiftly tilting nature of modern professionalism: there are no living fossils.

The Girl Who Can’t Say No

Having said that, there is sometimes a dark side to innovation, and as I write this, there is a robust conversation/screaming match being held over the rollout of Tilly Norwood, the first fully AI actor.

Tilly Norwood is ultimately the product of British production company Particle6. At the Zurich Film Festival in late September, Particle6 formally announced its AI division, Xicoia, for which Tilly Norwood is the flagship product. Tilly Norwood is being billed as the first 100% AI-generated actress, and news spread like lightning that Hollywood talent agencies were already lining up to represent the avatar. The objective here is to create the next Hollywood superstar, only one you don’t have to pay directly, who never gets tired, and who can be made to do anything, no matter how objectionable.

Predictably, SAG-AFTRA strongly objects to Tilly Norwood, as one might imagine any professional class would do when facing an existential threat. I get it—I myself have made more than a few comments about John Henry vs. the steam-powered rock drill as I have negotiated my own working relationship with AI. And human entertainers have every right to be concerned, as evidenced by The Sweet Idleness, an upcoming movie that features not only digital actors, but an AI director as well.

The more interesting—and troubling—criticism of Tilly Norwood has come from outside the entertainment industry, namely from those who point out that Xicoia’s first commercialized entertainment avatar is a young woman which—as Xicoia itself boasts—lacks the power to refuse how it might be made to appear in romantic scenes.

Science fiction authors have warned of the troubling implications of fully digital actors for decades. Even George Lucas—himself a pioneer in the field of digital effects—went on the record as far back as 2002 saying that he thought creating fully digital actors was a step too far. Be that as it may, Tilly Norwood is now a reality, and one can only imagine what unspeakable content is already being made with it.

But here’s the thing. Tilly Norwood isn’t the problem. We are.

Tilly Norwood is a mirror that reflects the desires of those for whom it was built—whether that’s to undercut an entire creative class of human beings, or to entertain dark fantasies that deserve no living audience. Whatever criticism Tilly Norwood receives, it should remain within that critical context.

I mention this because within the last few weeks, there has been another AI avatar rollout with even greater potential implications for us as a species, one that has gained only a fraction of the news coverage. I am talking about another virtual woman: Diella, the world’s first AI government cabinet minister.

Here Comes the Sun

Diella (which means “sun” in Albanian) was originally introduced on January 19, 2025 as a text-based virtual assistant integrated into eAlbania, the Albanian government’s official online portal. Launched in 2015, eAlbania lets citizens and businesses access a wide variety of government information, submit applications, and request official documents.

Diella was basically an advanced version of Microsoft Clippy for eAlbania, designed by the National Agency for Information Society (AKSHI) to help you find what you were looking for. However, on September 11, 2025, Albanian Prime Minister Edi Rama promoted Diella to the role of Minister of State for Artificial Intelligence of Albania, making it the first-ever AI to be named to a (virtual) cabinet-level government role.

Critics swiftly denounced the move: according to Albania’s constitution, government ministers must be mentally competent and at least 18 years old. Rama—who has a flair for the dramatic—followed up Diella’s rollout one week later by having it address the Albanian parliament, where it announced that its role was to gradually assume greater control of the evaluation and awarding of public contracts as part of a broader anti-corruption effort within the public procurement process.

When we get right down to it, this is a theatrical way for the Albanian government to use AI to prevent procurement fraud, which is a pretty defensible use case, especially given the country’s corruption perception rating. In that sense, Diella’s role as a cabinet member is purely symbolic, but it sends a useful message: We are fighting fraud with something that does not get tired and does not take bribes.

Form Follows Function

Within two weeks, we have seen the introduction of two very different virtual women meant for two very different purposes. One seems to have been created without thought to the consequences of its existence (or worse, specifically to generate those consequences). The other is part of an effort to raise a nation out of criminality and rebuild social trust in its governmental institutions. Both of these efforts have the potential to go south in spectacular fashion. But one of them is intrinsically meant to advance business integrity and make life better for nearly three million people. If I had to throw my weight behind one of these, I know which one it would be. And it wouldn’t be the one trying to land Hollywood representation.

When we look at how ethics and compliance leaders make use of AI, they currently occupy some very interesting territory: they are tasked with building and enforcing an AI governance model within an organization while often wanting to use the very same technology to advance their own efforts. There’s an interesting recursion to it: We’d like to create guardrails for AI so we can also use AI to protect those guardrails.

The best-in-class efforts here all have very robust human-in-the-loop oversight built in from the start, which makes these use cases not just defensible but laudable. Look at the Cargills, the Palo Alto Networks, the Verisks of the world, and you’ll see efforts underway that any E&C program would do well to consider emulating.

The bottom line is this: AI is not going away. A refrain worth internalizing is that AI is not going to take your job, but someone using AI could. This technology is not doing anything on its own, even if it looks like it. It is still a fundamentally human tool that reflects the desires and the aspirations of those who use it. We can use it to advance honesty and integrity. Or, we can use it to make deepfakes of Sarah Connor attending an AI governance meeting. The choice—as has always been, but especially in ethics and compliance—is ours.
