An Uncanny Moat

Back in the early days of computer animation, the technology really struggled with realism. The first cartoons were necessarily abstract, or cartoony.

As time progressed, the technology caught up, and CGI can now be all but indistinguishable from real life. But there was a brief period, as seen in films like The Polar Express or Final Fantasy: The Spirits Within, when artists aimed for realism and didn’t quite get there.

These films were often critically panned. Eventually, it became clear that the cause lay deep in the human psyche. The films were realistic enough that we mentally classified the characters as real humans, but not so realistic that they actually looked normal. On an instinctive level, people reject these imposters far more strongly than they do more stylised graphics that make no pretence of reality.

This phenomenon is known as the Uncanny Valley, and it has influenced the visual design of artificial people in films, robots, games and more.

For a time, the recent crop of image generators and LLMs sat in the same valley. Twisted images of people with the wrong number of fingers or teeth were a common source of derision. People are still puzzling over chatbots that speak coherently yet make wild mistakes, with none of the inner light you might expect from a real conversationalist.

Now, or at least very soon, AI threatens to cross that valley and advance up the gentle hills on the opposite side. Not only are we faced with a disinformation storm like nothing before, but AI is going to start challenging exactly how we consider personhood itself.

This is something we need to fight, in addition to all the other worries about AI. I don’t want to get into the philosophical weeds about whether LLMs could be considered moral patients. But I think our society and our thinking are structured around a clear human/non-human divide, and chatbots threaten to unravel it.

Continue reading

Trainright

A still from Steamboat Willie

There’s been a lot of clamour about generative AI for images, like Midjourney or Stable Diffusion. It’s killing creative jobs or whole industries; it’s illegally using copyrighted data for training purposes; it’s eroding the nature of art itself. I’m sure there are many out there who would be happy to see an outright ban on AI image generators and the like.

On the other hand, it’s undeniable that this is a valuable technology, not just for the corporations making it, but for the world at large. Sure, every unpaid artist is someone else’s money saved, but as the cost of art falls, everything around art is democratised. A friend of mine made a personalised Christmas card this year, a small joy that simply would not have existed before. I co-wrote a custom murder mystery with ChatGPT in barely more time than it took to play. The lowering skill bar for indie comics and games is something I hope leads to a profusion of new, original things, much as digital art and game engines have done in the past.

How can we resolve these tensions and have our cake and eat it too? Well, society has faced this problem before, and it has found a solution that, though imperfect, has endured for centuries.

Continue reading

nice-hooks

I’ve created a new open source library.

I’ve been learning quite a bit about AI and AI Alignment recently. A few weeks ago I joined the Interpretability Hackathon. Sadly my contribution was minimal, as I had to leave halfway through, but taking part made me appreciate how bad the tooling in this area is.

So I’ve created nice-hooks, a library for working with PyTorch hooks and activations more effectively.

GitHub

Docs
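
For context, this is roughly what capturing activations looks like with nothing but raw PyTorch hooks. It’s a minimal sketch using only the vanilla torch API (not the nice-hooks API itself), showing the sort of boilerplate a helper library can tidy away:

```python
import torch
import torch.nn as nn

# A toy model to inspect.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Register a forward hook on every submodule, stashing each output by name.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

handles = [
    module.register_forward_hook(make_hook(name))
    for name, module in model.named_modules()
    if name  # skip the root module
]

model(torch.randn(1, 4))

# Hooks stay attached until explicitly removed; forgetting this step is a
# classic source of bugs, and exactly the bookkeeping a helper can handle.
for handle in handles:
    handle.remove()

print({name: act.shape for name, act in activations.items()})
```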

Constrained Text Generation with AI

I was discussing how AI text generation, such as ChatGPT, might end up getting used in computer games. So far, designers have been fairly reluctant to adopt the technology. One of the key problems is that you just can’t control the output enough: language models will break character or respond in inappropriate and toxic ways. Finding a good solution is a huge research area, and the problem is not likely to be cracked soon.

For the foreseeable future, AI in games is much more likely to be used offline, with assets and dialogue generated up front so they can be vetted before being integrated into the game.

But it got me thinking: can we vet the AI’s output in advance and still get the benefits of intelligent decision making at runtime? It turns out we can! I doubt it’ll be useful in every circumstance, but I can certainly see uses for it, such as chatbots and games.
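
To make that concrete, here’s a minimal sketch of one way the idea can work: every candidate response is written and vetted by a human ahead of time, and the language model is only used at runtime to score which vetted response best fits the context. The model, dialogue and scoring details below are illustrative, not necessarily the article’s exact approach:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any small causal LM will do for this sketch; gpt2 is just a convenient default.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def score(context: str, candidate: str) -> float:
    """Average log-likelihood of `candidate` as a continuation of `context`.

    Assumes appending the candidate doesn't change how the context tokenises,
    which holds for simple prompts like the one below.
    """
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i of the logits predicts token i+1 of the input.
    log_probs = logits[0, :-1].log_softmax(-1)
    targets = full_ids[0, 1:]
    token_lps = log_probs[torch.arange(len(targets)), targets]
    return token_lps[ctx_len - 1:].mean().item()  # candidate tokens only

context = "Player: Where can I find the old mill?\nInnkeeper:"

# Every line here was written and vetted in advance, so nothing unvetted can
# ever reach the player; the model only chooses between them.
candidates = [
    " Follow the river north and you can't miss it.",
    " Never heard of it. Try asking the blacksmith.",
    " That's no business of yours, stranger.",
]

best = max(candidates, key=lambda c: score(context, c))
print(best)
```

Because the model never generates free text, it can’t break character or say anything toxic; the worst it can do is pick a less fitting line.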

The code and demonstration for this article are available here.

Continue reading