How I Brought a 2D AI-Generated Fox to Life Using Adobe Tools

14 January 2025
By Dynamique Twizere '27

As a student worker in Academic Technology’s PEPS department (Presentation, Events, and Production Support), I’ve had the chance to explore some fascinating projects involving artificial intelligence (AI), from generating images of fictional characters with Midjourney to creating AI-generated voices with ElevenLabs (shoutout to my supervisor, Dann Hurlbert!). Recently, however, I had one of my most fun and challenging experiences yet. This proof of concept isn’t the final product, but it shows the steps to consider.

Drawing of a red fox dressed in a red tunic with white neck ruffles and sleeves, holding a quill in its right hand and a book in its left. It sits on a short stone pillar, surrounded by medieval manuscript text and marginal drawings.

A professor shared an AI-generated image of a classical, fairy-tale-style fox sitting on a stool, holding a feather and a book. Along with the image, he also provided a script, which Dann translated into an AI-generated audio file using ElevenLabs. The task at hand? To bring this 2D AI-generated image to life by animating the fox’s mouth and creating natural movements that would make it look like the character was delivering the script.

I started by exploring several AI animation tools, including Runway, D-ID, and HeyGen, but none offered the level of control I needed. We reached out to those AI companies, and each admitted that they don’t yet create avatars from artistically styled characters. Eventually, I turned to Adobe Character Animator, a standout app for animating 2D characters. But there’s a catch: any character to be animated first has to be “rigged.”

The medieval fox, reduced to its parts: we see only its ears, eyebrows, nose, mouth, neck, left arm with quill, the book, and the stone pillar.

Rigging a character involves breaking down the image into components (e.g., eyes, mouth, arms) and labeling them so the program knows how to animate each. Character Animator, like many animation tools, conveniently has built-in characters ready to be animated. However, because I was working with an external image, I had to rig the character myself. Using Adobe Photoshop, I dissected the original AI image into parts and labeled each for specific animations: the eyes (for blinking), the mouth (for lip-syncing), the arms (for movement), etc. Once rigged, I imported it into Character Animator.
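For anyone curious what that labeling looks like in practice, Character Animator builds its rig from the Photoshop layer names. Below is an illustrative sketch of the kind of layer hierarchy the app recognizes; the specific groups shown here are hypothetical for this fox, though the “+” prefix (marking a part that moves independently) and the viseme layer names under Mouth (Aa, Ee, W-Oo, and so on) follow Character Animator’s standard puppet and lip-sync naming conventions.

```
Fox.psd
├── +Head
│   ├── Eyebrow
│   ├── +Eye
│   │   └── Blink
│   └── Mouth            (group of viseme layers for lip sync)
│       ├── Neutral
│       ├── Aa
│       ├── D
│       ├── Ee
│       ├── F
│       ├── L
│       ├── M
│       ├── Oh
│       ├── R
│       ├── S
│       ├── Uh
│       └── W-Oo
└── Body
    ├── Left Arm (with quill)
    ├── Book
    └── Stone Pillar
```

When the PSD is imported, Character Animator matches these names to its behaviors, so the Blink layer swaps in when you blink and the viseme layers swap in as the audio plays.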

Normally, animating forward-facing rigged characters is straightforward: the app automatically links the character’s body parts to the mirrored body parts of the person in front of the computer’s built-in camera. Not in this case, though. The fox in the image sat in profile, making it difficult for the app to automatically map body movements onto the fox’s body.

This posed another creative challenge: figuring out where to add fixed “anchor” points and bendable joints that tell the app which parts should move, bend, or stay put. After the rigging, the fox was ready for animation.

Screenshot of Adobe Character Animator, with the medieval fox in one panel and the author in another, illustrating the rigging process.

When I spoke or played an audio file on the computer, the fox’s mouth moved in sync, and I could animate its movements in real time: blinking, gesturing, or turning its head. Watching the lifeless AI-generated cartoon come to life as a speaking, expressive character felt magical!

Below is what this “Classic Fox” looked and sounded like, using Adobe Character Animator for the avatar and a voice developed with ElevenLabs.

After this proof of concept, we generated another “Classic Fox,” this time with a woodblock-print-style character.

Reflecting on moments like these, I’m continually amazed at how many powerful tools are out there that can seemingly do anything one can imagine. The rapid, ongoing improvements in AI make realizing our imagination ever easier and more accessible, and I am all here for it!