Introduction

I believe my work speaks for itself:

[Video: the finished avatar delivering the script]

The transcript can be found here.

The creative process to deliver this was as follows:

  1. I digitised my head using Polycam running on my mobile phone.
  2. I then grafted my digitised head onto a standard 3D avatar in Character Creator, using an add-on called Headshot.
  3. As the "graft" wasn't perfect, I added a grey flat cap to cover up the "seam".
  4. I wrote out a script and recorded it using my phone.
  5. I used iClone 8 to translate my voice into mouth animation (see the sketch after this list for the basic principle).
  6. I added in a stock piece of animation to give my head some movement.
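
Since step 5 is the most "magic" of those steps, here's a minimal sketch of the principle behind voice-driven mouth animation. iClone 8's lip-sync is far more sophisticated (it works at the phoneme level, as I understand it); this toy version just maps the loudness of the recording onto a per-frame "jaw open" value. The filename and frame rate are placeholder assumptions.

```python
# A toy illustration of voice-driven mouth animation: map the loudness
# envelope of a recorded voice track to a per-frame "jaw open" weight.
# This is NOT how iClone 8 does it (its lip-sync works at the phoneme
# level); it only shows the basic principle. "script.wav" is a
# placeholder and is assumed to be a mono 16-bit WAV file.
import wave

import numpy as np

FPS = 30  # animation frames per second (assumed)

with wave.open("script.wav", "rb") as wav:
    rate = wav.getframerate()
    samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

samples = samples.astype(np.float32) / 32768.0  # normalise to [-1, 1]
hop = rate // FPS  # audio samples per animation frame

# RMS loudness per animation frame, scaled into a 0..1 jaw-open weight.
chunks = [samples[i:i + hop] for i in range(0, len(samples) - hop, hop)]
rms = np.array([np.sqrt(np.mean(c ** 2)) for c in chunks])
jaw_open = rms / (rms.max() + 1e-9)

for n, value in enumerate(jaw_open[:10]):
    print(f"frame {n}: jaw_open = {value:.2f}")
```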

Am I satisfied with this? No ... I'm really not.

I don't like how my digitised face looks. It's creepy. If I had more time (and more capacity afforded me by my Fibromyalgia) I would have used an AI render process to convert the current "not quite photo realistic" look into a stylised cartoon avatar.

I've always been a fan of cartoon work, but in recent years I've been drawn to stylised looks, also known as NPR, or non-photorealistic rendering.

I'm also a fan of caricature, and my first experience of an artist drawing a caricature of themselves was this person [1]:

[Image: Rolf Harris self-caricature]

I should note that when I first saw this person on the television I was a child and had no knowledge of their crimes.

I did make an attempt at post-processing my recorded video using AI rendering, but my own computing resources are not up to the job. I tried cloud resources that you can rent by the hour, but suddenly an already complicated process, for me at least, became a whole lot more complicated.

I asked ChatGPT to give me an idea of what I would look like if the rendering had succeeded, and this is what it came up with:

[Image: Kevin Warren - AI render]

As a fallback, since the AI rendering was a bust, I tried to recreate myself as an avatar, from scratch. This is the result:

[Image: Kevin Warren - recreated]

It's not great, I know, but it was a start, at least.

Another aspect of the video that I am dissatisfied with is that, apart from the head, the avatar's body does not move at all. That's why the video stays on a head-and-shoulders shot.

I had the idea of using motion capture to transfer my own movements, recorded as I read the script, onto the avatar.

Ordinarily, motion capture is out of reach for the majority of people. The cheapest set-up, a physical suit that you wear, costs around £4,000. Incidentally, the motion capture technology used by computer games and films requires a dedicated room, a plethora of cameras positioned at fixed points around it, and specialist suits for the actors ... and costs tens of thousands.

I had no intention of paying anything like this, but I found out that our old Xbox 360 Kinect camera could be used for motion capture ... except that, after two weeks of wrangling with the technology (it only works on Windows 10, and my computer runs Windows 11), I had to stop and deliver what I had done so far.
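
For anyone tempted down the same road, there is now a markerless route that needs neither a Kinect nor a suit: pose-estimation libraries that track a body from an ordinary webcam. I didn't get this far myself, so treat the sketch below, which uses Google's MediaPipe, purely as a signpost rather than a recipe.

```python
# A sketch of markerless motion capture from a plain webcam using
# Google's MediaPipe pose estimation. This is an alternative route, not
# the Kinect pipeline I attempted.
# Assumes: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(min_detection_confidence=0.5)
capture = cv2.VideoCapture(0)  # the default webcam

for _ in range(300):  # roughly ten seconds of capture at 30 fps
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures in BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 body landmarks with normalised x, y, z; index 0 is the nose.
        nose = results.pose_landmarks.landmark[0]
        print(f"nose: x={nose.x:.2f} y={nose.y:.2f} z={nose.z:.2f}")

capture.release()
pose.close()
```

The landmark stream would still need retargeting onto the avatar's skeleton, which is its own rabbit hole, but at least it runs on Windows 11.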



UPDATE:
(following a tutorial on 5th November 2025)

My tutor requested that I include "process shots of the work progressing" ... so ... for everyone's entertainment, here's what I would actually refer to as 'The Fails':

Actually, before that, here's the output of the 3D scan of my head that I referred to at step 1 above:

[Image: Polycam snapshot of the 3D head scan]

Fine, I've delayed as much as I can, here now are the fails...

It may be asked why my 3D self is wearing a flat cap, when I myself do not possess one.

Perhaps this next image will clear that up:

[Image: Polycam snapshot]

It sort of makes sense when you look at the 3D scan I was working from, but the Character Creator plugin I was using (Headshot) really should have done a better job. Perhaps it will once I'm more skilled and experienced with it.

Once I had my digital self in a form that I was happy with, it needed animating. I imported an animation from another project, but there were issues with it:

I give you ... 'wide-mouthed frog' version:

[Image: the 'wide-mouthed frog' version]

...and 'freaky teeth' version:

[Image: the 'freaky teeth' version]

Both of the above were caused by the same issue: the imported animation was authored for an avatar with different facial dimensions.
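
To illustrate why that matters: facial animation keyframes effectively store offsets sized for the face they were authored on, so playing them back on a face with different proportions makes the mouth over- or under-shoot. The numbers below are invented purely to show the mismatch; real retargeting tools account for much more than one scale factor.

```python
# Why animation authored for one face breaks on another: the keyframes
# store offsets sized for the source face. A crude fix is to rescale
# each offset by the ratio of facial dimensions. All numbers here are
# made up for illustration.
import numpy as np

source_mouth_width = 6.2  # cm, the face the animation was authored for
target_mouth_width = 4.8  # cm, my avatar's face

# A jaw-open keyframe authored as an offset on the source face (cm).
source_offset = np.array([0.0, -2.5, 0.3])

# Played back raw, my avatar's smaller mouth gets the full 2.5 cm drop:
# hello, 'wide-mouthed frog'. Rescaling tames it.
scale = target_mouth_width / source_mouth_width
retargeted_offset = source_offset * scale

print(np.round(retargeted_offset, 2))  # [ 0.   -1.94  0.23]
```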

Seen enough?

I didn't think you had, sadly. Okay, on with the horror show...

I spent about a week going round and round in circles with AI rendering. As I explained above, I wasn't happy with the "uncanny valley" version that I had, and thought a stylised cartoon version would be easier on the eyes.

AI rendering is a slow process, even when you hire cloud resources by the hour.

There are too many variables to count. Put simply, it not only matters what you feed in, but also what instructions (the prompt) you give the AI engine, and how much freedom you allow it to depart from your input. And on top of all that, there's an element of randomness.
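
To make that concrete, here's a hedged sketch of one common AI-rendering route: image-to-image Stable Diffusion via Hugging Face's diffusers library. I'm not claiming these are the exact tools or settings I used; the model name, prompt, and parameter values are illustrative assumptions. The knobs map onto what I just described: the input image, the prompt, the "strength" (how far the output may drift from the input), and the seed (the randomness).

```python
# A sketch of image-to-image "AI rendering" with Stable Diffusion via
# the diffusers library. Not necessarily the exact pipeline I used; the
# model name and parameter values are assumptions for illustration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model checkpoint
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB")  # placeholder filename

result = pipe(
    prompt="stylised cartoon portrait of a man in a grey flat cap",
    image=frame,
    strength=0.5,        # how far the output may drift from the input
    guidance_scale=7.5,  # how strongly the prompt is obeyed
    generator=torch.Generator("cuda").manual_seed(42),  # the "randomness"
).images[0]

result.save("frame_0001_stylised.png")
```

Yes, that randomness is what I'm blaming for these: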


[Image: AI render attempt 1]

This one looks nothing like me. (Actually, none of them do.)



[Image: AI render attempt 2]

This was the best of the bunch, though still nothing like me and nothing like the look I was striving for.



[Image: AI render attempt 3]

This is a fantastic example of what can happen with the randomness factor. (Why is the pose completely changed in this one?)



[Image: AI render attempt 4]

I tried to push the stylised cartoon style ... and pushed it a bit too hard. I mean, I can see what it was thinking, but still, no. Just ... no.



References:

[1] Martin, P. and Culliver, P. (2018) 'Rolf Harris sketch removed from regional theatre to take a stand against sexual assault', ABC News.
Available at:
https://www.abc.net.au/news/2018-10-17/rolf-harris-sketch-removed-years-after-assault-conviction/10387222
(Accessed: 25 October 2025)