## Facial Animation

**Importing the expressions.**

To get started, you need a set of 3D meshes, one per facial expression, created in your favourite editor.

The important thing is that there is a one-to-one relationship between all the points: every mesh must have the same number of vertices, stored in the same order, so that each vertex in one expression corresponds to the same vertex in every other.

I was lucky: I found a website with a free set of low-poly heads available for download. Have a dig around and you may be able to find something similar.

If not, you will have to create your own.

Once you have these we can begin.

The first stage is to import the meshes into your XNA game and convert them to 2D textures. This is my version of the class.

```csharp
public class Expression
{
    Model model;
    public Texture2D texture;
    public float weight;
    public int size;

    public void Load(GraphicsDevice graf, IServiceProvider services, String file)
    {
        // load the model using a new content manager so we can dispose of it cleanly
        ContentManager cont = new ContentManager(services);
        cont.RootDirectory = "Content";
        model = cont.Load<Model>(file);

        List<Vector4> pixels = new List<Vector4>();

        for (int i = 0; i < model.Meshes.Count; i++)
        {
            // find the number of verts in this mesh
            int nverts = model.Meshes[i].VertexBuffer.SizeInBytes /
                         model.Meshes[i].MeshParts[0].VertexStride;
            byte[] oldverts = new byte[model.Meshes[i].VertexBuffer.SizeInBytes];

            // find the location of the position and texture data in the stream
            VertexElement[] vars =
                model.Meshes[i].MeshParts[0].VertexDeclaration.GetVertexElements();
            int ppos = 0;
            int tpos = 0;
            for (int ve = 0; ve < vars.GetLength(0); ve++)
            {
                if (vars[ve].VertexElementUsage == VertexElementUsage.Position)
                {
                    ppos = vars[ve].Offset;
                }
                if (vars[ve].VertexElementUsage == VertexElementUsage.TextureCoordinate)
                {
                    tpos = vars[ve].Offset;
                }
            }

            model.Meshes[i].VertexBuffer.GetData<byte>(oldverts);

            // convert the vertex buffer: pull out the position of each vertex
            int spos = 0;
            for (int k = 0; k < nverts; k++)
            {
                float x = BitConverter.ToSingle(oldverts, spos + ppos);
                float y = BitConverter.ToSingle(oldverts, spos + ppos + 4);
                float z = BitConverter.ToSingle(oldverts, spos + ppos + 8);
                Vector4 p = new Vector4(x, y, z, 1);
                pixels.Add(p);
                spos += model.Meshes[i].MeshParts[0].VertexStride;
            }
        }

        // find the smallest power-of-two square texture that can hold all the points
        int maxvertex = pixels.Count;
        int n = 16;
        while (n * n < maxvertex)
        {
            n *= 2;
        }
        size = n;

        texture = new Texture2D(graf, n, n, 1, TextureUsage.None, SurfaceFormat.Vector4);
        Vector4[] pix = new Vector4[n * n];
        for (int y = 0; y < pixels.Count; y++)
        {
            pix[y] = pixels[y];
        }
        texture.SetData<Vector4>(pix);

        // clean up: the model itself is no longer needed once the texture exists
        cont.Unload();
        cont.Dispose();
        model = null;
    }
}
```

The important thing to notice here is the way the model is parsed.

A model contains multiple Meshes. Each Mesh can have multiple MeshParts.

The Mesh contains the vertex buffer we want to convert, but the MeshPart contains all the information about how that buffer is laid out.

Each vertex is stored as a structure, but we don't know exactly what the 3D editor has put in that structure. Luckily, we only need the position information (I have left in a bit of code that finds the texture coordinates as a reference for you), so we can simply look at the vertex declaration stored in the MeshPart and find the offset of the position data.

Once we know the location of the position data we can walk the vertex buffer and extract the position of every point.
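To see the stride walk in isolation, here is a standalone sketch using a hypothetical two-vertex buffer (position at offset 0, stride of 16 bytes — both values are made up for the example; in the real class they come from the MeshPart):

```csharp
// Hypothetical raw buffer: 2 vertices, float3 position at offset 0, 16-byte stride.
byte[] buffer = new byte[2 * 16];
Buffer.BlockCopy(new float[] { 1f, 2f, 3f }, 0, buffer, 0, 12);
Buffer.BlockCopy(new float[] { 4f, 5f, 6f }, 0, buffer, 16, 12);

int stride = 16, ppos = 0, spos = 0;
for (int k = 0; k < 2; k++)
{
    // read three consecutive floats starting at the position offset
    float x = BitConverter.ToSingle(buffer, spos + ppos);
    float y = BitConverter.ToSingle(buffer, spos + ppos + 4);
    float z = BitConverter.ToSingle(buffer, spos + ppos + 8);
    // vertex 0 -> (1, 2, 3), vertex 1 -> (4, 5, 6)
    spos += stride;
}
```

The stride jumps over whatever else the editor packed into each vertex (normals, texture coordinates, and so on), which is why we never need to know the full structure.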

At this stage we don't know how many vertices we are dealing with, so I have just added them to a list. Once the list is populated, we find the smallest power-of-two square texture that can hold all the points and create the texture from the list.
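As a quick sanity check of the sizing loop, suppose a head had 3,000 vertices (a made-up count for illustration):

```csharp
int maxvertex = 3000;   // hypothetical vertex count
int n = 16;
while (n * n < maxvertex)
{
    n *= 2;
}
// n ends up as 64: 32 * 32 = 1024 is too small, but 64 * 64 = 4096 holds all 3000 points
```

Any leftover texels (4096 − 3000 in this example) are simply left at zero and never read back.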

That's it, we now have a texture we can use. All that remains is to clean up after ourselves.
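Loading a set of expressions might then look something like this (the asset names are placeholders; use whatever your content project contains):

```csharp
// Hypothetical usage from LoadContent(); file names are placeholders.
Expression neutral = new Expression();
neutral.Load(GraphicsDevice, Services, "Models/head_neutral");

Expression smile = new Expression();
smile.Load(GraphicsDevice, Services, "Models/head_smile");

// Because the meshes share a one-to-one vertex layout,
// every expression should end up in the same size texture.
System.Diagnostics.Debug.Assert(neutral.size == smile.size);
```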

Then onto the drawing stuff.
