Which library / algorithm should I use to recognise a specific facial expression (C#)


I have a friend who, after having a stroke, now has locked-in syndrome and can only move his eyes and eyelids. I've been working on some C# software (written in Unity3D) to help him communicate; the solutions he has tried so far have not been successful due to the level of his disability.

I am in the process of implementing Dlib for facial feature recognition. What I would like to do is ask the user to repeatedly alternate between a neutral face and an indicating face. I don't want to limit the indication gesture specifically to winking/blinking/opening the eyes wide, etc., because I want other people to be able to freely download the app and indicate in whatever way their condition allows.

I could really do with some pointers as to how to have my app learn what the user's facial indicator is by comparing the facial landmarks of the two states.

At this point, I don't even know what it is I need to know. I don't know the names of any AI algorithms that might be relevant or anything.

I know posts like this are often closed for being "too vague", but due to the nature of the request, I'd appreciate it if it could be left open as any suggestions at all could potentially be life-changing.

OpenCV would be an excellent choice as I already have a licence for a library in Unity3D.

Peter Morris

Posted 2018-04-26T22:09:57.767

Reputation: 111

Welcome to AI! I've added some relevant tags. A worthy inquiry. – DukeZhou – 2018-04-27T16:04:45.507


If you are doing this for academic research (free) or for a commercial venture (cost is $25,000) check out Affectiva's Emotion SDK for Unity: https://knowledge.affectiva.com/docs/getting-started-with-the-emotion-sdk-for-unity. It looks like they have a lot of material that would help you get an application written.

– Brian O'Donnell – 2018-04-27T18:59:29.073



Since you are

  • A C# developer already
  • Just getting started and not sure where to go next

I would suggest trying Microsoft's Emotion API, which is now part of the general Azure Face API. It has the benefit of being pre-trained on a very large dataset, and you can perform 30,000 recognitions per month for free.
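As a rough illustration of what a call to the Face API's detect endpoint looks like, here is a minimal Python sketch. It only builds the request (nothing is sent), and the endpoint region, key, and image URL are placeholder values you would replace with your own Azure resource details:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values -- substitute your own Azure Face resource endpoint and key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "your-key-here"

def build_detect_request(image_url: str) -> urllib.request.Request:
    """Build (without sending) a Face API detect request that asks
    for the emotion attributes of any detected face."""
    params = urllib.parse.urlencode({"returnFaceAttributes": "emotion"})
    return urllib.request.Request(
        f"{ENDPOINT}/face/v1.0/detect?{params}",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_detect_request("https://example.com/face.jpg")
print(req.full_url)
```

In Unity you would do the equivalent with C#'s HTTP client; the response is JSON containing per-face emotion scores you can threshold on.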


Posted 2018-04-26T22:09:57.767

Reputation: 221


Dlib can be a good starting point for detecting the facial landmarks.

To start, you can try this approach:

  1. Take several images of your friend, both in the neutral expression and in the indicating expression.

  2. Use a subset of landmarks from Dlib (those that relate to the parts your friend can move) to calculate some distances (see the example in the picture; the arrows are the distances). All these distances are now the features that describe your friend's neutral and indicating expressions.

[Image: facial landmarks with arrows marking the distances used as features]
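Step 2 can be sketched as follows. In practice the (x, y) points would come from Dlib's 68-point shape predictor; here they are made-up coordinates (and a hand-picked choice of landmark pairs) purely to illustrate turning distances into a feature vector. Python is used since Dlib's bindings are readily available there, but the idea ports directly to C#:

```python
import math

# Made-up (x, y) coordinates standing in for dlib's 68-point shape
# predictor output. Indices follow the usual 68-point convention
# (e.g. points around one eye, an eyebrow, the mouth corners).
landmarks = {
    37: (100, 80),   # upper eyelid
    41: (100, 92),   # lower eyelid
    19: (98, 60),    # eyebrow
    48: (90, 130),   # left mouth corner
    54: (130, 130),  # right mouth corner
}

def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def extract_features(pts):
    """Turn a few hand-picked landmark distances into a feature vector.
    Which pairs to use depends on which parts the user can move."""
    return [
        dist(pts[37], pts[41]),  # eye aperture
        dist(pts[19], pts[37]),  # eyebrow-to-eye distance
        dist(pts[48], pts[54]),  # mouth width
    ]

print(extract_features(landmarks))
```

Each captured frame then yields one such feature vector, labelled as "neutral" or "indicating" according to what the user was asked to do.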

  3. To test whether the features you have obtained are good for classifying between the two expressions, try Principal Component Analysis (PCA). Most languages have one or more libraries that implement it. If your features are good enough, the representation in the PCA feature space should look like this:

[Image: 2D PCA plot showing two well-separated clusters of red and blue dots]

where, for example, the red dots represent the features extracted from the neutral expression and the blue dots represent the features of the indicating expression. The more separated the points are, the better the classification will work. If the dots are too close or overlapping, go back to step 2 and choose different features.
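The separability check in step 3 can be sketched like this. The feature vectors here are synthetic (two Gaussian clusters standing in for real "neutral" and "indicating" measurements), and the PCA is implemented minimally via the eigenvectors of the covariance matrix rather than with a library, just to show the mechanics:

```python
import numpy as np

# Synthetic stand-ins for the distance features of step 2:
# 20 "neutral" frames and 20 "indicating" frames.
rng = np.random.default_rng(0)
neutral    = rng.normal(loc=[12.0, 20.0, 40.0], scale=0.5, size=(20, 3))
indicating = rng.normal(loc=[4.0, 25.0, 40.0], scale=0.5, size=(20, 3))
X = np.vstack([neutral, indicating])

def pca_project(X, n_components=2):
    """Project X onto its top principal components (the eigenvectors
    of the covariance matrix of the centred data)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalue order
    top = eigvecs[:, ::-1][:, :n_components]     # largest components first
    return Xc @ top

Z = pca_project(X)
# If the features are good, the first 20 rows (neutral) and the last 20
# (indicating) form two well-separated clusters when Z is plotted.
print(Z.shape)
```

In practice you would just scatter-plot `Z` (coloured by label) and eyeball the separation, exactly as in the figure above.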

  4. Now you have two classes, so you can train a Support Vector Machine (SVM). Then, in real time, you take a frame from a webcam, calculate the landmarks and the distances in that frame (as before), and pass the resulting vector as input to the SVM, which should tell you which expression it is.
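Step 4 might look like the following sketch, using scikit-learn's `SVC` and the same kind of synthetic feature clusters as before (in the real app, each row would be the distances computed from one captured frame):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for the distance features: one cluster per expression.
rng = np.random.default_rng(1)
neutral    = rng.normal(loc=[12.0, 20.0, 40.0], scale=0.5, size=(30, 3))
indicating = rng.normal(loc=[4.0, 25.0, 40.0], scale=0.5, size=(30, 3))

X = np.vstack([neutral, indicating])
y = [0] * 30 + [1] * 30           # 0 = neutral, 1 = indicating

clf = SVC(kernel="linear")        # two well-separated classes: linear is enough
clf.fit(X, y)

# At runtime: compute the same distances from the current webcam frame
# and ask the trained model which expression it sees.
frame_features = [[4.2, 24.8, 39.9]]
print(clf.predict(frame_features))
```

Because the classifier is trained per user on their own two expressions, even a simple linear kernel tends to suffice when the PCA plot from step 3 shows clean separation.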

This is a basic approach, but since you only have to recognise two expressions, and since those two expressions are always made by the same person, with a bit of luck and the right choice of features it could work very well!


Posted 2018-04-26T22:09:57.767

Reputation: 131