Turkish Presidential Elections: TRT Publicity Speech Facial Expression Analysis (03.08.2014)

Click here for the Turkish version.

In this study, the facial expressions of the three Turkish presidential candidates Demirtaş, Erdoğan and İhsanoğlu (in alphabetical order) are analyzed during the publicity speeches broadcast on TRT (Turkish Radio and Television) on 03.08.2014. FaceReader is used for the analysis, where a 3D model of the face is obtained using active appearance models. Over 500 landmark points are tracked and analyzed to obtain the facial expressions throughout each speech. All source videos and the data are publicly available for research purposes, so the analysis can be repeated and the results validated. All results are obtained scientifically and do not represent or reflect any personal view or political opinion. The purpose of this study is to analyze the facial expressions of the presidential candidates during their campaign speeches.

How the System Works

FaceReader is the world’s first tool capable of automatically analyzing facial expressions, providing users with an objective assessment of a person’s facial emotions [1], [2], [3], [4]. The image of the person is analyzed in a machine learning framework: trained facial landmark tracking yields a 3D model of the face, which is then used to classify the facial expressions in a supervised learning framework. The video below demonstrates how the system works. For more information, please visit the Noldus FaceReader website.
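FaceReader’s actual models are proprietary, so the following is only a toy sketch of the pipeline described above: geometric features are extracted from tracked landmark points, and a supervised classifier (here a simple nearest-centroid rule, standing in for the real model) assigns an expression label. All function names, feature choices, and numbers are illustrative assumptions, not FaceReader internals.

```python
# Schematic sketch of a landmark-based expression classifier.
# Everything here is illustrative; FaceReader's real pipeline differs.

EXPRESSIONS = ["neutral", "happy", "sad", "angry", "surprised", "scared", "disgusted"]

def extract_features(landmarks):
    """Toy feature extraction: distance of each 2D landmark from the face centroid."""
    n = len(landmarks)
    cx = sum(x for x, y in landmarks) / n
    cy = sum(y for x, y in landmarks) / n
    return [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in landmarks]

def train_centroids(samples):
    """Supervised 'training': average the feature vector of each labeled expression."""
    sums, counts = {}, {}
    for label, feats in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(centroids, feats):
    """Assign the expression whose training centroid is closest in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], feats))
```

In a real system the features would be far richer (hundreds of 3D landmark coordinates plus texture), and the classifier would be trained on thousands of annotated faces; the structure, however, is the same: features in, expression label out.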

Sample analysis of the three candidates (Demirtaş, Erdoğan, İhsanoğlu).

 

Candidates

Selahattin Demirtaş

A small part from the video and the analysis result:

[Figure: video excerpt and FaceReader expression analysis for Demirtaş]

Recep Tayyip Erdoğan

A small part from the video and the analysis result:

[Figure: video excerpt and FaceReader expression analysis for Erdoğan]

Ekmeleddin İhsanoğlu

A small part from the video and the analysis result:

[Figure: video excerpt and FaceReader expression analysis for İhsanoğlu]

What Do the Figures Say?

The left column shows the average facial expression of each candidate over the entire 10-15 minute video. The right column shows the same averages after the neutral moments (the gray region on the left) are discarded.

Conclusion

First of all, it is important to emphasize that the analysis presented in this study shows how the candidates look, not how they actually feel during the speech. This work is presented from a scientific point of view and hence no personal opinion is reflected.

The three candidates' public speech videos, each 10-15 minutes long, are analyzed. We would like to share some remarkable observations:

Looking at the left column of the figure, one can conclude that the candidates do not use their facial expressions much and generally maintain a neutral facial state during their speeches. In comparison, İhsanoğlu stands out as the candidate who uses his facial expressions the most (32%), Erdoğan follows with 15%, and Demirtaş uses his the least (6%).

A closer look at the right column, where the neutral state is discarded, reveals a major difference between the candidates. The most common expression for Erdoğan is anger with 82%, followed by fear with 10%. Similarly, for İhsanoğlu, anger is the most common expression with 74%, followed by disgust with 20%. For Demirtaş, the most common expression is happiness with 50%, followed by fear with 34%.

Such an analysis would certainly be more interesting during a real-time discussion on a TV program with all the candidates present. However, under the current conditions, this is the best available unbiased way to make a comparative analysis, using the publicity speeches that TRT is officially obliged to broadcast. We hope to conduct a future study during a real-time broadcast with all the candidates present. You can find the contact form below.

 

[1] Kuilenburg, H. van, Wiering, M. and Uyl, M.J. den (2005). A Model Based Method for Automatic Facial Expression Recognition. In Proceedings of the 16th European Conference on Machine Learning (ECML-2005), October 3-7, Porto, Portugal.
[2] Marten J. den Uyl and Hans van Kuilenburg (2005). The FaceReader: Online facial expression recognition. In Proceedings of Measuring Behaviour 2005, 5th International Conference on Methods and Techniques in Behavioural Research, 589-590.
[3] H. Emrah Tasli, Amogh Gudi, and Marten den Uyl, “Remote PPG based vital sign measurement using adaptive facial regions,” in International Conference on Image Processing, 2014.
[4] H. Emrah Tasli, Paul Ivan; “Turkish Presidential Elections TRT Publicity Speech Facial Expression Analysis”; arXiv:1408.3573