What happens when politicians show emotions? According to emotional contagion theory, one might expect people to mimic politicians’ nonverbal expressions, leading to affect transfer from politician to voter. However, emotional mimicry is more likely conditional on people’s priors, raising the following questions: do we only mimic the politicians we support, or can we be tempted by the enemy’s smile? In this study, we theorize and test (1) the extent to which politicians’ emotional displays transfer to voters, (2) how party attachment moderates emotional contagion, and (3) which emotions are most contagious. In a pre-registered experiment (N=109), participants were presented with manipulated facial expressions of politicians. We measured both cognitive-emotional responses (self-reports) and physiological emotional responses (facial electromyography). The study provides insights into how voters are emotionally affected by politicians’ emotional displays, and contributes to the polarization literature by examining whether and when voters can be empathic toward their opponents’ emotions.
Images are an efficient and effective form of communication that is increasingly prevalent on social media and other platforms. Large-n analyses of these images are necessary if we are to fully understand the conditions under which visuals influence political attitudes and behavior. Images are also a valuable source of information on a range of social science phenomena. Despite great advances in computer vision, state-of-the-art computer vision algorithms are not trained to recognize many features that are of interest to social scientists (such as emotional responses to images). Therefore, researchers must invest in costly manual annotation. We propose an approach for reducing manual annotation costs. Specifically, we use fine-tuned embeddings from a pre-trained convolutional neural network and k-means clustering to first group similar images in a large corpus. We then randomly sample images from each cluster, label them, and propagate those labels to all of the remaining images in the cluster. We demonstrate the method using a corpus of images drawn from tweets using the #FamiliesBelongTogether hashtag. This method makes large-n analysis of images considerably less costly for social scientists.
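The cluster-then-propagate procedure described in this abstract can be illustrated with a minimal sketch. The code below is hypothetical and dependency-free: it uses toy 2-D vectors in place of fine-tuned CNN embeddings and a from-scratch Lloyd's k-means; a real pipeline would extract embeddings from the fine-tuned network and use an off-the-shelf implementation such as scikit-learn's KMeans. The function names, toy data, and labels are all illustrative assumptions, not the authors' actual code.

```python
import random
from collections import Counter

def _sqdist(p, q):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means; returns a cluster index for each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from k distinct data points
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: _sqdist(p, centroids[c]))].append(p)
        # Recompute each centroid as the mean of its group (keep old if empty).
        centroids = [
            [sum(dim) / len(g) for dim in zip(*g)] if g else centroids[j]
            for j, g in enumerate(groups)
        ]
    return [min(range(k), key=lambda c: _sqdist(p, centroids[c])) for p in points]

def propagate_labels(assignments, hand_labels):
    """Spread hand labels ({point index: label}) to every point, using the
    majority label among the hand-labelled points in each cluster."""
    per_cluster = {}
    for idx, label in hand_labels.items():
        per_cluster.setdefault(assignments[idx], []).append(label)
    majority = {c: Counter(ls).most_common(1)[0][0] for c, ls in per_cluster.items()}
    return [majority.get(c) for c in assignments]

# Toy 2-D "embeddings": two well-separated groups standing in for image clusters.
embeddings = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
clusters = kmeans(embeddings, k=2)
# Hand-label one sampled image per cluster, then propagate to the rest.
labels = propagate_labels(clusters, {0: "protest", 3: "portrait"})
```

The cost saving comes from hand-labelling only a few sampled images per cluster; the quality of the propagated labels then hinges on how homogeneous the embedding clusters are, which is why the embeddings are fine-tuned first.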
By collaborating with a number of professional photojournalists, we have obtained access to more than one million images from conflict zones – mainly Iraq. We want to use these images to examine how conflicts are depicted and narrated. From the total population of photographs in the raw shoot, we know which images were submitted to editors, and which were published. For each image we know the time and date of capture. For some images we also know the approximate location (which, given the time and date stamps, can be extrapolated to all images). To learn the content of the images, we manually label a subset of images. We use this subset in concert with artificial neural networks to classify and estimate the content of all our images. Combining our image data with the UCDP geocoded conflict event dataset, we can, e.g., assess the association between the level of violence and the content of the images. Furthermore, because we know whether images were submitted to editors or not, and whether they were eventually published, we can explore choices and biases throughout the publication process. For instance, we can explore whether images portraying gender roles in a particular manner are prioritised through the publication process.
Twitter has become a battleground for election campaigns, with almost all political candidates vying for office present on this medium. Studies examining the role of Twitter in election campaigns are numerous but almost exclusively focus on the textual content of tweets. Unexamined so far have been the images contained in those tweets and the messaging they attempt to convey. At the same time, we know that images can have powerful communicative effects that go above and beyond text alone. In this presentation, we report preliminary findings of a study of 45,000 campaign images tweeted by candidates during the Irish General Election campaign of 2020. We first present a fine-grained categorisation of those images, including categories such as community engagement, electioneering, image management, and policy content. We then report preliminary results from a set of pre-trained and context-specific classifiers used to predict image categories, relying on text and image features. We will discuss the hurdles we have encountered in this work so far, and reflect on the promises and challenges of studying images at scale.
Voters evaluate politicians not just by what they say, but also by how they say it, via facial displays of emotion and vocal pitch. Candidate characteristics can shape how leaders use -- and how voters react to -- nonverbal cues. Drawing on role congruity expectations, we focus on how gender shapes the use of and reactions to facial, vocal, and textual communication in political debates. Using full-length debate videos from four German national elections (2005--2017) and a minor debate in 2017, we employ computer vision, machine learning, and text analysis to extract facial displays of emotion, vocal pitch, and speech sentiment. Consistent with our expectations, Angela Merkel expresses less anger and is less emotive than her male opponents. We combine second-by-second candidate emotions data with continuous responses recorded by live audiences. We find that voters punish Merkel for anger displays and reward her happiness and general emotional displays.
Confidence in election results is a central pillar of democracy. However, many new democracies struggle to achieve this democratic ideal. Elections are plagued by a number of irregularities, but neither actors on the ground nor scholars are able to reliably determine whether these irregularities are indicative of fraud, or just reflect the growing pains of new democracies. In particular, because existing measures of fraud can only be observed across the entire sample of polling stations, scholars are generally unable to isolate specific fraudulent returns -- and thus determine when and where electoral irregularities occur, a crucial question in determining whether election results are fraudulent or just problematic. In this paper, we use tools from computer vision and machine learning to identify irregularities on statutory forms for each of over 35,000 polling stations in Kenya’s 2017 presidential election. We show that irregularities on statutory forms correspond to abnormal turnout rates and higher invalid vote rates. Further, we demonstrate that irregularities appear to be concentrated in “stronghold” districts, where one candidate enjoys a preponderance of the vote share.
Early experiences with politics can shape how kids – and later, adults – see and engage with the political world. Yet, we know very little about the ways that kids actually see the political world. Using a new measurement instrument, the Draw A Political Leader (DAPL) tool, we examine the images that more than 1600 children (ages 6-12) produced about politics. In the first part of the paper, we examine the common themes in these images and how they vary by the children’s age, gender, race, location, and political attitudes. We then use computer processing of these images to identify the ways that sentiment and subject can (and cannot) be extracted and categorized using automated techniques. These results are compared against human coding. Our paper provides both new methodological approaches for assessing views of politics via DAPL and automated processing of those images, and key insights into the role of images in political socialization.
Why do legislators sometimes deliver passionate speeches and sometimes tedious monologues? We argue that legislators make passionate appeals when they want to signal support or opposition to a bill. Whether legislators choose to send such a signal depends on the preference of the median voter in their districts. We expect legislators to deliver more emphatic speeches if their floor vote is aligned with the preferences of their electorate. To test this argument, we apply automated video analysis to plenary recordings of speeches on key votes in the 111th–115th US House of Representatives (2009–2018). We match the speech emphasis with district preferences on the bills using data from the Cooperative Congressional Election Study. We find that House members who rise in opposition to a bill give more passionate speeches when public preferences are aligned with their vote choice. The paper discusses the implications of these findings for our understanding of legislative debates.
In recent years, the personalization of politics has been widely discussed as a global trend that is heavily facilitated by visual cues, since well-known politicians are frequently covered in the news and curate their own social media accounts. On a local level, campaign posters often feature district candidates and therefore provide visual cues for voters. In some countries, candidates’ photographs can even be seen on voting ballots. Thus, this study focuses on citizens’ perceptions of candidates and candidate characteristics in order to explain candidates’ electoral success. Attributes such as candidates’ party membership, gender, and ethnic background, as well as their physical attractiveness and facial expressions, are used to explain electoral success in Germany and Ireland. Candidates’ facial expressions are measured by human ratings as well as machine classifications, compared, and subsequently considered in the analysis. The results indicate that physical attractiveness and facial expressions on campaign posters and voting ballots can impact candidate perceptions and, as a result, vote shares. This study thereby shows that candidate appearances play an important role in elections, most likely by providing information shortcuts for voters. Furthermore, it supports the notion that the personalization of politics occurs even in parliamentary systems with proportional electoral systems.
How are data about migration portrayed visually, and what do these portrayals imply for communicating this salient issue? Data visualizations—comprising representations of information that aim to enhance understanding—are increasingly commonplace in journalistic and policy settings. Visualization’s popularity is partly motivated by assumptions that quantitative evidence ‘speaks for itself’, and that visual representations of data are more understandable for users. Scholars have begun to question these assumptions, especially in political domains where evidence and values potentially clash. This paper provides an empirical foundation for discussions about images’ roles in politics by contributing novel evidence of the characteristics present in data visualizations about migration that are publicly available. Theoretically, I draw upon frame-building to relate decisions about message content with effects on political behaviors and attitudes. Empirically, I present results analyzing 300 migration visualizations scraped from Google Images. Using content analysis (validated by inter-coder reliability tests) and qualitative social semiotic methods, I identify key patterns in these images, including the dominance of ‘clean’ white layouts and a limited variety of chart types. I conclude by considering the implications for visual framing and migration public opinion, as well as exploring how other computational techniques could be applied to study this dataset.