
SYSTEMATIC REVIEW article

Front. Psychol., 18 February 2022
Sec. Cognitive Science
This article is part of the Research Topic Social Cognition and Social Influence in the Time of Coronavirus Disease (COVID-19).

Face Processing in Early Development: A Systematic Review of Behavioral Studies and Considerations in Times of COVID-19 Pandemic

  • 1Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
  • 2Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom

Human faces are one of the most prominent stimuli in the visual environment of young infants and convey critical information for the development of social cognition. During the COVID-19 pandemic, mask wearing has become a common practice outside the home environment. With masks covering the nose and mouth regions, the facial cues available to the infant are impoverished. The impact of these changes on development is unknown but is critical to debates around mask mandates in early childhood settings. As infants grow, they increasingly interact with a broader range of familiar and unfamiliar people outside the home; in these settings, mask wearing could possibly influence social development. In order to generate hypotheses about the effects of mask wearing on infant social development, in the present work we systematically review N = 129 studies, selected following the most recent PRISMA guidelines, to provide a state-of-the-art overview of behavioral studies investigating face processing in early infancy. We focused on identifying sensitive periods during which being exposed to specific facial features or to the entire face configuration has been found to be important for the development of perceptual and socio-communicative skills. For perceptual skills, infants gradually learn to analyze the eyes or the gaze direction within the context of the entire face configuration. This contributes to identity recognition as well as to the discrimination of emotional expressions. For socio-communicative skills, direct gaze and emotional facial expressions are crucial for attention engagement, while eye-gaze cueing is important for joint attention. Moreover, attention to the mouth is particularly relevant for speech learning. We discuss possible implications of exposure to masked faces for developmental needs and functions. Providing groundwork for further research, we encourage the investigation of the consequences of mask wearing for infants’ perceptual and socio-communicative development, suggesting new directions within the research field.

Introduction

Faces are our primary source of information about other people. We rely on social cues conveyed by human faces to interpret socio-communicative interactions, using information from the face to decode others’ intentions, emotions, and interests. Since the early stages of the COVID-19 pandemic, the World Health Organization (WHO) has recommended wearing face masks in social contexts to limit viral diffusion (WHO, 2020). This brings an important change in the facial information available for encoding, leaving the eyes uncovered while masking the mouth. Face coverings remove information about facial configuration and potentially affect social cognition, altering face perception and the detection of communicative meanings in social contexts in adults (Carragher and Hancock, 2020; Noyes et al., 2021) and school-aged children (Stajduhar et al., 2021). Considering the effects of face coverings on social cognition is important in evaluating the risk–benefit balance of mask mandates in particular settings.

The roots of social cognition begin at birth and critically rely on processing information from faces. Newborns preferentially orient toward faces (Morton and Johnson, 1991; Gamé et al., 2003; Macchi Cassia et al., 2004), an effect driven by the configural location of the eyes and mouth (Morton and Johnson, 1991; Farroni et al., 2005). The most frequent stimulus infants encounter in their environment is the human face (Fausey et al., 2016). Being exposed to a variety of facial features (eyes, eye gaze, and mouth) and emotional expressions within sensitive periods is crucial for the specialization of social brain networks (Johnson, 2005). Thus, given that masks disrupt visual access to facial features, it is important to consider the possible cascading effects of exposure to masked faces on perceptual and socio-communicative development. Since a large corpus of published literature has examined how early exposure to faces contributes to social brain development, we can leverage existing work to ask which aspects of face processing may be altered by exposure to masked faces and whether this has different implications depending on the infant’s developmental stage.

In the present paper, we summarize the wide corpus of studies on the development of face processing to understand possible effects of mask wearing as a function of infants’ developmental needs. To generate hypotheses, we consider the changes in facial cues resulting from mask wearing (the mouth covered, the eyes uncovered, and the face configuration broken) and present a guided systematic review of behavioral studies investigating face processing during the first years of life (0–36 months). Mask wearing is discussed in terms of both altering face perception and hindering social communication by removing information about face configuration. Crucially, the aim of this review is to inform future research exploring the developmental effects of mask wearing, which is a key preventive measure to limit COVID-19 diffusion.

Materials and Methods

Two literature searches were conducted on Elsevier’s Scopus® (Ballew, 2009) and OVID databases before February 20th, 2021, to select papers on the topic of face processing in infancy. The search string {[(face and (face processing or eye or eyes or mouth or gaze or emotion or motion or race) and infan*) not (“autism spectrum disorders” or asd or asc or autis* or ndd or “neurodevelopmental disorde*”)].ti,ab,kw.} yielded 8,828 manuscripts in total. Manuscripts were selected from the subject areas of Psychology, Neuroscience, and Social Sciences as published or in-press articles written in English; duplicates were then removed, resulting in 5,155 papers to be screened. We focused on behavioral studies with typically developing infants to get a sense of the possible effects of mask mandates in community contexts for children of preschool age.

An additional automatic filter was applied before manual abstract screening, such that the retrieved manuscripts’ title, abstract, or keywords had to (1) include or (2) not include the words indicated in Table 1. This strategy was adopted to limit the search to content that was pertinent to our research questions. Two independent researchers (LC and AG) screened the remaining abstracts (N = 615) and read all the selected papers (N = 110). During abstract screening, papers were excluded if they were non-relevant in terms of topic or age, used non-behavioral methodologies (EEG, NIRS, fMRI, or pupillary reflex), were reviews or meta-analyses, or were published before 2000. Each of the selected papers was assigned to one or more of the following topics: “eyes,” “gaze cueing,” “mouth,” “motion,” “local/global,” “emotion,” “race,” and “face looking.” To limit the focus of this review to the effect of facial features and information that could be altered or hidden by masks, papers focusing on the effect of race on face perception were excluded at this stage.

Table 1. Criteria used for manuscript search.
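To make the automatic filtering step concrete, the sketch below illustrates how a keyword-based inclusion/exclusion filter over title, abstract, and keywords could be applied. This is a minimal illustration, not the implementation used for this review: the term lists and record fields are hypothetical placeholders standing in for the criteria listed in Table 1.

# Minimal sketch of a title/abstract/keyword filter; INCLUDE_TERMS and
# EXCLUDE_TERMS are hypothetical placeholders for the Table 1 criteria.
INCLUDE_TERMS = ["face", "infan"]                       # assumed inclusion terms
EXCLUDE_TERMS = ["autis", "asd", "neurodevelopmental"]  # assumed exclusion terms

def passes_filter(record: dict) -> bool:
    """Keep a record only if its title, abstract, or keywords contain at least
    one inclusion term and none of the exclusion terms (case-insensitive)."""
    text = " ".join(record.get(f, "") for f in ("title", "abstract", "keywords")).lower()
    included = any(term in text for term in INCLUDE_TERMS)
    excluded = any(term in text for term in EXCLUDE_TERMS)
    return included and not excluded

# Toy records: only the first would survive the filter.
records = [
    {"title": "Face processing in infancy", "abstract": "Gaze cueing...", "keywords": "infant; eyes"},
    {"title": "Face scanning in autism", "abstract": "ASD sample...", "keywords": "ASD"},
]
screened = [r for r in records if passes_filter(r)]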

Selection bias could arise from the use of the automation tool; however, we attempted to mitigate this by carefully selecting relevant references during full-text reading. An additional N = 28 papers were manually included at this stage. Nine papers were excluded after full-text reading as they were considered off topic. The final sample was N = 129 papers. The literature selection process is illustrated in the PRISMA flow diagram (Figure 1).
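As a simple check on the selection tally reported above (110 papers retained after abstract screening, 28 added manually, 9 excluded at full-text reading), the short reconstruction below recomputes the final sample size; the variable names are ours and only restate the counts already given in the text.

# Reconstruction of the selection tally reported in the Methods.
full_texts_retained = 110   # papers retained after abstract screening
added_manually = 28         # relevant references added during full-text reading
excluded_off_topic = 9      # papers judged off topic after full-text reading

final_sample = full_texts_retained + added_manually - excluded_off_topic
assert final_sample == 129  # matches the reported N = 129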

Figure 1. PRISMA flow diagram.

Results

Papers included in the present review covered five main topics: (i) face exposure, (ii) featural and configural face scanning, (iii) eyes and eye gaze, (iv) mouth, and (v) emotional expression. These topics were selected to allow us to extrapolate the possible impact of mask wearing linked to: (i) being exposed to a smaller variety of uncovered faces, and possibly to familiar faces more often than before the COVID-19 pandemic; (ii) being exposed to partly covered faces rather than full faces; (iii) having the eye region uncovered and available for extracting information; (iv) obtaining limited information from the mouth and mouth movements; and (v) having limited experience with the simultaneous changes in face features produced by emotional expressions. Importantly, perceptual and communicative aspects are examined for each topic. In Table 2, we provide a summary of the main information for each included study.

Table 2. Summary of studies included in the review.

Environmental Exposure to Faces

Early in life, infants spend most of their time inside the household. As they grow, their living contexts expand and they encounter more people outside the family (e.g., peers and teachers). To get a sense of the likelihood of being exposed to masked faces in daily routines, we summarize naturalistic and screen-based studies on the extent to which faces are present and looked at in everyday visual environments during the first 3 years of life.

Studies conducted using head-mounted cameras showed that, within the first year, the amount of face exposure is higher for younger infants; infants see fewer faces as they grow older across the first 2 years of life (Jayaraman et al., 2015, 2017). Indeed, 3-month-old infants are exposed to faces for 21% of their daily time, and this is most frequently the face of the primary caregiver. However, frequency of exposure and consistency of faces vary across contexts, with caregivers’ faces being present in a wide range of contexts but for shorter durations compared to other relatives or strangers (Sugden and Moulson, 2019). Similarly, in a survey study, Rennels and Davis (2008) found that over the first year of life, most adult–infant interactions happen with the primary caregiver and with people of the same age, gender, and race. Furthermore, female faces appear more frequently in infants’ visual environment; infants have 2.5 times more experience with the mother’s face than with the father’s (Rennels and Davis, 2008). In terms of duration, mean exposure time to unfamiliar individuals shortens with age, possibly because infants move around the environment and shift attention away from faces more frequently as they grow (Rennels and Davis, 2008). At 12 months of age, when infants’ motor abilities are rapidly developing and performing actions might require some effort, face looking and mutual gaze decrease when parents are standing and face looking has higher motor costs (vs. a low motor cost condition), as shown by eye-tracking data collected during free play. Indeed, parents tend to spend time on the floor, perhaps to facilitate face looking in their children (Franchak et al., 2018).

In contrast with naturalistic studies, screen-based studies show that, with age, infants look more at faces when exposed to complex and dynamic social contexts. Within complex arrays, faces attract and hold infants’ attention as they do in adults at 6 but not 3 months of age (Di Giorgio et al., 2012) and are looked at for longer than objects (Gluckman and Johnson, 2013) or toys at both 4 and 8 months (DeNicola et al., 2013). Orienting to faces is facilitated by direct gaze before 6 months (Simpson et al., 2020), in line with literature supporting the role of direct gaze in engaging attention from the earliest developmental stages (Farroni et al., 2002). After 6 months of age, infants pay increasing attention to moving faces compared with static images of patterns (Courage et al., 2006). At this age, both upright and inverted faces elicit attention orienting in complex visual displays, but only upright faces hold infants’ attention (Gliga et al., 2009).

Taken together, this evidence suggests that a significant amount of time is spent looking at faces from early in life. While an increase in face looking with age is found when presenting infants with complex arrays in laboratory settings, naturalistic studies highlight that infants look less at faces as they grow. The motor skills required to direct attention to faces in real-life situations, as well as the increasing importance of the adults’ hands and objects in social contexts, could perhaps explain some of these contrasting results. In lab settings, when face exposure does not depend on postural motor skills, infants increasingly find images of faces more engaging than objects, especially if presented upright and with direct gaze. Thus, infants gradually show a preference for the stimuli they are most exposed to, which will scaffold their face perception and social communication skills.

The Development of Face Perception

Faces are a predominant stimulus in an infant’s environment and constitute an important source of learning from soon after birth. Wearing a face mask changes the low-level perceptual properties of faces, including contrast (involving borders and features) as well as which features are visible. Knowledge of the mechanisms that underpin face perception from birth is necessary to understand whether and when face coverings could impact face perception.

Newborns are predisposed to orient toward face-like configurations (Morton and Johnson, 1991; Gamé et al., 2003; Macchi Cassia et al., 2004), and multiple studies have been conducted over the years to explain the mechanisms underlying face preference at birth.

One proposed mechanism is that stimuli with more elements in the upper part (two eyes vs. one mouth in faces) are preferred due to the presence of more receptors, and consequently higher sensitivity, in the part of the retina that perceives the upper visual field (top-heavy hypothesis; Simion et al., 2002). Supporting this hypothesis, Macchi Cassia et al. (2004) found that newborns preferred stimuli with more elements in the upper part regardless of whether they were faces and concluded that a non-face-specific perceptual bias could account for face preference at birth. At 3 months, when infants’ looking behavior starts to be less influenced by automatic processes and they can discriminate top- vs. bottom-heavy stimuli (Chien et al., 2010), Turati et al. (2005) and Chien (2011) found no consistent bias for top-heavy patterns.

Another proposed mechanism for the face bias could be linked to low-level visual constraints, as newborns’ looking behavior is strongly affected by low-level stimulus properties, such as image contrast, and their vision is tuned to low spatial frequencies (coarse black-and-white changes). Relatedly, a primitive subcortical mechanism (CONSPEC, Table 3) could support face detection processes at birth, later complemented by a domain-relevant mechanism (CONLERN) that gradually enables the system to recognize the face per se instead of a general face-like configuration (Morton and Johnson, 1991; Johnson, 2005; Johnson et al., 2015). Supporting this account, de Heering et al. (2008) manipulated the spatial frequencies of faces to which newborns were habituated and found that face recognition is facilitated by the lowest spatial frequencies within the visible range. Further studies manipulating the phase contrast of the stimuli revealed that face-characteristic contrast polarity (one or more dark areas surrounded by a lighter surface) is required for the upright face preference in newborns (Farroni et al., 2005). The importance of contrast among internal facial features for face preference was also found in older infants. By 3 months, infants looked longer at face than car images when faces were manipulated using a horizontal filter that altered the external borders and the nose but preserved the face configuration composed of eyes and mouth. No face preference was shown when images were manipulated with a vertical filter, which preserved the face shape but altered the top-heavy face pattern, or with inverted faces (de Heering et al., 2015).

Table 3. Glossary.

Hypotheses on the implications of mask wearing for face preference in early life might differ according to the aforementioned theories. According to the top-heavy theory, newborns’ exposure to masked faces in the first weeks of life (for example, in case of prolonged hospitalization after birth) should not inhibit the face bias, as the presence of more elements in the upper part of the stimulus is maintained. However, since this theory is based on the interdependence between the stimulus borders and the internal features (Turati and Simion, 2002), one question remains: whether masked faces are perceived as oval shapes or whether the upper border of the mask is perceived as a face boundary. In the latter case, the stimulus composed of forehead and eyes (the face region above the mask) would not show the top-down asymmetry, and the face bias could possibly be inhibited. In the CONSPEC–CONLERN framework, preferential orienting to masked faces at birth supported by subcortical neural pathways (CONSPEC) is expected to be maintained, as contrasts are preserved in the eye region. One could wonder whether, if infants are exclusively exposed to masked faces in the first 2 months of life, the CONLERN system might theoretically be disadvantaged, as it would receive atypical input regarding the face configuration. However, a recent update of the two-process theory of face processing highlights the central role of eye contact in subcortical rapid face detection (Farroni et al., 2002; Johnson et al., 2015). Since the eyes are not covered by face masks, this may compensate for the missed exposure to the entire face configuration in the development of the cortical pathways underlying CONLERN. Interestingly, Sai (2005) found that head turns toward the mother’s face occur only in the presence of the mother’s voice, suggesting that auditory stimuli also contribute to the origins of face processing and might support face preference when the stimulus is partially occluded by a mask; however, this hypothesis has not been tested yet. Moreover, some of this will depend on whether there is a critical period and how long it extends, given that newborns would presumably be more exposed to masked faces in the hospital, whereas once at home they would probably see unmasked faces.

Featural and Configural Face Processing

When looking at a face, two different perceptual strategies can be adopted to encode information: featural and configural (Table 3). Expertise in face processing is based on the ability to encode configural information, which is useful for extracting the communicative meanings conveyed by emotional expressions, gaze cues, and identities. Disruptions in the presentation of the typical face configuration have been shown to affect configural processing (i.e., the Inversion Effect, Table 3). This is also the case for face masks, as shown in adults (Freud et al., 2020). Whether and to what extent face masks have similar effects in developmental populations is currently unknown. To generate hypotheses on the potential effects of mask wearing on featural and configural face processing from early in life, in this section we summarize evidence on the emergence and development of these scanning strategies in infancy.

Developmental changes in the strategies employed to encode facial information have been investigated to explore the pathways leading to specialized face processing. Configural face processing appears to develop gradually during the first year of life (Thompson et al., 2001; Bhatt et al., 2005). At birth, newborns’ ability to discriminate face-like patterns relies on their inner features (Turati and Simion, 2002), although there is evidence that they do not need to rely on fine details and spatial relations between features to recognize face-like patterns (Leo and Simion, 2009). A perceptual shift from featural to configural processing is suggested to happen between 4 and 10 months (Schwarzer et al., 2007), with configural sensitivity to fine spatial information specializing sometime between 3 and 5 months of age (Bhatt et al., 2005). For example, Quinn and Tanaka (2009) found 3-month-olds to be more sensitive to configural changes (variations in the distance between features) than to local changes (variations in feature size) around both the upper (eyes) and lower (mouth) face areas. Between 3 and 7 months, infants appear to specialize in detecting local changes in the upper vs. lower face region (Quinn and Tanaka, 2009). However, the same effect has been found with objects, suggesting that processing of featural and configural variations might not be face-specific (Quinn and Tanaka, 2009; Quinn et al., 2013). Differently, despite sensitivity to featural and first-order changes being present as early as 3 months, sensitivity to variations in the spatial distance among features could only be observed in 5-month-olds (Bhatt et al., 2005). During the second half of the first year of life, infants scan upright faces more efficiently (Kato and Konishi, 2013; Simpson et al., 2014) and, like adults, at 7–8 months they are faster at identifying upright than inverted faces (Tsurumi et al., 2019). While scanning patterns of the different face regions (high, middle, and low) are comparable for upright and inverted faces before 8 months, infants gradually start to scan upright faces more broadly and do so significantly more than for inverted faces by 1 year of age (Oakes and Ellis, 2013). Thus, the inversion effect strengthens during the first year, possibly due to infants’ experience with the entire face configuration. The end of the first year seems to be a crucial period for integrating features within the typical upright face configuration, and sufficient exposure to the entire face could be important.

Infants’ face processing ability varies according to different factors beyond age, such as face orientation and pose. The ability to recognize (i.e., show a novelty preference following habituation to) unfamiliar full faces presented in a ¾ pose has been recorded as early as 1–3 days of life (Turati et al., 2008). At 4 months, infants’ face recognition performance benefits from the face being upright if they have been familiarized with different poses of the same face, indicating that this manipulation requires more cognitive resources for face recognition (Turati et al., 2004). At the same age, but not at birth, infants are faster to orient from a central face toward a peripheral face when the latter is upright rather than inverted, although motion of the central face stimulus (displaying blinking, mouth opening, or nodding) reduces the speed of orienting toward both upright and inverted faces (Valenza et al., 2015). While some studies indicate sensitivity to face orientation at birth (Leo and Simion, 2009), others indicate that from 4 months infants’ face processing ability is sensitive to factors like orientation, pose, and motion that modify the entire face configuration. As argued by Turati et al. (2004), differential sensitivity to inversion indicates a progressive tuning to the characteristics and configuration of a face. Since face masks affect the visible face configuration, it is possible that the speed of detection and recognition of masked faces could be altered from 4 months of age.

The number of full unfamiliar faces a child is exposed to can be a factor that affects face expertise, since exposure to multiple different faces provides more opportunities to explore second-order relations. While between 3 and 4 months of age infants do not spontaneously detect changes in the spacing among facial features, they can be trained to do so by being repeatedly exposed to faces varying in spatial proportions (Galati et al., 2016), in line with the idea that this is a critical period for developing configural face processing skills. By contrast, 5- to 8-month-old infants spontaneously use configural face processing, as they demonstrate sensitivity to variations in the spatial relations among face features that are within the normal range of human variability (Hayden et al., 2007). When presented with pairs of faces in which the location of facial features was manipulated, 5- to 8-month-old infants demonstrate sensitivity to symmetry and averageness, reflected by increased looking toward less average/symmetric faces (Rhodes et al., 2002). Accordingly, 7-month-olds look less at shortened and elongated faces, where the distances between features are atypical, than at faces with an average eye-to-mouth distance (Thompson et al., 2001). Furthermore, Humphreys and Johnson (2007) habituated 4- and 7-month-old infants to morphed faces and found that the face regions used for identity recognition narrow with age, allowing more refined recognition and fewer errors with increased experience of faces. The variety of faces infants are exposed to is important to develop and refine face processing skills. If mask wearing is mandatory outside the home environment and infants only see caregivers without masks, it is possible that identity recognition skills will be affected.

Supporting the importance of experience with faces in everyday contexts is the research by Cashon et al. (2013), who found that sitting abilities correlate with configural face processing in 6-month-olds, suggesting that the development of more mature face processing systems based on configural instead of featural strategies also depends on changes in viewpoint and context linked with motor skills. Further, 7-month-old infants can confidently use configural information to discriminate upright but not inverted faces (which are likely never seen in the normal environment; Cohen and Cashon, 2001). Moreover, Schwarzer and Zauner (2003) showed that configural processing is used by 8-month-olds to encode facial information coming from real human faces, while featural processing takes place when they are presented with face-like configurations (handmade drawings). Configural strategies are increasingly employed from 6 to 12 months, but older infants also use featural information for face processing (Rose et al., 2002). Finally, Sakuta et al. (2014) showed that from 6 months onward infants can discriminate faces according to eye size, while 3- to 5-month-olds could not. This suggests that building expertise on the eyes alone could compensate for the diminished expertise with the full face configuration in face recognition tasks in the case of predominant exposure to masked faces.

Taken together, these studies describe a gradual transition from featural to configural processing, with infants using different strategies according to their developmental stage as well as their experience with face configurations. Specifically, at birth, newborns rely on internal features to discriminate between faces or face-like patterns. The literature overall supports a transition to using configural strategies for face recognition between 3 and 5 months of age. Configural face processing abilities are clearly manifested from 7 to 8 months and are increasingly used for the recognition of upright faces toward the end of the first year. The development of configural processing is likely driven by experience, possibly with a range of faces. Thus, if infants see only a very small number of people unmasked, it is possible that these skills will develop differently. Featural strategies are used when the configuration is broken, as happens with inverted faces; they might therefore be used for the recognition of masked faces too. Furthermore, we know that infants pay different attention to the eyes and mouth according to their developmental needs (e.g., the attentional shift to the mouth for language learning; see paragraph 3.2.2). While masks drastically change the visibility of facial features, it is possible that they impact infants’ perception of the face configuration differently at different ages. Although their presence could disrupt the CONSPEC template (Acerra et al., 2002), this might not affect the communicative valence of the face at birth and throughout the first few months of life, when the eyes are more salient than the mouth, whereas it could do so once attentional shifts to the mouth occur. However, it is also possible that being exposed to a more limited number of faces in a variety of situations (i.e., different distances, lighting, orientations, and expressions) could be enough to support the development of configural strategies. Whether masked faces disrupt facial information processing and whether this effect is age-specific remain open questions for future research.

Perceiving Facial Features and Their Communicative Meanings

The development of face processing abilities partly relies on infants’ attention being focused on different facial features during sensitive periods for the development of specific functions and skills. Crucially, faces are one of the most prominent sources of social communication. Perceptual information from the face contributes to shaping the trajectories of individual socio-communicative skills. For instance, eye contact engages infants (Farroni et al., 2007) and gaze shifts support their allocation of attention in the environment to learn from relevant stimuli (see, for example, Cetincelik et al., 2021), while information coming from others’ mouths supports language development (Lewkowicz and Hansen-Tift, 2012). Face masks change which features can be perceived, covering the nose and mouth while leaving the eye region and forehead uncovered. To discuss whether and when face masks could interfere with infants’ socio-cognitive development, we examine published studies on infants’ focus on each facial feature and on how social and communicative skills are learnt from others’ faces.

The Value of Interactive Faces

Soon after birth, newborns appear to preferentially orient to stimuli that carry socio-communicative meaning over non-communicative cues. Unlike when they are habituated to still faces, newborns do not show a novelty preference after being habituated to a live interaction scene in which they saw a face producing communicative cues (Cecchini et al., 2011). In classic habituation studies, a novelty preference is interpreted as evidence that children can discriminate between the stimulus they have been habituated to and the one they see for the first time. Thus, these results could be interpreted as favoring the motionless over the interactive face. On the contrary, the authors argue that this shows that newborns’ interest is enhanced and more durable for interactive faces, and that they are therefore equally attracted to both post-habituation faces, regardless of previous exposure. In line with this view, newborns show a significant decrease in looking time toward faces that are unresponsive during social interaction compared with interactive faces (Nagy, 2008). Moreover, Coulon et al. (2011) and Streri et al. (2013) found that newborns look longer at faces when they have previously been familiarized with a video of the same face with direct gaze, interacting with or talking to them. They also seem to be facilitated in identity recognition when familiarized with dynamic (but not static) emotional faces, as shown by Leo et al. (2018). From these results, it is evident that newborns are wired for interaction; within interactions, they detect and prefer the elements that build up the basis for social communication.

Interactive faces seem to be more powerful than static, non-interactive faces in attracting attentional resources and facilitating the acquisition of face processing skills across the first year of life. Kim and Johnson (2014) found that both 3- and 5-month-old infants look longer at faces directed toward them and in the presence of infant-directed speech. Moreover, faces displaying changes in facial expression facilitate face recognition in 3- to 4-month-olds (Otsuka et al., 2009), and around 5 months infants can recognize actors based on their actions if exposed to the naturalistic scene for enough time (at least 320 s) (Bahrick et al., 2002; Bahrick and Newell, 2008). Similarly, Spencer et al. (2006) showed that infants aged between 3 and 8 months can discriminate between people based on differences in their facial motion. Layton and Rochat (2007) tested whether motion or visual contrast helped infants discriminate their mother from a stranger to whom they had been habituated. They found that facial motion improved recognition in 8- but not 4-month-old infants, indicating that dynamic changes are not only encoded but also used for identity recognition by 8 months of age. Of note, when animated face patterns were used instead of real faces, infants preferred to look at biologically plausible vertical movements of the internal features (simulating eye and mouth closure) compared to horizontally moving patterns only at 7–8 months of age and not at 5–6 months (Ichikawa et al., 2011). By obscuring mouth dynamics, masks partly reduce the availability of communicative cues in a face while leaving only eye information available. This could possibly influence face preference or depth of processing. We examine below what information infants receive from the different features to understand whether and when their role is essential for socio-cognitive development.

Eyes

Perception

Perceiving the eyes scaffolds the development of face processing from birth. Farroni et al. (2005) conducted a series of experiments manipulating contrast within face-like patterns and real face stimuli. Results showed that newborns’ basic visual capacities are sufficient to perceive the eyes within a face, and the authors suggested that this might be a reason why face preference is manifested soon after birth. Perceiving differences in eye-gaze direction is important not only for face detection, as discussed earlier, but also for identity recognition. Newborns can recognize a previously seen face, and this process is facilitated by direct gaze (Rigato et al., 2011a). Averted gaze prevents newborns from displaying a preference for happy facial expressions, which is conversely observed in the presence of direct gaze (Rigato et al., 2011b). Similarly, Farroni et al. (2007) showed that 4-month-old infants manifested a novelty preference when the face they had previously been habituated to had direct but not averted gaze. In older infants, eye contact has been shown to facilitate facial discrimination as well, possibly affecting three-dimensional face recognition. In fact, 8-month-old infants were able to recognize a face they had previously been familiarized with even when it was rotated, but only if the familiarized face had direct gaze (Yamashita et al., 2012).

Perceiving gaze shifts also contributes to the emergence of face processing skills, as infants learn to extract information about the context from the direction of eye gaze. At 4 months, infants can already orient in the direction cued by the gaze and perform saccades with shorter latencies toward a peripheral object appearing in the direction of the eye gaze of a central face image (Farroni et al., 2000). Of note, this eye-gaze effect is canceled out if faces display emotional expressions, as these seem to hold infants’ attention and reduce the speed of orienting toward the referent object (Rigato et al., 2013).

The ability to discriminate eye-gaze direction is at the basis of another face processing skill that emerges very early in life, namely the ability to integrate information about head and eye orientation when interpreting directional cues. Otsuka et al. (2016) used artificially created realistic face images in a paradigm inspired by Wollaston’s effect (Table 3). They found that infants could infer the direction of the gaze based on head orientation from 4 to 5 months of age. Nakato et al. (2009) investigated the same effect by familiarizing infants with Wollaston’s original drawings and found that 8-, but not 6- and 7-month-olds, looked longer at illusory direct gaze, providing evidence that they were sensitive to Wollaston’s effect. Inverted faces disrupt configural processing and inhibit the interpretation of gaze direction in the context of head orientation in the younger infants (Nakato et al., 2009). Thus, while at 4 months of age infants use eye-gaze direction to choose where to direct their attention, the ability to integrate information about eye gaze and head orientation, especially in realistic situations, develops more gradually until 8 months.

As perceiving the eyes plays a specific role in face processing from birth, and the facilitatory role of eye contact and gaze shifts is preserved in more complex tasks as infants grow older, it is reassuring that the eye region is not covered as a precaution against COVID-19 diffusion. Relatedly, examining studies investigating the role of the eyes in developing socio-communicative skills is crucial for the scope of this review.

Communication

Within the face, the eyes are a central component of communication. It is not just the quantity of faces infants are exposed to that affects the development of social brain networks; whether faces include eyes looking toward or away from the observer is also crucial. Gaze direction can provide two types of social information: eye contact establishes a communicative context between humans, while gaze shifts can be interpreted as initiating “joint attention.”

Eye contact is involved in face detection processes soon after birth. Newborns not only manifest a preference for faces and face-like configurations, as discussed, but among faces they prefer those with direct eye gaze. Farroni et al. (2002) presented 2- to 5-day-old newborns with pairs of faces in which the direction of the gaze was manipulated while keeping face identity constant, and found more frequent orienting and longer looking times toward faces with direct rather than averted gaze. In a subsequent study, they crucially found that the effect is present with upright and straight-ahead faces only (Farroni et al., 2006), that is, in the typical presentation of a face during interaction. Direct gaze also facilitates face recognition in 4-month-old infants (Farroni et al., 2007). Supporting the view that infants are tuned to detect communicatively meaningful stimuli that contribute to their social development, infants who looked more at their mothers’ eyes at 6 months, as well as those who paid greater attention to the talker’s eyes (vs. mouth) at 12 months, were found to manifest higher social and communication skills at later ages (Wagner et al., 2013; Pons et al., 2019). Attention to the eyes at these preliminary stages allows infants to engage with and learn from eyes, which supports socio-cognitive development and could compensate for the effects of mask wearing at later developmental stages.

The direction of the eyes constitutes an important modulator of face processing from early in life and is integrated with multiple sources of social information. For example, eye gaze modulates infants’ allocation of attention toward emotional expressions. Doi et al. (2010) found that at 10 months, infants are faster to orient toward a peripheral target when the central stimulus is a happy face with direct gaze, while it takes them longer to disengage from the central facial stimulus when the face displays anger (with both direct and averted gaze). Nevertheless, recent evidence shows that when provided with alternative communicative sensory stimulation (i.e., affective touch), infants still engage with less communicative or non-communicative faces, suggesting that different senses conveying communicative information might compensate for each other. For example, evidence shows that when habituated to faces with averted gaze while being simultaneously caressed, 4-month-olds discriminate and recognize the familiar face despite the gaze being averted (Della Longa et al., 2019). This is in line with the idea that multiple sensory channels support infants’ face processing and learning. For the scope of this review, this is encouraging, as it suggests that communicative meanings might enter the system through different sensory gateways and do not rely exclusively on the visual information available from a face when this is limited by mask wearing.

By the end of the first year of life, infants appear to understand the referential nature of gaze that allows joint attention to be established (Mundy, 2018). Many have studied when and how this mechanism develops. Striano et al. (2007) showed that infants start to gaze more in the direction cued by the adult’s gaze from 6 weeks to 3 months of age. While the degree to which infants looked at the experimenter during live joint attention situations did not differ by age, 3-month-olds looked more at the gazed-at object compared to younger infants (Striano et al., 2007). Gredebäck et al. (2008) found that when watching an adult gazing and turning the head toward one of two possible toys, infants aged 5 to 12 months looked significantly more at the attended toy, with no effect of age on overall looking time. The microstructure of infant gaze revealed that 5-month-olds were equally likely to perform the first gaze shift toward the attended and the unattended toy. By contrast, 6-, 9-, and 12-month-old infants oriented their gaze toward the attended toy immediately. These findings indicate that the ability to orient the gaze following a gaze cue is not fully developed at 5 months of age.

Other information usually provided in conjunction with gaze shifts facilitates infants’ processing of gaze cues in the first year of life, including head direction, familiarity with the person performing the eye-gaze shift, and ostensive communicative signals. For example, at 3 to 4 months of age, head turns by the adult encourage infants to look in the direction of the adult’s moving hands and objects (Amano et al., 2004). At 5 and 10 months of age, infants seem to rely more on gaze cueing coming from highly familiar adult models (i.e., of the race and sex infants were more exposed to) compared to non-familiar ones (Pickron et al., 2017). Thus, it is possible that as early as 5 months of age, infants have already learnt the referential value of eye gaze coming from their caregivers. At 6 to 9 months of age, infants orient toward the cued toy first and more frequently in the presence of ostensive communicative cues, such as direct gaze and eyebrow raising or infant-directed speech preceding gaze following (Senju and Csibra, 2008; Senju et al., 2008). This is also observed in non-communicative attention-grabbing situations (e.g., if the model actor performed a shiver before the gaze shift), suggesting that attention, rather than communicative intent, plays a crucial role in eliciting gaze following (Szufnarowska et al., 2014). Perhaps in contrast with this account, a study with infants living in a rural island society in Vanuatu, where face-to-face interactions between infants and adults are less common than in Western cultures, confirmed that orienting toward the other’s gaze direction does not depend on cultural aspects, but rather on the communicative engagement with the infant before gaze cueing. Vanuatu infants between 5 and 7 months oriented toward the cued object more easily after being addressed with infant-directed speech, compared to adult-directed speech, just like Western infants (Hernik and Broesch, 2019). Thus, the roots of joint attention seem to rely on gaze cueing from 3 to 9 months of age and are boosted, in this age range, by additional cues that are not impacted by mask wearing, such as head direction, familiarity, direct gaze, speech, and head movements.

Toward the end of the first year of life, infants start to integrate gaze direction with other communicative cues, such as facial emotional expressions, pointing, and gestures, although the eyes remain the most salient source of information until 2 years of age. Using a gaze-cueing task in which faces displayed emotional expressions (happy, fearful, and angry), Niedźwiecka and Tomalski (2015) found that infants aged between 9 and 12 months were faster in orienting toward a peripheral stimulus in trials where the central stimulus was a happy face with gaze directed toward the same side of the screen. Of note, this gaze-cueing effect was present only with happy facial expressions, confirming infants’ tendency to rely more on gaze information provided by positively valenced faces. By 1 year of age, eye gaze toward an object, or a combination of eye gaze and pointing, but not pointing alone, facilitates infants’ gaze shifts toward the cued object, showing that gaze is still the preferred cue for learning about the surrounding environment (Von Hofsten et al., 2005). Further, during the second year of life (14 and 18 months), infants are more inclined to look in the direction cued by the adult’s eyes rather than by the head alone, as observed during a live gaze-following task (Brooks and Meltzoff, 2002). However, typically developing children start to direct their attention more toward adults’ hands for learning and communication from the second year of life. Chen et al. (2020) analyzed joint attention episodes during free play between parents and children using head-mounted eye-trackers in children with hearing loss and children with normal hearing matched for chronological age (24 to 37 months) and hearing age (12–25 months). They found that from the second year of life, hearing children tend to attend more to the parents’ hand actions, while children with hearing loss still rely more on the parents’ eye-gaze cueing (Chen et al., 2020). Eyes seem to be such a powerful communicative cue that they probably partly compensate for the absence of language information in toddlers with hearing loss.

Eye-gaze cueing even supports the development of language skills in the second year of life. For example, when watching a short video of a woman directing her gaze and head toward one of two objects, 15-month-old infants looked longer at the image corresponding to the word sound played in the test phase (Houston-Price et al., 2006). This indicates that eye-gaze cueing facilitates the learning of new words and is promising with respect to the possibility that the eyes support language acquisition even more when visual information from the mouth is less available because the speaking adult is wearing a mask. Masked faces probably still convey much socio-communicative information through the eyes, so communication is likely to be less affected by masks. Effects might instead be observed in developmental processes that require input from the mouth.

Mouth

Perception

Redundant audiovisual information (see the glossary in Table 3) is important for speech learning, especially during the second half of the first year of life, when infants start to shift their attention from the eyes toward the mouth region, whereas it creates competition for attentional resources before 3 months (Bahrick et al., 2013). At 3 and 6 months, visual scanning of moving and static faces does not differ, while at 9 months infants shift their fixations more frequently between the inner facial features and look more at the mouth (vs. eye) region only when familiarized with dynamic faces (Xiao et al., 2015). These results are in line with findings by Oakes and Ellis (2013) with static upright faces, which indicated that 4.5- and 6.5-month-old infants look more at the eyes while 12-month-olds look more at the mouth. A similar pattern was found by Hunnius and Geuze (2004), who followed 10 infants longitudinally from 6 to 26 weeks while they looked at their mother’s face. These results could be explained by the increasing importance of mouth looking for speech learning. In fact, from 8 to 10 months, infants’ attention to non-speaking faces is distributed across the eye and nose regions (Liu et al., 2011; Wheeler et al., 2011; Geangu et al., 2016), while if the faces are accompanied by speech the moving mouth becomes more salient than the eyes (Lewkowicz and Hansen-Tift, 2012; Haensel et al., 2020). Consistently, a longitudinal study by Tenenbaum et al. (2013) showed that infants shifted their attention to the mouth in the presence of spoken language, but not in the presence of a smile with no language. This effect was observed from 6 to 12 months, with a significant increase between 6 and 9 months of age, concomitant with the canonical babbling stage. Crucially for the aim of this review, namely to evaluate the effects of masks covering the mouth region, the authors noticed high variability between subjects, suggesting that individual experience interacts with developmental needs to influence how infants deploy their attention over talking faces (Tenenbaum et al., 2013).

It is possible that mouth looking has a key role in the initial phases of speech learning. Lewkowicz and Hansen-Tift (2012) found that, while looking at speaking faces (using either the infants’ native or a non-native language), infants focus more on the mouth from 8 months but shift their attention to the eye region at 12 months in the native-language condition only. At this age, infants have gained experience with their native language and audiovisual information is no longer useful, while they continue to attend to speakers’ mouths in the non-native-language condition. The authors suggested that to gain expertise with their native language, infants need to rely on redundant audiovisual information, as they learn how to articulate speech-like syllables by imitating the talker’s mouth (Lewkowicz and Hansen-Tift, 2012). Similarly, Schure et al. (2016) noted that despite the growing expertise in their native language, at 8 months infants are interested in information coming from the mouth when it includes non-native speech sounds that contrast with native vowel categories they already know. At 9 months, increased looking to the mouth is observed when infants are presented with incongruent audiovisual information (e.g., seeing a mouth articulating a sound while listening to another; Tomalski et al., 2013), while at 12 months infants focus on the mouth if they hear a non-native language (Kubicek et al., 2013). Also supporting the experience dependency of face processing, Fecher and Johnson (2019) found that after habituation to a face paired with a voice, 9-month-old bilinguals subsequently looked longer at faces paired with a different versus the same voice. Thus, it seems that mouth looking plays a significant role in language learning at multiple developmental stages, both when speech is novel to infants and when they are in the process of learning it. Indeed, Hillairet de Boisferon et al. (2018) suggested that a second attentional shift toward the mouth region might occur when entering the word acquisition phase of language development, regardless of whether the spoken language is the child’s mother tongue. They showed that 14- and 18-month-old monolingual English-learning infants looked longer at the mouth of faces speaking English or Spanish during infant-directed speech (but during adult-directed speech at 18 months only).

In sum, attention to the mouth supports language acquisition especially during sensitive periods spanning the second half of the first year of life, with differences based on infants’ linguistic experience and their ability to integrate auditory and visual information. Once infants are sufficiently skilled in their native language, they no longer focus more on the mouth unless visual and auditory information are incongruent or the face speaks a foreign language. Given the relevance of mouth looking for language processing and learning, multiple questions should be raised about the implications of face coverings during these sensitive periods. In particular, one could ask whether masks could affect the acquisition of less familiar words or different accents, which would be even more relevant for bilingual populations.

Communication

When interacting with people wearing a mask, we realize that speech comprehension might be difficult, especially if we are speaking a language that is not our mother tongue. What about infants who are learning to decode the communicative meaning of speech without seeing lip and mouth movements? Will this impact their language development? To address these questions, we summarize the literature exploring the role of mouth processing for language development in monolingual and multilingual environments.

The fact that the mouth region of a face is crucial for learning to communicate using verbal language is evident from studies of infants experiencing a multi-language environment. Comparing mono- and bilingual infants is useful for identifying key aspects of the development of speech perception and comprehension skills, since only bilinguals need specific strategies to establish the sounds, grammar, and social meaning of each of their languages (Werker and Byers-Heinlein, 2008). Differently from monolinguals, bilinguals showed equal attention toward the eyes and mouth at 4 months and increased looking times toward the mouth at 8 and 12 months, both while hearing their native and a non-native language (Pons et al., 2015). Further, at 8 months, bilinguals can discriminate between two languages based on visual information only, while this is not evident in monolinguals. Interestingly, this effect was found using languages infants had never been exposed to, suggesting that the bilingual infants’ advantage generalizes to support new language processing (Sebastián-Gallés et al., 2012). Thus, these studies indicate that looking at the mouth is a crucial strategy for language learning, used from 8 months of age by infants who are exposed to multi-language contexts.

The timing of speech sounds and the mouth movements involved appears crucial when it comes to detecting and disambiguating speech signals. Hillairet de Boisferon et al. (2017) found that at 10 months (but not at 4, 6, 8, and 12 months) infants looked more at the eyes in the case of desynchronized speech, while they looked more at the mouth when audiovisual information was synchronized, both for native and non-native languages. The authors suggested that 10-month-olds rely on eye information to disambiguate confusing linguistic information, while mouth looking is used for language learning when it provides useful visual cues. Although these findings partly suggest that language processing might be impacted in the absence of coordinated audiovisual inputs, they are also somewhat encouraging with respect to possible compensatory effects of the eyes when the talking adult’s mouth is covered.

Exploring the multisensory integration supporting speech learning, some authors investigated infants’ ability to match static articulatory configurations with the sounds produced and found that this ability changes with age. For example, Streri et al. (2016) familiarized infants of 3, 6, and 9 months of age with faces producing audible vowels while the mouth was occluded and then tested looking preferences for pairs of full static images including the familiarization face. Infants looked longer at the congruent face at 3 months and at the incongruent face at 9 months, while no preference was manifested at 6 months. This suggests that infants’ ability to match audiovisual information for language learning consolidates close to 9 months of age (Streri et al., 2016). Of note, the type of sensory information available in the living context shapes how infants deploy attention to and integrate audiovisual cues. Mercure et al. (2019) compared the visual scanning patterns of 4- to 8-month-olds during a McGurk task (see the glossary in Table 3) and found that bimodal bilinguals (hearing infants of deaf mothers) do not shift their attention to the mouth as much as monolingual and unimodal bilingual infants do. From 6.5 months onward, bilinguals did not show a novelty preference when the auditory and visual information were incongruent, differently from monolinguals. The authors proposed that audiovisual speech experience is crucial for multi-modal speech processing (Mercure et al., 2019).

Notably, as they grow up, infants and toddlers are more likely to find themselves in social interactive contexts in which familiar and unfamiliar people interact with each other and not exclusively with them. Souter et al. (2020) found that 18- to 30-month-olds prefer to look at the eyes rather than the mouth, both when seeing a single actor singing nursery rhymes or producing infant-directed speech and when seeing multiple actors interacting with each other. Regarding multi-language exposure and conversations, Atagi and Johnson (2020) tracked infants’ gaze while they watched two women talking to each other and addressing the infant in a familiar or unfamiliar language. They found that bilinguals performed more anticipatory looks toward the talkers’ faces when the language was unfamiliar rather than familiar. Thus, during challenging communicative events, different scanning patterns could be observed according to prior language exposure (Atagi and Johnson, 2020), highlighting that even consistent exposure to masked faces could have different effects on children’s language learning depending on their level of exposure to language.

In conclusion, beyond the first year of life, toddlers appear to focus on the mouth when entering the word-acquisition phase of language development and then gradually shift attention back to the eyes to complement the communicative meaning of language as a function of their linguistic expertise. The differences in scanning strategies observed between monolingual and bilingual toddlers attending to conversations suggest that looking at the face is particularly important when the spoken language is not familiar. Granting access to both visual and auditory speech information appears crucial from 8 months of age, as infants make use of synchronized sound and lip movements to learn a language. The analyzed literature suggests that face masks, which remove the visual mouth cue and probably muffle voice sounds, could affect language learning and understanding. Since children rely on facial cues to increase the amount of information that can help them understand verbal content, we can expect conversations with masked speakers to be more challenging for children who are less familiar with the spoken language.

Emotional Expressions

Perception

Facial expressions also have a central role in early learning; processing expressions requires the use of configural information, which is hindered by wearing face masks. To consider the potential impact of mask wearing on emotion processing, we describe studies examining its developmental underpinnings.

As discussed, newborns preferentially attend to faces, and especially to dynamic faces. However, their ability to distinguish facial expressions is very limited. Newborns show novelty preferences when habituated to faces displaying dynamic changes in emotional expression, regardless of the nature of the emotion (happiness or fear) (Leo et al., 2018). A facilitation effect of happy facial expressions emerges over the following months. At 3 months, happy facial expressions facilitate face recognition (Turati et al., 2011) when both eyes and mouth express happiness, but not in the case of happy eyes paired with an angry mouth or angry eyes paired with a happy mouth (Brenna et al., 2013). From this evidence, one could hypothesize that wearing a face mask would reduce or eliminate the facilitating effect of happy expressions on face recognition in early infancy because the mouth is not visible (cf. Brenna et al., 2013). While these studies suggest that infants can discriminate between different emotional expressions from 3 months of age, others found that infants need increased exposure to the emotional expressions (Flom et al., 2018) and the presence of multisensory cues (i.e., emotional voices; Flom and Bahrick, 2007) to show this ability before 5 months of age. Further research is needed to investigate whether the presence of auditory information might support face recognition despite the lack of information coming from the mouth, playing a compensatory role.

The emotional valence of faces has a key role in the development of face perception and learning abilities at later ages. At 6 months, happy emotional expressions increase infants’ preference for a face (Kim and Johnson, 2013), and at 7–8 months, rule learning is facilitated by happy expressions and disrupted by angry faces (Gross and Schwarzer, 2010; Quadrelli et al., 2020). By 8 months, infants not only recognize changes in emotional expression and facial identity but also use these two pieces of information in conjunction for the recognition of upright faces, as suggested by a novelty preference based on emotional expressions independent of the face’s identity (Schwarzer and Jovanovic, 2010). Around this age, infants gradually learn to link auditory and visual emotional cues, and discrimination of the emotional valence of facial features becomes more refined. For instance, in 9-month-old infants, hearing emotional vocal sounds (laughing and grumbling) facilitates gaze shifts toward the face with a congruent facial expression when it is paired with an incongruent face, providing evidence for a role of cross-modal top-down regulation of visual attention to facial expressions (Xiao and Emberson, 2019). The ability to integrate multisensory emotional cues seems to emerge only after the seventh month of age (Yong and Ruffman, 2016). It would be important to clarify whether emotion recognition is impoverished by mask wearing in order to understand whether it also affects a range of other domains.

The next developmental step includes the ability to discriminate between faces displaying different degrees of the same emotional expression. At 9 and 12 (but not 6) months, infants can discriminate faces along the happy-angry (but not happy-sad) continuum, while they are not able to discriminate variations within the same emotional category (Lee et al., 2015). These findings were interpreted as consistent with infants’ inability to discriminate between faces within the same emotional category before the first year of life. This ability may develop in conjunction with the emotion’s relevance for the infant. In fact, 6- to 7-month-old infants recognize subtle anger expressions when presented in a static but not in a dynamic face, suggesting they are sensitive to anger but possibly find it difficult to recognize within a more dynamic context because of scarce experience with this emotion in their daily environment (Ichikawa et al., 2014; Ichikawa and Yamaguchi, 2014). On the contrary, subtle happy expressions are easier to recognize in the presence of facial movements at the same age (Ichikawa et al., 2014). Considering the limited availability of facial cues due to mask wearing, which covers an important source of facial movement, infants might show less refined recognition of subtle emotional changes, especially for emotions that are less experienced in caregiving interactions.

Scanning strategies of faces in 7-month-old infants vary as a function of emotional expression. Segal and Moulson (2020a) showed that infants generally look more at the eyes than at the mouth of fearful and happy faces presented side-by-side. An examination of infants’ looking time series revealed that they looked significantly longer at the eyes of angry and neutral faces and at the mouth of happy faces in the first 3,000 ms, with scanning strategies differing across emotional expressions (Segal and Moulson, 2020b). Interestingly, Geangu et al. (2016) showed that, while facial emotion recognition was found in both Western and East Asian 7-month-old infants, scanning strategies differed between the two groups, with Japanese infants looking more at the eyes and less at the mouth of happy and fearful faces compared with British infants. Importantly for the scope of the present review, these findings indicate that, while typically developing infants eventually develop the ability to discriminate between emotional facial expressions, they might reach this milestone through different individual scanning strategies that are shaped by environmental exposure. In this view, we can expect infants who are predominantly presented with masked faces from 3 to 12 months of age to develop different strategies to process and interpret emotional expressions compared with infants who normally see the mouth as part of the emotional face configuration. However, given that infants have received normal full-face exposure at home throughout the COVID-19 pandemic, it is also possible that they will develop typical face scanning strategies when looking at non-masked faces.

Communication

Emotional expressions are used as communicative signals about the context. Fearful expressions might indicate that the environment is threatening and are gradually prioritized by the infant’s attention system. While 3-month-olds seem to be greatly engaged by happy faces, by 5 months a fearful attentional bias (Table 3) begins to emerge. For example, attention disengagement from a central face toward a peripheral stimulus is slower for fearful than for happy or neutral faces (Peltola et al., 2008, 2011; Heck et al., 2016). When fearful and happy faces are presented side-by-side, an increased attentional bias for the fearful face compared with happy and neutral faces is shown at 7–11 months, while 5-month-olds prefer happy faces (Peltola et al., 2009a, 2013). Of note, face familiarity does not affect the fearful bias, as infants look longer at a novel fearful face when habituated to happiness, regardless of whether the faces in the habituation phase are familiar or not (Safar and Moulson, 2017). From this evidence, it appears that the fearful attentional bias emerges toward the 7th month of age, although a sensitivity to fearful faces can already be observed at 5 months.

To examine whether exposure to masked faces influences infants’ behavior and the developmental processes elicited by the fearful bias, we need to know whether this bias is based on information derived from specific elements of the face or from the full facial configuration. Using artificially created faces, Peltola et al. (2009b) found longer latencies to disengage from fearful full faces, but not from fearful eyes alone, at 7 months. The attentional bias to fearful faces at this age was associated with attachment security at 14 months of age, whereby infants who disengaged more easily from a fearful face in an overlap task showed more signs of attachment disorganization (Peltola et al., 2015). This finding corroborates the idea that early processing of emotional expression from the full face is involved in social development. This evidence appears particularly relevant when considering the implications of mask wearing for the processing of emotional expressions during development. According to the results of Peltola et al. (2009b), the eyes appear not to be sufficient for the fearful bias to manifest at 7 months, possibly implying that when adults wear masks that leave only the eyes uncovered, fearful expressions might not elicit the same processes they normally would, with potential cascading effects for later social development.

Importantly, preference for specific emotional expressions might vary depending on the individual infant’s temperamental characteristics as well as the parents’ emotional attitude (de Haan et al., 2004; Pérez-Edgar et al., 2017; Aktar et al., 2018; Fu et al., 2020). This evidence highlights the interplay between individual temperamental characteristics, caregivers’ affective dispositions, and the attentional bias toward certain facial expressions, suggesting that infants’ and caregivers’ temperament and affective dispositions may modulate the effect of mask wearing on the development of perceptual and communicative aspects of face processing.

Only in the second year do toddlers learn to distinguish between the true and pretend emotional valence of the facial configuration. Walle and Campos (2014) examined 16- and 19-month-olds’ behavioral responses to parental displays of emotional expressions following a true or pretend distress situation. Parents were instructed to display pain and distress after ostensibly hitting or missing their hand with a hammer. Both 16- and 19-month-olds reacted with concerned facial expressions and prosocial responses more often when they perceived that the parent had hit their hand (although only at 19 months did infants react with playful behavior and positive affect, demonstrating that they evaluated the context as playful; Walle and Campos, 2014). Further research will have to evaluate whether interacting with masked adults in times of COVID-19 has little effect on this ability once emotional expression recognition skills have been acquired, or whether mask wearing significantly limits toddlers’ experience in linking emotional face configurations to contexts.

Discussion

In the present work, we aimed to leverage the wide corpus of existing literature on sensitive periods for the specialization of face processing skills in early development (summarized in Figure 2) to generate hypotheses about possible effects of the adult mask wearing adopted to limit COVID-19 diffusion. We asked which aspects of face processing might be altered by exposure to masked faces (Figure 3) and whether the implications might differ as a function of infants’ developmental stage (the main questions for future research that emerged from the present review are summarized in Table 4).

Figure 2. Age periods studied in the literature for each of the addressed topics.

Figure 3. Psychological processes linked to face processing. Created with BioRender.com.

Table 4. Outstanding questions.

When investigating the potential impact of mask wearing on face processing during the first years of life, we need to differentiate according to individuals’ likelihood of being exposed to these stimuli. In fact, during the earliest stages of life, infants are more likely to spend most of their time within family contexts where they are not exposed to masked faces, while as they grow their daily environment increasingly includes people outside the household.

To discuss the implications of mask wearing in infancy, it is crucial to describe how masks modify the perceptual properties of faces. First, mask wearing disrupts configural face processing: when a mask is worn, no information can be obtained about the nose, cheeks, chin, mouth, or mouth movements. Second, the processing of simultaneous changes in the facial features that build up emotional expressions is limited because the lower part of the face is covered. This reduced exposure to the facial configuration could have implications for both low-level perception and the detection of communicative meanings; for this reason, developmental research in both areas has been systematically reviewed in the previous sections. Importantly, infants typically make use of multiple scanning strategies and pay differential attention to specific face regions and features to reach developmental milestones. Indeed, they gradually learn to analyze the eyes and gaze direction within the context of the entire face configuration, which contributes to the early face bias, identity recognition, and emotional expression discrimination, and they rely on audiovisual redundancy from others’ mouths for language learning. Thus, there could be developmental effects if exposure to full faces is limited by widespread mask wearing.

The Importance of the Full Face

Partially covering the face with a mask disrupts configural face processing, which largely constitutes the basis of facial discrimination and recognition abilities in adults. Developmental findings highlight that, despite being sensitive to some configural variations as early as 3–5 months, infants clearly adopt configural scanning strategies around 7–8 months and master their use for upright face recognition toward the end of the first year. In recent studies with adults (Carragher and Hancock, 2020; Noyes et al., 2021) and children (Stajduhar et al., 2021), lower accuracy in identity and emotion recognition has been observed when processing masked faces. From this evidence, along with that from developmental studies, we could hypothesize that similar effects would be found when testing infants around 1 year of age. Moreover, because of the additional COVID-19 preventive measure of social distancing, unfamiliar faces might often be seen from a greater distance. Infants might then rely more on lower spatial frequency information about the face configuration because they cannot perceive the details of featural characteristics. However, configural face processing is likely to be disrupted by the mask as well. Thus, social distancing may compound the effects of mask wearing on identity and emotion recognition.

It should be noted that infants are not completely deprived of seeing full faces, which they normally encounter in the home environment. Further, since opportunities to go outside are reduced, it is possible that infants living in COVID-19 times spend more time on technological devices, where they are likely to be presented with a variety of full faces from streaming services, TV shows, and videos, as well as video calls on smartphones, tablets, and other digital devices used by older family members for socializing (Pandya and Lodha, 2021). In this respect, some suggestions might come from previous literature on monocular pattern deprivation during early development. For example, daily brief exposure to normal visual input greatly reduces the adverse effects of abnormal input due to monocular pattern deprivation during the sensitive period (Wensveen et al., 2006; Schwarzkopf et al., 2007). Given such findings, it is possible that the daily exposure to full faces that infants receive at home throughout the COVID-19 pandemic may still be sufficient for the development of face recognition ability during infancy. However, the number of full faces infants are normally exposed to is reduced during the pandemic and, if the number of faces they experience contributes to the development of perceptual and socio-communicative skills, some consequences might be observed in the coming years. For these reasons, it is fundamental for future studies to explore the developmental trajectories of face processing skills in infants born during the COVID-19 pandemic, accounting for their exposure to masked rather than full faces. Since masked faces are often experienced outside the family context, one prospective question concerns whether face processing will specialize more narrowly based on the very familiar faces that infants see without masks. It is also possible that those who are highly exposed to masks adopt face recognition processes based on featural strategies. As patterned visual stimuli presented in the first month of life are necessary to initiate the functional development of the visual neural pathways (Maurer et al., 1999), it would be important to know whether there are critical periods for exposure to certain visual stimuli in terms of configural face processing. Infant research focusing on face processing in children born in times of the COVID-19 pandemic should collect information about exposure to masked and full faces at the time of testing, and possibly in the earlier stages of life, to control or test for effects of individual variability in masked face exposure on key cognitive phenotypes.

Uncovered Eyes

Eye contact plays a crucial role in attention engagement, supporting face detection and the specialization of face processing skills from birth onward (Farroni et al., 2002, 2005; Johnson et al., 2015). While direct gaze facilitates face recognition and learning, gaze shifts coupled with head direction and other ostensive communicative signals scaffold the development of joint attention in the first semester of life. Toward the end of their first year, infants integrate gaze direction with emotional expressions or hand actions to direct attention to the referred target, becoming able to rely on gaze cuing alone during the second year. Since the eye region is left uncovered by face masks, infants can still access substantial socio-communicative information. Furthermore, masks could have the effect of driving attention to the eye region. Relatedly, individuals who find it difficult to focus on the eye region or to interpret eye cues [e.g., some autistic individuals (Senju and Johnson, 2009; Ashwin et al., 2015; Moriuchi et al., 2017; Pantelis and Kennedy, 2017)] could benefit from the exclusion of possibly competing visual information from the mouth region. Thus, attending to the eye region of the face might be easier in the presence of masked faces for these children from the first months of age, shaping developmental trajectories of social attention (Klin et al., 2015; Parsons et al., 2019). Alternatively, masks could have a negative effect, for example if they constitute an additional distractor. Further, masks may perhaps “force” attention to the eyes (the only visible feature), which may be associated with sensory over-stimulation for some people (Robertson and Baron-Cohen, 2017) and thus accelerate complete withdrawal from faces. Developmental longitudinal research is needed to test these hypotheses.

Mouth for Language Learning

Infants further rely on facial information to learn language, by means of the intersensory redundancy (Table 3) provided by mouth movements. They pay particular attention to the mouth between 4 and 8 months of age and gradually shift it to the eye region as their language expertise increases. After the first year, when entering the word-acquisition phase, infants again pay selective attention to the interacting adult’s mouth to learn to articulate verbal sounds. If the speaking person’s mouth is covered, infants cannot take advantage of the audiovisual synchrony that is relevant for speech learning. This disadvantage could be particularly pronounced in multilingual environments, where infants rely on multisensory information to disentangle languages (Sebastián-Gallés et al., 2012; Pons et al., 2015). Sufficient experience with the coupling of audiovisual information during speech is required to exploit multi-modal speech processing in infancy (Mercure et al., 2019); importantly, this experience might be acquired within the home environment with familiar adults and children. Further research is needed to elucidate whether partially transparent masks allow infants to learn in contexts where masks are compulsory, assuming that linguistic stimulation within familial contexts can also play a compensatory role. From the published literature, we learn that bilingual infants make use of visual information coming from the mouth region to disambiguate between languages from 8 months of age. These infants could struggle more, particularly if they mainly hear the second language in community contexts (nurseries, play-groups, and shops) where masks are used, and not as much at home.

Further compensation for language learning could be derived from eye contact and gaze following, which might foster language learning by directing infants’ attention to relevant cues in the environment (Çetinçelik et al., 2021). Thus, it appears important to investigate how much communicative content is conveyed by facial features and cues beyond the mouth (i.e., eyes and head movements) and how to promote language learning more comprehensively. Crucially, the likelihood of exposure to masked faces, which intuitively increases with age as infants’ social environment broadens, needs to be considered when addressing these questions. In some countries, for example, face masks are mandatory for all adults within childcare settings; thus, the effects of mask wearing on infant development might be more pronounced if the child spends a lot of time in these settings. Moreover, rules and guidelines might change within the same country depending on governmental decisions taken to face the COVID-19 pandemic, such that mask wearing might impact development only for a relatively short period of time. Studies investigating the effects of mask wearing on development should consider and report these factors when selecting a study sample.

Another factor that may influence the effects of masks on face processing is the type of mask people wear, especially in childcare services. Plain-colored masks covering the mouth could foster attention to the eye region, which is important for identity and emotion recognition as well as for the development of joint attention. However, it is possible that very colorful masks direct infants’ attention away from the eyes, with the risk of limiting infants’ exposure to relevant social information. Transparent masks may allow infants to perceive orofacial movements during speech and possibly enhance attention to the mouth region, reducing any risk of impacting language development. These factors should be considered by education and healthcare practitioners.

Effects on Emotion Reading

Configural strategies also allow us to perceive and process emotional expressions. While a study on the effects of mask wearing shows that anger and happiness remain discriminable by adults despite the covered mouth (Calbi et al., 2021), other findings highlight broader difficulties in emotion reading due to mask wearing (Carbon, 2020; Noyes et al., 2021). Some authors argue that, with masks becoming a common practice in everyday life, people have learnt to rely on eye information to discriminate emotions from masked faces, reflecting an adaptation of face processing to the available visual information (Barrick et al., 2020). However, the perception of negative emotions produced by frowning was enhanced in adults presented with masked emotional faces (Nestor et al., 2020). In infants, such a possible bias toward negative interpretations of others’ expressions might have cascading effects on social communication. In this respect, it would be interesting to investigate emotional expression biases in infants exposed to masked faces during the COVID-19 pandemic and the longitudinal effects on their own emotional development. During the second year of life, toddlers rely on facial expressions in conjunction with their context (Walle and Campos, 2014). Whether the lack of access to configural information contributes to difficulties in emotion discrimination and understanding or whether, alternatively, the system specializes to allow processing based on alternative strategies is an interesting avenue for future research.

Limitations

This review has some limitations. First, given our selection criteria, seminal research that substantially contributed to the field has not been discussed because it was published before 2000; we believe the content of such findings is reflected in the subsequent research included in the present review. Second, a selection bias might have occurred due to automatic filtering, which we tried to overcome by manually adding relevant literature cited in the included papers. Third, studies on atypical development of face processing, which has been extensively investigated, were not included, in order to limit the content of this review to papers investigating typical development of face processing. Future work should compare evidence from typical and atypical development to systematically delineate the effects of mask wearing in the context of neurodiversity. Fourth, an important limitation concerns participation bias within the studies considered, with Western or Asian countries being predominantly involved and participants recruited on a voluntary basis. Studies of infants who are typically exposed to covered female faces for religious reasons were not found in our systematic search, although a cross-cultural comparison would have provided additional evidence about the possibilities proposed in our review. Last, while our search covered the first 3 years of life, we found that most studies pertain to infancy, suggesting that face processing is less investigated beyond the first year of life.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author Contributions

LC, AG, EJHJ, and TF contributed to the conception and design of the study. LC was primarily responsible for the literature search. LC and AG equally contributed to the article selection process, data extraction, and wrote the first draft of the manuscript. TF and EJHJ supervised the study. All authors contributed to manuscript revision and approved the submitted version.

Funding

This study was funded by Beneficentia Stiftung Foundation to TF, by the ESRC grant no. ES/R009368/1 to AG and by the MRC Programme grant nos. MR/K021389/1 and MR/T003057/1.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

This manuscript has been released as a PrePrint at: https://psyarxiv.com/rfnj6 (Carnevali et al., 2021).

References

Acerra, F., Burnod, Y., and de Schonen, S. (2002). Modelling aspects of face processing in early infancy. Dev. Sci. 5, 98–117. doi: 10.1111/1467-7687.00215

Aktar, E., Mandell, D. J., de Vente, W., Majdandžić, M., Oort, F. J., van Renswoude, D. R., et al. (2018). Parental negative emotions are related to behavioral and pupillary correlates of infants’ attention to facial expressions of emotion. Infant Behav. Dev. 53, 101–111. doi: 10.1016/j.infbeh.2018.07.004

Amano, S., Kezuka, E., and Yamamoto, A. (2004). Infant shifting attention from an adult’s face to an adult’s hand: a precursor of joint attention. Infant Behav. Dev. 27, 64–80. doi: 10.1016/j.infbeh.2003.06.005

Ashwin, C., Hietanen, J. K., and Baron-Cohen, S. (2015). Atypical integration of social cues for orienting to gaze direction in adults with autism. Mol. Autism. 6:5. doi: 10.1186/2040-2392-6-5

Atagi, N., and Johnson, S. P. (2020). Language experience is associated with infants’ visual attention to speakers. Brain Sci. 10:550. doi: 10.3390/brainsci10080550

Bahrick, L. E., Gogate, L. J., and Ruiz, I. (2002). Attention and memory for faces and actions in infancy: the salience of actions over faces in dynamic events. Child Dev. 73, 1629–1643. doi: 10.1111/1467-8624.00495

Bahrick, L. E., Lickliter, R., and Castellanos, I. (2013). The development of face perception in infancy: intersensory interference and unimodal visual facilitation. Dev. Psychol. 49, 1919–1930. doi: 10.1037/a0031238

Bahrick, L. E., and Newell, L. C. (2008). Infant discrimination of faces in naturalistic events: actions are more salient than faces. Dev. Psychol. 44, 983–996. doi: 10.1037/0012-1649.44.4.983

Ballew, B. S. (2009). Elsevier’s scopus® database. J. Electron. Resour. Med. Lib. 6, 245–252. doi: 10.1080/15424060903167252

Barrick, E., Thornton, M. A., and Tamir, D. (2020). Mask exposure during COVID-19 changes emotional face processing. PsyArXiv. doi: 10.31234/osf.io/yjfg3

Bhatt, R. S., Bertin, E., Hayden, A., and Reed, A. (2005). Face processing in infancy: developmental changes in the use of different kinds of relational information. Child Dev. 76, 169–181. doi: 10.1111/j.1467-8624.2005.00837.x

Brenna, V., Proietti, V., Montirosso, R., and Turati, C. (2013). Positive, but not negative, facial expressions facilitate 3-month-olds’ recognition of an individual face. Int. J. Behav. Dev. 37, 137–142. doi: 10.1177/0165025412465363

Brooks, R., and Meltzoff, A. N. (2002). The importance of eyes: how infants interpret adult looking behavior. Dev. Psychol. 38, 958–966. doi: 10.1037/0012-1649.38.6.958

Calbi, M., Langiulli, N., Ferroni, F., Montalti, M., Kolesnikov, A., Gallese, V., et al. (2021). The consequences of COVID-19 on social interactions: an online study on face covering. Sci. Rep. 11:2601. doi: 10.1038/s41598-021-81780-w

Carbon, C.-C. (2020). Wearing face masks strongly confuses counterparts in reading emotions. Front. Psychol. 11:566886. doi: 10.3389/fpsyg.2020.566886

Carey, S., and Diamond, R. (1977). From piecemeal to configurational representation of faces. Science 195, 312–314. doi: 10.1126/science.831281

Carnevali, L., Gui, A., Jones, E. J., and Farroni, T. (2021). Face processing in early development: a systematic review of behavioral studies and considerations in times of COVID-19 pandemic. PsyArXiv [preprint]. doi: 10.31234/osf.io/rfnj6

Carragher, D. J., and Hancock, P. J. B. (2020). Surgical face masks impair human face matching performance for familiar and unfamiliar faces. Cognit. Res. Principles Implications 5, 59. doi: 10.1186/s41235-020-00258-x

Cashon, C. H., Ha, O.-R., Allen, C. L., and Barna, A. C. (2013). A U-shaped relation between sitting ability and upright face processing in infants. Child Dev. 84, 802–809. doi: 10.1111/cdev.12024

Cecchini, M., Baroni, E., Di Vito, C., Piccolo, F., and Lai, C. (2011). Newborn preference for a new face vs. A previously seen communicative or motionless face. Infant Behav. Dev. 34, 424–433. doi: 10.1016/j.infbeh.2011.04.002

Çetinçelik, M., Rowland, C. F., and Snijders, T. M. (2021). Do the eyes have it? A systematic review on the role of eye gaze in infant language development. Front. Psychol. 11:589096. doi: 10.3389/fpsyg.2020.589096

Chen, C.-H., Castellanos, I., Yu, C., and Houston, D. M. (2020). What leads to coordinated attention in parent-toddler interactions? Children’s hearing status matters. Dev. Sci. 23:e12919. doi: 10.1111/desc.12919

Chien, S. H. L. (2011). No more top-heavy bias: infants and adults prefer upright faces but not top-heavy geometric or face-like patterns. J. Vis. 11, 13–13. doi: 10.1167/11.6.13

Chien, S. H.-L., Hsu, H.-Y., and Su, B.-H. (2010). Discriminating “top-heavy” versus “bottom-heavy” geometric patterns in 2- to 4.5-month-old infants. Vis. Res. 50, 2029–2036. doi: 10.1016/j.visres.2010.06.017

Cohen, L. B., and Cashon, C. H. (2001). Do 7-month-old infants process independent features or facial configurations? Infant Child Dev. 10, 83–92. doi: 10.1002/icd.250

Coulon, M., Guellai, B., and Streri, A. (2011). Recognition of unfamiliar talking faces at birth. Int. J. Behav. Dev. 35, 282–287. doi: 10.1177/0165025410396765

Courage, M. L., Reynolds, G. D., and Richards, J. E. (2006). Infants’ attention to patterned stimuli: developmental change from 3 to 12 months of age. Child Dev. 77, 680–695. doi: 10.1111/j.1467-8624.2006.00897.x

de Haan, M., Belsky, J., Reid, V., Volein, A., and Johnson, M. H. (2004). Maternal personality and infants’ neural and visual responsivity to facial expressions of emotion. J. Child Psychol. Psychiatry 45, 1209–1218. doi: 10.1111/j.1469-7610.2004.00320.x

de Heering, A., Dollion, N., Godard, O., Goffaux, V., and Baudouin, J.-Y. (2015). Three-month-old infants’ sensitivity to horizontal information within faces. J. Vis. 15:794. doi: 10.1167/15.12.794

de Heering, A., Turati, C., Rossion, B., Bulf, H., Goffaux, V., and Simion, F. (2008). Newborns’ face recognition is based on spatial frequencies below 0.5 cycles per degree. Cognition 106, 444–454. doi: 10.1016/j.cognition.2006.12.012

Della Longa, L., Gliga, T., and Farroni, T. (2019). Tune to touch: affective touch enhances learning of face identity in 4-month-old infants. Dev. Cogn. Neurosci. 35, 42–46. doi: 10.1016/j.dcn.2017.11.002

DeNicola, C. A., Holt, N. A., Lambert, A. J., and Cashon, C. H. (2013). Attention-orienting and attention-holding effects of faces on 4- to 8-month-old infants. Int. J. Behav. Dev. 37, 143–147. doi: 10.1177/0165025412474751

Di Giorgio, E., Turati, C., Altoè, G., and Simion, F. (2012). Face detection in complex visual displays: an eye-tracking study with 3- and 6-month-old infants and adults. J. Exp. Child Psychol. 113, 66–77. doi: 10.1016/j.jecp.2012.04.012

Doi, H., Tagawa, M., and Shinohara, K. (2010). Gaze direction modulates the disengagement of attention from facial expression in 10-month-olds. Emotion 10, 278–282. doi: 10.1037/a0017800

Farroni, T., Csibra, G., Simion, F., and Johnson, M. H. (2002). Eye contact detection in humans from birth. Proc. Natl. Acad. Sci. 99, 9602–9605. doi: 10.1073/pnas.152159999

Farroni, T., Johnson, M. H., Brockbank, M., and Simion, F. (2000). Infants’ use of gaze direction to cue attention: the importance of perceived motion. Vis. Cogn. 7, 705–718. doi: 10.1080/13506280050144399

Farroni, T., Johnson, M. H., Menon, E., Zulian, L., Faraguna, D., and Csibra, G. (2005). Newborns’ preference for face-relevant stimuli: effects of contrast polarity. Proc. Natl. Acad. Sci. 102, 17245–17250. doi: 10.1073/pnas.0502205102

Farroni, T., Massaccesi, S., Menon, E., and Johnson, M. H. (2007). Direct gaze modulates face recognition in young infants. Cognition 102, 396–404. doi: 10.1016/j.cognition.2006.01.007

Farroni, T., Menon, E., and Johnson, M. H. (2006). Factors influencing newborns’ preference for faces with eye contact. J. Exp. Child Psychol. 95, 298–308. doi: 10.1016/j.jecp.2006.08.001

Fausey, C. M., Jayaraman, S., and Smith, L. B. (2016). From faces to hands: changing visual input in the first two years. Cognition 152, 101–107. doi: 10.1016/j.cognition.2016.03.005

Fecher, N., and Johnson, E. K. (2019). Bilingual infants excel at foreign-language talker recognition. Dev. Sci. 22:e12778. doi: 10.1111/desc.12778

Flom, R., and Bahrick, L. E. (2007). The development of infant discrimination of affect in multimodal and unimodal stimulation: the role of intersensory redundancy. Dev. Psychol. 43, 238–252. doi: 10.1037/0012-1649.43.1.238

Flom, R., Bahrick, L. E., and Pick, A. D. (2018). Infants discriminate the affective expressions of their peers: the roles of age and familiarization time. Infancy 23, 692–707. doi: 10.1111/infa.12246

Franchak, J. M., Kretch, K. S., and Adolph, K. E. (2018). See and be seen: infant-caregiver social looking during locomotor free play. Dev. Sci. 21:e12626. doi: 10.1111/desc.12626

Freud, E., Stajduhar, A., Rosenbaum, R. S., Avidan, G., and Ganel, T. (2020). The COVID-19 pandemic masks the way people perceive faces. Sci. Rep. 10:22344. doi: 10.1038/s41598-020-78986-9

Fu, X., Morales, S., LoBue, V., Buss, K. A., and Pérez-Edgar, K. (2020). Temperament moderates developmental changes in vigilance to emotional faces in infants: evidence from an eye-tracking study. Dev. Psychobiol. 62, 339–352. doi: 10.1002/dev.21920

Galati, A., Hock, A., and Bhatt, R. S. (2016). Perceptual learning and face processing in infancy. Dev. Psychobiol. 58, 829–840. doi: 10.1002/dev.21420

Gamé, F., Carchon, I., and Vital-Durand, F. (2003). The effect of stimulus attractiveness on visual tracking in 2- to 6-month-old infants. Infant Behav. Dev. 26, 135–150. doi: 10.1016/S0163-6383(03)00013-4

Geangu, E., Ichikawa, H., Lao, J., Kanazawa, S., Yamaguchi, M. K., Caldara, R., et al. (2016). Culture shapes 7-month-olds’ perceptual strategies in discriminating facial expressions of emotion. Curr. Biol. 26, R663–R664. doi: 10.1016/j.cub.2016.05.072

Gliga, T., Elsabbagh, M., Andravizou, A., and Johnson, M. (2009). Faces attract infants’ attention in complex displays. Infancy 14, 550–562. doi: 10.1080/15250000903144199

Gluckman, M., and Johnson, S. P. (2013). Attentional capture by social stimuli in young infants. Front. Psychol. 4:527. doi: 10.3389/fpsyg.2013.00527

Gredebäck, G., Theuring, C., Hauf, P., and Kenward, B. (2008). The microstructure of infants’ gaze as they view adult shifts in overt attention. Infancy 13, 533–543. doi: 10.1080/15250000802329529

Gross, C., and Schwarzer, G. (2010). Face recognition across varying poses in 7- and 9-month-old infants: the role of facial expression. Int. J. Behav. Dev. 34, 417–426. doi: 10.1177/0165025409350364

Haensel, J. X., Ishikawa, M., Itakura, S., Smith, T. J., and Senju, A. (2020). Cultural influences on face scanning are consistent across infancy and adulthood. Infant Behav. Dev. 61, 101503. doi: 10.1016/j.infbeh.2020.101503

Hayden, A., Bhatt, R. S., Reed, A., Corbly, C. R., and Joseph, J. E. (2007). The development of expert face processing: are infants sensitive to normal differences in second-order relational information? J. Exp. Child Psychol. 97, 85–98. doi: 10.1016/j.jecp.2007.01.004

Heck, A., Hock, A., White, H., Jubran, R., and Bhatt, R. S. (2016). The development of attention to dynamic facial emotions. J. Exp. Child Psychol. 147, 100–110. doi: 10.1016/j.jecp.2016.03.005

Hernik, M., and Broesch, T. (2019). Infant gaze following depends on communicative signals: an eye-tracking study of 5- to 7-month-olds in Vanuatu. Dev. Sci. 22:e12779. doi: 10.1111/desc.12779

Hillairet de Boisferon, A., Tift, A. H., Minar, N. J., and Lewkowicz, D. J. (2017). Selective attention to a talker’s mouth in infancy: role of audiovisual temporal synchrony and linguistic experience. Dev. Sci. 20, e12381. doi: 10.1111/desc.12381

Hillairet de Boisferon, A., Tift, A. H., Minar, N. J., and Lewkowicz, D. J. (2018). The redeployment of attention to the mouth of a talking face during the second year of life. J. Exp. Child Psychol. 172, 189–200. doi: 10.1016/j.jecp.2018.03.009

Houston-Price, C., Plunkett, K., and Duffy, H. (2006). The use of social and salience cues in early word learning. J. Exp. Child Psychol. 95, 27–55. doi: 10.1016/j.jecp.2006.03.006

Humphreys, K., and Johnson, M. H. (2007). The development of “face-space” in infancy. Vis. Cogn. 15, 578–598. doi: 10.1080/13506280600943518

Hunnius, S., and Geuze, R. H. (2004). Developmental changes in visual scanning of dynamic faces and abstract stimuli in infants: a longitudinal study. Infancy 6, 231–255. doi: 10.1207/s15327078in0602_5

Ichikawa, H., Kanazawa, S., and Yamaguchi, M. K. (2011). The movement of internal facial features elicits 7 to 8-month-old infants’ preference for face patterns. Infant Child Dev. 20, 464–474. doi: 10.1002/icd.724

Ichikawa, H., Kanazawa, S., and Yamaguchi, M. K. (2014). Infants recognize the subtle happiness expression. Perception 43, 235–248. doi: 10.1068/p7595

Ichikawa, H., and Yamaguchi, M. K. (2014). Infants’ recognition of subtle anger facial expression: infants’ recognition of subtle facial expression. Jpn. Psychol. Res. 56, 15–23. doi: 10.1111/jpr.12025

Jayaraman, S., Fausey, C. M., and Smith, L. B. (2015). The faces in infant-perspective scenes change over the first year of life. PLoS One 10, e0123780. doi: 10.1371/journal.pone.0123780

Jayaraman, S., Fausey, C. M., and Smith, L. B. (2017). Why are faces denser in the visual experiences of younger than older infants? Dev. Psychol. 53, 38–49. doi: 10.1037/dev0000230

Johnson, M. H. (2005). Subcortical face processing. Nat. Rev. Neurosci. 6, 766–774. doi: 10.1038/nrn1766

Johnson, M. H., Senju, A., and Tomalski, P. (2015). The two-process theory of face processing: modifications based on two decades of data from infants and adults. Neurosci. Biobehav. Rev. 50, 169–179. doi: 10.1016/j.neubiorev.2014.10.009

Kato, M., and Konishi, Y. (2013). Where and how infants look: the development of scan paths and fixations in face perception. Infant Behav. Dev. 36, 32–41. doi: 10.1016/j.infbeh.2012.10.005

Kim, H. I., and Johnson, S. P. (2013). Do young infants prefer an infant-directed face or a happy face? Int. J. Behav. Dev. 37, 125–130. doi: 10.1177/0165025413475972

Kim, H. I., and Johnson, S. P. (2014). Detecting ‘infant-directedness’ in face and voice. Dev. Sci. 17, 621–627. doi: 10.1111/desc.12146

Kimchi, R. (1992). Primacy of wholistic processing and global/local paradigm: a critical review. Psychol. Bull. 112, 24–38. doi: 10.1037/0033-2909.112.1.24

Klin, A., Shultz, S., and Jones, W. (2015). Social visual engagement in infants and toddlers with autism: early developmental transitions and a model of pathogenesis. Neurosci. Biobehav. Rev. 50, 189–203. doi: 10.1016/j.neubiorev.2014.10.006

Kubicek, C., de Boisferon, A. H., Dupierrix, E., Lœvenbruck, H., Gervain, J., and Schwarzer, G. (2013). Face-scanning behavior to silently-talking faces in 12-month-old infants: the impact of pre-exposed auditory speech. Int. J. Behav. Dev. 37, 106–110. doi: 10.1177/0165025412473016

Layton, D., and Rochat, P. (2007). Contribution of motion information to maternal face discrimination in infancy. Infancy 12, 257–271. doi: 10.1111/j.1532-7078.2007.tb00243.x

Lee, V., Cheal, J. L., and Rutherford, M. D. (2015). Categorical perception along the happy-angry and happy-sad continua in the first year of life. Infant Behav. Dev. 40, 95–102. doi: 10.1016/j.infbeh.2015.04.006

Leo, I., Angeli, V., Lunghi, M., Dalla Barba, B., and Simion, F. (2018). Newborns’ face recognition: the role of facial movement. Infancy 23, 45–60. doi: 10.1111/infa.12197

Leo, I., and Simion, F. (2009). Newborns’ mooney-face perception. Infancy 14, 641–653. doi: 10.1080/15250000903264047

Leppänen, J., Peltola, M. J., Mäntymaa, M., Koivuluoma, M., Salminen, A., and Puura, K. (2010). Cardiac and behavioral evidence for emotional influences on attention in 7-month-old infants. Int. J. Behav. Dev. 34, 547–553. doi: 10.1177/0165025410365804

Lewkowicz, D. J., and Hansen-Tift, A. M. (2012). Infants deploy selective attention to the mouth of a talking face when learning speech. Proc. Natl. Acad. Sci. 109, 1431–1436. doi: 10.1073/pnas.1114783109

Liu, S., Quinn, P. C., Wheeler, A., Xiao, N., Ge, L., and Lee, K. (2011). Similarity and difference in the processing of same- and other-race faces as revealed by eye tracking in 4- to 9-month-olds. J. Exp. Child Psychol. 108, 180–189. doi: 10.1016/j.jecp.2010.06.008

Macchi Cassia, V., Turati, C., and Simion, F. (2004). Can a nonspecific bias toward top-heavy patterns explain newborns’ face preference? Psychol. Sci. 15, 379–383. doi: 10.1111/j.0956-7976.2004.00688.x

Maurer, D., Lewis, T. L., Brent, H. P., and Levin, A. V. (1999). Rapid improvement in the acuity of infants after visual input. Science 286, 108–110. doi: 10.1126/science.286.5437.108

Mercure, E., Kushnerenko, E., Goldberg, L., Bowden-Howl, H., Coulson, K., Johnson, M. H., et al. (2019). Language experience influences audiovisual speech integration in unimodal and bimodal bilingual infants. Dev. Sci. 22:e12701. doi: 10.1111/desc.12701

Moriuchi, J. M., Klin, A., and Jones, W. (2017). Mechanisms of diminished attention to eyes in autism. Am. J. Psychiatry 174, 26–35. doi: 10.1176/appi.ajp.2016.15091222

Morton, J., and Johnson, M. H. (1991). CONSPEC and CONLERN: a two-process theory of infant face recognition. Psychol. Rev. 98, 164–181. doi: 10.1037/0033-295X.98.2.164

Mundy, P. (2018). A review of joint attention and social-cognitive brain systems in typical development and autism spectrum disorder. Eur. J. Neurosci. 47, 497–514. doi: 10.1111/ejn.13720

Nagy, E. (2008). Innate intersubjectivity: newborns’ sensitivity to communication disturbance. Dev. Psychol. 44, 1779–1784. doi: 10.1037/a0012665

Nakato, E., Otsuka, Y., Konuma, H., Kanazawa, S., Yamaguchi, M. K., and Tomonaga, M. (2009). Perception of illusory shift of gaze direction by infants. Infant Behav. Dev. 32, 422–428. doi: 10.1016/j.infbeh.2009.07.006

Nestor, M. S., Fischer, D., and Arnold, D. (2020). “Masking” our emotions: Botulinum toxin, facial expression, and well-being in the age of COVID-19. J. Cosmet. Dermatol. 19, 2154–2160. doi: 10.1111/jocd.13569

Niedźwiecka, A., and Tomalski, P. (2015). Gaze-cueing effect depends on facial expression of emotion in 9- to 12-month-old infants. Front. Psychol. 6:122. doi: 10.3389/fpsyg.2015.00122

Noyes, E., Davis, J. P., Petrov, N., Gray, K. L. H., and Ritchie, K. L. (2021). The effect of face masks and sunglasses on identity and expression recognition with super-recognizers and typical observers. R. Soc. Open Sci. 8:201169. doi: 10.1098/rsos.201169

Oakes, L. M., and Ellis, A. E. (2013). An eye-tracking investigation of developmental changes in infants’ exploration of upright and inverted human faces. Infancy 18, 134–148. doi: 10.1111/j.1532-7078.2011.00107.x

Otsuka, Y., Ichikawa, H., Clifford, C. W., Kanazawa, S., and Yamaguchi, M. K. (2016). Wollaston’s effect in infants: do infants integrate eye and head information in gaze perception? J. Vis. 16, 4–4. doi: 10.1167/16.3.4

Otsuka, Y., Konishi, Y., Kanazawa, S., Yamaguchi, M. K., Abdi, H., and O’Toole, A. J. (2009). Recognition of moving and static faces by young infants. Child Dev. 80, 1259–1271. doi: 10.1111/j.1467-8624.2009.01330.x

Pandya, A., and Lodha, P. (2021). Social connectedness, excessive screen time during COVID-19 and mental health: a review of current evidence. Front. Hum. Dyn. 3:684137. doi: 10.3389/fhumd.2021.684137

Pantelis, P. C., and Kennedy, D. P. (2017). Deconstructing atypical eye gaze perception in autism spectrum disorder. Sci. Rep. 7:14990. doi: 10.1038/s41598-017-14919-3

Parsons, J. P., Bedford, R., Jones, E. J. H., Charman, T., Johnson, M. H., and Gliga, T. (2019). Gaze following and attention to objects in infants at familial risk for ASD. Front. Psychol. 10:1799. doi: 10.3389/fpsyg.2019.01799

Peltola, M. J., Forssman, L., Puura, K., van Ijzendoorn, M. H., and Leppänen, J. M. (2015). Attention to faces expressing negative emotion at 7 months predicts attachment security at 14 months. Child Dev. 86, 1321–1332. doi: 10.1111/cdev.12380

Peltola, M. J., Hietanen, J. K., Forssman, L., and Leppänen, J. M. (2013). The emergence and stability of the attentional bias to fearful faces in infancy. Infancy 18, 905–926. doi: 10.1111/infa.12013

Peltola, M. J., Leppänen, J. M., and Hietanen, J. K. (2011). Enhanced cardiac and attentional responding to fearful faces in 7-month-old infants. Psychophysiology 48, 1291–1298. doi: 10.1111/j.1469-8986.2011.01188.x

Peltola, M. J., Leppänen, J. M., Mäki, S., and Hietanen, J. K. (2009a). Emergence of enhanced attention to fearful faces between 5 and 7 months of age. Soc. Cogn. Affect. Neurosci. 4, 134–142. doi: 10.1093/scan/nsn046

Peltola, M. J., Leppänen, J. M., Palokangas, T., and Hietanen, J. K. (2008). Fearful faces modulate looking duration and attention disengagement in 7-month-old infants. Dev. Sci. 11, 60–68. doi: 10.1111/j.1467-7687.2007.00659.x

Peltola, M. J., Leppänen, J. M., Vogel-Farley, V. K., Hietanen, J. K., and Nelson, C. A. (2009b). Fearful faces but not fearful eyes alone delay attention disengagement in 7-month-old infants. Emotion 9, 560–565. doi: 10.1037/a0015806

Pérez-Edgar, K., Morales, S., LoBue, V., Taber-Thomas, B. C., Allen, E. K., Brown, K. M., et al. (2017). The impact of negative affect on attention patterns to threat across the first 2 years of life. Dev. Psychol. 53, 2219–2232. doi: 10.1037/dev0000408

Pickron, C. B., Fava, E., and Scott, L. S. (2017). Follow my gaze: face race and sex influence gaze-cued attention in infancy. Infancy 22, 626–644. doi: 10.1111/infa.12180

Pons, F., Bosch, L., and Lewkowicz, D. J. (2015). Bilingualism modulates infants’ selective attention to the mouth of a talking face. Psychol. Sci. 26, 490–498. doi: 10.1177/0956797614568320

Pons, F., Bosch, L., and Lewkowicz, D. J. (2019). Twelve-month-old infants’ attention to the eyes of a talking face is associated with communication and social skills. Infant Behav. Dev. 54, 80–84. doi: 10.1016/j.infbeh.2018.12.003

Quadrelli, E., Brenna, V., Monacò, S., Turati, C., and Bulf, H. (2020). Emotional facial expressions affect visual rule learning in 7- to 8-month-old infants. Infant Behav. Dev. 61:101501. doi: 10.1016/j.infbeh.2020.101501

Quinn, P. C., and Tanaka, J. W. (2009). Infants’ processing of featural and configural information in the upper and lower halves of the face. Infancy 14, 474–487. doi: 10.1080/15250000902994248

Quinn, P. C., Tanaka, J. W., Lee, K., Pascalis, O., and Slater, A. M. (2013). Are faces special to infants? An investigation of configural and featural processing for the upper and lower regions of houses in 3- to 7-month-olds. Vis. Cogn. 21, 23–37. doi: 10.1080/13506285.2013.764370

Rennels, J. L., and Davis, R. E. (2008). Facial experience during the first year. Infant Behav. Dev. 31, 665–678. doi: 10.1016/j.infbeh.2008.04.009

Rhodes, G., Geddes, K., Jeffery, L., Dziurawiec, S., and Clark, A. (2002). Are average and symmetric faces attractive to infants? Discrimination and looking preferences. Perception 31, 315–321. doi: 10.1068/p3129

Rigato, S., Menon, E., Gangi, V. D., George, N., and Farroni, T. (2013). The role of facial expressions in attention-orienting in adults and infants. Int. J. Behav. Dev. 37, 154–159. doi: 10.1177/0165025412472410

Rigato, S., Menon, E., Johnson, M. H., Faraguna, D., and Farroni, T. (2011a). Direct gaze may modulate face recognition in newborns. Infant Child Dev. 20, 20–34. doi: 10.1002/icd.684

Rigato, S., Menon, E., Johnson, M. H., and Farroni, T. (2011b). The interaction between gaze direction and facial expressions in newborns. Eur. J. Dev. Psychol. 8, 624–636. doi: 10.1080/17405629.2011.602239

Robertson, C. E., and Baron-Cohen, S. (2017). Sensory perception in autism. Nat. Rev. Neurosci. 18, 671–684. doi: 10.1038/nrn.2017.112

Rose, S. A., Jankowski, J. J., and Feldman, J. F. (2002). Speed of processing and face recognition at 7 and 12 months. Infancy 3, 435–455. doi: 10.1207/S15327078IN0304_02

Safar, K., and Moulson, M. C. (2017). Recognizing facial expressions of emotion in infancy: a replication and extension. Dev. Psychobiol. 59, 507–514. doi: 10.1002/dev.21515

Sai, F. Z. (2005). The role of the mother’s voice in developing mother’s face preference: evidence for intermodal perception at birth. Infant Child Dev. 14, 29–50. doi: 10.1002/icd.376

Sakuta, Y., Sato, K., Kanazawa, S., and Yamaguchi, M. K. (2014). The effect of eye size on discriminating faces: can infants recognize facial uncanniness? Jpn. Psychol. Res. 56, 331–339. doi: 10.1111/jpr.12057

Schure, S. T., Junge, C., and Boersma, P. (2016). Discriminating non-native vowels on the basis of multimodal, auditory or visual information: effects on infants’ looking patterns and discrimination. Front. Psychol. 7:525. doi: 10.3389/fpsyg.2016.00525

Schwarzer, G., and Jovanovic, B. (2010). The relationship between processing facial identity and emotional expression in 8-month-old infants. Infancy 15, 28–45. doi: 10.1111/j.1532-7078.2009.00004.x

Schwarzer, G., and Zauner, N. (2003). Face processing in 8-month-old infants: evidence for configural and analytical processing. Vis. Res. 43, 2783–2793. doi: 10.1016/S0042-6989(03)00478-4

Schwarzer, G., Zauner, N., and Jovanovic, B. (2007). Evidence of a shift from featural to configural face processing in infancy. Dev. Sci. 10, 452–463. doi: 10.1111/j.1467-7687.2007.00599.x

Schwarzkopf, D. S., Vorobyov, V., Mitchell, D. E., and Sengpiel, F. (2007). Brief daily binocular vision prevents monocular deprivation effects in visual cortex. Eur. J. Neurosci. 25, 270–280. doi: 10.1111/j.1460-9568.2006.05273.x

Sebastián-Gallés, N., Albareda-Castellot, B., Weikum, W. M., and Werker, J. F. (2012). A bilingual advantage in visual language discrimination in infancy. Psychol. Sci. 23, 994–999. doi: 10.1177/0956797612436817

Segal, S. C., and Moulson, M. C. (2020a). Dynamic advances in emotion processing: differential attention towards the critical features of dynamic emotional expressions in 7-month-old infants. Brain Sci. 10, 1–17. doi: 10.3390/brainsci10090585

Segal, S. C., and Moulson, M. C. (2020b). What drives the attentional bias for fearful faces? An eye-tracking investigation of 7-month-old infants’ visual scanning patterns. Infancy 25, 658–676. doi: 10.1111/infa.12351

Senju, A., and Csibra, G. (2008). Gaze following in human infants depends on communicative signals. Curr. Biol. 18, 668–671. doi: 10.1016/j.cub.2008.03.059

Senju, A., Csibra, G., and Johnson, M. H. (2008). Understanding the referential nature of looking: infants’ preference for object-directed gaze. Cognition 108, 303–319. doi: 10.1016/j.cognition.2008.02.009

Senju, A., and Johnson, M. H. (2009). Atypical eye contact in autism: models, mechanisms and development. Neurosci. Biobehav. Rev. 33, 1204–1214. doi: 10.1016/j.neubiorev.2009.06.001

Senju, A., Vernetti, A., Ganea, N., Hudry, K., Tucker, L., Charman, T., et al. (2015). Early social experience affects the development of eye gaze processing. Curr. Biol. 25, 3086–3091. doi: 10.1016/j.cub.2015.10.019

Simion, F., Valenza, E., Cassia, V. M., Turati, C., and Umiltà, C. (2002). Newborns’ preference for up–down asymmetrical configurations. Dev. Sci. 5, 427–434. doi: 10.1111/1467-7687.00237

Simpson, E. A., Jakobsen, K. V., Fragaszy, D. M., Okada, K., and Frick, J. E. (2014). The development of facial identity discrimination through learned attention. Dev. Psychobiol. 56, 1083–1101. doi: 10.1002/dev.21194

Simpson, E. A., Maylott, S. E., Mitsven, S. G., Zeng, G., and Jakobsen, K. V. (2020). Face detection in 2- to 6-month-old infants is influenced by gaze direction and species. Dev. Sci. 23:e12902. doi: 10.1111/desc.12902

Souter, N. E., Arunachalam, S., and Luyster, R. J. (2020). The robustness of eye–mouth index as an eye-tracking metric of social attention in toddlers. Int. J. Behav. Dev. 44, 469–478. doi: 10.1177/0165025419885186

Spencer, J., O’Brien, J., Johnston, A., and Hill, H. (2006). Infants’ discrimination of faces by using biological motion cues. Perception 35, 79–89. doi: 10.1068/p5379

Stajduhar, A., Ganel, T., Avidan, G., Rosenbaum, R. S., and Freud, E. (2021). Face masks disrupt holistic processing and face perception in school-age children. PsyArXiv. doi: 10.31234/osf.io/fygjq

Streri, A., Coulon, M., and Guellaï, B. (2013). The foundations of social cognition: studies on face/voice integration in newborn infants. Int. J. Behav. Dev. 37, 79–83. doi: 10.1177/0165025412465361

Streri, A., Coulon, M., Marie, J., and Yeung, H. H. (2016). Developmental change in infants’ detection of visual faces that match auditory vowels. Infancy 21, 177–198. doi: 10.1111/infa.12104

Striano, T., Stahl, D., Cleveland, A., and Hoehl, S. (2007). Sensitivity to triadic attention between 6 weeks and 3 months of age. Infant Behav. Dev. 30, 529–534. doi: 10.1016/j.infbeh.2006.12.010

Sugden, N. A., and Moulson, M. C. (2019). These are the people in your neighbourhood: consistency and persistence in infants’ exposure to caregivers’, relatives’, and strangers’ faces across contexts. Vis. Res. 157, 230–241. doi: 10.1016/j.visres.2018.09.005

Szufnarowska, J., Rohlfing, K. J., Fawcett, C., and Gredebäck, G. (2014). Is ostension any more than attention? Sci. Rep. 4, 1–4.

Tenenbaum, E. J., Shah, R. J., Sobel, D. M., Malle, B. F., and Morgan, J. L. (2013). Increased focus on the mouth among infants in the first year of life: a longitudinal eye-tracking study. Infancy 18, 534–553. doi: 10.1111/j.1532-7078.2012.00135.x

Thompson, L. A., Madrid, V., Westbrook, S., and Johnston, V. (2001). Infants attend to second-order relational properties of faces. Psychon. Bull. Rev. 8, 769–777. doi: 10.3758/BF03196216

Tomalski, P., Ribeiro, H., Ballieux, H., Axelsson, E. L., Murphy, E., Moore, D. G., et al. (2013). Exploring early developmental changes in face scanning patterns during the perception of audiovisual mismatch of speech cues. Eur. J. Dev. Psychol. 10, 611–624. doi: 10.1080/17405629.2012.728076

Tsurumi, S., Kanazawa, S., Yamaguchi, M. K., and Kawahara, J.-I. (2019). Rapid identification of the face in infants. J. Exp. Child Psychol. 186, 45–58. doi: 10.1016/j.jecp.2019.05.005

Turati, C., Bulf, H., and Simion, F. (2008). Newborns’ face recognition over changes in viewpoint. Cognition 106, 1300–1321. doi: 10.1016/j.cognition.2007.06.005

Turati, C., Montirosso, R., Brenna, V., Ferrara, V., and Borgatti, R. (2011). A smile enhances 3-month-olds’ recognition of an individual face. Infancy 16, 306–317. doi: 10.1111/j.1532-7078.2010.00047.x

Turati, C., Sangrigoli, S., Ruel, J., and de Schonen, S. (2004). Evidence of the face inversion effect in 4-month-old infants. Infancy 6, 275–297. doi: 10.1207/s15327078in0602_8

Turati, C., and Simion, F. (2002). Newborns’ recognition of changing and unchanging aspects of schematic faces. J. Exp. Child Psychol. 83, 239–261. doi: 10.1016/S0022-0965(02)00148-0

Turati, C., Valenza, E., Leo, I., and Simion, F. (2005). Three-month-olds’ visual preference for faces and its underlying visual processing mechanisms. J. Exp. Child Psychol. 90, 255–273. doi: 10.1016/j.jecp.2004.11.001

Valenza, E., Otsuka, Y., Bulf, H., Ichikawa, H., Kanazawa, S., and Yamaguchi, M. K. (2015). Face orientation and motion differently affect the deployment of visual attention in newborns and 4-month-old infants. PLoS One 10:e0136965. doi: 10.1371/journal.pone.0136965

von Hofsten, C., Dahlström, E., and Fredriksson, Y. (2005). 12-month-old infants’ perception of attention direction in static video images. Infancy 8, 217–231. doi: 10.1207/s15327078in0803_2

Wagner, J. B., Luyster, R. J., Yim, J. Y., Tager-Flusberg, H., and Nelson, C. A. (2013). The role of early visual attention in social development. Int. J. Behav. Dev. 37, 118–124. doi: 10.1177/0165025412468064

Walle, E. A., and Campos, J. J. (2014). The development of infant detection of inauthentic emotion. Emotion 14, 488–503. doi: 10.1037/a0035305

Wensveen, J. M., Harwerth, R. S., Hung, L. F., Ramamirtham, R., Kee, C. S., and Smith, E. L. (2006). Brief daily periods of unrestricted vision can prevent form-deprivation amblyopia. Invest. Ophthalmol. Vis. Sci. 47, 2468–2477. doi: 10.1167/iovs.05-0885

Werker, J. F., and Byers-Heinlein, K. (2008). Bilingualism in infancy: first steps in perception and comprehension. Trends Cogn. Sci. 12, 144–151. doi: 10.1016/j.tics.2008.01.008

Wheeler, A., Anzures, G., Quinn, P. C., Pascalis, O., Omrin, D. S., and Lee, K. (2011). Caucasian infants scan own- and other-race faces differently. PLoS One 6:e18621. doi: 10.1371/journal.pone.0018621

World Health Organization (2020). Advice on the use of masks in the context of COVID-19: interim guidance, 6 April 2020. World Health Organization. Available at: https://apps.who.int/iris/handle/10665/331693. License: CC BY-NC-SA 3.0 IGO

Xiao, N. G., and Emberson, L. L. (2019). Infants use knowledge of emotions to augment face perception: evidence of top-down modulation of perception early in life. Cognition 193:104019. doi: 10.1016/j.cognition.2019.104019

Xiao, N. G., Quinn, P. C., Liu, S., Ge, L., Pascalis, O., and Lee, K. (2015). Eye tracking reveals a crucial role for facial motion in recognition of faces by infants. Dev. Psychol. 51, 744–757. doi: 10.1037/dev0000019

Yamashita, W., Kanazawa, S., and Yamaguchi, M. K. (2012). The effect of gaze direction on three-dimensional face recognition in infants. Vis. Res. 68, 14–18. doi: 10.1016/j.visres.2012.06.022

Yin, R. K. (1969). Looking at upside-down faces. J. Exp. Psychol. 81, 141–145. doi: 10.1037/h0027474

Yong, M. H., and Ruffman, T. (2016). Domestic dogs and human infants look more at happy and angry faces than sad faces. Multisens. Res. 29, 749–771. doi: 10.1163/22134808-00002535

Keywords: face processing, development, infancy, social cognition, mask wearing, COVID-19

Citation: Carnevali L, Gui A, Jones EJH and Farroni T (2022) Face Processing in Early Development: A Systematic Review of Behavioral Studies and Considerations in Times of COVID-19 Pandemic. Front. Psychol. 13:778247. doi: 10.3389/fpsyg.2022.778247

Received: 21 September 2021; Accepted: 21 January 2022;
Published: 18 February 2022.

Edited by: Rosalba Morese, University of Italian Switzerland, Switzerland

Reviewed by: Yumiko Otsuka, Ehime University, Japan; Zetian Yang, The Rockefeller University, United States

Copyright © 2022 Carnevali, Gui, Jones and Farroni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Teresa Farroni, teresa.farroni@unipd.it

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.