The analysis was performed at the Sensory Laboratory of the Lithuanian University of Health Sciences, which is equipped with sensory booths. Dark chocolate samples were evaluated by 30 panelists (aged 25 to 50 years; 15 women and 15 men). Mammasse and Schlich [32 (link)] reported that a suitable number of panelists could range from 20 to 150, depending on the level of complexity among test samples; it should therefore be noted that the small number of panelists might be a limitation of this study. Chocolate samples were prepared as pieces and presented on four separate serving plates with different codes. Each panelist tasted the presented samples one by one in front of a webcam (Microsoft Corporation, Redmond, WA, USA), and the tasting procedure was recorded. After tasting each sample, the panelist raised their hand and visualized the taste experience with a facial expression; the time for this was not limited. The panelist was then asked to evaluate the overall acceptability of the sample using a 10-point hedonic scale, ranging from 1 (extremely dislike) to 10 (extremely like). Between samples, the panelists were asked to rinse their mouth with warm (40 ± 2 °C) water. To evaluate the chocolate-elicited emotions (neutral, happy, surprised, sad, scared, angry, contempt, arousal, disgusted, and valence), the recorded videos were analysed with FaceReader 8 software (Noldus Information Technology, Wageningen, The Netherlands). Only the part of the video in which the panelist raised their hand was used for the analysis of chocolate-elicited emotions. The intensity of each emotion was expressed on a scale from 0 (no emotion) to 1 (highest intensity of emotion). The experimental scheme used to evaluate the emotions elicited by the different chocolate samples is given in Figure 2.
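For illustration only (this is not the study's analysis script), the sketch below shows how frame-level FaceReader output could be averaged into one mean intensity per emotion for each chocolate sample. The CSV layout (a "sample" column, a "hand_raised" flag, and one column per emotion) is an assumption made for this example; real FaceReader exports are formatted differently.

```python
# Minimal sketch: average frame-level emotion intensities over the hand-raised
# segment, per chocolate sample. Column names are illustrative assumptions.
import csv
from collections import defaultdict

EMOTIONS = ["neutral", "happy", "surprised", "sad", "scared",
            "angry", "contempt", "disgusted"]

def mean_emotions(path):
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["hand_raised"] != "1":      # keep only the hand-raised segment
                continue
            sample = row["sample"]
            counts[sample] += 1
            for emo in EMOTIONS:
                sums[sample][emo] += float(row[emo])
    return {s: {e: sums[s][e] / counts[s] for e in EMOTIONS} for s in sums}

# Example (hypothetical file name): means = mean_emotions("facereader_frames.csv")
```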
Contempt
Contempt is an intense feeling of dislike or scorn towards another person or thing.
It is characterized by a strong sense of superiority and a belief that the object of contempt is worthless or deserving of low regard.
Contempt can manifest in various ways, such as sarcasm, mockery, or complete disregard for the target.
It is often associated with feelings of anger, disgust, and a desire to belittle or humiliate the other.
Contempt can have significant negative impacts on interpersonal relationships and can be damaging to both the perpetrator and the recipient.
Understanding and managing contempt is an important aspect of emotional intelligence and effective communication.
Most cited protocols related to «Contempt»
Anger
Arousal
Cacao
Contempt
Emotions
Euphoria
Oral Cavity
Taste
Woman
In this stage, we added one more fully connected layer to the pre-trained SE-ResNet-50 model (before the output layer; see Table 2) and then changed the output layer to eight-class classification. In addition, we froze the weights and biases of Stage 1 to Stage 3 of the pre-trained model; this is common practice in transfer learning-based systems when computing power is limited. Lastly, we fine-tuned the adjusted model on the AffectNet dataset [5 (link)] to recognize the eight facial expressions (happy, sad, surprise, fear, contempt, anger, disgust, and neutral). Specifically,
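As a rough sketch of this step (not the authors' implementation), the PyTorch snippet below freezes the early stages of a pre-trained backbone and appends an extra fully connected layer before an eight-class output. torchvision's ResNet-50 is used as a stand-in because SE-ResNet-50 is not bundled with torchvision, and the 512-unit width of the added layer is an assumption.

```python
# Transfer-learning sketch: freeze early stages, add one FC layer, 8-class head.
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT)   # stand-in for SE-ResNet-50

# Freeze the early stages (analogous to Stage 1 to Stage 3 of SE-ResNet-50).
for module in (model.conv1, model.bn1, model.layer1, model.layer2, model.layer3):
    for p in module.parameters():
        p.requires_grad = False

# Extra fully connected layer before an eight-class output
# (happy, sad, surprise, fear, contempt, anger, disgust, neutral).
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 512),   # 512 units is an assumed width
    nn.ReLU(inplace=True),
    nn.Linear(512, 8),
)
# Only the unfrozen parameters would then be fine-tuned on AffectNet.
```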
Anger
Contempt
Disgust
Fear
Freezing
Transfer, Psychology
The Forms of Self-Criticising/Attacking and Self-Reassuring Scale (FSCRS; Gilbert et al. 2004 (link)) is a 22-item instrument developed to determine the level of self-criticism and the ability to self-reassure when one faces setbacks and failure. Participants use a 5-point Likert scale to rate the extent to which various statements are true about them (1 = not at all like me; 5 = extremely like me). The first of the three factors, inadequate self (IS), comprises nine items that capture the experiences of failure, setback, inadequacy, and defeat, for example: “I think I deserve my self-criticism.”, “I remember and dwell on my failings.”, and “I am easily disappointed with myself.”. The second factor, hated self (HS), consists of five items. It captures a destructive disposition towards the self, characterized by hatred, contempt, disgust, aggression, and even sadistic desires to harm or attack oneself. Items that load on this factor include: “I have become so angry with myself that I want to hurt or injure myself.” or “I feel a sense of disgust with myself.” (Gilbert et al. 2004 (link)). The third factor, reassured self (RS), consists of seven items and captures the capacity to be self-soothing and to consider the self with encouragement, support, and validation when faced with negative events. It focuses on positive memories and past successes and results in confidence and tolerance during vulnerability. Items that represent this factor include “I still like being me.”, “I am able to remind myself of positive things about myself.” and “I encourage myself for the future.”.
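As a purely illustrative aid (the item-to-subscale key is not reproduced here), subscale scores could be computed as follows once responses have been grouped by factor; summing items per subscale is a common convention, but the original scoring instructions should be followed in practice.

```python
# Illustrative FSCRS scoring sketch: responses are assumed to be already
# grouped per subscale (IS: 9 items, HS: 5 items, RS: 7 items), each rated 1-5.
def fscrs_scores(is_items, hs_items, rs_items):
    expected = {"IS": (is_items, 9), "HS": (hs_items, 5), "RS": (rs_items, 7)}
    scores = {}
    for name, (items, n) in expected.items():
        if len(items) != n or not all(1 <= r <= 5 for r in items):
            raise ValueError(f"{name}: expected {n} ratings between 1 and 5")
        scores[name] = sum(items)          # subscale sum score
    return scores

# Example: fscrs_scores([3]*9, [1]*5, [4]*7) -> {'IS': 27, 'HS': 5, 'RS': 28}
```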
Anger
Contempt
Disgust
Feelings
Immune Tolerance
Memory
Sadism
Abuse, Emotional
Anger
Contempt
Disgust
Emotions
Hostility
Psychometrics
Visually Impaired Persons
In order to cope with some of the drawbacks associated with human raters, the coding of AUs is more often performed by automated software programs. In general, these programs calibrate a face image against many other faces taken from established databases (Fasel & Luettin, 2003 (link)). The sample specificity of the chosen face databases implies that if the face database and the target face deviate notably from each other (e.g., differing in age or ethnicity), the subsequent emotion codes could be biased. This issue is akin to the bias of human raters discussed above; however, analytic approaches to software-specific bias are easier to investigate and quantify (e.g., Littlewort et al. 2011b ).
There are several emotion expression coding software programs available. We will restrict our discussion to CERT, a program that is frequently used; its recently updated version is now referred to as FACET and is available at http://emotient.com/index.php.
CERT codes seven emotions (anger, contempt, disgust, fear, happiness, sadness, surprise) and neutral and provides continuous codes for the individual AUs and x- and y-coordinates for many parts of the face (e.g., right eye). The software achieves 87 % accuracy for emotion classification and 80 % accuracy for AU activation in adults (Littlewort et al. 2011b ) and 79 % accuracy for AU activation in children (Littlewort, Bartlett, Salamanca, & Reilly, 2011 ). CERT applies a multivariate logistic regression (MLR) classifier, which has been trained on a variety of face data sets, to estimate the proportion to which each emotion is expressed in the face (see Littlewort et al., 2011b , for details). The MLR classification procedure provides proportion estimates for each emotion; this results in codes for all emotions ranging between 0 and 1, and, across all emotions, the codes always sum to 1.0. Because all emotion codes are reported as proportions relative to a total of 1, CERT appears to have linear dependencies between the emotion codes. CERT works especially well if the coded face is displaying only one of its seven emotional or neutral expressions, as compared with a face expressing mixed emotions. High neutral codes indicate low emotion expression, whereas a low neutral score indicates high emotion expression. Currently, most research with CERT is focused on validation of the software (e.g., Gordon, Tanaka, Pierce, & Bartlett, 2011 (link)). However, CERT has also been used in studies on other facial expressions, not just those related to emotions, including pain (based on AU codes; Littlewort, Bartlett, & Lee, 2009 (link)), level of alertness (indicated by blink rate), and experienced difficulty while watching a lecture (based on indicators for smiling; Bartlett et al., 2010 ), and has been used to develop a tutoring program based on emotion expression (Cockburn et al., 2008 ).
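To make the proportion property concrete, the following toy snippet (with made-up values) shows how CERT-style codes that sum to 1 can be reduced to a dominant expression and an overall emotionality index.

```python
# Toy example: CERT-style proportion codes for one frame (values are made up).
codes = {"anger": 0.03, "contempt": 0.02, "disgust": 0.01, "fear": 0.02,
         "happiness": 0.55, "sadness": 0.02, "surprise": 0.05, "neutral": 0.30}

assert abs(sum(codes.values()) - 1.0) < 1e-6   # the eight proportions sum to 1

dominant = max(codes, key=codes.get)           # 'happiness'
emotionality = 1.0 - codes["neutral"]          # low neutral = high emotion expression
print(dominant, round(emotionality, 2))
```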
CERT produces several codes per picture or video frame. Recordings over a 5-s period with standard video settings (e.g., 25 frames per second) will therefore yield codes for a total of 125 frames per participant. This results in multivariate time series data with codes that are autocorrelated both over time, due to the inertia of face expressions in very brief periods, and between emotions, because many emotions share AUs (e.g., surprise and fear share AUs associated with widening the eyes) or are based on antagonistic AUs (e.g., happiness expression activates AU12, which raises the corners of the mouth, whereas the sadness expression activates AU 15, which lowers the corners of the mouth). In addition, depending on characteristics of the video or image, there may be missing data that cannot be accurately estimated by the software and produce invalid codes. Given this data-analytic context, we will next discuss unique challenges associated with scoring data from automated emotion expression coding software and potential solutions to these challenges.
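The frame arithmetic and the handling of invalid frames can be sketched as follows; the per-frame values are fabricated for illustration.

```python
# 5 s at 25 fps yields 125 codes per participant and emotion; invalid frames
# (here marked as NaN) are simply excluded before averaging.
import math

FPS, DURATION_S = 25, 5
n_frames = FPS * DURATION_S            # 125 frames

happiness = [0.6, 0.7, math.nan, 0.65] + [0.6] * (n_frames - 4)  # made-up series
valid = [v for v in happiness if not math.isnan(v)]
mean_happiness = sum(valid) / len(valid)
print(n_frames, round(mean_happiness, 3))
```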
Adult
Anger
antagonists
Blinking
Child
Contempt
Disgust
Emotions
Ethnicity
Euphoria
Eye
Face
Fear
Happiness
Homo sapiens
Oral Cavity
Pain
Reading Frames
Sadness
Most recent protocols related to «Contempt»
The polymer agar powder (from the alga Gelidium sesquipedale; Rapunzel, Germany) was used as a texture-forming agent with mucoadhesive properties for nutraceutical chewing candies. In addition, gelatine was also tested (Klingai, Lithuania). Xylitol (Natur Hurtig, Nuremberg, Germany), citric acid (Sanitex, Kaunas, Lithuania), and sugar (“Nordic Sugar Kėdainiai”, Kedainiai, Lithuania) were purchased in a local market (JSC ‘Maxima LT’, Kaunas, Lithuania). Ascorbic acid (JSC “Stada Baltics”, Vilnius, Lithuania) was purchased from a local pharmacy (JSC “Eurovaistine”, Kaunas, Lithuania), and grapefruit (Citrus paradisi; producer JSC “Zolotonošskaja PKF”, Komunarovskaja, Ukraine) and mint (Mentha spicata; producer JSC “Naujoji Barmune”, Vilnius, Lithuania) essential oils were obtained from JSC “Gintarine vaistine” (Kaunas, Lithuania).
The formula of the control chewing candy (gummi) group consisted of sugar (17 g), water (20 mL), citric acid (0.7 g) or ascorbic acid (0.9 g), and agar (4.6 g) or gelatine (8.5 g). Furthermore, in the formulation of the gummies, sugar was replaced with xylitol, and the basic chewing candy formulation then consisted of xylitol (17 g), water (20 mL), citric acid (0.7 g) or ascorbic acid (0.9 g), and agar (4.6 g) or gelatine (8.5 g) (Table 1).
Nutraceutical chewing candies were prepared by including different quantities of fermented Spirulina, and mint and grapefruit essential oils were used as Spirulina odour-masking agents. Nutraceuticals in the chewing candy formulations are given in Table 1.
For the preparation of nutraceutical chewing candies, firstly, agar or gelatine powder was soaked in water for 30 min and afterwards melted by heating for 15 min at 90 °C. Sugar or xylitol was added and dissolved in the mixture under boiling. The obtained mixture was further heated to 90 °C under stirring. Citric acid or ascorbic acid and different quantities of fermented Spirulina and essential oils were incorporated into nutraceutical chewing candy mass at the end of the process (mass temperature 40 °C). The obtained mass after mixing was poured into a cast, and nutraceutical gummies were dried at 22 ± 2 °C for 24 h to obtain a gel-hard form.
The hardness of nutraceutical chewing candies was evaluated by Texture Profile Analysis (TPA) using a Texture Analyser TA.XT2 (StableMicro Systems Ltd., Godalming, UK) (compression force 0.5 N, test speed 0.5 mm/s, post-test speed 2 mm/s and distance 6 mm). Sensory analysis of nutraceuticals was carried out according to the ISO 6658 method [37 ]. Thirty panellists evaluated the overall acceptability (OA) of gummies using the hedonic scale from 0 (extremely dislike) to 10 (extremely like).
After obtaining an optimal Spirulina and essential oil content, according to overall acceptability, the most acceptable samples were further analysed by evaluating the emotions induced in panellists by the nutraceuticals using the FaceReader 6.0 software (Noldus Information Technology, Wageningen, The Netherlands) and scaling eight emotion patterns (neutral, happy, sad, angry, surprised, scared, disgusted, and contempt). The panellists were asked to rate the nutraceutical samples during and after consumption with an intentional facial expression, which was recorded and then characterised by FaceReader 6.0. The panellists were asked to taste the whole presented sample at once, take 15 s to reflect on the taste impressions, then give a signal with a hand and visualise the taste experience of the sample with a facial expression best representing their liking of the sample. The whole procedure was filmed using a high-resolution Microsoft LifeCam Studio webcam mounted on a laptop facing the participants and Media Recorder (Noldus Information Technology, Wageningen, The Netherlands) software. Special care was taken to ensure good illumination of participants’ faces. The recordings, using a resolution of 1280 × 720 at 30 frames per second, were saved as AVI files and subsequently analysed frame by frame with FaceReader 6.0 software. For each sample, the section of intentional facial expression (from the exact point at which the subject had finished raising their hand to give the signal until the subject started lowering their hand again) was extracted and used for statistical analysis.
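As an illustrative helper (not the authors' code), the segment of intentional facial expression could be converted into frame indices as follows, assuming the 30-frames-per-second recordings described above and two timestamps marking the end of the hand-raise signal and the start of the hand lowering.

```python
# Convert two timestamps (seconds) into the range of video frames to keep.
def expression_frame_range(signal_end_s, lowering_start_s, fps=30):
    first = int(round(signal_end_s * fps))
    last = int(round(lowering_start_s * fps))
    if last <= first:
        raise ValueError("lowering must start after the signal is completed")
    return range(first, last)

# Example with made-up timestamps: a segment from 12.4 s to 18.0 s
# covers frames 372 to 539.
frames = expression_frame_range(12.4, 18.0)
print(frames.start, frames.stop - 1, len(frames))
```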
Agar
Anger
Ascorbic Acid
Candy
Carbohydrates
CD3EAP protein, human
Citric Acid
Citrus
Citrus paradisi
Contempt
Emotions
Face
Gelatins
Lighting
Mentha
Mentha spicata
Nutraceuticals
Odors
Oils, Volatile
Polymers
Powder
Reading Frames
Taste
Xylitol
The original MultiPie [57 (link)], Lucey et al. [58 ], Lyons et al. [59 (link)], and Pantic et al. [60 (link)] datasets of facial expressions were recorded in a laboratory setting, with the individuals acting out a variety of facial expressions. This approach yields clean, high-quality repositories of posed facial expressions. However, posed faces may look different from their unposed (or “spontaneous”) counterparts. Therefore, recording emotions as they occur has become popular among researchers in affective computing. Examples include experiments in which participants’ facial reactions to stimuli are recorded [60 (link),61 ,62 (link)] or emotion-inducing activities are conducted in a laboratory [63 (link)]. These datasets often record a sequence of frames that researchers may use to study the temporal and dynamic elements of expressions, including multi-modal signals such as speech and bodily cues. However, the limited number of individuals, the narrow range of head poses, and the controlled settings in which these datasets were collected all contribute to a lack of variety.
Therefore, it is necessary to create methods based on natural, unstaged presentations of emotion. In order to meet this need, researchers have increasingly focused on real-world datasets. Table 1 provides a summary of the evaluated databases’ features across all three affect models: facial action, dimensional model, and category model. In 2017, Mollahosseini et al. [24 (link)] created a facial emotion dataset named AffectNet to develop an emotion recognition system. This dataset is one of the largest real-world facial emotion datasets covering both the categorical and dimensional models of affect. After searching three of the most popular search engines with 1250 emotion-related keywords in six languages, AffectNet gathered over a million photos of people’s faces online. Roughly half of the obtained photos were manually annotated for the presence of seven distinct facial expressions and for the strength of valence and arousal. AffectNet is unrivalled as the biggest dataset of natural facial expressions, valence, and arousal for studies on automated facial expression identification. The pictures have an average resolution of 512 × 512 pixels. The pictures in the collection vary significantly in appearance: there are both full-color and grayscale pictures, and they range in contrast, brightness, and background variety. Furthermore, the people in the frame are mostly portrayed frontally, although items such as sunglasses, hats, hair, and hands may obscure the face. As a result, the dataset adequately describes multiple scenarios, as it covers a wide variety of real-world situations.
The Facial Expression Recognition 2013 (FER-2013) [65 ] database was first introduced in the ICML 2013 Challenges in Representation Learning [64 (link)]. The database was built by matching a collection of 184 emotion-related keywords to images using the Google Image Search API, which captured the six fundamental expressions plus neutral. Photos were downscaled to 48 × 48 pixels and converted to grayscale. The final collection includes 35,887 photos, most of which were taken in natural, real-world scenarios. Our previous work [56 (link)] used the FER-2013 dataset because it is one of the largest publicly accessible facial expression datasets for real-world situations. However, only 547 of the photos in FER-2013 depict disgust, and most facial landmark detectors are unable to extract landmarks at this resolution and quality due to the lack of face registration. Additionally, FER-2013 only provides the category model of emotion.
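As a simple sketch of this preprocessing (using the Pillow library, an assumption about tooling rather than the original pipeline), an image can be converted to grayscale and downscaled to 48 × 48 pixels as follows.

```python
# FER-2013-style preprocessing sketch: 8-bit grayscale, 48 x 48 pixels.
from PIL import Image

def to_fer_format(path):
    """Return the image at `path` as a grayscale 48 x 48 PIL image."""
    return Image.open(path).convert("L").resize((48, 48))

# Example (hypothetical file names):
# to_fer_format("face.jpg").save("face_48x48.png")
```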
Mehendale [66 (link)] proposed a CNN-based facial emotion recognition method and modified the original dataset by recategorizing the images into the following five categories: Anger-Disgust, Fear-Surprise, Happiness, Sadness, and Neutral; the Contempt category was removed. The similarities between the anger and disgust expressions, and between the fear and surprise expressions, in the upper part of the face provide sufficient evidence to support the new categorization. For example, when someone feels angry or disgusted, their eyebrows naturally lower, whereas when they are scared or surprised, their eyebrows rise. The removal of the contempt category may be rationalized because (1) it is not a central emotion in communication and (2) the expressiveness associated with contempt is localized in the mouth area and is thus undetectable if the individual is wearing a face mask. The dataset is somewhat balanced as a result of this merging process.
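The recategorization can be expressed as a simple label map; the snippet below is an illustrative sketch, and the lower-case label strings are assumptions rather than the dataset's actual label encoding.

```python
# Five-category remapping: merge anger/disgust and fear/surprise, drop contempt.
MERGE_MAP = {
    "anger": "anger-disgust", "disgust": "anger-disgust",
    "fear": "fear-surprise", "surprise": "fear-surprise",
    "happiness": "happiness", "sadness": "sadness", "neutral": "neutral",
    # "contempt" is intentionally absent: such samples are discarded.
}

def remap(label):
    return MERGE_MAP.get(label)      # None means the sample is removed

# Example: remap("disgust") -> 'anger-disgust'; remap("contempt") -> None
```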
In this study, we used the AffectNet [24 (link)] dataset to train an emotion recognition model. Since the intended aim of this study is to determine a person’s emotional state even when a mask covers their face, the second stage was to build an appropriate dataset in which a synthetic mask was attached to each individual’s face. To do this, the MaskTheFace algorithm was used. In a nutshell, this method determines the angle of the face and then overlays a mask selected from a database of masks. The mask’s orientation is then fine-tuned by extracting six characteristics from the face [67 ]. The characteristics and features of existing facial emotion recognition datasets are summarized in Table 2.
Anger
Arousal
Contempt
Deletion Mutation
Emotions
Eyebrows
Face
Facial Emotion Recognition
Feelings
Hair
Happiness
Head
Oral Cavity
Reading Frames
Sadness
Speech
Participants were asked to install the “Face Recognition and Attendance App” developed by TEKWIND Co., Ltd., including Microsoft Azure, on their smartphones and to take pictures of their faces when they started and left work on workdays. Before taking the pictures, participants were asked to select one time (starting work or leaving work) and one location (office, home, or other). However, the location at which the photos were taken was not included in this study because it was largely influenced by the department and the type of work. To capture natural facial expressions, facial images were taken twice: when the camera button was pressed and about 4 s after the button was pressed. The mean time interval between the first press and the second shot was 3.847 s. Since the first shot was likely to be expressionless because of the button press, we used the more natural facial expression a few seconds after the button was pressed. Eight emotion scores (anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise) were obtained from the face image data captured from the second shot using Microsoft Azure’s estimations. The eight emotion scores estimated from a single facial image sum up to 1.0000. The study period was from 1 November 2021 to 7 February 2022.
Anger
Azure A
Contempt
Disgust
Emotions
Face
Facial Recognition
Fear
Happiness
Sadness
To be able to link our analysis to both the content of the workshop members’ utterances and the patterns of interaction that they create [39 (link),40 (link),41 ], we used an appropriate combination of qualitative methods deriving from interaction-oriented focus group research [42 (link),43 (link)], conversation analysis [44 ,45 ,46 ] and discursive psychology [47 ]. This combination of methodological approaches draws on the idea that there is an “inherent connection between the substantive content of what a person says and the interactive dynamics of how he or she says those things” [42 (link)] (p. 718). Thus, in addition to analysing the content of the workshop members’ talk, we are able to analyse how the participants “display how they align themselves toward other participants with whom they are interacting” [48 ] (p. 16).
In line with this approach, we are interested in the ways in which the participants negotiate the social norms and normativity of their work, and the affective expressions related to these negotiations. We understand normativity as “the judgments by which individuals designate some actions or outcomes as good, desirable, or permissible and others as bad, undesirable, or impermissible” [49 (link)] (p. 3). We do not see social norms as fixed observable categories, but as something that is constructed, challenged and reformulated in and through interaction [47 ]. Similarly, we see emotions and the construction of an emotional stance as a process that both shapes and is shaped by the interactional context [50 ]. Our interest in this article is the expressions of the normative and emotional stances that the participants take towards what they report, and the stances that the recipients take in their next turns of talk [51 (link)]. The idea is that the recipients may take a stance that matches (or does not) the teller’s stance toward the event, i.e., they may affiliate (or disaffiliate) with the teller [51 (link)]. We are also interested in the ways in which some views on MD mobilise mutually congruent assertions of consensus among workshop participants, whereas other views are received with explicit expressions of resistance and moral contempt or implicit expressions of indifference through, for example, silence [52 (link)]. The views that are preceded and followed by explanations and accounts are also of interest as they may demonstrate a participant’s need to justify their views in front of the other participants see, e.g., [53 ,54 (link)].
The micro-level analysis of the participants’ discussions on MD thus focuses on topics that are, on the one hand, associated with congruent affective stances and normative viewpoints, and on the other hand, involve subtle discrepancies between workshop participants.
Apathy
Contempt
Emotions
Forehead
Speech
Tabu search (TS)30, proposed by Professor Fred Glover in 1986, is an intelligent global optimization algorithm21 (link). The algorithm loosely mimics human memory and approaches the global optimum by searching local neighbourhoods step by step. To avoid becoming trapped in local optima and repeating iterations, tabu search maintains a tabu list that marks recently visited regions of the search space as forbidden, so the algorithm does not retrace its path, while recording the sequence of moves and selections. At the same time, an aspiration criterion (sometimes translated literally as a “contempt” criterion) exempts sufficiently good solutions from the tabu restriction, which preserves the diversity of the search and supports global optimization46 (link). The procedure of the algorithm47 comprises the following six steps: (1) set the parameters and an initial solution x. (2) Check whether the convergence criterion is met; if so, output the result x, otherwise continue. (3) Determine the neighbourhood of the current solution x and select the candidate solutions. (4) Check whether any candidate solution satisfies the aspiration criterion; if so, replace x with the best such candidate y as the new optimal solution, update the tabu list with the tabu object corresponding to y, and return to step (2); otherwise continue. (5) Replace the current solution with the best non-tabu candidate, update the tabu list with the corresponding tabu object, and return to step (2). (6) Repeat until the convergence criterion48 (link) is met, then terminate the search (Fig. 4).
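The six steps can be illustrated with a minimal tabu search on a toy one-dimensional problem; the objective, neighbourhood, tabu-list length, and stopping rule below are illustrative choices, not those of the cited study.

```python
# Toy tabu search: minimise f over the integers 0..100 using +-1 moves.
from collections import deque

def f(x):                                    # objective to minimise
    return (x - 63) ** 2

def tabu_search(x0=0, tabu_size=5, max_iter=200):
    x, best = x0, x0
    tabu = deque(maxlen=tabu_size)           # short-term memory of visited solutions
    for _ in range(max_iter):                # convergence criterion: iteration budget
        candidates = [c for c in (x - 1, x + 1) if 0 <= c <= 100]
        # Aspiration rule: a tabu move is allowed if it beats the best so far.
        allowed = [c for c in candidates if c not in tabu or f(c) < f(best)]
        if not allowed:
            break
        x = min(allowed, key=f)              # best admissible neighbour
        tabu.append(x)                       # update the tabu list
        if f(x) < f(best):
            best = x
    return best

print(tabu_search())   # returns 63, the global minimum of f
```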
Contempt
Homo sapiens
Memory
Simulate composite resin
Top products related to «Contempt»
FaceReader 8.0 is a software tool developed by Noldus. It is designed to automatically analyze facial expressions and emotions from video recordings or images. The software can detect and classify various facial action units, as well as basic emotions such as happiness, sadness, anger, fear, surprise, and disgust.
Sourced in United States, United Kingdom, Switzerland
MATLAB R2019a is a software package developed by MathWorks for numerical computing and visualization. It provides a programming environment for algorithm development, data analysis, and visualization. MATLAB R2019a includes tools for various applications, such as signal processing, image processing, and control systems.
Sourced in United States, Japan, United Kingdom, Austria, Germany, Czechia, Belgium, Denmark, Canada
SPSS version 22.0 is a statistical software package developed by IBM. It is designed to analyze and manipulate data for research and business purposes. The software provides a range of statistical analysis tools and techniques, including regression analysis, hypothesis testing, and data visualization.
Sourced in Netherlands
FaceReader™ software is a tool designed for the automated analysis of facial expressions. It is capable of detecting and classifying various emotional states, such as happiness, sadness, anger, fear, surprise, and disgust, based on the analysis of facial features. The software provides objective and quantitative data on the detected emotions, which can be useful for research and analysis purposes.
Sourced in United States, Poland, Germany, France, Czechia, Sweden
Statistica 12 is a comprehensive data analysis and visualization software suite developed by StatSoft. It provides a range of tools for data management, statistical analysis, and reporting.
Sourced in United States, United Kingdom, Germany, Canada, Japan, Sweden, Austria, Morocco, Switzerland, Australia, Belgium, Italy, Netherlands, China, France, Denmark, Norway, Hungary, Malaysia, Israel, Finland, Spain
MATLAB is a high-performance programming language and numerical computing environment used for scientific and engineering calculations, data analysis, and visualization. It provides a comprehensive set of tools for solving complex mathematical and computational problems.
More about "Contempt"
Disdain, Scorn, Derision, Disrespect, Disregard, Haughtiness, Arrogance, Condescension, Snobbishness, Superciliousness.
Contempt is an intense negative emotion characterized by a strong sense of superiority and a belief that the object of contempt is worthless or deserving of low regard.
It can manifest in various ways, such as sarcasm, mockery, or complete disregard for the target.
Contempt is often associated with feelings of anger, disgust, and a desire to belittle or humiliate the other.
Understanding and managing contempt is an important aspect of emotional intelligence and effective communication.
Facial expressions associated with contempt include a unilateral lip curl, often accompanied by a raised eyebrow.
FaceReader 8.0 software and MATLAB R2019a can be used to detect and analyze contemptuous expressions.
Studies using SPSS version 22.0 and Statistica 12 have shown that contempt can have significant negative impacts on interpersonal relationships and can be damaging to both the perpetrator and the recipient.
MATLAB can be used to model the dynamics of contempt and its effects on social interactions.
Effective strategies for managing contempt include developing self-awareness, practicing empathy, and focusing on constructive communication.
By understanding and addressing the underlying causes of contempt, individuals can improve their emotional intelligence and build stronger, more positive relationships.
PubCompare.ai's AI-driven protocol optimization can help researchers identify the best approaches for studying and managing contempt.