
Facial Muscles

Facial Muscles: The muscles that control the movements of the face, including the muscles of expression and those that control chewing and swallowing.
These muscles play a crucial role in facial expressions, speech, and other essential functions.
Researchers can utilize PubCompare.ai's AI-driven protocol optimization tools to efficiently locate and compare the best protocols from literature, pre-prints, and patents related to facial muscles research.
This cutting-edge technology enhances research accuracy and efficiency, allowing scientists to accelerate scientific discovery.

Most cited protocols related to «Facial Muscles»

This paper presents a re-analysis of data we reported previously (Hipp et al., 2011 (link)). We recorded the continuous EEG from 126 scalp sites and the electrooculogram (EOG) from two sites below the eyes, all referenced against the nose tip (sampling rate: 1000 Hz; high-pass: 0.01 Hz; low-pass: 250 Hz; amplifier: BrainAmp, Brain Products, Munich, Germany; electrodes: sintered Ag/AgCl ring electrodes mounted on an elastic cap, Falk Minow Services, Herrsching, Germany). Electrode impedances were kept below 20 kΩ. Offline, the data were high-pass filtered (4 Hz, Butterworth filter of order 4) and cut into trials of 2.5 s duration centered on the presentation of the sound (−1.25 to 1.25 s). First, trials with eye movements, eye blinks, or strong muscle activity were identified by visual inspection and rejected from further analysis (trials retained for further analyses: n = 345 ± 50, mean ± s.d.). Next, we used independent component analysis (FastICA, http://www.cis.hut.fi/projects/ica/fastica/; Hyvärinen, 1999 (link)) to remove artifactual signal components (Jung et al., 2000 (link); Keren et al., 2010 (link)). The removed artifactual components comprised facial muscle components (n = 45.8 ± 7.84, mean ± s.d.), microsaccadic artifact components (n = 1.2 ± 0.82, mean ± s.d.), auricular artifact components (O'Beirne and Patuzzi, 1999 (link)) (n = 0.5 ± 0.83, mean ± s.d.), and heartbeat components (n = 0.5 ± 0.59, mean ± s.d.). As an alternative to ICA, we accounted for microsaccadic artifacts by removing confounded data sections identified in the radial EOG using the approach and template described in Keren et al. (2010 (link)) (threshold: 3.5). Importantly, for this analysis step, we did not reject entire trials containing a microsaccadic artifact (79 ± 18%, mean ± s.d., of trials contained at least one saccadic spike artifact), but only invalidated the data in the direct vicinity of detected artifacts (±0.15 s).
Whenever the window for the time-frequency transform overlapped with invalidated data (see spectral analysis below), it was rejected from further analysis. As a consequence, spectral estimates were based on varying amounts of data across time and frequency. We derived the radial EOG as the difference between the average of the two EOG channels and a parietal EEG electrode at the Pz position of the 10–20 system. Notably, rejection based on the radial EOG may miss saccadic spike artifacts of small amplitude that can be detected with high-speed eye tracking (Keren et al., 2010 (link)). However, the fact that we did not find any significant saccadic spike artifacts after radial-EOG-based rejection at those source locations that best captured these artifacts before cleaning (cf. Figure 7C) suggests that any remaining artifacts are small.
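The first preprocessing steps above (4 Hz high-pass Butterworth filter of order 4, then 2.5 s trials centered on sound onset) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the channel count and event times are placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def highpass_eeg(data, fs=1000.0, cutoff=4.0, order=4):
    """High-pass filter continuous EEG (channels x samples), zero-phase."""
    sos = butter(order, cutoff, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)

def epoch(data, event_samples, fs=1000.0, pre=1.25, post=1.25):
    """Cut 2.5 s trials centered on each sound onset (-1.25 to 1.25 s)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    return np.stack([data[:, s - n_pre:s + n_post] for s in event_samples])

# Synthetic example: 4 channels, 10 s of data, two hypothetical sound onsets
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 10_000))
filtered = highpass_eeg(eeg)
trials = epoch(filtered, event_samples=[3000, 7000])
print(trials.shape)  # (2, 4, 2500)
```

Trial rejection and ICA-based artifact removal would follow on the epoched array; those steps are omitted here.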
Publication 2013
Blinking Electrooculograms Eye Movements Facial Muscles Impedance, Electric Muscle Strength Nose Pulse Rate Scalp Sound
Concerning the specificity of elaboration, the procedure was conducted in a few stages. In the first stage, so-called “face banks” were used, gathered by an agency specializing in the recruitment of actors, extras, etc. Native Polish applicants (n = 120) aged 20–30 were chosen and invited to individual meetings: 30-min photography sessions during which the expected work results were discussed. All of the participants provided signed consent to the recruitment agency and agreed to participate in the project before arriving at the laboratory. After the meeting they were informed that if they did not want to participate, they were not obligated to continue the cooperation, and none of the photographs taken would be used in the future. The meetings also aimed to choose participants who had acting experience and positive motivation toward cooperation. After this stage, 60 people were selected and invited to take part in the project.
Natural emotional expression is involuntary, and bringing it under any control requires exercises that allow activation of particular muscles. Therefore, a set of exercises based on an actor-training system, the Stanislavski method (1936/1988), was developed. The aim was to maximize the authenticity of expressed emotions under photography-session conditions. The method assumes that a realistic presentation of a given emotion should be based on the concept of emotional memory as a point of focus. Thus, actors must first recall an event in which they felt the given emotion, and then recall the physical and psychological sensations. Additionally, this technique includes a method of “physical actions” in which emotions are produced through the use of actions. To do so, the actor performs a physical motion or a series of physical activities to create the desired emotional response for the character. For instance, feeling and expressing sadness presumes recalling sad personal experiences as well as some physical action, e.g., sighing and holding one's head in one's hands. The training consisted of three parts: (1) a theoretical presentation about the physiology of emotion and mimic expressions, with a demonstration of the key elements essential for achieving a desired expression (during this stage, participants were also presented with the theoretical foundations concerning the creation of the set, which facilitated understanding of the authors' intentions and communication during photography sessions); (2) training workshops taking place under the supervision of the set's authors; and (3) training and exercises “at home.” The final stage was the photography session, during which photographs of the practiced expressions were registered. During the sessions, participants were first allowed to focus on the given emotion (i.e., perform the exercises evoking the emotion) and then show the facial expression of the felt emotion to the camera.
No beards, mustaches, earrings, or eyeglasses, and preferably no visible make-up, were allowed during this stage, yet matting foundation make-up was applied to avoid skin reflections.
After gathering the photographic material, primary evaluation was conducted. Photographs with appropriate activity of muscular/facial units were selected. Facial unit activity was specified according to descriptions of the FACS (Ekman et al., 2002 ) and characteristics of visible facial changes caused by contraction or relaxation of one or more muscles (so called Action Units) during emotional expression (see Table 1 for details). At this stage, approximately 1000 photographs of 46 people were selected by our competent judges. The photographs were subjected to digital processing for format, resolution, and tone standardization. Pictures were initially cropped and resized to 1725 × 1168 pixels and the color-tone was balanced for a white background.
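The final cropping step to 1725 × 1168 pixels implies a fixed target aspect ratio. A minimal sketch of the centered crop geometry is given below; this is an illustration, not the authors' pipeline, and a real implementation would then resize the cropped region with an image library such as Pillow.

```python
def center_crop_box(w, h, target_w=1725, target_h=1168):
    """Return a centered (left, top, right, bottom) crop box matching the
    target aspect ratio; the cropped region would then be resized to
    target_w x target_h."""
    target_ratio = target_w / target_h
    if w / h > target_ratio:          # image too wide: trim left/right
        new_w = round(h * target_ratio)
        left = (w - new_w) // 2
        return (left, 0, left + new_w, h)
    new_h = round(w / target_ratio)   # image too tall: trim top/bottom
    top = (h - new_h) // 2
    return (0, top, w, top + new_h)

# Hypothetical source photo sizes
print(center_crop_box(3000, 2400))  # (0, 184, 3000, 2215)
print(center_crop_box(4000, 2000))  # (523, 0, 3477, 2000)
```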
Publication 2014
Character Emotions Expressed Emotion Eyeglasses Face Facial Muscles Feelings Fingers Head Memory Motivation Muscle Tissue Physical Examination physiology Reflex Skin Supervision Workshops
Footage from 28 privately owned dogs of varying breeds (approximately 8–10 hours) from the Max Planck Institute for Evolutionary Anthropology DogLab was the primary source for DogFACS development. In addition, we sourced approximately 100 clips from www.youtube.com (permission granted from the copyright holder of each clip) and used ad hoc footage from 86 dogs at four dog shelters (Portsmouth City Dog Kennels; Wood Green, The Animal’s Charity in Cambridge; The Dog’s Trust, West London, Harefield and RSPCA Southridge Animal Centre, London). Each facial movement was documented by appearance changes, minimal criteria for identification and comparison to other species, in line with FACS terminology (Table 1). The muscular basis of each facial movement was verified in light of dissection of a face from a specimen of a domestic dog (AMB) as well as previously published dissections [25] . The manual is freely available and requires certification to use (www.dogfacs.com).
Publication 2013
Animals Biological Evolution Breeding Canis familiaris Clip Dissection Face Facial Muscles Light Movement
Structural magnetic resonance images (MRI) were recorded from all subjects at 1 mm3 spatial resolution (1T Philips scanner). Three TMS targets were identified on individual MRIs in the left occipital lobe (Brodmann's area - BA19), the left parietal lobe (BA7) and the left frontal lobe (BA6). Precision and reproducibility of stimulation were achieved by using a Navigated Brain Stimulation (NBS) system (Nexstim Ltd., Helsinki, Finland), that employs a 3D infrared Tracking Position Sensor Unit to map the positions of TMS coil and subject's head within the reference space of individual structural MRI. In addition, the NBS system computes on-line the distribution and intensity (V/m) of the intracranial induced electric field using a locally best-fitting spherical model of the subjects' head and brain and taking into account the exact shape, 3D position and orientation of the TMS coil. Stimulation intensity, expressed as a percentage of the maximal output of the stimulator, was kept between 40–75% for all subjects, corresponding to an electric field between 110–120 V/m on the cortical surface. In each area, the TMS hot spot (i.e. location of the maximum electric field induced by TMS on the cortical surface) was always kept on the convexity of the gyrus, about 1 cm lateral to the midline. These medial stimulation sites were selected because they are easily accessible and far from major head or facial muscles whose unwanted activation may affect EEG recordings. The reproducibility of the stimulation coordinates across sessions was guaranteed by a virtual aiming device that indicated in real-time any deviation from the desired target greater than 3 mm. The TMS stimulator consisted of a Focal Bipulse 8-Coil (mean/outer winding diameter ca. 50/70 mm, biphasic pulse shape, pulse length ca. 280 µs, focal area of the stimulation hot spot 0.68 cm2) driven by a Mobile Stimulator Unit (Eximia TMS Stimulator, Nexstim Ltd., Helsinki, Finland). 
The coil was always placed tangentially to the scalp, in order to optimize transmission of the magnetic field to the cortical surface. TMS pulses were delivered at an inter-stimulus interval randomly jittered between 700–900 ms (equivalent to ca. 1.1–1.4 Hz).
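The jittered inter-stimulus interval can be reproduced with a uniform draw over 700–900 ms. The sketch below assumes a uniform distribution, which the excerpt does not state (it gives only the range); the pulse count and seed are placeholders.

```python
import numpy as np

def jittered_isis(n_pulses, low=0.700, high=0.900, seed=0):
    """Draw inter-stimulus intervals (seconds) uniformly from [low, high),
    corresponding to instantaneous rates of roughly 1.1-1.4 Hz."""
    rng = np.random.default_rng(seed)
    return rng.uniform(low, high, size=n_pulses)

isis = jittered_isis(200)
print(isis.min() >= 0.700, isis.max() < 0.900)  # True True
print(round((1.0 / isis).mean(), 2))            # mean instantaneous rate, ~1.26 Hz
```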
Publication 2010
Brain Cortex, Cerebral Electricity Facial Muscles Head Lobe, Frontal Magnetic Fields Magnetic Resonance Imaging Medical Devices Occipital Lobe Parietal Lobe Pulse Rate Pulses Scalp Transmission, Communicable Disease

Protocol full text hidden due to copyright restrictions


Publication 2008
Adult African American Appendectomy Blood Pressure Caucasoid Races Character Clip Face Facial Muscles Males Mood Movement Nose Obstetric Delivery Oculomotor Muscles Pain Patients physiology Pulse Rate Reading Frames Respiratory Rate Severity, Pain Vaginal Diaphragm Visual Analog Pain Scale Woman

Most recent protocols related to «Facial Muscles»

Raw optical density variations were acquired at three wavelengths of light (780, 805 and 830 nm), which were translated into relative chromophore concentrations using a Beer–Lambert equation [84 (link)–86 (link)]. Signals were recorded at 30 Hz. Baseline drift was removed using wavelet detrending provided in NIRS-SPM [51 (link)]. In accordance with recommendations for best practices using fNIRS data [87 (link)], global components attributable to blood pressure and other systemic effects [88 (link)] were removed using a PCA spatial global mean filter [60 (link),62 (link),89 (link)] before general linear model (GLM) analysis. This study involves emotional expressions that originate from specific muscle movements of the face, which may cause artefactual noise in the OxyHb signal. To minimize this potential confound, we used the HbDiff signal, which combines the OxyHb and deOxyHb signals for all statistical analyses. However, following best practices [87 (link)], baseline activity measures of both OxyHb and deOxyHb signals are processed as a confirmatory measure. The HbDiff signal averages are taken as the input to the second level (group) analysis [90 (link)]. Comparisons between conditions were based on GLM procedures using NIRS-SPM [51 (link)]. Event epochs within the time series were convolved with the haemodynamic response function provided from SPM8 [91 ] and fitted to the signals, providing individual ‘beta values’ for each participant across conditions. Group results based on these beta values are rendered on a standard MNI brain template (TD-ICBM152 T1 MRI template [72 (link)]) in SPM8 using NIRS-SPM software with WFU PickAtlas [73 (link),74 (link)].
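A simple stand-in for the PCA spatial global mean filter cited above is to remove the leading spatial principal component across channels. The sketch below illustrates the idea on synthetic data with a shared systemic waveform; it is not the NIRS-SPM implementation, and all signal parameters are invented for the demonstration.

```python
import numpy as np

def pca_global_filter(signals, n_remove=1):
    """Remove the leading spatial principal component(s) from a
    time x channels array, suppressing globally shared (systemic) activity."""
    centered = signals - signals.mean(axis=0)
    # SVD of time x channels; rows of vt are spatial components
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s_filtered = s.copy()
    s_filtered[:n_remove] = 0.0          # zero out the global component(s)
    return u @ np.diag(s_filtered) @ vt

rng = np.random.default_rng(1)
global_wave = np.sin(np.linspace(0, 20, 500))[:, None]   # shared systemic signal
local = 0.1 * rng.standard_normal((500, 8))              # channel-specific signal
cleaned = pca_global_filter(global_wave + local)
print(cleaned.shape)  # (500, 8)
```

After filtering, each channel's correlation with the shared waveform is strongly attenuated, which is the intent of removing global components before GLM analysis.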
Publication 2023
Beer Blood Pressure Brain Emotions EPOCH protocol Facial Muscles Hemodynamics Light Movement Spectroscopy, Near-Infrared Vision
This study included 11 motions. Figure 1 shows the relation between the different motions and the techniques we used. Different methods were chosen for different motions based on their characteristics. OpenFace (Baltrušaitis et al., 2016 ) was used for extracting facial-expression (FE) features. The facial action coding system defines the correspondence between facial emotions and facial muscles and divides facial expressions into 46 action units (AUs). OpenPose (Cao et al., 2021 (link)) was used for pose estimation and can provide estimates of the positions of 25 2D points of the human body and 21 2D points of the hand. OpenPose was used for the FT, HM, TT, LA, AFC, GAIT, and POS motions. Since OpenPose's joint estimates could be inaccurate for motions that require patients to straighten their arms (PSOH, PTOH, and KTOH), we used HRNet (Wang et al., 2020 (link)) instead of OpenPose to increase the accuracy of joint estimation for these motions.
In regard to the relationships between motions and dependent variables, features for the Rig-UE model were from seven motions related to the upper extremities, including FT, HM, PSOH, AFC, PTOH, KTOH, and GAIT. Features for the Rig-LE model were from four motions related to the lower extremities, including LA, AFC, and GAIT. In addition to the features used in the Rig-LE model, the PS model included one additional POS motion, i.e., the standing posture, which can represent the balance of the participant while remaining still. For the Rig-Neck model, we used all the motions.
Parkinson’s disease affects both the left and right limbs of patients. Especially in early-stage PD, one side can be significantly more severely affected, which means that directly using features from the left or right side alone can introduce inconsistency. To remove the impact of the more severely affected side, we calculated some parameters using both sides of the extremities to represent the overall condition, or calculated the difference between the two sides of the participant. For a specific feature measured on both sides, for example the release speed during FT, we calculated the mean, maximum, and minimum values across both hands and the absolute difference between the right hand and the left hand.
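The bilateral feature summary just described (mean, maximum, and minimum across both sides, plus the absolute left-right difference) can be sketched as below; the function name, the per-side sample arrays, and the use of per-side means for the difference are illustrative assumptions.

```python
import numpy as np

def bilateral_features(left, right):
    """Summarize a feature measured on both sides (e.g. finger-tapping
    release speed): overall statistics plus the left-right asymmetry,
    to sidestep the more-affected-side inconsistency."""
    both = np.concatenate([left, right])
    return {
        "mean": both.mean(),
        "max": both.max(),
        "min": both.min(),
        "abs_diff": abs(right.mean() - left.mean()),
    }

# Hypothetical release speeds for two taps per hand
feats = bilateral_features(np.array([1.0, 1.2]), np.array([0.6, 0.8]))
print(feats)
```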
Publication 2023
Arm, Upper Emotions Face Facial Muscles Human Body Joints Lower Extremity Neck Patients Upper Extremity
We used multilevel models to assess the effect of facial expression activation on pain categorization and pain intensity estimates and whether target race, gender, group membership, or similarity had an impact on these effects. We evaluated maximal models (Barr et al., 2013 ) using both the lmer and glmer functions in the lme4 package in R (Bates et al., 2015 (link)). Although our similarity measure allowed for more nuance compared to group membership (i.e., a perceiver can have varying levels of similarity towards targets that may be equivalent by group membership), our factors for similarity and group membership had high multicollinearity; therefore, we evaluated each factor in separate models.
We used logistic multilevel models in each study to assess the odds that a trial was rated as painful or not-painful based on expression intensity (i.e., do greater facial expression activations increase the odds of rating a trial as painful?) and whether any of our sociocultural measures impacted this relation. If a participant did not have at least three trials in which they observed pain and three trials in which they observed no pain, they were excluded from the multilevel logistic models (final nStudy1 = 87, nStudy2 = 160, nStudy3 = 257, nStudy4 = 226). We also used multilevel linear models to assess the effect of facial muscle movement (“ExpressionActivation”) on intensity ratings for trials rated as painful in each study and to evaluate whether sociocultural measures impacted this relationship. For each multilevel model, we verified homoscedasticity of our residuals. We also report the full model formulas in the Supplementary Methods, and model betas and significances for each of our predictors in the Supplementary Tables.
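The exclusion rule (at least three pain and three no-pain trials per participant) can be expressed as a small data filter. The sketch below uses pandas with hypothetical column names (`participant`, `rated_painful`); the original analysis was run in R with lme4, so this is only an illustration of the rule, not the authors' code.

```python
import pandas as pd

def eligible_participants(df, min_per_class=3):
    """Keep participants with at least `min_per_class` painful AND
    `min_per_class` not-painful trials."""
    counts = df.groupby(["participant", "rated_painful"]).size().unstack(fill_value=0)
    ok = counts[(counts.get(True, 0) >= min_per_class)
                & (counts.get(False, 0) >= min_per_class)].index
    return df[df["participant"].isin(ok)]

# Hypothetical trials: A has 3 pain / 3 no-pain, B has 5 pain / 1 no-pain
trials = pd.DataFrame({
    "participant": ["A"] * 6 + ["B"] * 6,
    "rated_painful": [True] * 3 + [False] * 3 + [True] * 5 + [False] * 1,
})
kept = eligible_participants(trials)
print(sorted(kept["participant"].unique()))  # ['A']
```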
Publication 2023
Diet, Formula Facial Muscles Gender Movement Pain Severity, Pain
The Sunnybrook facial grading system (SFGS) was developed to measure recovery from Bell’s palsy. It consists of three sections: resting symmetry, degree of voluntary excursion of the facial muscles, and degree of synkinesis. The following five facial expressions were evaluated on a point scale: eyebrow raise, eye closure, open-mouth smile, lip pucker, and snarl/show teeth. A cumulative composite score was generated, with a maximum score of 100 corresponding to full facial function with no synkinesis[11 (link)].
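The composite combines the three sections as voluntary movement minus resting symmetry minus synkinesis. The item scales in the sketch below are taken from the published SFGS form and are an assumption here, as the excerpt does not spell them out: three resting-symmetry items weighted ×5, five voluntary expressions scored 1–5 and weighted ×4, and five synkinesis ratings scored 0–3.

```python
def sunnybrook_composite(resting_items, voluntary_items, synkinesis_items):
    """Sunnybrook composite = voluntary movement - resting symmetry - synkinesis.
    Item scales are assumptions from the published SFGS form, not from the
    excerpt above: resting items (eye, cheek, mouth) weighted x5 (0-20 total),
    five voluntary expressions scored 1-5 weighted x4 (20-100 total), and five
    synkinesis ratings scored 0-3 (0-15 total)."""
    resting = 5 * sum(resting_items)
    voluntary = 4 * sum(voluntary_items)
    synkinesis = sum(synkinesis_items)
    return voluntary - resting - synkinesis

# Perfect symmetry, full voluntary movement, no synkinesis -> maximum of 100
print(sunnybrook_composite([0, 0, 0], [5, 5, 5, 5, 5], [0, 0, 0, 0, 0]))  # 100
# A partial-recovery example (hypothetical ratings)
print(sunnybrook_composite([1, 1, 0], [4, 3, 4, 3, 4], [1, 0, 1, 0, 0]))  # 60
```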
Publication 2023
Bell Palsy Eyebrows Face Facial Muscles Oral Cavity Synkinesis Tooth
The Neuropack S1 MEB9004 EMG (Nihon Kohden, Japan) was used to measure the electroneurography (ENoG) response by assessing the facial-muscle-specific evoked compound muscle action potential (CMAP), which was recorded with a bipolar pair of surface electrodes on the muscle. Nerve damage or degeneration of nerve fibers was indicated by a reduced CMAP. The CMAP amplitude of the damaged side was compared to that of the non-affected side and a percentage score was assigned (amplitude of the affected side divided by the amplitude of the non-affected side)[12 (link)]. ENoG was performed with a bipolar surface stimulator placed over the stylomastoid foramen, with recordings from surface electrodes over the frontalis and orbicularis oris muscles; the CMAP was obtained from both muscles to measure the amplitude degeneration ratio, and the degeneration index was calculated using the following equation: [100 − (ENoG amplitude affected/unaffected side) × 100][9 (link),13 ].
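The percentage score and degeneration index above are simple ratios of CMAP amplitudes; a minimal sketch, with hypothetical amplitude values:

```python
def enog_percentage(affected_amp, unaffected_amp):
    """CMAP amplitude of the affected side as a percentage of the healthy side."""
    return 100.0 * affected_amp / unaffected_amp

def degeneration_index(affected_amp, unaffected_amp):
    """Degeneration index per the protocol: 100 - (affected/unaffected) x 100."""
    return 100.0 - enog_percentage(affected_amp, unaffected_amp)

# Hypothetical amplitudes: affected side 0.6 mV, healthy side 2.4 mV
print(enog_percentage(0.6, 2.4))     # 25.0
print(degeneration_index(0.6, 2.4))  # 75.0
```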
Publication 2023
Action Potentials Facial Muscles Fibrosis Muscle Tissue Nerve Degeneration Nervousness Study, Nerve Conduction Stylomastoid Foramen

Top products related to «Facial Muscles»

Sourced in United States
The GoPro HERO4 is a compact, high-performance action camera capable of capturing video at up to 4K resolution and 30 frames per second. It features advanced image stabilization, professional-grade audio, and support for a wide range of accessories and mounts.
Sourced in United States
The MP150 system is a data acquisition and analysis platform designed for life science research. It provides the core functionality to record, display, and analyze physiological signals. The system is capable of interfacing with a wide range of transducers and sensors to capture various biological measures, including but not limited to cardiovascular, respiratory, and neurological activity.
Sourced in United States, United Kingdom, Germany, Canada, Japan, Sweden, Austria, Morocco, Switzerland, Australia, Belgium, Italy, Netherlands, China, France, Denmark, Norway, Hungary, Malaysia, Israel, Finland, Spain
MATLAB is a high-performance programming language and numerical computing environment used for scientific and engineering calculations, data analysis, and visualization. It provides a comprehensive set of tools for solving complex mathematical and computational problems.
Sourced in United States
The 3.5 MHz convex-array probe is a medical imaging device designed for use with ultrasound systems. It is a transducer that converts electrical energy into high-frequency sound waves and vice versa, enabling the capture of images of internal body structures. The probe operates at a frequency of 3.5 MHz and utilizes a convex-shaped array of piezoelectric elements to generate and receive the ultrasound signals.
Sourced in United States
The 7.5 MHz linear probe is a medical imaging device used in ultrasound examinations. It operates at a frequency of 7.5 MHz and is designed to produce high-resolution images of superficial structures, such as the skin, muscles, and blood vessels. The probe's linear array transducer technology allows for the acquisition of detailed, two-dimensional images. This device is a core component of ultrasound imaging systems and is commonly used in various medical specialties.
Sourced in United States
The Brain Vision actiChamp is a high-performance, multi-channel amplifier designed for electroencephalography (EEG) research and clinical applications. The actiChamp provides reliable and accurate data acquisition, with up to 256 channels and a sampling rate of up to 100 kHz per channel. It features low noise, high common-mode rejection, and advanced digital filtering capabilities to ensure high-quality signal recording.
PyCorder is a software application that serves as a data acquisition and recording tool. It is designed to interface with various laboratory equipment and enable the capture and management of experimental data. The software provides core functionalities for data acquisition, storage, and basic manipulation, without making claims about its intended use or interpretation of results.
Sourced in United States, Canada, United Kingdom, Germany
The MP150 is a data acquisition system designed for recording physiological signals. It offers high-resolution data capture and features multiple input channels to accommodate a variety of sensor types. The MP150 is capable of acquiring and analyzing data from various biological and physical measurements.
Sourced in Netherlands, United States
The ActiveTwo is a high-performance, modular biosignal acquisition system designed for medical research and clinical applications. It provides a versatile and flexible platform for recording a wide range of biopotential signals, including EEG, EMG, ECG, and more. The system features a modular design, allowing users to customize the configuration to meet their specific needs.
Sourced in Germany
The ActiCHamp is a high-performance multichannel amplifier designed for electrophysiological research. It features a modular and scalable architecture, allowing for flexible configuration to meet the needs of various research applications. The ActiCHamp provides reliable data acquisition and supports multiple input channels for recording brain activity.

More about "Facial Muscles"