
Multimodal Imaging

Multimodal Imaging is a comprehensive approach that combines multiple imaging techniques, such as magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT), to provide a more complete and accurate understanding of biological structures and processes.
This integrated approach lets researchers and clinicians leverage the complementary strengths of different imaging modalities, resulting in enhanced diagnostic accuracy, improved treatment planning, and deeper insight into complex physiological and pathological phenomena.
Combining data from several imaging techniques yields a more holistic and informative representation of the subject of interest, supporting better decision-making and more effective patient care.
Multimodal imaging is widely used in fields such as neuroscience, oncology, and cardiovascular medicine, and continues to evolve with advances in imaging technologies and data analysis techniques.

Most cited protocols related to «Multimodal Imaging»

Once the parcellation has been created, parcellated representations of data from each modality can be generated using either the group parcellation or the individual subject parcellations. For the statistical cross-validation, we created parcellated myelin, cortical thickness, task fMRI, and resting state functional connectivity datasets using the semi-automated multimodal group parcellation (see Supplementary Methods 7.1). For myelin and cortical thickness, we simply averaged the values of the dense individual subject maps within each area. For task fMRI, we averaged the time series within each area prior to computing task statistics (to benefit from the CNR improvements of parcellation demonstrated in Fig. 4e). For the same reason, we averaged resting state time series within each parcel prior to computing functional connectivity to form a parcellated functional connectome.
For each pair of areas that shared a border in the parcellation, we computed a paired-samples two-tailed t-test across subjects on these parcellated data for each feature (ignoring tests that involved the diagonal in the resting state parcellated functional connectome). We thresholded these tests at the Bonferroni-corrected significance level of P < 9 × 10⁻⁸ (0.05 divided by the number of area pairs across both hemispheres (1,050) × the number of features (266) × the number of tails (2)) and at an effect size threshold of Cohen’s d > 1. We grouped the features into 4 independent categories (cortical thickness, myelin, task fMRI, and resting state fMRI) to determine for each area pair whether it showed robust and statistically significant differences across multiple modalities. For more details, see Supplementary Methods 7.2.
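The per-pair test described above can be sketched as follows, assuming SciPy and hypothetical array inputs (one value per subject for the same feature in each of the two bordering areas); this is an illustrative helper, not the authors' code:

```python
import numpy as np
from scipy import stats

def area_pair_significant(a, b, n_tests=1_050 * 266 * 2, alpha=0.05):
    """Paired two-tailed t-test across subjects for one feature in one
    bordering area pair, thresholded at the Bonferroni-corrected level
    (alpha / number of tests) and at a paired Cohen's d > 1."""
    t, p = stats.ttest_rel(a, b)            # paired, two-tailed by default
    diff = np.asarray(a) - np.asarray(b)
    d = diff.mean() / diff.std(ddof=1)      # Cohen's d for paired samples
    return bool(p < alpha / n_tests and abs(d) > 1)
```

With the counts given in the text, the corrected threshold works out to 0.05 / 558,600 ≈ 9 × 10⁻⁸, matching the stated significance level.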
Publication 2016
Connectome Cortex, Cerebral fMRI Microtubule-Associated Proteins Multimodal Imaging Myelin Sheath Tail
The WNN procedure begins by first applying standard analytical workflows to each modality independently and constructing KNN graphs for each one. In this manuscript we analyze data falling into three categories: measurements of single-cell gene expression, single-cell surface protein expression, and single-cell chromatin accessibility (ATAC-seq). For most analyses in this manuscript, we use a default value of k = 20, which is also the default value of k in the standard Seurat clustering workflow. For the analysis of the multimodal PBMC atlas, due to the substantial size of the dataset, we used a value of k = 30. In Figure S2, we show that we obtain very similar results from the WNN procedure when varying k across a series of values ranging from 10 to 50.
For clarity, we outline the analytical workflow for each data type below:

Single-cell gene expression: We analyze scRNA-seq data using standard pipelines in Seurat which include normalization, feature selection, and dimensional reduction with PCA. We then construct a KNN graph after dimensional reduction.

We emphasize that WNN analysis can leverage any scRNA-seq preprocessing workflow that generates a KNN graph. For example, users can preprocess their scRNA-seq data with a variety of normalization tools including log-normalization, scran (Lun et al., 2016) or SCTransform (Hafemeister and Satija, 2019), and can utilize alternative dimensional reduction procedures such as factor analysis or variational autoencoders. In this manuscript, we use workflows that are available in the Seurat package, and detail exact settings for each analysis later in this document.
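As a rough sketch of this branch (log-normalization, PCA, then a KNN graph), assuming a dense cells × genes count matrix; Seurat's actual workflow adds feature selection, scaling, and fast approximate neighbor search:

```python
import numpy as np

def rna_knn_graph(counts, n_pcs=50, k=20):
    """Minimal stand-in for the scRNA-seq branch of WNN: log-normalize,
    reduce with PCA (via SVD), then take each cell's k nearest neighbors
    in PC space. Rows are cells, columns are genes."""
    lib = counts.sum(axis=1, keepdims=True)
    x = np.log1p(counts / lib * 1e4)            # normalize to 1e4 counts/cell
    x = x - x.mean(axis=0)                      # center genes before PCA
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    pcs = u[:, :n_pcs] * s[:n_pcs]              # cells x components
    d2 = ((pcs[:, None, :] - pcs[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                # exclude self from neighbors
    return np.argsort(d2, axis=1)[:, :k]        # k nearest neighbors per cell
```

The brute-force distance matrix is only practical for small examples; real datasets use tree- or graph-based approximate neighbor search, but the resulting KNN graph plays the same role in the WNN procedure.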

Single-cell cell surface protein level expression: We analyze single-cell protein data (representing the quantification of antibody-derived tags (ADTs) in CITE-seq or ASAP-seq data) using a similar workflow to scRNA-seq. We normalize protein expression levels within a cell using the centered-log ratio (CLR) transform, followed by dimensional reduction with PCA, and subsequently construct a KNN graph. Unless otherwise specified, we do not perform feature selection on protein data, and use all measured proteins during dimensional reduction.
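The CLR step might be sketched as follows, using the textbook definition with a pseudocount of 1; Seurat's implementation differs in minor details:

```python
import numpy as np

def clr_normalize(adt):
    """Centered log-ratio transform applied per cell (rows are cells,
    columns are proteins): log1p of the counts, then centering within
    each cell, which is equivalent to dividing by the geometric mean
    of the pseudocounted values before taking the log."""
    logx = np.log1p(adt)
    return logx - logx.mean(axis=1, keepdims=True)  # center within each cell
```

Because the transform is computed within each cell, it removes cell-to-cell differences in overall antibody capture while preserving the relative ordering of proteins within a cell.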

Single-cell chromatin accessibility: We analyze single-cell ATAC-seq data using our previously described workflow (Stuart et al., 2019), as implemented in the Signac package. We reduced the dimensionality of the scATAC-seq data by performing latent semantic indexing (LSI) on the scATAC-seq peak matrix, as suggested by Cusanovich et al. (2018). We first computed the term frequency-inverse document frequency (TF-IDF) of the peak matrix by dividing the accessibility of each peak in each cell by the total accessibility in the cell (the “term frequency”), and multiplied this by the inverse accessibility of the peak in the cell population. This step ‘upweights’ the contribution of highly variable peaks and down-weights peaks that are accessible in all cells. We then multiplied these values by 10,000 and log-transformed this TF-IDF matrix, adding a pseudocount of 1 to avoid computing the log of 0. We decomposed the TF-IDF matrix via SVD to return LSI components, and scaled LSI loadings for each LSI component to mean 0 and standard deviation 1.
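The TF-IDF/LSI steps above can be sketched as follows (rows are cells, columns are peaks; the exact IDF definition varies between implementations, and the one used here — cells divided by the number of cells in which the peak is accessible — is one common choice):

```python
import numpy as np

def lsi(peak_matrix, n_components=50):
    """TF-IDF followed by SVD, mirroring the described LSI procedure:
    term frequency = per-cell peak counts over the per-cell total,
    IDF = inverse fraction of cells with the peak accessible, then
    log(1 + 1e4 * TF*IDF), SVD, and z-scoring of each LSI component."""
    tf = peak_matrix / peak_matrix.sum(axis=1, keepdims=True)
    idf = peak_matrix.shape[0] / (peak_matrix > 0).sum(axis=0)
    x = np.log1p(tf * idf * 1e4)               # scale by 10,000, pseudocount 1
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    comps = u[:, :n_components] * s[:n_components]
    # scale each LSI component to mean 0 and standard deviation 1
    return (comps - comps.mean(axis=0)) / comps.std(axis=0)
```

In practice the peak matrix is sparse and the SVD is truncated (e.g. via irlba in Signac), but the transformation is the same.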

As described for scRNA-seq analysis, while we use Seurat and Signac functions in this manuscript, any analytical workflow that computes a KNN graph for surface protein or chromatin accessibility data can also be used in the first step of WNN analysis.
Publication 2021
ATAC-Seq Cell Membrane Proteins Cells Chromatin Gene Expression Immunoglobulins Membrane Proteins Multimodal Imaging Protein Domain Proteins Single-Cell RNA-Seq Staphylococcal Protein A
Spatial image preprocessing (distortion correction and image alignment) was carried out using the HCP’s spatial minimal preprocessing pipelines [5]. This included steps to maximize alignment across image modalities, to minimize distortions relative to the subject’s anatomical space, and to minimize spatial smoothing (blurring) of the data. The data were projected into the 2 mm standard CIFTI grayordinates space, which includes cortical grey matter surface vertices and subcortical grey matter voxels [5]. This offers substantial improvements in spatial localization over traditional volume-based analyses, enabling more accurate cross-subject and cross-study registrations and avoiding smoothing that mixes signals across differing tissue types or between nearby cortical folds. Additionally, we applied only minimal smoothing within the CIFTI grayordinates space, to avoid mixing across areal borders prior to parcellation.
For cross-subject registration of the cerebral cortex, we used a two-stage process based on the multimodal surface matching (MSM) algorithm [14] (see Supplementary Methods 2.1–2.5). An initial ‘gentle’ stage, constrained only by cortical folding patterns (FreeSurfer’s ‘sulc’ measure), was used to obtain approximate geographic alignment without overfitting the registration to folding patterns, which are not strongly correlated with cortical areas in many regions. Previously, we found that more aggressive folding-based registration (either MSM-based or FreeSurfer-based) slightly decreased cross-subject task-fMRI statistics, suggesting that aligning cortical folds too tightly actually reduces alignment of cortical areas [14]. A second, more aggressive stage used cortical areal features to bring areas into better alignment across subjects while avoiding neurobiologically implausible distortions or overfitting to noise in the data. The areal features used were myelin maps, resting state network maps computed with weighted regression (an improvement over dual regression [34] described in the Supplementary Methods 2.3), and resting state visuotopic maps (see Supplementary Methods 4.4). Areal distortion was measured by taking the log base-2 of the ratio of the registered spherical surface tile areas to the original spherical surface tile areas. The mean (across space) of the absolute value of the areal distortion averaged across subjects from both registration stages was 30% less than the standard FreeSurfer folding-based registration, and the maximum (across space) of this measure was 54% less. Despite less overall distortion, the areal-feature-based registration delivers substantially more accurate registration of cortical areas than does FreeSurfer folding-based registration, as judged by cross-subject task fMRI statistics, an areal feature that was not used to drive the registration [14].
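The distortion measure described above (log base-2 of the ratio of registered to original tile areas, summarized as the mean absolute value across space) can be sketched as follows; the function name and array layout are illustrative:

```python
import numpy as np

def areal_distortion(orig_areas, reg_areas):
    """Per-tile areal distortion: log2 of registered spherical tile area
    over original tile area. Returns the per-tile values and the mean
    absolute distortion, the summary statistic quoted in the text."""
    dist = np.log2(np.asarray(reg_areas) / np.asarray(orig_areas))
    return dist, np.abs(dist).mean()
```

A value of +1 means a tile doubled in area under registration, −1 means it halved, and 0 means no areal change, which makes the mean absolute value a natural scale-symmetric summary of overall distortion.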
Because MSM registration preserves topology and is relatively gentle (it does not tear or distort the cortical surface in neurobiologically implausible ways), it is unable to align some cortical areas in some subjects where the areal arrangement differs from the group average (see Supplementary Results and Discussion 1.3–1.4 for more details on atypical areas). Group average registration drift away from the gentle folding-based geographic alignment was removed from the surface registration [35] (see Supplementary Methods 2.5) to enable comparisons of this dataset with datasets registered using different areal features (for example, post-mortem cytoarchitecture). Group average registration drift is any consistent effect of the registration during template generation on the mean size, shape, or position of areas on the sphere (as opposed to the desired reductions in cross-subject variation). An obvious example is the 37% increase in average brain volume produced by registration to MNI space [4]. Uncorrected drifts during surface template generation can cause apparent changes in cortical areal size, shape, and position when comparing across studies.
Resting state fMRI data were denoised for spatially specific temporal artefacts (for example, subject movement, cardiac pulsation, and scanner artefacts) using the ICA+FIX approach, which includes detrending the data and aggressively regressing out 24 movement parameters [36, 37]. We avoided regressing out the ‘global signal’ (mean grey-matter time course) from our data because preliminary analyses showed that this step shifted putative connectivity-based areal boundaries so that they lined up less well with other modalities, likely because of the strong areal specificity of the residual global signal after ICA+FIX clean up. Task fMRI data were temporally filtered using a high pass filter. More details on resting state and task fMRI temporal preprocessing are described in the Supplementary Methods 1.6–1.8. Substantial spatial smoothing was avoided for both datasets, and all images were intensity normalized to account for the receive coil sensitivity field. Artefact maps of large vein effects, fMRI gradient echo signal loss, and surface curvature were computed as described in Supplementary Methods 1.9.
Publication 2016
Autopsy Brain Cortex, Cerebral ECHO protocol fMRI Gray Matter Heart Histocompatibility Testing Hypersensitivity Microtubule-Associated Proteins Movement Multimodal Imaging Myelin Sheath Tears Veins
Data from 40 healthy, unrelated adults (age 22–35; 17 males) were obtained from the Q3 data release of the Human Connectome Project (HCP) database. The multimodal MRI data consisted of structural MRI, resting-state functional MRI (rfMRI), and diffusion MRI (dMRI), collected on a 3 T Skyra scanner (Siemens, Erlangen, Germany) using a 32-channel head coil. Because subjects 209733 and 528446 displayed structural brain abnormalities, they were replaced by 2 other subjects, 100408 and 106016, from the unrelated 80 subjects’ group. All scanning parameters are detailed and motivated in Van Essen et al. (2013) and are also provided in the supplement. Multimodal MRI data from the database were downloaded in preprocessed form, that is, after the images had undergone the minimal preprocessing pipeline (v. 3.2). The details of this pipeline have been described previously (Jenkinson et al. 2002, 2012; Glasser et al. 2013; Smith et al. 2013) and are only summarized in the supplement for completeness.
In addition, an independent group of healthy subjects was included for the repeatability validation. This dataset comprised 40 right-handed participants (20 males; age range 17–20 years; mean age 19.10 ± 0.80 years, mean ± SD). The multimodal MRI data of these 40 healthy adults were acquired using a 3.0 T GE MR scanner (see Zhuo et al. (2016) for a full description of the data sample and acquisition parameters).
Publication 2016
Adult Brain Congenital Abnormality Dietary Supplements Diffusion Magnetic Resonance Imaging fMRI Head Healthy Volunteers Males Multimodal Imaging

Most recent protocols related to «Multimodal Imaging»

To better fuse multimodal features, the feature extraction module expresses the data from each modality as a low-dimensional semantic vector, and a semantic similarity model is trained so that the different modalities are constrained to a unified representation space for multimodal fusion. Here we designed a channel attention mechanism for multimodal feature fusion. Specifically, consider the image of the m-th modality, where m ∈ {1, 2, 3, 4}. The output features Fm of the feature extraction module are globally pooled over the spatial dimensions to obtain a channel descriptor of size C × 1 × 1 × 1, where C is the number of channels of a single modal feature. A sigmoid activation function is then applied to obtain the weighting coefficients. Finally, the weight coefficients are multiplied with the corresponding input features Fm to obtain the new weighted features. The calculation of the weighted features is shown in the following equation:

F′m = σ(wm · pool(Fm)) ⊗ Fm
where σ represents the sigmoid function, and wm represents the parameter matrix learned during training. The features of the different modalities are concatenated after the maximum pooling layer. Finally, a fully connected (FC) layer over the channel dimension feeds the classifier to obtain the classification result.
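A minimal sketch of the channel-attention step described above, assuming the weight matrix wm acts as a C × C linear map on the pooled descriptor (the text does not specify its exact shape, so this layout is an assumption):

```python
import numpy as np

def channel_attention(f_m, w_m):
    """Channel attention for one modality, as described: global average
    pooling over the spatial dimensions gives a C-vector descriptor,
    a sigmoid of the linear map w_m gives per-channel weights, and the
    weights rescale the input features channel-wise."""
    c = f_m.shape[0]
    desc = f_m.reshape(c, -1).mean(axis=1)          # global pooling -> (C,)
    weights = 1.0 / (1.0 + np.exp(-(w_m @ desc)))   # sigmoid activation
    return f_m * weights.reshape(c, 1, 1, 1)        # reweight each channel
```

In a real model this would be a trainable layer in a deep-learning framework; the sketch only shows the data flow of pool → sigmoid → channel-wise multiply.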
Publication 2023
Attention Cloning Vectors Multimodal Imaging Semantic Differential Sigmoid Colon
In this paper, a multimodal fusion model is designed; the overall flowchart is shown in Figure 1. The model consists of a feature extraction encoder and a classification head for multimodal fusion. Weights are shared among the unimodal encoders, and features are extracted from each unimodal input by its encoder; because all spatial locations share the same convolution kernels, the number of parameters required for convolution is greatly reduced. The feature extraction encoder consists of four residual blocks, and the specific flow is shown in Figure 2. The CNN model constructed in this paper mainly consists of convolutional layers, a maximum pooling layer, and a Dropout layer, with a fully connected layer as the output layer.
Publication 2023
Head Multimodal Imaging Postoperative Residual Curarization
To better assess the bimodality of the null distributions, for each link we drew 1,000 GC- and log(ATAC-seq peak sum of counts + 1)-matched trans-peaks using Signac’s function MatchRegionStats() (instead of the default 200), computed their Pearson R, and scaled them. To establish whether the resulting null distributions were multimodal, we used the mixtools package expectation–maximization function normalmixEM() with k = 2 and epsilon = 1e−03, as described in Ameijeiras-Alonso et al., 2019 [22]. Null distributions with nominal p-values < 0.05 were categorised as multimodal.
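normalmixEM() belongs to the R package mixtools; as a language-neutral illustration of what a two-component EM fit does to such a distribution, here is a minimal Gaussian-mixture EM in Python (a sketch, not the mixtools algorithm, and not a substitute for the formal multimodality test cited above):

```python
import numpy as np

def two_component_em(x, n_iter=200, eps=1e-3):
    """Minimal EM fit of a two-component Gaussian mixture (k = 2) with an
    epsilon log-likelihood tolerance; returns component means and mixing
    weights. Initial means are spread to the data extremes."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    prev = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        num = w * pdf
        ll = np.log(num.sum(axis=1)).sum()
        r = num / num.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        if ll - prev < eps:      # stop once the log-likelihood plateaus
            break
        prev = ll
    return mu, w
```

For a genuinely bimodal null distribution the fitted means separate into the two modes, which is the behavior the multimodality categorisation relies on.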
Publication 2023
Multimodal Imaging
By application of our IntelliCage-based behavioral phenotyping design, we assessed multiple facets of cognition as well as sucrose preference, as a measure of anhedonia, in female mice only, at age 6 months, as described (Dere et al., 2014). One day after subcutaneous implantation of transponders (see above), mice were group-housed, separated by genotype, and remained in the IntelliCages for 6 days. Place learning was acquired within the first 24 hr (day 1), in which an individual mouse learned that only one out of four corners was rewarded with water, while the other corners remained blocked. The number of mice assigned to each corner was balanced and semi-randomly determined. On day 2, reversal learning was assessed to measure cognitive flexibility as well as perseveration. Mice had to learn that the previously rewarded corner was now blocked and that the diametrically opposed corner was now rewarded instead. Sucrose preference was assessed on day 3 by comparing preference for a corner rewarded with a 2% sucrose solution over another corner rewarded with tap water. During these 24 hr, the two previously blocked corners were now rewarded with either sucrose solution or tap water, whereas the remaining corners were blocked. The visits to the respective target corners during place and reversal learning as well as sucrose preference testing were used for statistical analysis. On days 4–5, mice again had access to two rewarding corners providing either a sucrose solution or tap water. However, these corners were now the corners diametrically opposed to those of day 3, and access was provided only for a limited time, namely during the first 2 hr of the active phase of the mice (6–8 PM).
Hence, mice were required to form a multimodal association containing information on the type of reward provided (what), its location (where), and the time at which to expect it (when), rendering this approach an experimental model for the assessment of episodic-like memory comparable to that in humans. Visits to rewarded corners during acquisition (day 4) and retrieval (day 5) of episodic-like memory were recorded, and the delta between visits to the corner providing sucrose solution and the corner providing tap water was calculated.
Publication 2023
Anhedonia Cognition Females Genotype Homo sapiens Memory Memory, Episodic Mice, House Mice, hr Multimodal Imaging Ovum Implantation Sucrose
In the Yuan et al study,[29] patients received standardized general anesthesia and a basic analgesic protocol. Intraoperatively, general anesthesia was induced with sufentanil 0.5 μg/kg, midazolam 0.04 mg/kg, propofol 1 to 2 mg/kg, and cisatracurium 2 μg/kg intravenously, followed by continuous intravenous infusion of remifentanil 0.1 to 0.3 μg/(kg·min) and propofol 2 to 5 mg/(kg·hr), plus inhalation of sevoflurane to maintain anesthesia. From postoperative day 1, oral celecoxib was restarted and continued until 3 weeks postoperatively, when the patients returned to the hospital to have the stitches removed. In the Yadeau et al 2016 study,[6] patients received a standardized anesthetic and multimodal analgesic protocol. In the Yadeau et al 2022 study,[28] patients received a standard intraoperative and postoperative multimodal anesthetic protocol: a spinal-epidural (subarachnoid mepivacaine, 45–60 mg) and an adductor canal block (ultrasound-guided; 15 cc bupivacaine 0.25% with 2 mg preservative-free dexamethasone). For postoperative pain management, patients were scheduled to receive the study medication once daily for 14 days; 4 doses of 1000 mg IV acetaminophen every 6 hours followed by 1000 mg oral acetaminophen every 8 hours; 4 doses of 15 mg IV ketorolac followed by 15 mg meloxicam every 24 hours; and 5 to 10 mg oral oxycodone as needed for pain. Pain medications could be adjusted as indicated. In the Koh et al study,[12] all patients had a postoperative intravenous patient-controlled analgesia (PCA) pump that administered 1 mL of a 100-mL mixture containing 2000 µg of fentanyl on demand. In the Kim et al study,[27] all patients received intravenous PCA delivering 1 mL of a 100 mL solution containing 2000 µg of fentanyl postoperatively.
In the Ho et al study,[26] patients were routinely offered single-shot spinal anesthesia consisting of an intrathecal dose of bupivacaine 10 to 12.5 mg with fentanyl 10 μg. After surgery, pain treatment consisted of PCA with intravenous injection of morphine: 1 mg bolus, a 5-minute lockout, and a maximum hourly limit of 8 mg. All patients were also given acetaminophen 1 g every 6 hours.
Publication 2023
Acetaminophen Analgesics Anesthesia Anesthesia, Intravenous Anesthetics Bupivacaine Cardiac Arrest Celecoxib cisatracurium Dexamethasone Fentanyl General Anesthesia Inhalation Intravenous Infusion Ketorolac Management, Pain Meloxicam Mepivacaine Midazolam Morphine Multimodal Imaging Obstetric Delivery Operative Surgical Procedures Oxycodone Pain Pain, Postoperative Patients Pharmaceutical Preparations Pharmaceutical Preservatives Propofol Pulp Canals Remifentanil Sevoflurane Spinal Anesthesia Subarachnoid Space Sufentanil Ultrasonography

Top products related to «Multimodal Imaging»

Sourced in Germany
The Syngo Multimodality Workplace is a software solution that enables the integration and visualization of medical imaging data from various modalities, including CT, MRI, PET, and ultrasound. It provides a unified platform for healthcare professionals to access, analyze, and manage patient imaging information.
Sourced in United States
The Inveon PET/CT Multimodality System is a preclinical imaging platform that combines Positron Emission Tomography (PET) and Computed Tomography (CT) technologies. It is designed to enable high-resolution, quantitative, and multimodal imaging of small animals for research purposes.
Sourced in Germany, United States, United Kingdom
The Inveon Research Workplace is a comprehensive small-animal imaging platform designed for preclinical research. It integrates multiple imaging modalities, including PET, SPECT, CT, and optical imaging, into a single system. The Inveon Research Workplace allows researchers to acquire high-quality, co-registered images for a wide range of small-animal applications.
Sourced in United Kingdom, Germany, United States, France, Japan, China, Netherlands, Morocco, Spain, Cameroon
The Zetasizer Nano ZS is a dynamic light scattering (DLS) instrument designed to measure the size and zeta potential of particles and molecules in a sample. The instrument uses laser light to measure the Brownian motion of the particles, which is then used to calculate their size and zeta potential.
Sourced in Germany
The Multimodality Workplace is a versatile laboratory workstation designed for various applications. It offers a compact and integrated platform for handling and processing different sample types, enabling the user to perform various analytical tasks efficiently within a single workstation.
Sourced in Germany, United States, Japan, Netherlands, United Kingdom
The SOMATOM Definition Flash is a computed tomography (CT) scanner developed by Siemens. It is designed to provide high-quality imaging for a wide range of medical applications. The SOMATOM Definition Flash utilizes advanced technology to capture detailed images of the body, enabling medical professionals to make accurate diagnoses and inform treatment decisions.
Sourced in United States, United Kingdom, Germany, Canada, Japan, Sweden, Austria, Morocco, Switzerland, Australia, Belgium, Italy, Netherlands, China, France, Denmark, Norway, Hungary, Malaysia, Israel, Finland, Spain
MATLAB is a high-performance programming language and numerical computing environment used for scientific and engineering calculations, data analysis, and visualization. It provides a comprehensive set of tools for solving complex mathematical and computational problems.
Sourced in United States
The Synergy Model H1M Multimodal Plate Reader is a versatile laboratory instrument designed for various detection modes. It is capable of absorbance, fluorescence, and luminescence measurements. The Synergy H1M provides a flexible platform for a wide range of applications in life science research and drug discovery.
Sourced in Germany, United States, United Kingdom, Japan, Switzerland, Ireland
The Spectralis is an optical coherence tomography (OCT) imaging device developed by Heidelberg Engineering. It captures high-resolution, cross-sectional images of the retina and optic nerve using near-infrared light. The Spectralis provides detailed structural information about the eye, which can aid in the diagnosis and management of various eye conditions.
Sourced in United States, Germany, Switzerland
The Inveon Research Workplace software is a comprehensive platform designed for the management and analysis of data acquired from Siemens' Inveon preclinical imaging systems. The software provides a unified workflow for seamless data acquisition, processing, and visualization. It offers tools for image reconstruction, quantification, and reporting to support research activities.

More about "Multimodal Imaging"

Multimodal Imaging, Comprehensive Imaging Approach, Integrated Imaging Techniques, MRI, PET, CT, Diagnostic Accuracy, Treatment Planning, Physiological and Pathological Phenomena, Neuroscience, Oncology, Cardiovascular Medicine, PubCompare.ai, AI-driven Protocol Comparisons, Imaging Optimization, Syngo Multimodality Workplace, Inveon PET/CT Multimodality System, Inveon Research Workplace, Zetasizer Nano ZS, Multimodality Workplace, SOMATOM Definition Flash, MATLAB, Synergy Model H1M Multimodal Plate Reader, Spectralis, Inveon Research Workplace software.