The cognitive tests are described in Appendix A along with three estimates of their reliabilities. All of the variables had moderately high reliability as evaluated by internal consistency (coefficient alpha), alternate forms (correlation with other test versions), and test-retest (longitudinal stability coefficient) procedures. More details about the tests and administration procedures, as well as results of confirmatory factor analyses indicating the pattern of relations of variables to ability constructs, have been reported in other publications (Salthouse, 2004, 2005, 2007; Salthouse, Pink, & Tucker-Drob, 2008; Salthouse & Tucker-Drob, 2008). The three tests representing each cognitive ability were Matrix Reasoning, Shipley Abstraction, and Letter Sets for reasoning; Spatial Relations, Paper Folding, and Form Boards for spatial visualization; Word Recall, Paired Associates, and Logical Memory for memory; and Digit Symbol, Pattern Comparison, and Letter Comparison for perceptual speed. Four tests were used to assess vocabulary: Vocabulary, Picture Vocabulary, Synonym Vocabulary, and Antonym Vocabulary.
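For readers who want to see how the internal-consistency estimate is computed, coefficient alpha follows directly from the item-level variances. The sketch below is a generic Python illustration, not the authors' code; the score matrix is hypothetical.

    import numpy as np

    def cronbach_alpha(items):
        """Coefficient alpha for an (n_respondents, k_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
        """
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical scores: 5 respondents x 4 items.
    scores = np.array([[3, 4, 3, 4],
                       [2, 2, 3, 2],
                       [4, 5, 4, 5],
                       [3, 3, 2, 3],
                       [5, 4, 5, 5]])
    print(round(cronbach_alpha(scores), 2))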
Physiology > Mental Process > Spatial Visualization

Spatial Visualization
Spatial Visualization refers to the cognitive ability to mentally manipulate, rotate, and transform spatial information.
This process involves the visual-spatial working memory and is crucial for tasks such as navigation, object recognition, and problem-solving.
Researchers in fields like cognitive psychology, neuroscience, and computer science study spatial visualization to understand its underlying mechanisms and applications.
Advances in artificial intelligence and machine learning have revolutionized spatial visualization research, enabling intelligent comparisons of protocols from literature, preprints, and patents.
Platforms like PubCompare.ai leverage these AI-driven insights to enhance spatial visualization research and provide cutting-edge tools tailored to the needs of researchers in this domain.
Experiencing the power of AI-driven spatial visualization has never been easier.
Most cited protocols related to «Spatial Visualization»
Cognition
Cognitive Testing
Fingers
Memory
Mental Recall
Spatial Visualization
Neuroimaging was completed as part of the Philadelphia Neurodevelopmental Cohort (46). All participants, or their parent or guardian, provided informed consent, and minors provided assent; study procedures were approved by the institutional review boards of both the University of Pennsylvania and the Children’s Hospital of Philadelphia. All participants included in this study were medically healthy, were not taking psychotropic medication at the time of study, and passed strict quality-assurance procedures for four imaging modalities including T1-weighted structural images, diffusion-weighted imaging, resting-state functional MRI (fMRI), and n-back fMRI. The final sample included 727 youths ages 8 to 23 y (420 females; mean = 15.9 y, SD = 3.2 y). From the original study sample, 147 typically developing youths returned for longitudinal neuroimaging assessments ∼1.7 y after baseline (83 females; 294 total scans). For further details regarding image preprocessing and brain network construction see SI Appendix, SI Methods.
To evaluate the relationship between structure–function coupling and previously characterized cortical hierarchies, evolutionary cortical areal expansion (3) and the principal gradient of intrinsic functional connectivity (2) were extracted from publicly available atlases. The significance of the spatial correspondence between brain maps was estimated using a conservative spatial permutation test, which generates a null distribution of randomly rotated brain maps that preserve the spatial covariance structure of the original data (23).
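The core of such a spatial permutation ("spin") test is easy to sketch: randomly rotate the region coordinates on the sphere, reassign map values by nearest neighbor, and recompute the correlation to build a null distribution. The version below is a simplified, hypothetical Python illustration; published implementations additionally handle the two hemispheres and the medial wall separately.

    import numpy as np
    from scipy.stats import pearsonr

    def spin_test(map_a, map_b, xyz, n_perm=1000, seed=0):
        """Spatial-permutation p value for the correlation of two cortical maps.

        map_a, map_b : (n_regions,) one value per region
        xyz          : (n_regions, 3) region centroids on a unit sphere
        """
        rng = np.random.default_rng(seed)
        observed = pearsonr(map_a, map_b)[0]
        null = np.empty(n_perm)
        for i in range(n_perm):
            # Draw a uniformly random 3-D rotation (QR of a Gaussian matrix).
            q, r = np.linalg.qr(rng.normal(size=(3, 3)))
            q *= np.sign(np.diag(r))       # make the factorization unique
            if np.linalg.det(q) < 0:       # keep proper rotations only
                q[:, 0] *= -1
            rotated = xyz @ q.T
            # Reassign each region the value at the nearest rotated centroid,
            # preserving the spatial covariance of the original map.
            nearest = np.linalg.norm(xyz[:, None] - rotated[None], axis=2).argmin(1)
            null[i] = pearsonr(map_a[nearest], map_b)[0]
        return observed, (np.abs(null) >= abs(observed)).mean()

    # Toy usage with random points on a sphere (hypothetical data):
    pts = np.random.default_rng(1).normal(size=(200, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    a = pts[:, 0]
    b = a + 0.5 * np.random.default_rng(2).normal(size=200)
    print(spin_test(a, b, pts, n_perm=200))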
We used penalized splines within a GAM to estimate linear and nonlinear age-related changes in structure–function coupling for each brain region. Importantly, the GAM estimates nonlinearities using restricted maximum likelihood, penalizing nonlinearity in order to avoid overfitting the data (47). To evaluate regional associations between structure–function coupling and executive function, executive performance was measured as a factor score summarizing accuracy across mental flexibility, attention, working memory, verbal reasoning, and spatial ability tasks administered as part of the Penn Computerized Neurocognitive Battery (SI Appendix, SI Methods).
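As a rough illustration of the GAM step, the sketch below fits a penalized spline for age with linear covariate terms using the pygam package on synthetic data. Note this is only an approximation of the approach described above: pygam selects the smoothing penalty by grid search, whereas the analysis here penalizes nonlinearity via restricted maximum likelihood (as in R's mgcv); all variable names and the data are hypothetical.

    import numpy as np
    from pygam import LinearGAM, s, l

    # Synthetic stand-in data: age, sex, and in-scanner motion predict coupling.
    rng = np.random.default_rng(0)
    n = 300
    age = rng.uniform(8, 23, n)
    sex = rng.integers(0, 2, n).astype(float)
    motion = rng.gamma(2.0, 0.05, n)
    y = 0.4 + 0.05 * np.sqrt(age) + 0.03 * sex - 0.3 * motion \
        + rng.normal(0, 0.05, n)

    X = np.column_stack([age, sex, motion])
    # Penalized spline for age, linear terms for the covariates; the smoothing
    # penalty is chosen by grid search here (the paper's GAM uses REML instead).
    gam = LinearGAM(s(0) + l(1) + l(2)).gridsearch(X, y)

    # Fitted (possibly nonlinear) age effect across the age range.
    grid = gam.generate_X_grid(term=0)
    age_effect = gam.partial_dependence(term=0, X=grid)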
Longitudinal developmental change in structure–function coupling was evaluated with two approaches. First, we estimated longitudinal age effects on coupling within a linear mixed effects model, including a random subject intercept in addition to other covariates. Second, we used linear regression models with longitudinal change scores. Longitudinal intraindividual change in coupling (ΔCoupling) and the participation coefficient (ΔPC) were calculated as the difference in regional brain measures between time points. Baseline age, sex, mean relative framewise displacement, and the number of years between time points were included as additional covariates in linear regression models.
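Both longitudinal approaches map onto standard model formulas. A hypothetical sketch with statsmodels on synthetic two-time-point data (the variable names are illustrative, not the study's; the scan interval is constant here, so it is omitted from the change-score formula, whereas with varying intervals it would enter as a covariate):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_sub = 100
    subj = np.repeat(np.arange(n_sub), 2)
    time = np.tile([0.0, 1.7], n_sub)            # baseline and ~1.7-y follow-up
    base_age = rng.uniform(8, 21, n_sub)
    sex = rng.integers(0, 2, n_sub)
    df = pd.DataFrame({
        "subject": subj,
        "time": time,
        "age": base_age[subj] + time,
        "sex": sex[subj],
        "motion": rng.gamma(2.0, 0.05, 2 * n_sub),
    })
    subj_icpt = rng.normal(0, 0.05, n_sub)       # per-subject random intercept
    df["coupling"] = (0.3 + 0.01 * df["age"] + 0.03 * df["sex"]
                      - 0.2 * df["motion"] + subj_icpt[subj]
                      + rng.normal(0, 0.03, 2 * n_sub))

    # Approach 1: longitudinal age effect with a random subject intercept.
    mixed = smf.mixedlm("coupling ~ age + sex + motion",
                        data=df, groups=df["subject"]).fit()

    # Approach 2: regression on intraindividual change scores.
    t1 = df[df["time"] == 0.0].set_index("subject")
    t2 = df[df["time"] == 1.7].set_index("subject")
    change = pd.DataFrame({
        "delta_coupling": t2["coupling"] - t1["coupling"],
        "baseline_age": t1["age"],
        "sex": t1["sex"],
        "motion": (t1["motion"] + t2["motion"]) / 2,
    })
    ols = smf.ols("delta_coupling ~ baseline_age + sex + motion",
                  data=change).fit()
    print(mixed.params, ols.params)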
The data reported in this paper have been deposited in the database of Genotypes and Phenotypes under accession number dbGaP: phs000607.v2.p2 (https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000607.v2.p2).
Attention
Biological Evolution
Brain
Brain Mapping
Cortex, Cerebral
Diffusion
Ethics Committees, Research
Executive Function
factor A
Females
fMRI
Genotype
Legal Guardians
Memory, Short-Term
Microtubule-Associated Proteins
Parent
Phenotype
Psychotropic Drugs
Radionuclide Imaging
Spatial Visualization
Vaginal Diaphragm
Youth
Attention
Bipolar Disorder
Cognition
Dementia
Eligibility Determination
Ethnicity
Hispanic or Latino
Major Depressive Disorder
Memory
Memory, Episodic
Mental Disorders
Schizophrenia
Spatial Visualization
Substance Abuse
Anterior Temporal Lobe
Atrophy
Bath
Diagnosis
Ethics Committees, Research
Healthy Volunteers
Memory
Patients
Spatial Visualization
Speech
In order to assess the performance of our WNN method alongside other recently proposed multimodal integration tools, we compared the results of WNN, Total Variational Inference (totalVI version 0.6.7) (Gayoso et al., 2019), and Multi-omics factor analysis v2 (MOFA+ version 1.1) (Argelaguet et al., 2020) on the BMNC dataset. We followed the recommended settings and workflows for both methods, and further describe parameter choices below.
For totalVI, we used the RNA and ADT count matrices as input. We used the subsample_genes function to select 4000 variable genes, and used 500 epochs for model training, as suggested in the totalVI tutorial (https://scvi-tools.org/en/stable/tutorials/totalvi.html). All other parameters were set to default settings. We identified nearest neighbors and performed UMAP visualization on the learned latent space.
For MOFA+, we used the same normalization method as Seurat to facilitate direct comparison. As recommended in the MOFA+ tutorial (https://raw.githack.com/bioFAM/MOFA2_tutorials/master/R_tutorials/10x_scRNA_scATAC.html), we used the z-scored ('scaled') data from the two assays as view1 and view2 for MOFA+. All other parameters were set to default or recommended settings. We identified nearest neighbors and performed UMAP visualization based on the learned factors.
The UMAP plots in Figures S2A and S2B show the results of all three methods (we also include independent RNA and protein analyses in Seurat for comparison). The plots show that the methods generally reveal similar sets of cell types, but with important differences. For example, regulatory T cells, defined by CD25 expression, are separated only in the WNN UMAP. Figure S2B demonstrates that this is because CD25+ cells form a distinct cluster only in the WNN analysis.
In order to move beyond visualization and quantify the performance of each method, we averaged the CD25 expression level for the calculated multimodal neighbors of each cell, returning a vector of predicted values. We quantified the performance of each method using the correlation (Pearson: Figure 2D; Spearman: Figure S2) between predicted and measured values. For CD25, WNN analysis achieved the highest correlation, as cells that are CD25+ are correctly identified as neighbors of other CD25+ cells in the dataset. We repeated this analysis for all protein features and found that WNN analysis consistently achieved the highest correlation. We repeated the analysis for all transcriptomic features as well (Figure S2) and observed similar performance for all methods. We note that transcriptomic correlations were also much lower, likely due to the substantial technical noise inherent to scRNA-seq data.
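The neighbor-averaging benchmark reduces to a few lines: predict each cell's protein level as the mean over its multimodal neighbors, then correlate predicted against measured values. A minimal, hypothetical Python sketch (the real analysis also excludes self-neighbors and takes the neighbor indices from each method's learned latent space):

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def neighbor_prediction_score(values, knn_idx):
        """values  : (n_cells,) measured protein level, e.g. CD25 ADT counts
        knn_idx : (n_cells, k) indices of each cell's multimodal neighbors
        Returns (Pearson r, Spearman rho) of predicted vs. measured values."""
        predicted = values[knn_idx].mean(axis=1)   # average over neighbors
        return pearsonr(values, predicted)[0], spearmanr(values, predicted)[0]

    # Hypothetical toy inputs: 500 cells with 20 neighbors per cell.
    rng = np.random.default_rng(0)
    vals = rng.lognormal(size=500)
    knn = rng.integers(0, 500, size=(500, 20))
    print(neighbor_prediction_score(vals, knn))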
Biological Assay
Cells
Cloning Vectors
EPOCH protocol
Gene Expression Profiling
Genes
IL2RA protein, human
Multimodal Imaging
Protein Domain
Proteins
Regulatory T-Lymphocytes
RNA, Small Cytoplasmic
Single-Cell RNA-Seq
Spatial Visualization
Most recent protocols related to «Spatial Visualization»
The Mini-Mental State Examination (MMSE) scale was used to assess the cognitive ability of the patients,[12] covering orientation, attention, calculation, short-term memory, language ability, recall ability, visual-spatial ability, and other aspects. The total score of this scale is 30 points, of which 27 to 30 points was considered normal and <27 points was judged as postoperative cognitive dysfunction (POCD). At 1 day, 3 days, and 7 days after surgery, each patient was scored by the medical staff using the MMSE scale. Patients with scores ≥27 at every time point were included in the control group, while those with a score <27 at any time point were included in the POCD group.
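The grouping rule is deterministic and easy to express; a minimal sketch in Python (the function name is hypothetical):

    # A patient is a control only if every post-operative MMSE score is >= 27.
    def assign_group(mmse_scores):
        """mmse_scores: MMSE totals at days 1, 3, and 7 after surgery."""
        return "control" if all(s >= 27 for s in mmse_scores) else "POCD"

    print(assign_group([29, 28, 30]))  # control
    print(assign_group([28, 25, 29]))  # POCD: one score below 27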
Attention
Cognition
Medical Staff
Memory, Short-Term
Mental Recall
Mini Mental State Examination
Patients
Spatial Visualization
The facial recognition test (FRT) was conducted to assess facial recognition and visual perception. In this study, spatial-structure cognitive ability and emotional-perception ability were evaluated through the recognition of facial expressions. In the first stage (FRT-1), participants chose the appropriate expression option for a given photograph of a face, across a total of 24 questions (full score of 24). In the second stage (FRT-2), given an expression instruction, participants selected the two corresponding facial photographs from eight similar pictures, across a total of 16 questions (full score of 16). The software scored automatically based on the number of correct selections.
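Since scoring is simply a count of correct selections, the automated scoring amounts to the following hypothetical sketch (names and the answer format are illustrative, not the study software's):

    # One point per correct selection; max 24 for FRT-1 and 16 for FRT-2.
    def frt_score(responses, answer_key):
        """responses / answer_key: equal-length lists of chosen vs. correct
        option(s) per question."""
        return sum(r == k for r, k in zip(responses, answer_key))

    print(frt_score(["a", "c", "b"], ["a", "b", "b"]))  # -> 2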
Cognition
Emotions
Face
Facial Emotion Recognition
Facial Recognition
Spatial Visualization
Visual Perception
Mental Rotation Tasks (Vandenberg & Kuse, 1978) were adopted in this study to assess children’s general intelligence and spatial cognitive abilities. More specifically, the task examines whether children are capable of transforming a visual image in three-dimensional (3D) space. Figure 3 shows an example of a mental rotation task. Children were first shown a reference image of a 3D object and were then asked to identify which of four options best matches a rotated version of the object pictured in the reference image. A total of 15 trials were included in the current study. Vandenberg and Kuse (1978) summarized the literature and found that test–retest reliability estimates ranged from 0.70 to 0.83, and validity evidence was supported by moderate correlations with other tests of spatial visualization, such as the Identical Blocks Test (r = 0.54), the Chair-Window Test (r = 0.45), and the Spatial Relations subtest of the Differential Abilities Test (r = 0.50).
An example of a mental rotation task
Child
Cognition
Spatial Visualization
Vaginal Diaphragm
In defending against an enemy, the defensive mechanisms and capabilities of the outer space near the wall are critical. The outer defense system of the walls of coastal defense forts can be summarized into three defense levels. Fig 1 shows the structure of the three defense levels and the components of the third defense level.
Defense Mechanisms
Spatial Visualization
Structural homology models of ancestral sequences were generated by MODELLER v10.2 (Webb and Sali, 2016) using PDB 1M34 as a template for all nitrogenase protein subunits and visualized by ChimeraX v1.3 (Pettersen et al., 2021).
Extant and ancestral protein sequence space was visualized by machine-learning embeddings, where each protein embedding represents protein features in a fixed-size, multidimensional vector space. The analysis was conducted on concatenated (HDK) nitrogenase protein sequences in our phylogenetic dataset. The embeddings were obtained using the pre-trained language model ESM2 (Lin et al., 2022; Rives et al., 2021), a transformer architecture trained to reproduce correlations at the sequence level in a dataset containing hundreds of millions of protein sequences. Layer 33 of this transformer was used, as recommended by the authors. The resulting 1024 dimensions were reduced by UMAP (McInnes et al., 2020) for visualization in a two-dimensional space.
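A rough sketch of this embedding pipeline using the fair-esm package and umap-learn follows. The exact checkpoint is an assumption (it is not named in this excerpt); esm2_t33_650M_UR50D is the public 33-layer ESM2 release, and the short toy sequences stand in for the concatenated HDK dataset.

    # pip install fair-esm umap-learn
    import torch
    import esm
    import umap

    # Assumption: the 33-layer public ESM2 checkpoint.
    model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
    batch_converter = alphabet.get_batch_converter()
    model.eval()

    # Toy stand-ins for the concatenated nitrogenase sequences.
    seqs = [
        ("anc1", "MAMRQCAIYGKGGIGKSTTTQNLVAALAE"),
        ("anc2", "MAMRQCAIYGKGGIGKSTTSQNLVAALVE"),
        ("ext1", "MTMRQCAIYGKGGIGKSTTTQNLVAALAE"),
        ("ext2", "MAMRQIAIYGKGGIGKSTTSQNTVAALAE"),
        ("ext3", "MAMRQCALYGKGGIGKSTTTQNLVAGLAE"),
    ]
    _, _, tokens = batch_converter(seqs)
    with torch.no_grad():
        out = model(tokens, repr_layers=[33])   # layer 33, as in the text
    reps = out["representations"][33]           # (n_seqs, seq_len + 2, embed_dim)
    # Mean-pool over residues (dropping BOS/EOS) for one vector per protein.
    embeddings = reps[:, 1:-1].mean(dim=1).numpy()

    # Reduce to 2-D; n_neighbors is shrunk only because this toy set is tiny.
    coords = umap.UMAP(n_components=2, n_neighbors=2,
                       random_state=0).fit_transform(embeddings)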
Protein site-wise conservation analysis was performed using the Consurf server (Ashkenazy et al., 2016). An input alignment containing only extant, Group I Mo-nitrogenases was submitted for analysis under default parameters. Conserved sites were defined by a Consurf conservation score >7.
Amino Acid Sequence
Cloning Vectors
Homologous Sequences
Nitrogenase
Proteins
Protein Subunits
Spatial Visualization
Top products related to «Spatial Visualization»
Sourced in United States, United Kingdom, Germany, Canada, Japan, Sweden, Austria, Morocco, Switzerland, Australia, Belgium, Italy, Netherlands, China, France, Denmark, Norway, Hungary, Malaysia, Israel, Finland, Spain
MATLAB is a high-performance programming language and numerical computing environment used for scientific and engineering calculations, data analysis, and visualization. It provides a comprehensive set of tools for solving complex mathematical and computational problems.
Sourced in Netherlands, United States
Ethovision 3.0 is a video-based tracking system designed for automated behavioral analysis. It captures and tracks the movement and position of animals in a controlled environment, providing detailed data on their behavior.
Sourced in Spain, United States, China
The Smart 3.0 is a compact and versatile laboratory equipment designed for diverse experimental applications. It functions as a programmable syringe pump, capable of precisely delivering and withdrawing liquids in a controlled manner. The device features an intuitive user interface and a range of connection options to integrate with various experimental setups.
Sourced in Spain
The Smart ver. 2.5 is a versatile piece of laboratory equipment designed for various applications. It features advanced technology and improved functionality compared to previous versions. Its core function is to provide precise and reliable performance in laboratory settings.
Sourced in United States, Japan, United Kingdom, Germany, Belgium, Austria, Spain, France, Denmark, Switzerland, Ireland
SPSS version 20 is a statistical software package developed by IBM. It provides a range of data analysis and management tools. The core function of SPSS version 20 is to assist users in conducting statistical analysis on data.
Sourced in United States
The SMART-2000 is a multi-channel data acquisition and analysis system. It is designed to collect and process real-time data from various sensors and transducers. The device offers simultaneous sampling of up to 8 analog channels with a 16-bit resolution. The SMART-2000 features a large color display, on-board data storage, and connectivity options for integration with computer systems.
Sourced in United States, Germany, United Kingdom
Diphtheria toxin is a research reagent used in scientific studies. It is a protein produced by the bacterium Corynebacterium diphtheriae. The core function of diphtheria toxin is to inhibit protein synthesis in eukaryotic cells, leading to cell death.
Sourced in United States, Japan, United Kingdom, Germany, Belgium, China
SPSS Statistics version 21 is a statistical software package developed by IBM. It is designed for data analysis and management, providing tools for data exploration, modeling, and reporting. The software offers a range of statistical techniques and is widely used in academic and professional research settings.
The HMX-F80 is an equipment product from Samsung designed for use in scientific and research settings. Its core function is to provide a controlled environment for various experiments and testing procedures.
More about "Spatial Visualization"
Spatial Cognition, Visual-Spatial Processing, Mental Rotation, 3D Perception, Navigational Skills, Object Identification, Problem-Solving Abilities, Cognitive Psychology, Neuroscience, Computer Vision, Artificial Intelligence, Machine Learning, PubCompare.ai, Metaanalysis, MATLAB, Ethovision 3.0, Smart 3.0, Smart ver. 2.5, SPSS version 20, SMART-2000, Diphtheria toxin, SPSS Statistics version 21, HMX-F80.
Spatial visualization refers to the cognitive capacity to mentally manipulate, rotate, and transform spatial information.
This process involves the visual-spatial working memory and is crucial for tasks such as navigation, object recognition, and problem-solving.
Researchers in fields like cognitive psychology, neuroscience, and computer science study spatial visualization to understand its underlying mechanisms and applications.
Advances in artificial intelligence and machine learning have revolutionized spatial visualization research, enabling intelligent comparisons of protocols from literature, preprints, and patents.
Platforms like PubCompare.ai leverage these AI-driven insights to enhance spatial visualization research and provide cutting-edge tools tailored to the needs of researchers in this domain.
Experiencing the power of AI-driven spatial visualization has never been easier.