The largest database of trusted experimental protocols

Imagery, Guided

Imagery, Guided is a mental process in which an individual uses their imagination to visualize or experience a specific scene or situation.
This technique is often used in various therapeutic and self-improvement contexts, such as relaxation, stress management, and goal visualization.
Guided imagery involves the deliberate use of all the senses to create an imaginary experience, which can elicit physiological and emotional responses.
Practitioners may use audio recordings, scripts, or live instruction to guide individuals through the imagery process, helping them to vividly imagine the desired scenario.
Imagery, Guided is believed to have the potential to enhance well-being, increase motivation, and facilitate positive behavioral changes.

Most cited protocols related to «Imagery, Guided»

Annual (September to August) ice area flux through Nares Strait for 2016–2019 was determined using an established technique [2, 18, 38]. First, sea ice motion from each sequential pair of Sentinel-1 images (~0.5 to 1-day time separation) was determined using the Komarov and Barber tracking algorithm [39]. Sea ice motion was then interpolated to a 30 km buffer region surrounding the gate and sampled at 5 km intervals across it. Because ice rapidly deforms as it is funneled through Nares Strait, we placed our gate farther north than has been done previously [2] to facilitate improved motion detection. The ice area flux (F) was calculated using: F = Σi ci·ui·Δx, where ci is the ice concentration obtained from the Canadian Ice Service ice chart [40] closest to the Sentinel-1 image date, ui is the ice speed normal to the flux gate at the ith location and Δx is the spacing along the gate (5 km). If we assume that the errors of the sea ice motion samples are additive, unbiased, uncorrelated, and normally distributed, then the uncertainty in ice area flux across the gate (σf) can be determined using the following equation: σf = σe·L/√Ns, where σe is the error in ice motion of 0.43 km/day determined previously [39], L is the width of the gate and Ns is the number of samples across the gate. For L = 139 km and Ns = 27 the uncertainty in ice area flux at our gate is ~±12 km²/day. On monthly or annual timescales, the uncertainty is close to zero.
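The flux and uncertainty calculations above can be sketched in a few lines. This is an illustrative reimplementation with our own function and parameter names, not the authors' code:

```python
import numpy as np

# Ice area flux across the gate: F = sum_i c_i * u_i * dx, with
# concentration c_i (0-1), gate-normal speed u_i (km/day) and
# sample spacing dx (km). Returns flux in km^2/day.
def ice_area_flux(c, u, dx_km=5.0):
    c = np.asarray(c, dtype=float)
    u = np.asarray(u, dtype=float)
    return float(np.sum(c * u * dx_km))

# Gate-wide uncertainty sigma_f = sigma_e * L / sqrt(Ns), assuming
# additive, unbiased, uncorrelated, normally distributed motion errors.
def flux_uncertainty(sigma_e=0.43, L=139.0, n_samples=27):
    return sigma_e * L / np.sqrt(n_samples)
```

With the stated values (σe = 0.43 km/day, L = 139 km, Ns = 27), `flux_uncertainty()` returns ~11.5 km²/day, consistent with the quoted ~±12 km²/day.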
Publication 2021
As covariate layers for producing SoilGrids250m predictions we used an extensive stack of covariates, primarily based on remote sensing data (see e.g. Fig 4).
These covariates were selected to represent factors of soil formation according to Jenny [40]: climate, relief, living organisms, water dynamics and parent material. Of the five main factors, water dynamics and living organisms (especially vegetation dynamics) are not trivial to represent, as these operate over long periods of time and often exhibit chaotic behaviour. Using reflectance bands such as the mid-infrared MODIS bands from a single day would be of little use for soil mapping in areas with dynamic vegetation, i.e. with strong seasonal changes in vegetation cover. To account for seasonal fluctuation and for inter-annual variations in surface reflectance, we instead used long-term temporal signatures of the soil surface derived as monthly averages from long-term MODIS imagery (15 years of data). We assume here that, for each location in the world, long-term average seasonal signatures of surface reflectance or vegetation index provide a better indication of soil characteristics than a single snapshot of surface reflectance. Computing temporal signatures of the land surface requires a considerable investment of time (comparable to the generation of climatic images vs temporary weather maps), but it is possibly the only way to represent the cumulative influence of living organisms on soil formation.
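The long-term monthly averaging described above can be sketched as follows, assuming (hypothetically) that the MODIS composites are stacked as a (months, height, width) array covering whole years:

```python
import numpy as np

# Collapse a multi-year stack of monthly composites into 12 long-term
# monthly mean images (one temporal signature per calendar month),
# ignoring NaN-coded cloud/gap pixels.
def longterm_monthly_means(stack):
    n, h, w = stack.shape
    assert n % 12 == 0, "expects whole years of monthly composites"
    # reshape to (years, 12, H, W) and average over the years axis
    return np.nanmean(stack.reshape(n // 12, 12, h, w), axis=0)
```

For 15 years of data the input would have 180 layers; the output always has 12.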
For processing the covariates we used a combination of Open Source GIS software, primarily SAGA GIS [28], the R packages raster [41], sp [42] and GSIF, and GDAL [43] for reprojecting, mosaicking and merging tiles. SAGA GIS and GDAL were found to be highly suitable for processing large data as parallelization of computing was relatively easy to implement.
We updated the 1 km global soil mask map using the most detailed 30 m resolution global land cover map from 2010. This was combined with the global water mask [44] and the global sea mask map based on the SRTM DEM [45] to produce one consistent global soil mask that includes all land areas, except for: (a) fresh water bodies such as lakes and rivers, and (b) permanent ice.
Publication 2017
The input datasets selected for this analysis were the MODIS (1) MOD11A2 Land Surface Temperature (LST) 8-day composite data (Wan et al., 2002), and (2) MCD43B4 Bidirectional Reflectance Distribution Function (BRDF)-corrected 16-day composite data (Schaaf et al., 2002), from which the Enhanced Vegetation Index (EVI) was derived using the equation defined in Huete et al. (1999). The MODIS LST dataset consists of both daytime and nighttime average temperatures aggregated, respectively, from the descending and ascending paths of the NASA Terra satellite. The BRDF dataset contains 16-day products with overlapping temporal windows that result in an 8-day temporal resolution, derived from data collected by the MODIS sensors on both the Aqua and Terra satellites.
The MODIS data were collected on a per-tile basis and then merged using the MODIS Reprojection Tool (Dwyer and Schmidt, 2006) to create seamless mosaics for all of Africa. A total of 42 tiles were required to cover the continent for each image date (i.e., the day of the year corresponding to the center of the composite temporal window). The BRDF mosaics each consisted of seven spectral bands, three of which were needed to derive the EVI, and mosaics were created for each of these bands prior to deriving the EVI for each image date. The resulting data archives consisted of 594 EVI mosaics (from day 049, 2000 to day 361, 2012), and 590 LST-day and LST-night mosaics (from day 065, 2000 to day 361, 2012). Temporal mean and standard deviation images were derived on a per-pixel basis from the full mosaic archives for each of the three variables for subsequent use in the gap-filling algorithms. Producing images of summary statistics was also useful for identifying pixels that never contain usable data (e.g., ocean pixels), which could be ignored in the gap-filling procedures, thus reducing run-time.
The initial step in the gap-filling process was to identify gap pixels in need of filling through the use of a despeckling algorithm, a processing step that need only be used if corresponding datasets describing pixel-level data quality do not exist. While MODIS products have associated quality assurance datasets useful for identifying potential gaps, we developed a generic gap-finding approach to demonstrate the potential utility of our gap-filling approach for a wide range of remotely sensed products. Gaps were identified by finding all pixels that contained a no-data or otherwise unacceptable value within the input mosaic that corresponded to usable pixels within the mean image, indicating that the pixel in question contained usable data on other dates. Unacceptable pixel values were identified by calculating a z-score for each pixel based on the mean and standard deviation images, and then searching for any pixel with an absolute z-score exceeding a user-defined threshold (we used 2.58, which corresponds to the 0.99 confidence interval; see supplemental information for more details). When such a pixel was found, we examined neighboring pixels (we used a neighborhood size of 40 to 80 pixels) to determine if they were similarly unusual with respect to the mean value of the pixel. If the original z-score was beyond a second user-defined threshold (we used ±0.2) from the median neighborhood z-score, or if too few neighboring pixels were found within a user-defined search radius (we used 10 km), the original pixel was reclassified as a gap. In practice, pixels removed by the despeckling algorithm typically represent approximately 5% of gap pixels, or 0.5% of all usable pixels present in the final output images.
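The despeckling rule described above can be sketched as follows. Parameter defaults mirror the thresholds quoted in the text, but the pixel-based neighbourhood half-width `r` stands in for the protocol's 10 km search radius, and edge cases (e.g. too few neighbours) are simplified:

```python
import numpy as np

# A pixel is reclassified as a gap when its z-score exceeds the outlier
# threshold AND it is also unusual relative to the median z-score of
# its neighbourhood.
def find_speckle(img, mean_img, std_img, z_thresh=2.58, nbr_thresh=0.2, r=3):
    z = (img - mean_img) / std_img
    outliers = np.abs(z) > z_thresh          # |z| > 2.58 (0.99 CI)
    gaps = np.zeros_like(outliers)
    for y, x in zip(*np.nonzero(outliers)):
        nbr = z[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        if abs(z[y, x] - np.median(nbr)) > nbr_thresh:
            gaps[y, x] = True
    return gaps
```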
Based on the results of the gap identification process the flag image was modified to indicate whether pixels were (1) a no-data pixel that should be ignored in subsequent processing, (2) a usable raw value that could be passed directly through to the final output (and is suitable for use in the gap-filling models), or (3) a gap to be filled. A preliminary analysis of the raw imagery mosaics indicated that, on average, approximately 5–15% of the pixels within an image were gaps in need of filling (Table 1).

The mean and standard deviation percentages of gap pixels within the full Africa mosaics as calculated from the full imagery time-series (e.g., approximately 15% of a typical EVI mosaic consists of gap pixels).

Dataset     Mean (%)   Standard deviation (%)
EVI         14.77      5.93
LST day      5.25      2.28
LST night    8.51      3.28
Publication 2014
This instrument assesses visual and kinesthetic movement imagery ability and comprises four visual and four kinesthetic items. Each item entails performing a movement, visually or kinesthetically imaging that movement, and then rating the ease or difficulty of generating that image on a 7-point scale from 1 = very hard to see/feel to 7 = very easy to see/feel. The internal consistency of the MIQ-R has been consistently adequate, with Cronbach's α coefficients above 0.79 for both the visual and kinesthetic subscales (21, 22). The bi-factorial structure of the MIQ-R has also recently been confirmed using a small sample of 134 males and females, 17–60 years of age (31).
Publication 2007
The development of the MIQ-RS involved several steps. First, the two items (one visual and one kinesthetic) on the MIQ-R that entailed jumping up in the air were removed, since people with some movement impairments (e.g. recent stroke patients) would be unable to perform these actions. As a result of deleting these items, each subscale of the questionnaire (i.e. visual, kinesthetic) contained only three items. This was deemed problematic because, if subsequent psychometric analysis suggested that one or more items be deleted from either or both of the subscales, there would not be sufficient items to adequately represent the constructs being measured. Consequently, eight items (four visual and four kinesthetic) were added that reflect everyday movements: bending forward, pushing (an object like a door), pulling (an object like a door handle) and reaching and grasping (an object like a drinking glass). These movements were selected keeping in mind the tenets on which the original MIQ was developed [e.g. inclusion of relatively simple movements; (15)] and because these movements are commonly employed in motor control and movement rehabilitation research (32, 33). It seemed logical to have imagery ability measured on some of the same movements used to assess patients' motor functioning. Therefore, the MIQ-RS is composed of two subscales, visual and kinesthetic, each represented by seven items. The instructions and rating scales for the MIQ-RS are the same as for the MIQ-R (Appendix).
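A minimal scoring sketch for the resulting questionnaire, assuming (hypothetically) that responses arrive as a respondents × 14 array with the seven visual items first; the function and layout are ours, not part of the published instrument:

```python
import numpy as np

# Mean subscale scores on the 1-7 ease-of-imaging scale.
def score_miq_rs(responses):
    r = np.asarray(responses, dtype=float)
    return {"visual": r[:, :7].mean(axis=1),
            "kinesthetic": r[:, 7:].mean(axis=1)}

# Cronbach's alpha for one subscale (respondents x items):
# alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))
```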
Publication 2007

Most recent protocols related to «Imagery, Guided»

Images were retrieved from the proprietary Scanco microCT file format (.ISQ), which contained single-pixel-width two-dimensional (2D) trans-axial projections, or 'slices', that together formed stacks depicting an entire pod as a three-dimensional (3D) volume. Thirty-two distinct 3D volumes were included in the experimental dataset, each containing a single entire B. napus pod. All 2D trans-axial (XY) slices were 512 × 512 pixels; therefore the height and width of all 3D volumes were also 512 pixels. Individual 3D volumes varied in length from 505 to 1397 slices, with a total of 29,871 slices in the experimental dataset. The total dataset contained 471 seeds.
The total dataset was split into a model training and validation dataset comprising 13 3D volumes, 12,475 2D slices and 262 seeds, and a model testing dataset comprising 19 3D volumes, 17,396 2D slices and 209 seeds. This split was decided upon due to the uneven number of seeds in each seed pod, with the training and validation dataset containing 262 (56%) of the seeds and the testing dataset containing 209 (44%). Another factor affecting the split was that intact 3D volumes of entire seed pods needed to be used for testing, to demonstrate that reliable seed detection and segmentation could be achieved on the original imagery without any pre-processing. Conversely, Weigert et al. (2020) demonstrated that more accurate results could be obtained in a computationally efficient manner by training StarDist-3D on smaller sub-volumes of the original 3D volume data containing objects of interest, in this case sub-volumes containing at least one entire seed. The 3D volumes in the model training and validation dataset were therefore divided into 138 small sub-volumes of stacked 2D slices, each containing a single seed, or multiple seeds in instances where seeds occupied some of the same 2D slices. These sub-volumes ranged in size from 24 to 84 2D slices, depending on the size of the single seed or multiple overlapping seeds contained within. This sub-division was carried out to ensure that a mixture of seeds from different seed pods could be used for model training and validation. 117 sub-volumes containing 220 seeds were randomly sorted into the final 'training' dataset, and 21 sub-volumes containing 42 seeds were sorted into the final 'validation' dataset.
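The grouping of overlapping seeds into shared sub-volumes can be sketched with a small helper of our own (not part of the published pipeline):

```python
# Merge per-seed slice ranges so that seeds occupying some of the same
# 2D slices end up in one shared training sub-volume.
def merge_slice_ranges(ranges):
    """ranges: list of (z_start, z_end) per seed, inclusive."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # overlaps the previous range: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

In practice each merged range would then be padded and cropped from the full volume to form one training sub-volume.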
Publication 2023
There is a variety of methods to filter noise, determine stimulation, decode, and predict neural activity. Some authors have attempted neural decoding through training sessions, while others have determined arbitrary thresholds which can be used as a neural 'switch' to enable neural plasticity to activate desired movements. When EEG is used in the setting of neural bypasses, recording thresholds are determined which then translate to effector stimulation, often with FES. EEG thresholds to stimulate FES are typically obtained through motor imagery as measured by attention with sensorimotor rhythm and beta/theta oscillation ratios, or Common Spatial Patterns based on event de/synchronization [21–41]. Others have used steady-state visual evoked potentials (SSVEP) to trigger FES stimulation [42]. Furthermore, some studies have used alpha rhythms as a deactivating signal following a stimulation event [43]. When single-cell recordings are used, the threshold for stimulating FES has typically been cell firing rate [5]. When ECoG is used, the rate of high-gamma oscillations has been selected as a threshold for effector muscle stimulation [3]. In studies with microelectrode arrays, neuronal action potential rate, or average spectral high-frequency power, have typically been used as thresholds for stimulation of effector muscles [7]. Microelectrode arrays have also been used to record mean wavelet power after artifact removal during trials of imagined movements in paralyzed patients [4, 44]. Microelectrode arrays are often used during training sessions prior to paralysis, or with simulated motor tasks, to create predictive models of neuronal control of muscle activity [19, 45, 46].
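A minimal sketch of such a threshold-based 'neural switch', with assumed parameters (band limits, sampling rate, threshold) rather than any study's calibrated values:

```python
import numpy as np

# Mean power in a frequency band via an FFT periodogram
# (8-13 Hz mu/sensorimotor rhythm by default).
def band_power(signal, fs, lo=8.0, hi=13.0):
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Trigger FES when mu-band power drops below a calibrated threshold
# (event-related desynchronization during motor imagery).
def fes_trigger(signal, fs, threshold):
    return band_power(signal, fs) < threshold
```

A real system would also need artifact rejection and per-session calibration of the threshold, as the surrounding text notes.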
In some studies, daily calibration is required to train neural decoding algorithms, which presents a limitation for the translation of these technologies to real-world environments. One possible approach to this problem is a neural network capable of decoding without daily training sessions [18]. Other decoding methods consist of gradient boosted trees, support vector machines, and linear methods [12, 47]. A drawback to any decoding method is its set of associated assumptions. For example, a regularized linear regression assumes that outputs are proportional to input changes, that additive noise is Gaussian, and that the regression coefficients are drawn from a Gaussian distribution [47]. Bouton et al. suggest that nonlinear methods of decoding may help to increase the robustness and accuracy of specific decoders [48]. As methods increase in complexity, so do their associated assumptions. Glaser et al. state that a crucial assumption built into decoders is the form of the relation between the input and output. With machine learning methods, multiple decoding models can be organized in ensembles [47]. Bouton et al. demonstrate this in their finding that Long Short-Term Memory-based deep learning networks, used in tandem with repeatability-based feature selection based on temporal correlation, result in accurate decoding and positive outcomes [12].
Developing devices to maximize spatial resolution, temporal resolution, and biocompatibility will be crucial in developing robust neural interfaces. Additionally, a number of unidirectional recording or stimulation devices exist that have not yet been combined into neural bypasses, including novel spinal cord stimulators or neurotransmitter-sensing electrodes [49, 50]. There remains a lack of comparative analysis across studies with different methodologies to determine the most efficacious methods of achieving a neural bypass.
Publication 2023
In general, analyzing EEG data is a challenging task with many difficulties (Vallabhaneni et al., 2021). Due to the typically low signal amplitudes in the μV range (cf. Figure 1A), small interferences can distort a signal and make it unusable (cf. Figure 1B, red section, compared to ordinary EEG recordings). We denote as interference any part of a signal that is not directly generated by brain activity, or brain activity that is not directly produced as a result of an experimental stimulus. It is hard to remove interferences from a signal since they often show characteristics similar to those of the actual signal. To remove transient interferences before analyzing an EEG signal, various methods have been proposed, e.g., linear regression or blind source separation (Urigüen and Garcia-Zapirain, 2015). Nevertheless, none of them works perfectly, and remaining interferences may cause erroneous analysis results (Hagmann et al., 2006).
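The linear-regression approach mentioned above can be sketched as follows, assuming a reference channel (e.g. EOG) that records the interference; this is a single-regressor illustration, not a full artifact-removal pipeline:

```python
import numpy as np

# Subtract the component of the EEG channel that the reference artifact
# channel linearly explains (ordinary least squares, one regressor).
def regress_out(eeg, ref):
    b = np.dot(ref, eeg) / np.dot(ref, ref)  # least-squares coefficient
    return eeg - b * ref
```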
Another problem can be the placement and number of electrodes that capture brain activity. Not all regions of the brain are equally active during experiments, and some regions are more dominant than others. When fewer electrodes are used, activation can be missed during the recording, resulting in missing features.
To avoid such errors it is advisable to use a higher number of electrodes and to cover all areas of the head. As the number of electrodes increases, however, the time and effort required to preprocess the data increase as well. This can be critical for time-frequency transforms, which typically process signals channel- or window-wise (Li et al., 2016; Tabar and Halici, 2016).
In recent years, deep learning neural network approaches have been applied to a wide range of neuroscientific problems, such as feedback on motor imagery (MI) tasks (Tabar and Halici, 2016), emotion recognition (Ng et al., 2015), seizure detection (Thodoroff et al., 2016) and many other tasks (Gong et al., 2021) (see Table 4). These studies typically apply standard convolutional and recurrent neural networks (Craik et al., 2019). Many studies use handcrafted features as input for deep neural networks. However, extracting features can be time-consuming and often requires expert domain knowledge to represent the signal correctly. To avoid loss of information during the preprocessing phase, the aim of neurobiological analysis should be the analysis of raw data. If more information is provided to the neural network, better results can be expected. To the best of our knowledge, no study exists that systematically compares feed-forward and recurrent neural networks in all their flavors for raw-signal EEG data analysis.
Publication 2023
Google Earth Pro was used to manually digitise current windfarm developments across European blanket bogs. All vector layers obtained from the official government websites were imported into Google Earth Pro and each blanket bog was assessed individually. Individual wind turbines, track length and total area with evidence of windfarm development were digitised using the most recent imagery available at the maximum scale possible to define all the visible infrastructures at the best resolution possible. A file with the current aerial imagery year has been provided in additional materials. All turbines were digitised using a point shapefile, tracks using a polyline shapefile and affected area using a polygon shapefile in ArcGIS 10.8.1 (Fig. 6). All files were exported in .kml format and imported into ArcGIS 10.8.1 for further analysis (Fig. 6). All vector files were managed by country to respect the coordinate systems and geographical projections for each individual region in the analysis.
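Once tracks are digitised as (lat, lon) polylines, their length can be computed with a small helper of our own (the published workflow used ArcGIS for this step):

```python
import math

# Great-circle length of a polyline given (lat, lon) vertices in
# degrees, using the haversine formula on a spherical Earth.
def track_length_km(vertices, radius_km=6371.0):
    total = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(vertices, vertices[1:]):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2)
             * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
        total += 2 * radius_km * math.asin(math.sqrt(a))
    return total
```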

Flowchart of the spatial analyses undertaken in this research to obtain the final products using ArcGIS 10.8.1 and QGIS.

Publication 2023

Top products related to «Imagery, Guided»

Sourced in United States, United Kingdom, Germany, Canada, Japan, Sweden, Austria, Morocco, Switzerland, Australia, Belgium, Italy, Netherlands, China, France, Denmark, Norway, Hungary, Malaysia, Israel, Finland, Spain
MATLAB is a high-performance programming language and numerical computing environment used for scientific and engineering calculations, data analysis, and visualization. It provides a comprehensive set of tools for solving complex mathematical and computational problems.
Sourced in Russia
PhotoScan is a software product developed by Agisoft. It is designed for the creation of high-quality 3D models from digital images. The software utilizes advanced algorithms to process and align multiple photographs, enabling the reconstruction of 3D geometry and textures.
Sourced in United States
The Bio-Gel imagery apparatus is a laboratory instrument designed for the visualization and analysis of gel-based samples. The core function of this equipment is to capture and process digital images of gels, such as those used in electrophoresis or chromatography experiments. The apparatus provides a controlled and standardized environment for imaging various types of gels, ensuring consistent and reliable data acquisition.
Sourced in Russia
Metashape is a photogrammetric processing software developed by Agisoft. It is designed to generate high-quality 3D models from digital images. Metashape uses advanced algorithms to reconstruct 3D geometry and textures from multiple overlapping images.
Sourced in Germany, United States, United Kingdom, China
The Tim Trio is a 3 Tesla whole-body magnetic resonance imaging (MRI) scanner built around Siemens' Total imaging matrix (Tim) coil technology. It is widely used in neuroscience research, including functional MRI studies of motor and mental imagery.
Sourced in United States
Psychtoolbox is a free, open-source software package designed for the presentation of stimuli and the collection of behavioral and physiological data in psychology and neuroscience experiments. It provides a set of MATLAB functions for creating and controlling visual, auditory, and other sensory stimuli, as well as for recording responses and physiological signals.
Sourced in United States, Canada, Japan
Presentation software is a computer program designed to create and display visual presentations. It allows users to organize and present information, such as text, images, and multimedia, in a structured format. The core function of this software is to facilitate the creation and display of digital slides or pages that can be used for various purposes, including business meetings, educational lectures, and public speaking events.
Sourced in United States, Austria, Japan, Cameroon, Germany, United Kingdom, Canada, Belgium, Israel, Denmark, Australia, New Caledonia, France, Argentina, Sweden, Ireland, India
SAS version 9.4 is a statistical software package. It provides tools for data management, analysis, and reporting. The software is designed to help users extract insights from data and make informed decisions.
Sourced in United States, Germany, United Kingdom, France, Canada, Hungary, Japan, Spain
Fluoroshield is an aqueous mounting medium for fluorescence microscopy. It is designed to preserve the fluorescence of stained tissue sections and cell preparations by retarding photobleaching, and is available with or without nuclear counterstains such as DAPI.
Sourced in United States, Japan, China, Germany, United Kingdom, Singapore, Spain, France, Canada, Italy, Switzerland, Belgium
The 7300 Real-Time PCR System is a laboratory instrument designed for quantitative real-time polymerase chain reaction (qRT-PCR) analysis. It provides precise detection and quantification of target DNA sequences in samples. The system includes a thermal cycler, optical detection module, and analysis software to enable real-time monitoring of PCR amplification.

More about "Imagery, Guided"

Guided Visualization, Mental Imagery, Imaginative Rehearsal, Imagery Therapy, Visualization Techniques, Imaginal Exposure, Sensory Imagery, Directed Daydreaming, Biofeedback-Assisted Relaxation, Psychtoolbox, PhotoScan, MATLAB Imagery, Metashape, Tim Trio, SAS 9.4 Imagery Analysis.
Guided imagery is a powerful mind-body technique that involves the deliberate use of all the senses to create vivid mental experiences.
By imagining a specific scene or situation, individuals can evoke physiological and emotional responses that can promote relaxation, reduce stress, enhance motivation, and facilitate positive behavioral changes.
This technique is often utilized in various therapeutic and self-improvement contexts, such as stress management, goal visualization, and performance enhancement.
Guided imagery can be facilitated through audio recordings, scripts, or live instruction, with practitioners guiding individuals through the imagery process.
The technique is believed to have the potential to improve overall well-being and can be seamlessly integrated into workflows involving tools like MATLAB, PhotoScan, Metashape, and Psychtoolbox.
Whether you're looking to optimize your research protocols, enhance your mental well-being, or unlock new levels of performance, exploring the power of guided imagery can be a valuable addition to your toolkit.