The largest database of trusted experimental protocols

Lead

Lead is a dense, malleable, and highly toxic heavy metal that is widely used in various industrial and consumer products.
It can have detrimental effects on human health, particularly in children, where it can cause developmental and neurological issues.
Exposure to lead can occur through various sources, including paint, dust, soil, and drinking water.
Identifying and mitigating lead exposure is crucial for public health and safety.
Researchers and healthcare professionals should carefully consider the risks and benefits of lead-containing products and develop strategies to minimize lead exposure in the environment and the population.

Most cited protocols related to «Lead»

Non-A/C/G/T bases on reads are simply treated as mismatches, which is implicit in the algorithm (Fig. 3). Non-A/C/G/T bases on the reference genome are converted to random nucleotides. Doing so may lead to false hits to regions full of ambiguous bases. Fortunately, the chance that this may happen is very small given relatively long reads. We tried 2 million 32 bp reads and did not see any reads mapped to poly-N regions by chance.
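The substitution of ambiguous reference bases described above can be sketched in a few lines (a hypothetical helper for illustration, not BWA's actual C implementation):

```python
import random

def randomize_ambiguous(reference: str, seed: int = 0) -> str:
    """Replace any non-A/C/G/T base in the reference with a random nucleotide.

    A fixed seed is used here only so the sketch is reproducible.
    """
    rng = random.Random(seed)
    return "".join(
        base if base in "ACGT" else rng.choice("ACGT")
        for base in reference.upper()
    )

# Ambiguity codes such as N, R and Y are replaced at random
print(randomize_ambiguous("ACGTNNRY"))
```

Because the replacements are random rather than fixed, a read can only match a converted poly-N region by chance, which is the rationale given above for why false hits are rare with long reads.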
Publication 2009
Genome Nucleotides Poly A
For each alignment, BWA calculates a mapping quality score, which is the Phred-scaled probability of the alignment being incorrect. The algorithm is similar to MAQ's except that in BWA we assume the true hit can always be found. We made this modification because we are aware that MAQ's formula overestimates the probability of missing the true hit, which leads to underestimated mapping quality. Simulation reveals that BWA may overestimate mapping quality due to this modification, but the deviation is relatively small. For example, BWA wrongly aligns 11 reads out of 1 569 108 simulated 70 bp reads mapped with mapping quality 60. The error rate 7 × 10⁻⁶ (= 11/1 569 108) for these Q60 mappings is higher than the theoretical expectation 10⁻⁶.
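The Phred scaling mentioned above maps an error probability p to a quality Q = −10·log10(p). A minimal sketch of the relationship, which also reproduces the Q60 figures quoted in the passage:

```python
import math

def phred(p_error: float) -> float:
    """Phred-scaled quality: Q = -10 * log10(P(alignment is wrong))."""
    return -10 * math.log10(p_error)

def error_prob(q: float) -> float:
    """Inverse mapping: expected error probability for a given quality."""
    return 10 ** (-q / 10)

# Theoretical expectation for Q60 mappings:
print(error_prob(60))        # 1e-06

# Observed in the simulation quoted above: 11 errors in 1,569,108 Q60 reads
observed = 11 / 1_569_108
print(f"{observed:.1e}")     # 7.0e-06, i.e. roughly Phred Q51.5 in practice
print(round(phred(observed), 1))
```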
Publication 2009
KOBAS 2.0 has two consecutive programs, ‘annotate’ and ‘identify’, similar to KOBAS 1.0 (1,2). The first program ‘annotates’ each input gene with putative pathways and diseases by mapping the gene to genes in KEGG GENES or terms in KO which are linked to pathway and disease terms in backend databases. For ID mapping, input IDs are mapped directly to genes using the cross-links we parsed from KEGG GENES. Then, if necessary, IDs are mapped to KO terms. For sequence similarity mapping, each input sequence is BLASTed against all sequences in KEGG GENES. The default cutoffs are BLAST E-value < 10⁻⁵ and rank ≤ 5. They mean that an input sequence is assigned the KO term(s) of the first BLAST hit that (i) has known KO assignments; (ii) has BLAST E-value < 10⁻⁵; and (iii) has fewer than five other hits with a lower E-value that do not have KO assignments (1). A new option in KOBAS 2.0 is that users can map against genes in user-specified species instead of all genes by BLASTing against only sequences of the user-specified species. In order to reduce possible false positives due to multidomain proteins, we added a new option to allow users to set a cutoff of BLAST subject coverage. Another new option allows users to restrict sequence mapping to only orthologs as defined by Ensembl Compara (38).
The second program ‘identifies’ statistically significantly enriched pathways and diseases by comparing results from the first program against the background (usually genes from the whole genome, or all probe sets on a microarray). Users can define their own background distribution in KOBAS 2.0 (for example, the result from the first program to ‘annotate’ all probe sets on a microarray). If users do not upload a background file, KOBAS 2.0 uses the genes from the whole genome as the default background distribution. Here, we consider only pathways and diseases for which there are at least two genes mapped in the input. Users can choose to perform a statistical test using one of the following four methods: binomial test, chi-square test, Fisher's exact test and hypergeometric test, and to perform FDR correction. The purpose of FDR correction is to reduce Type-1 errors. When a large number of pathway and disease terms are considered, multiple hypothesis tests are performed, which leads to a high overall Type-1 error rate even for a relatively stringent P-value cutoff. KOBAS 1.0 supports the FDR correction method QVALUE (39). In KOBAS 2.0, we add two more popular FDR correction methods: Benjamini-Hochberg (40) and Benjamini-Yekutieli (41).
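The hypergeometric enrichment test and Benjamini-Hochberg correction described above can be illustrated with small pure-Python stand-ins (the gene counts are toy values chosen for illustration; this is not KOBAS code):

```python
from math import comb

def hypergeom_pval(k: int, n: int, K: int, N: int) -> float:
    """Upper-tail P(X >= k): k of n input genes hit a pathway
    covering K of N background genes."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / comb(N, n)

def benjamini_hochberg(pvals):
    """BH step-up FDR correction; returns adjusted p-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest rank down, enforcing monotonicity
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Toy example: 5 of 40 input genes fall in a pathway covering 100 of 20,000 genes
p = hypergeom_pval(5, 40, 100, 20_000)
print(f"{p:.2e}")
print(benjamini_hochberg([0.001, 0.01, 0.03, 0.5]))
```

The Benjamini-Yekutieli variant differs only by an extra harmonic-sum factor on each adjusted value, which is why it is the more conservative of the two corrections.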
Publication 2011
Chromosome Mapping Genes Genome Microarray Analysis Proteins
Like BLAST, both BLAT and SSAHA2 report all significant alignments or typically tens of top-scoring alignments, but this is not the most desired output in read mapping. We are typically more interested in the best alignment, or the best few alignments, covering each region of the query sequence. For example, suppose a 1000 bp query sequence consists of a 900 bp segment from one chromosome and a 100 bp segment from another chromosome; 400 bp out of the 900 bp segment is a highly repetitive sequence. For BLAST, to know this is a chimeric read we would need to ask it to report all the alignments of the 400 bp repeat, which is costly and wasteful because in general we are not interested in alignments of short repetitive sequences contained in a longer unique sequence. In this example, a useful output would be to report one alignment each for the 900 bp and the 100 bp segments, and to indicate whether the two segments have good suboptimal alignments that may render the best alignment unreliable. Such output simplifies downstream analyses and saves time on reconstructing the detailed alignments of the repetitive sequence.
In BWA-SW, we say two alignments are distinct if the length of the overlapping region on the query is less than half of the length of the shorter query segment. We aim to find a set of distinct alignments which maximizes the sum of scores of each alignment in the set. This problem can be solved by dynamic programming, but as in our case a read is usually aligned entirely, a greedy approximation works well. In the practical implementation, we sort the local alignments based on their alignment scores, scan the sorted list from the best one and keep an alignment if it is distinct from all the kept alignments with larger scores; if alignment a2 is rejected because it is not distinct from a1, we regard a2 as a suboptimal alignment to a1 and use this information to approximate the mapping quality (Section 2.7).
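The greedy selection of distinct alignments described above can be sketched as follows (a simplified interval-and-score model, not BWA-SW's actual implementation):

```python
def overlap(a, b):
    """Length of the overlap of two [start, end) intervals on the query."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def distinct(a, b):
    """Two alignments are distinct if their query overlap is less than
    half the length of the shorter query segment."""
    shorter = min(a[1] - a[0], b[1] - b[0])
    return overlap(a, b) < shorter / 2

def select_alignments(alignments):
    """Greedy pass: scan from the best score, keep each alignment that is
    distinct from every already-kept one; a rejected alignment is recorded
    as a suboptimal hit of the kept alignment that shadowed it."""
    kept, suboptimal = [], {}
    for aln in sorted(alignments, key=lambda a: a[2], reverse=True):
        shadow = next((k for k in kept if not distinct(aln, k)), None)
        if shadow is None:
            kept.append(aln)
        else:
            suboptimal.setdefault(shadow, []).append(aln)
    return kept, suboptimal

# (start, end, score) on a 1000 bp query: a 900 bp hit, a repeat hit
# inside it, and a 100 bp hit from another chromosome
alns = [(0, 900, 500), (100, 500, 300), (900, 1000, 80)]
kept, sub = select_alignments(alns)
print(kept)  # the 900 bp and 100 bp alignments survive
print(sub)   # the repeat hit is recorded as suboptimal to the 900 bp hit
```

This reproduces the chimeric-read example from the previous paragraph: one alignment each for the 900 bp and 100 bp segments, with the repeat alignment retained only as suboptimal information.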
Because we only retain alignments largely non-overlapping on the query sequence, we might as well discard seeds that do not contribute to the final alignments. Detecting such seeds can be done with another heuristic before the Smith–Waterman extension, and time spent on unnecessary extension can thus be saved. To identify these seeds, we chain seeds that are contained in a band (default band width 50 bp). If on the query sequence a short chain is fully contained in a long chain and the number of seeds in the short chain is below one-tenth of the number of seeds in the long chain, we discard all the seeds in the short chain, based on the observation that the short chain can rarely lead to a better alignment than the long chain in this case. Unlike the Z-best strategy, this heuristic does not have a noticeable effect on alignment accuracy. On 1000 simulated 10 kb reads, it halves the running time with no reduction in accuracy.
Publication 2010
BP 100 BP 400 Chimera Chromosomes Plant Embryos Radionuclide Imaging Repetitive Region Sequence Alignment Toxic Epidermal Necrolysis
In order to understand the modeling choices underlying our new imputation algorithm, it is crucial to consider the statistical issues that arise in imputation datasets. For simplicity, we will discuss these issues in the context of Scenario A, although we will also extend them to Scenario B in the Results section. Fundamentally, imputation is very similar to phasing, so it is no surprise that most imputation algorithms are based on population genetic models that were originally used in phasing methods. The most important distinction between phasing and imputation datasets is that the latter include large proportions of systematically missing genotypes.
Large amounts of missing data greatly increase the space of possible outcomes, and most phasing algorithms are not able to explore this space efficiently enough to be useful for inference in large studies. A standard way to overcome this problem with HMMs [6],[11] is to make the approximation that, conditional on the reference panel, each study individual's multilocus genotype is independent of the genotypes for the rest of the study sample. This transforms the inference problem into a separate imputation step for each study individual, with each step involving only a small proportion of missing data since the reference panel is assumed to be missing few, if any, genotypes.
In motivating our new imputation methodology, we pointed out that modeling the study individuals independently, rather than jointly, sacrifices phasing accuracy at typed SNPs; this led us to propose a hybrid approach that models the study haplotypes jointly at typed SNPs but independently at untyped SNPs. We made the latter choice partly to improve efficiency – it is fast to impute untyped alleles independently for different haplotypes, which allows us to use all of the information in large reference panels – but also because of the intuition that there is little to be gained from jointly modeling the study sample at untyped SNPs.
By contrast, the recently published BEAGLE [13] imputation approach fits a full joint model to all individuals at all SNPs. To overcome the difficulties caused by the large space of possible genotype configurations, BEAGLE initializes its model using a few ad-hoc burn-in iterations in which genotype imputation is driven primarily by the reference panel. The intuition is that this burn-in period will help the model reach a plausible part of parameter space, which can be used as a starting point for fitting a full joint model.
This alternative modeling strategy raises the question of whether, and to what extent, it is advantageous to model the study sample jointly at untyped SNPs. One argument [20] holds that there is no point in jointly modeling such SNPs because all of the linkage disequilibrium information needed to impute them is contained in the reference panel. A counterargument is that, as with any statistical missing data problem, the “correct” inference approach is to create a joint model of all observed and missing data. We have found that a full joint model may indeed improve accuracy on small, contrived imputation datasets (data not shown), and this leads us to believe that joint modeling could theoretically increase accuracy in more realistic datasets.
However, a more salient question is whether there is any useful information to be gained from jointly modeling untyped SNPs, and whether this information can be obtained with a reasonable amount of computational effort. Most imputation methods, including our new algorithm, implicitly assume that such information is not worth pursuing, whereas BEAGLE assumes that it is. We explore this question further in the sections that follow.
Publication 2009
Alleles Genotype Haplotypes Hybrids Hypertelorism, Severe, With Midface Prominence, Myopia, Mental Retardation, And Bone Fragility Intuition Joints Seizures Single Nucleotide Polymorphism

Most recent protocols related to «Lead»

Example 1

A 1 g compressed SAM sheet was formed without embossing. To ensure that Comparative Example 1 had the same compactness as Example 1, meaning that both samples experienced the same compressing pressure, the SAM sheets were each placed between two flat metal plates and compressed twice with a 1000 lb load for 10 minutes using the Carver hydraulic compressor (CE, Model 4350). In this way, the void volumes between and within SAM particles are quite close, if not the same, for Comparative Example 1 and Example 1. The sample was dried in a convection oven at 80° C. for 12 hours before testing.

A 1 g compressed SAM sheet was formed without embossing. The prepared SAM sheet was placed on a flat metal plate, covered with a 1″×1″ metal patterned plate with protruding balls of 250 μm diameter, the balls side facing downward towards the SAM sheet (FIG. 1). The Carver hydraulic compressor (CE, Model 4350) was used to create the embossing pattern by applying a 1000 lb load to a plasticized SAM sheet for 5 minutes. After that, the SAM sheet was flipped over and compressed one more time with the metal balls under same pressure and same dwell time. The resultant SAM sheet has a clear pattern on the surface (FIG. 2). The scale bar shows the diameter of dent of 243 μm. The size of the dent is consistent with the size of metal balls of the embossing plate.

The final 1 g compressed SAM sheet had two-sided embossing. The sample was dried in a convection oven at 80° C. for 12 hours before testing.

The protrusions of this example were ball-shaped, but the protrusions could be any shape. Shapes without sharp corners, such as spheres, could be less damaging to the SAM particles. The depth of the indentations from the shapes could be in the range of from about 10 μm to 200 μm.

Absorbency Evaluation.

Equal masses of embossed and non-embossed SAM sheet samples were each individually dropped in a 100 mL beaker containing 30 mL NaCl solution, which contained blue dye to improve visualization during testing. The time and process of the SAM sheet completely absorbing the saline solution was monitored and compared.

The testing process for both samples to compare their absorbency properties is shown in FIGS. 3a-3e. FIG. 3a shows the testing beakers with 30 mL NaCl solution and blue dye. FIG. 3b shows the start of the testing (0 min), when the SAM sheets are added into the respective NaCl solutions. FIG. 3c shows the completion of absorption of liquid for Example 1 at 27 minutes. After completion, the swollen SAM particles were cast off onto white paper to verify the complete absorption of the fluid (FIG. 3d). At 40 min, Comparative Example 1 completed absorbing all fluid and was cast off onto white paper to verify completion (FIG. 3e). By the time Comparative Example 1 was cast off onto white paper, Example 1 had already turned white because it had finished the absorbing process 13 minutes earlier and the absorbed fluid had already diffused into the center of each SAM particle. Absorbency times are summarized in Table 1.

TABLE 1
Absorbency times for SAM sheets.
Sample                   Intake time (min)
Comparative Example 1    40
Example 1                27

Compressing SAM particles into sheets generally leads to lower intake rates and longer intake times compared with SAM particles that are not compressed into sheets, due to the loss of free volume within the SAM molecular structure and of surface area. However, the results demonstrated herein show that surface embossing increases the surface area of the compressed SAM, thereby increasing the absorbency intake rate compared to the compressed SAM without embossing.
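The intake times in Table 1 can be restated as a rate comparison; a quick check of the reported numbers (illustrative arithmetic only, taking intake rate as volume absorbed per unit time):

```python
# Intake times from Table 1 (minutes to absorb 30 mL of saline)
t_embossed, t_plain = 27, 40

# Time saved by embossing
print(t_plain - t_embossed)               # 13 (minutes)

# Relative intake rate (volume/time), embossed vs. non-embossed
rate_gain = (30 / t_embossed) / (30 / t_plain) - 1
print(f"{rate_gain:.1%}")                 # 48.1% faster intake rate
```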

Flexible Absorbent Binder Film.

FAB is a proprietary crosslinked acrylic acid copolymer that develops absorbency properties after it is applied to a substrate and dried. FAB itself can also be cast into a film and dried, yet the resultant 100% FAB film is quite rigid and stiff. The chemistry of FAB is similar to standard SAPs except that the latent crosslinking component allows it to be applied onto the substrate of choice as an aqueous solution and then converted into a superabsorbent coating upon drying. When the water is removed, the crosslinker molecules in the polymeric chains come into contact with each other and covalently bond to form a crosslinked absorbent.

In the examples of this disclosure, FAB was coated on a nonwoven substrate to provide a single layer with both intake and retention functions, as well as flexibility. FAB solution with 32% (wt/wt) solids was coated on a nonwoven substrate through a slot die with two rolls. After coating, the coated film was cured by drying in a convection oven at 55° C. for 20-30 minutes, or until the film was dry, to remove the water.

Compression embossing was applied to FAB films; two-sided embossing was applied to one FAB film. The absorbent properties were characterized and compared through saline absorption testing. The FAB film with an embossed pattern showed a 91.67% faster intake rate compared with the FAB film without an embossed pattern.

Patent 2024
acrylate Convection Electroplating Metals Molecular Structure Muscle Rigidity Polymers Pressure Retention (Psychology) Saline Solution SKAP2 protein, human Sodium Chloride Urination

Example 2

The next experiments asked whether inhibition of the same set of FXN-RFs would also upregulate transcription of the TRE-FXN gene in post-mitotic neurons, which is the cell type most relevant to FA. To derive post-mitotic FA neurons, FA(GM23404) iPSCs were stably transduced with lentiviral vectors over-expressing Neurogenin-1 and Neurogenin-2 to drive neuronal differentiation, according to published methods (Busskamp et al. 2014, Mol Syst Biol 10:760); for convenience, these cells are referred to herein as FA neurons. Neuronal differentiation was assessed and confirmed by staining with the neuronal marker TUJ1 (FIG. 2A). As expected, the FA neurons were post-mitotic as evidenced by the lack of the mitotic marker phosphorylated histone H3 (FIG. 2B). Treatment of FA neurons with an shRNA targeting any one of the 10 FXN-RFs upregulated TRE-FXN transcription (FIG. 2C) and increased frataxin (FIG. 2D) to levels comparable to that of normal neurons. Likewise, treatment of FA neurons with small molecule FXN-RF inhibitors also upregulated TRE-FXN transcription (FIG. 2E) and increased frataxin (FIG. 2F) to levels comparable to that of normal neurons.

It was next determined whether shRNA-mediated inhibition of FXN-RFs could ameliorate two of the characteristic mitochondrial defects of FA neurons: (1) increased levels of reactive oxygen species (ROS), and (2) decreased oxygen consumption. To assay for mitochondrial dysfunction, FA neurons expressing an FXN-RF shRNA or treated with a small molecule FXN-RF inhibitor were stained with MitoSOX (an indicator of mitochondrial superoxide levels, or ROS-generating mitochondria), followed by FACS analysis. FIG. 3A shows that FA neurons expressing an NS shRNA accumulated increased mitochondrial ROS production compared to EZH2- or HDAC5-knockdown FA neurons. FIG. 3B shows that FA neurons had increased levels of mitochondrial ROS production compared to normal neurons (Codazzi et al., (2016) Hum Mol Genet 25(22): 4847-485). Notably, inhibition of FXN-RFs in FA neurons restored mitochondrial ROS production to levels comparable to those observed in normal neurons. In the second set of experiments, mitochondrial oxygen consumption, which is related to ATP production, was measured using an Agilent Seahorse XF Analyzer (Divakaruni et al., (2014) Methods Enzymol 547:309-54). FIG. 3C shows that oxygen consumption in FA neurons was ˜60% of the level observed in normal neurons. Notably, inhibition of FXN-RFs in FA neurons restored oxygen consumption to levels comparable to those observed in normal neurons. Collectively, these preliminary results provide important proof-of-concept that inhibition of FXN-RFs can ameliorate the mitochondrial defects of FA post-mitotic neurons.

Mitochondrial dysfunction results in reduced levels of several mitochondrial Fe-S proteins, such as aconitase 2 (ACO2), iron-sulfur cluster assembly enzyme (ISCU) and NADH:ubiquinone oxidoreductase core subunit S3 (NDUFS3), and lipoic acid-containing proteins, such as pyruvate dehydrogenase (PDH) and 2-oxoglutarate dehydrogenase (OGDH), as well as elevated levels of mitochondria superoxide dismutase (SOD2) (Urrutia et al., (2014) Front Pharmacol 5:38). Immunoblot analysis is performed using methods known in the art to determine whether treatment with an FXN-RF shRNA or a small molecule FXN-RF inhibitor restores the normal levels of these mitochondrial proteins in FA neurons.

Patent 2024
Aconitate Hydratase Biological Assay Cells Cloning Vectors Enzymes EZH2 protein, human frataxin Genets HDAC5 protein, human Histone H3 Immunoblotting Induced Pluripotent Stem Cells inhibitors Iron Ketoglutarate Dehydrogenase Complex Mitochondria Mitochondrial Inheritance Mitochondrial Proteins MitoSOX NADH NADH Dehydrogenase Complex 1 NEUROG1 protein, human Neurons Oxidoreductase Oxygen Consumption Proteins Protein Subunits Psychological Inhibition Pyruvates Reactive Oxygen Species Repression, Psychology Seahorses Short Hairpin RNA Sulfur sulofenur Superoxide Dismutase Superoxides Thioctic Acid Transcription, Genetic

Example 2

PAO1, the parent strain of PGN5, is a wild-type P. aeruginosa strain that produces relatively small amounts of alginate and exhibits a non-mucoid phenotype; thus, PGN5 is also non-mucoid when cultured (FIG. 3A). In PAO1, the alginate biosynthetic operon, which contains genes required for alginate production, is negatively regulated. Activation of this operon leads to alginate production and a mucoid phenotype. For example, over-expression of mucE, an activator of the alginate biosynthetic pathway, induces a strong mucoid phenotype in the PAO1 strain (e.g., P. aeruginosa strain VE2; FIG. 3B). The plasmid pUCP20-pGm-mucE, which constitutively over-expresses MucE, was used to test whether the genetically-modified PGN5 strain could produce alginate. Indeed, the presence of this plasmid in PGN5 (PGN5+mucE) induced a mucoid phenotype (FIG. 3B). To measure the amount of alginate produced by PGN5+mucE on a cellular level, a standard carbazole assay was performed, which showed that the PGN5+mucE and VE2 (i.e., PAO1+mucE) strains produce comparable amounts of alginate (FIG. 3C; 80-120 g/L wet weight).

To examine whether the alginate produced by PGN5+mucE was similar in composition to alginate produced by VE2, HPLC was performed to compare the M and G content of alginate produced by each strain. The chromatograms obtained from alginate prepared from VE2 and PGN5+mucE were identical (FIG. 3D), and the M:G ratios were comparable to a commercial alginate control (data not shown). To confirm that the physical properties of VE2 and PGN5+mucE alginates were also similar, alginate gels were prepared from alginate produced by each strain, and the viscosity and yield stress were measured. The viscosities of VE2 and PGN5+mucE alginate gels were comparable at 73.58 and 72.12 mPa, respectively (FIG. 3E). Similarly, the yield stresses of VE2 and PGN5+mucE alginate gels were comparable at 47.34 and 47.16 Pa, respectively (FIG. 3G).

Patent 2024
Alginate Alginates Anabolism Biological Assay Biosynthetic Pathways carbazole Cells Gels Genes High-Performance Liquid Chromatographies Operon Parent Phenotype Physical Processes Plasmids Pseudomonas aeruginosa Strains Viscosity

Example 8

To evaluate which lipid composition within the dendrimer nanoparticles led to improved siRNA delivery, the identity and concentration of different phospholipids and PEG-lipids were varied. Three different cell lines (HeLa-Luc, A549-Luc, and MDA-MB231-Luc) were used. Cells were plated at 10K cells per well and incubated for 24 hours; the readout was determined 24 hours post transfection. In the nanoparticles, DSPC and DOPE were used as phospholipids and PEG-DSPE, PEG-DMG, and PEG-DHD were used as PEG-lipids. The compositions contain a lipid or dendrimer:cholesterol:phospholipid:PEG-lipid mole ratio of 50:38:10:2. The mole ratio of lipid/dendrimer to siRNA was 100:1, with a 100 ng dose being used. The RiboGreen, Cell-titer Fluor, and OneGlo assays were used to determine the effectiveness of these compositions. Results show the relative luciferase activity in HeLa-Luc cells (FIG. 17A), A549-Luc (FIG. 17B), and MDA-MB231-Luc (FIG. 17C). The six formulations used in the studies include: dendrimer (lipid)+cholesterol+DSPC+PEG-DSPE (formulation 1), dendrimer (lipid)+cholesterol+DOPE+PEG-DSPE (formulation 2), dendrimer (lipid)+cholesterol+DSPC+PEG-DMG (formulation 3), dendrimer (lipid)+cholesterol+DOPE+PEG-DMG (formulation 4), dendrimer (lipid)+cholesterol+DSPC+PEG-DSPE (formulation 5), and dendrimer (lipid)+cholesterol+DOPE+PEG-DHD (formulation 6).
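The 50:38:10:2 component ratio and the 100:1 dendrimer-to-siRNA mole ratio can be converted into per-well molar amounts. In this sketch the siRNA molecular weight is an assumed value (roughly 13,300 g/mol for a typical 21 bp duplex), which is not given in the text:

```python
# Dose per well and assumed siRNA molecular weight (assumption, not from the text)
sirna_dose_g = 100e-9          # 100 ng
sirna_mw = 13_300              # g/mol, typical ~21 bp duplex (assumed)
sirna_mol = sirna_dose_g / sirna_mw

# Dendrimer (lipid) : siRNA mole ratio of 100:1
dendrimer_mol = 100 * sirna_mol

# dendrimer : cholesterol : phospholipid : PEG-lipid = 50 : 38 : 10 : 2
ratio = {"dendrimer": 50, "cholesterol": 38, "phospholipid": 10, "PEG-lipid": 2}
per_part = dendrimer_mol / ratio["dendrimer"]
amounts = {name: parts * per_part for name, parts in ratio.items()}

for name, mol in amounts.items():
    print(f"{name:12s} {mol * 1e12:7.2f} pmol")
```

Under these assumptions, 100 ng of siRNA is about 7.5 pmol, so each well would carry roughly 750 pmol of dendrimer and proportionally scaled amounts of the other three components.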

Further experiments were run to determine which phospholipids showed increased delivery of siRNA molecules. A HeLa-Luc cell line was used with 10K cells per well, a 24 hour incubation, and readout 24 hours post transfection. The compositions contained either DOPE or DOPC as the phospholipid with PEG-DHD as the PEG-lipid. The lipid (or dendrimer):cholesterol:phospholipid:PEG-lipid mole ratio was 50:38:10:2, with a dendrimer (or lipid) to siRNA mole ratio of 200:1. These compositions were tested at a 50 ng dose using the Cell-titer Fluor and OneGlo assays. These results are shown in FIGS. 18A & 18B.

Patent 2024
1,2-oleoylphosphatidylcholine Biological Assay Cell Lines Cells Cholesterol Dendrimers Figs HeLa Cells Lipid Nanoparticles Lipids Luciferases Nevus Obstetric Delivery Phospholipids polyethylene glycol-distearoylphosphatidylethanolamine RNA, Small Interfering Transfection

Example 15

In a 15th example, reference is made to FIGS. 12 and 13. FIG. 12 shows an example of the first measurement signal stream F1 and of the second measurement signal stream F2 in the situation where the subject suffers a temporary disappearance of all control of cerebral origin, which is characteristic of central hypopnoea. This disappearance is characterized by the mouth opening passively because it is no longer held up by the muscles. It is therefore seen in the streams F1 and F2 that between the peaks the signal does not indicate any activity. At the moment of the peak, on the other hand, a high amplitude of mandibular movement is observed. Toward the end of the peaks there is a movement that corresponds to a non-respiratory frequency, which is the consequence of a cerebral activation that will then result in a micro-arousal. The digit 1 indicates the period of hypopnoea, where a reduction of the flow is clearly visible in the stream F5 from the thermistor. The digits 2 and 3 indicate the disappearance of mandibular movement in the streams F1 and F2 during the period of central hypopnoea. FIG. 13 shows an example of the first measurement signal stream F1 and of the second measurement signal stream F2 in the situation where the subject experiences a prolonged respiratory effort that will terminate in a cerebral activation. The signal from the accelerometer, F1, indicates a large movement of the head and of the mandible at the location indicated by H. Thereafter the stream F2 remains virtually constant, whereas the level of the accelerometer stream F1 drops, which shows that there is in any event a movement of the mandible, which is slowly lowered. There then follows a high peak I that is a consequence of a change in the position of the head during the activation that terminates the period of effort. The digit 1 indicates this long period of effort, marked by snoring.
It is seen, as indicated by the digit 2, that the effort is increasing with time. This effort terminates, as indicated by the digit 3, in cerebral activation that results in movements of the head and the mandible, indicated by the letter I.

The analysis unit holds in its memory models of these various signals, which are the result of processing employing artificial intelligence as described hereinbefore. Using these models, the analysis unit processes the streams to produce an analysis report.

It was found that the accelerometer is particularly suitable for measuring movements of the head whereas the gyroscope, which measures rotation movements, was found to be particularly suitable for measuring rotation movements of the mandible. Thus cerebral activation that leads to rotation of the mandible without the head changing position can be detected by the gyroscope. On the other hand, an IMM type movement will be detected by the accelerometer, in particular if the head moves on this occasion. An RMM type movement will be detected by the gyroscope, which is highly sensitive thereto.

Patent 2024
ARID1A protein, human Arousal Exhaling Fingers Gene Expression Regulation Head Head Movements Mandible Medical Devices Memory Movement Muscle Tissue Oral Cavity Respiratory Rate Sleep Thumb Vision

Top products related to «Lead»

Sourced in Japan, United States, Germany, United Kingdom, China, France
The Hitachi H-7650 is a transmission electron microscope (TEM) designed for high-resolution imaging of materials. It provides a core function of nanoscale imaging and analysis of a wide range of samples.
Sourced in Japan, United States, Germany, China, United Kingdom
The HT7700 is a high-resolution transmission electron microscope (TEM) designed for materials analysis and characterization. It provides advanced imaging and analytical capabilities for a wide range of applications in materials science, nanotechnology, and life sciences. The core function of the HT7700 is to enable high-resolution, high-contrast imaging and elemental analysis of nanoscale structures and materials.
Sourced in Japan, United States, Germany, United Kingdom, France, Spain
The JEM-1400 is a transmission electron microscope (TEM) produced by JEOL. It is designed to provide high-quality imaging and analysis of a wide range of materials at the nanoscale level. The JEM-1400 offers a maximum accelerating voltage of 120 kV and features advanced optics and detectors to enable detailed examination of samples.
Sourced in Germany, Japan, United States, Austria, Switzerland, China, France
The EM UC7 is an ultra-high-resolution ultramicrotome designed for sectioning of biological and materials samples for transmission electron microscopy (TEM) analysis. It features a high-precision cutting mechanism and advanced control systems to produce ultra-thin sections with consistent thickness and quality.
Sourced in United States, Germany, United Kingdom, Italy, Switzerland, India, China, Sao Tome and Principe, France, Canada, Japan, Spain, Belgium, Poland, Ireland, Israel, Singapore, Macao, Brazil, Sweden, Czechia, Australia
Glutaraldehyde is a chemical compound used as a fixative and disinfectant in various laboratory applications. It serves as a cross-linking agent, primarily used to preserve biological samples for analysis.
Sourced in Japan, United States, Germany, United Kingdom, India
The JEM-1400Plus is a transmission electron microscope (TEM) manufactured by JEOL. It is designed for high-resolution imaging and analysis of a wide range of samples. The JEM-1400Plus provides stable and reliable performance for various applications in materials science, biological research, and other related fields.
Sourced in Germany, United States, Austria, Japan
An ultramicrotome is a precision instrument used to cut ultra-thin sections of materials for examination under an electron microscope. It employs a diamond or glass knife to produce sections ranging from 50 to 100 nanometers in thickness, allowing for the detailed study of the internal structure of samples at the cellular and subcellular level.
Sourced in United States, Panama, Australia, Germany
Embed 812 is a high-quality embedding medium used in electron microscopy sample preparation. It is a cross-linking epoxy resin that provides excellent support and preservation of ultrastructural details in biological and material science specimens.
Sourced in Germany, United States, Japan, Austria, Switzerland
The EM UC7 ultramicrotome is a precision instrument designed for the preparation of ultra-thin sections for electron microscopy. It is capable of producing sections with thicknesses ranging from 15 nm to 5 μm, enabling detailed examination of specimen ultrastructure. The EM UC7 features advanced cutting technology, automated functions, and intuitive controls to facilitate efficient and consistent sample preparation.
Sourced in Japan, United States, Germany, United Kingdom, France
The JEM-1230 is a transmission electron microscope (TEM) manufactured by JEOL. It is designed to provide high-quality imaging and analysis of a wide range of materials. The JEM-1230 operates at an accelerating voltage of 120 kV and offers a resolution of 0.2 nanometers.

More about "Lead"

Lead is a heavy, dense, and highly toxic metal that is ubiquitous in industrial and consumer products.
Exposure to lead can have serious health consequences, particularly for children, leading to developmental and neurological issues.
Sources of lead exposure include paint, dust, soil, and drinking water.
Identifying and mitigating lead exposure is crucial for public health and safety.
Researchers and healthcare professionals must carefully consider the risks and benefits of lead-containing products and develop strategies to minimize lead exposure in the environment and the population.
Techniques like electron microscopy, using instruments like the JEM-1400, JEM-1400Plus, and EM UC7 ultramicrotome, can help analyze lead-containing materials and assess their impact.
Embedding samples in resin, such as Embed 812, and utilizing tools like the Glutaraldehyde fixative can also aid in the study of lead-related issues.
Optimizing research and product development processes is also important.
AI-driven platforms like PubCompare.ai can enhance research accuracy by helping to locate relevant protocols from literature, pre-prints, and patents, and enabling AI-driven comparisons to identify the best protocols and products for specific needs.
This can streamline the research process and optimize outcomes, leading to more effective strategies for mitigating lead exposure and improving public health.