Lead
Lead can have detrimental effects on human health, particularly in children, in whom it can cause developmental and neurological issues.
Exposure to lead can occur through various sources, including paint, dust, soil, and drinking water.
Identifying and mitigating lead exposure is crucial for public health and safety.
Researchers and healthcare professionals should carefully consider the risks and benefits of lead-containing products and develop strategies to minimize lead exposure in the environment and the population.
Most cited protocols related to «Lead»
The second program ‘identifies’ statistically significantly enriched pathways and diseases by comparing the results from the first program against a background (usually the genes of the whole genome, or all probe sets on a microarray). Users can define their own background distribution in KOBAS 2.0 (for example, the result of using the first program to ‘annotate’ all probe sets on a microarray). If users do not upload a background file, KOBAS 2.0 uses the genes of the whole genome as the default background distribution. Here, we consider only pathways and diseases to which at least two genes in the input are mapped. Users can choose to perform the statistical test using one of four methods: binomial test, chi-square test, Fisher's exact test and hypergeometric test, and to perform FDR correction. The purpose of FDR correction is to reduce Type-1 errors. When a large number of pathway and disease terms are considered, multiple hypothesis tests are performed, which leads to a high overall Type-1 error rate even for a relatively stringent P-value cutoff. KOBAS 1.0 supports the FDR correction method QVALUE (39). In KOBAS 2.0, we add two more popular FDR correction methods: Benjamini-Hochberg (40) and Benjamini-Yekutieli (41).
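For orientation, the kind of enrichment test and FDR correction described above can be sketched in a few lines of Python. This is a minimal illustration using a hypergeometric test and the Benjamini-Hochberg procedure, not the KOBAS 2.0 code; the pathway names and counts are placeholders.

```python
# Minimal sketch of pathway enrichment testing with FDR correction.
# Not the KOBAS 2.0 implementation; all counts are illustrative placeholders.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def pathway_pvalue(N, K, n, k):
    # P(X >= k) for X ~ Hypergeometric(population N, K pathway genes, n drawn)
    return hypergeom.sf(k - 1, N, K, n)

# Hypothetical pathways: (genes in pathway in background, genes in pathway in input)
pathways = {
    "pathway_A": (200, 15),
    "pathway_B": (80, 3),
    "pathway_C": (500, 12),
}
N, n = 20000, 300  # background size and input-list size (placeholders)

names = list(pathways)
pvals = [pathway_pvalue(N, K, n, k) for K, k in pathways.values()]

# Benjamini-Hochberg correction across all tested pathways
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, q, sig in zip(names, pvals, qvals, reject):
    print(f"{name}\tP={p:.3g}\tFDR={q:.3g}\tsignificant={sig}")
```

Swapping `method="fdr_bh"` for `"fdr_by"` would give the Benjamini-Yekutieli correction mentioned above.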
In BWA-SW, we say two alignments are distinct if the length of the overlapping region on the query is less than half of the length of the shorter query segment. We aim to find a set of distinct alignments that maximizes the sum of the scores of the alignments in the set. This problem can be solved by dynamic programming, but because in our case a read is usually aligned in its entirety, a greedy approximation works well. In the practical implementation, we sort the local alignments by their alignment scores, scan the sorted list from the best one, and keep an alignment if it is distinct from all the kept alignments with larger scores; if alignment a2 is rejected because it is not distinct from a1, we regard a2 as a suboptimal alignment to a1 and use this information to approximate the mapping quality (
Because we only retain alignments that largely do not overlap on the query sequence, we might as well discard seeds that do not contribute to the final alignments. Such seeds can be detected with another heuristic before the Smith–Waterman extension, and the time spent on unnecessary extensions can thus be saved. To identify these seeds, we chain seeds that are contained in a band (default band width 50 bp). If, on the query sequence, a short chain is fully contained in a long chain and the number of seeds in the short chain is below one-tenth of the number of seeds in the long chain, we discard all the seeds in the short chain, based on the observation that the short chain rarely leads to a better alignment than the long chain in this case. Unlike the Z-best strategy, this heuristic does not have a noticeable effect on alignment accuracy. On 1000 simulated 10 kb queries, it halves the running time with no reduction in accuracy.
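As a rough illustration of the greedy selection described two paragraphs above, the sketch below keeps an alignment only if it is distinct from every higher-scoring alignment already kept. It is a minimal reconstruction under the stated definition of "distinct", not the BWA-SW source; the field names are invented.

```python
# Greedy selection of "distinct" local alignments on the query sequence.
# Sketch only; data structures and names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Aln:
    q_start: int   # query start (0-based, inclusive)
    q_end: int     # query end (exclusive)
    score: int

def is_distinct(a, b):
    # Overlap on the query must be less than half the shorter segment length.
    overlap = min(a.q_end, b.q_end) - max(a.q_start, b.q_start)
    shorter = min(a.q_end - a.q_start, b.q_end - b.q_start)
    return overlap < shorter / 2.0

def select_distinct(alignments):
    kept, suboptimal_of = [], {}
    for a in sorted(alignments, key=lambda x: x.score, reverse=True):
        rival = next((k for k in kept if not is_distinct(a, k)), None)
        if rival is None:
            kept.append(a)
        else:
            # a is treated as a suboptimal alignment to the kept rival;
            # this pairing can feed an approximation of mapping quality.
            suboptimal_of[id(a)] = rival
    return kept, suboptimal_of
```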
Large amounts of missing data greatly increase the space of possible outcomes, and most phasing algorithms are not able to explore this space efficiently enough to be useful for inference in large studies. A standard way to overcome this problem with HMMs [6],[11] is to make the approximation that, conditional on the reference panel, each study individual's multilocus genotype is independent of the genotypes for the rest of the study sample. This transforms the inference problem into a separate imputation step for each study individual, with each step involving only a small proportion of missing data since the reference panel is assumed to be missing few, if any, genotypes.
In motivating our new imputation methodology, we pointed out that modeling the study individuals independently, rather than jointly, sacrifices phasing accuracy at typed SNPs; this led us to propose a hybrid approach that models the study haplotypes jointly at typed SNPs but independently at untyped SNPs. We made the latter choice partly to improve efficiency – it is fast to impute untyped alleles independently for different haplotypes, which allows us to use all of the information in large reference panels – but also because of the intuition that there is little to be gained from jointly modeling the study sample at untyped SNPs.
By contrast, the recently published BEAGLE [13] imputation approach fits a full joint model to all individuals at all SNPs. To overcome the difficulties caused by the large space of possible genotype configurations, BEAGLE initializes its model using a few ad-hoc burn-in iterations in which genotype imputation is driven primarily by the reference panel. The intuition is that this burn-in period will help the model reach a plausible part of parameter space, which can be used as a starting point for fitting a full joint model.
This alternative modeling strategy raises the question of whether, and to what extent, it is advantageous to model the study sample jointly at untyped SNPs. One argument [20] holds that there is no point in jointly modeling such SNPs because all of the linkage disequilibrium information needed to impute them is contained in the reference panel. A counterargument is that, as with any statistical missing data problem, the “correct” inference approach is to create a joint model of all observed and missing data. We have found that a full joint model may indeed improve accuracy on small, contrived imputation datasets (data not shown), and this leads us to believe that joint modeling could theoretically increase accuracy in more realistic datasets.
However, a more salient question is whether there is any useful information to be gained from jointly modeling untyped SNPs, and whether this information can be obtained with a reasonable amount of computational effort. Most imputation methods, including our new algorithm, implicitly assume that such information is not worth pursuing, whereas BEAGLE assumes that it is. We explore this question further in the sections that follow.
Most recent protocols related to «Lead»
Example 1
A 1 g compressed SAM sheet was formed without embossing. To ensure that Comparative Example 1 had the same compactness as Example 1, meaning that both samples experienced the same compression pressure, the SAM sheets were each placed between two flat metal plates and compressed twice with a 1000 lb load for 10 minutes using a Carver hydraulic compressor (CE, Model 4350). In this way, the void volumes between and within the SAM particles are quite close, if not the same, for Comparative Example 1 and Example 1. The sample was dried in a convection oven at 80° C. for 12 hours before testing.
A 1 g compressed SAM sheet was formed without embossing. The prepared SAM sheet was placed on a flat metal plate and covered with a 1″×1″ patterned metal plate with protruding balls of 250 μm diameter, with the ball side facing downward toward the SAM sheet (
The final 1 g compressed SAM sheet had two-sided embossing. The sample was dried in a convection oven at 80° C. for 12 hours before testing.
The protrusions in this example were ball-shaped, but the protrusions could be any shape. Shapes without sharp corners, such as spheres, may be less damaging to the SAM particles. The depth of the indentations formed by the shapes could be in the range of about 10 μm to 200 μm.
Absorbency Evaluation.
Equal masses of embossed and non-embossed SAM sheet samples were each individually dropped into a 100 mL beaker containing 30 mL of NaCl solution, to which blue dye had been added to improve visualization during testing. The time taken for each SAM sheet to completely absorb the saline solution, and the course of absorption, were monitored and compared.
The testing process for both samples to compare their absorbency properties is shown in
Compressing SAM particles into sheets generally leads to lower intake rates and longer intake times than uncompressed SAM particles, owing to the loss of free volume within the SAM molecular structure and of surface area. However, the results demonstrated herein show that surface embossing can increase the surface area of compressed SAM, thereby increasing the absorbency intake rate relative to compressed SAM without embossing.
Flexible Absorbent Binder Film.
FAB is a proprietary crosslinked acrylic acid copolymer that develops absorbency properties after it is applied to a substrate and dried. FAB itself can also be cast into a film and dried, but the resulting 100% FAB film is quite rigid and stiff. The chemistry of FAB is similar to standard SAPs except that the latent crosslinking component allows it to be applied onto the substrate of choice as an aqueous solution and then converted into a superabsorbent coating upon drying. When the water is removed, the crosslinker molecules in the polymeric chain come into contact with each other and covalently bond to form a crosslinked absorbent.
In the examples of this disclosure, FAB was coated on a nonwoven substrate to provide a single layer with both intake and retention functions, as well as flexibility. FAB solution with 32% (wt/wt) solids was coated on a nonwoven substrate through a slot die with two rolls. After coating, the coated film was cured by drying in a convection oven at 55° C. for 20-30 minutes, or until the film was dry, to remove the water.
Compression embossing was applied to FAB films; two-sided embossing was applied to a FAB film. The absorbent properties were characterized and compared through saline absorption testing. The FAB film with an embossed pattern showed a 91.67% faster intake rate than the FAB film without an embossed pattern.
Example 2
The next experiments asked whether inhibition of the same set of FXN-RFs would also upregulate transcription of the TRE-FXN gene in post-mitotic neurons, which is the cell type most relevant to FA. To derive post-mitotic FA neurons, FA(GM23404) iPSCs were stably transduced with lentiviral vectors over-expressing Neurogenin-1 and Neurogenin-2 to drive neuronal differentiation, according to published methods (Busskamp et al. 2014, Mol Syst Biol 10:760); for convenience, these cells are referred to herein as FA neurons. Neuronal differentiation was assessed and confirmed by staining with the neuronal marker TUJ1 (
It was next determined whether shRNA-mediated inhibition of FXN-RFs could ameliorate two of the characteristic mitochondrial defects of FA neurons: (1) increased levels of reactive oxygen species (ROS), and (2) decreased oxygen consumption. To assay for mitochondrial dysfunction, FA neurons expressing an FXN-RF shRNA or treated with a small-molecule FXN-RF inhibitor were stained with MitoSOX (an indicator of mitochondrial superoxide levels, i.e., ROS-generating mitochondria), followed by FACS analysis.
Mitochondrial dysfunction results in reduced levels of several mitochondrial Fe-S proteins, such as aconitase 2 (ACO2), iron-sulfur cluster assembly enzyme (ISCU) and NADH:ubiquinone oxidoreductase core subunit S3 (NDUFS3), and lipoic acid-containing proteins, such as pyruvate dehydrogenase (PDH) and 2-oxoglutarate dehydrogenase (OGDH), as well as elevated levels of mitochondrial superoxide dismutase (SOD2) (Urrutia et al., (2014) Front Pharmacol 5:38). Immunoblot analysis is performed using methods known in the art to determine whether treatment with an FXN-RF shRNA or a small-molecule FXN-RF inhibitor restores the normal levels of these mitochondrial proteins in FA neurons.
Example 2
PAO1, the parent strain of PGN5, is a wild-type P. aeruginosa strain that produces relatively small amounts of alginate and exhibits a non-mucoid phenotype; thus, PGN5 is also non-mucoid when cultured (
To examine whether the alginate produced by PGN5+mucE was similar in composition to alginate produced by VE2, HPLC was performed to compare the M and G content of alginate produced by each strain. The chromatograms obtained from alginate prepared from VE2 and PGN5+mucE were identical (
Example 8
To evaluate which lipid composition within the dendrimer nanoparticles led to improved siRNA delivery, the identity and concentration of different phospholipids and PEG-lipids were varied. Three different cell lines (HeLa-Luc, A549-Luc, and MDA-MB231-Luc) were used. Cells were seeded at 10K cells per well and incubated for 24 hours. The readout was determined 24 hours post transfection. In the nanoparticles, DSPC and DOPE were used as phospholipids and PEG-DSPE, PEG-DMG, and PEG-DHD were used as PEG-lipids. The compositions contained a lipid (or dendrimer):cholesterol:phospholipid:PEG-lipid mole ratio of 50:38:10:2. The mole ratio of lipid/dendrimer to siRNA was 100:1, with a 100 ng dose used. The RiboGreen, Cell-titer Fluor, and OneGlo assays were used to determine the effectiveness of these compositions. Results show the relative luciferase activity in HeLa-Luc cells (
Further experiments were run to determine which phospholipids showed increased delivery of siRNA molecules. A HeLa-Luc cell line was used with 10K cells per well, a 24 hour incubation, and readout 24 hours post transfection. The compositions contained either DOPE or DOPC as the phospholipid with PEG-DHD as the PEG-lipid. The lipid (or dendrimer):cholesterol:phospholipid:PEG-lipid mole ratio was 50:38:10:2, with a dendrimer (or lipid) to siRNA mole ratio of 200:1. These compositions were tested at a 50 ng dose using the Cell-titer Fluor and OneGlo assays. These results are shown in
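As a rough worked example of how the stated mole ratios translate into component amounts for a single dose, the short calculation below assumes a nominal siRNA molecular weight of about 13,300 g/mol (typical for a ~21-bp duplex; this value is not given in the example) and uses the 200:1 dendrimer (or lipid):siRNA ratio and 50 ng dose from the preceding paragraph.

```python
# Back-of-the-envelope sketch: scale the 50:38:10:2 component mole ratio to a
# 50 ng siRNA dose at a 200:1 dendrimer(or lipid):siRNA mole ratio.
# The siRNA molecular weight is an assumed typical value, not from the example.
SIRNA_MW = 13_300.0                   # g/mol, assumed for a ~21-bp duplex
dose_g = 50e-9                        # 50 ng siRNA dose
sirna_nmol = dose_g / SIRNA_MW * 1e9  # ~0.0038 nmol

dendrimer_nmol = 200 * sirna_nmol     # 200:1 dendrimer(or lipid):siRNA
parts = {"dendrimer/lipid": 50, "cholesterol": 38, "phospholipid": 10, "PEG-lipid": 2}
for name, p in parts.items():
    print(f"{name}: {dendrimer_nmol * p / parts['dendrimer/lipid']:.4f} nmol")
```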
Example 15
In a 15th example, reference is made to
The analysis unit holds in its memory models of these various signals, which are the result of processing employing artificial intelligence as described hereinbefore. The analysis unit processes the incoming streams against these models to produce an analysis report.
It was found that the accelerometer is particularly suitable for measuring movements of the head, whereas the gyroscope, which measures rotational movements, is particularly suitable for measuring rotations of the mandible. Thus cerebral activation that leads to rotation of the mandible without the head changing position can be detected by the gyroscope. On the other hand, an IMM type movement will be detected by the accelerometer, particularly if the head moves at the same time. An RMM type movement will be detected by the gyroscope, which is highly sensitive thereto.
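Purely as an illustrative sketch of the detection logic described above (a gyroscope-dominant signal indicating mandible rotation, an accelerometer-dominant signal indicating head movement), a simple threshold rule over a short window of IMU data might look as follows. The thresholds and labels are hypothetical placeholders, not parameters of the described system.

```python
# Illustrative only: threshold rule consistent with the observation that head
# movements register mainly on the accelerometer while mandible rotations
# register mainly on the gyroscope. Thresholds are hypothetical placeholders.
import numpy as np

def classify_window(accel, gyro, accel_thr=0.05, gyro_thr=0.5):
    """accel: (N, 3) array in m/s^2 with gravity removed; gyro: (N, 3) array in rad/s."""
    accel_energy = float(np.mean(np.linalg.norm(accel, axis=1)))
    gyro_energy = float(np.mean(np.linalg.norm(gyro, axis=1)))
    if gyro_energy > gyro_thr and accel_energy <= accel_thr:
        return "mandible rotation (gyroscope-dominant, e.g. RMM-like)"
    if accel_energy > accel_thr:
        return "head movement (accelerometer-dominant, e.g. IMM-like)"
    return "no significant movement"
```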
Top products related to «Lead»
More about "Lead"
Exposure to lead can have serious health consequences, particularly for children, leading to developmental and neurological issues.
Sources of lead exposure include paint, dust, soil, and drinking water.
Identifying and mitigating lead exposure is crucial for public health and safety.
Researchers and healthcare professionals must carefully consider the risks and benefits of lead-containing products and develop strategies to minimize lead exposure in the environment and the population.
Techniques such as electron microscopy, using instruments such as the JEM-1400 and JEM-1400Plus microscopes and the EM UC7 ultramicrotome, can help analyze lead-containing materials and assess their impact.
Embedding samples in resin, such as Embed 812, and using fixatives such as glutaraldehyde can also aid in the study of lead-related issues.
Optimizing research and product development processes is also important.
AI-driven platforms like PubCompare.ai can enhance research accuracy by helping to locate relevant protocols from literature, pre-prints, and patents, and enabling AI-driven comparisons to identify the best protocols and products for specific needs.
This can streamline the research process and optimize outcomes, leading to more effective strategies for mitigating lead exposure and improving public health.