Extinction, Psychological
Psychological extinction, the gradual weakening of a conditioned response when the reinforcer is withheld, is a fundamental principle in the field of learning and behavior modification, with applications in various areas of psychology and neuroscience.
The study of psychological extinction helps researchers understand the mechanisms of memory, emotion regulation, and the treatment of anxiety disorders.
PubCompare.ai's AI-driven tools can optimize your research protocols by quickly identifying the best methods from published literature, preprints, and patents, while leveraging comparisons to maximize the impact of your extinction studies.
Streamline your workflow and advance your research with PubCompare.ai's cutting-edge tools.
Most cited protocols related to «Extinction, Psychological»
I analyzed each of the simulated datasets described above (500 datasets under each of 6 distinct models of diversification) with MEDUSA, using the implementation available in the Geiger v1.99-3 package [50] for the R programming and statistical environment. Model selection used the default AICc criterion. I summarized the results of MEDUSA analyses in two ways. First, for each simulation scenario, I tabulated the distribution of “best fit” models, to assess the fraction of simulations for which MEDUSA was able to correctly estimate the number of processes in the generating model. Second, I used the same summary statistics described above for BAMM (e.g., proportional error) to compare branch-specific estimates of speciation rates under MEDUSA to the true rates under the generating model.
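The first summary (tabulating the distribution of best-fit models) reduces to counting how often the AICc-selected model matches the number of processes in the generating model. A minimal Python sketch of that bookkeeping (the analyses themselves were run in R; the helper name here is illustrative):

```python
from collections import Counter

def best_fit_summary(best_models, true_n_processes):
    """Tabulate the distribution of best-fit model sizes across
    simulations, and the fraction that recovered the generating
    model's process count."""
    counts = Counter(best_models)
    frac_correct = counts[true_n_processes] / len(best_models)
    return dict(counts), frac_correct
```

For example, if the AICc-best models across five simulated datasets had 2, 2, 3, 2, and 1 processes and the generating model had 2, the recovered fraction is 0.6.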
The coalescent [56, 57] is a commonly used prior for population-level data and has been extended to include various forms of demographic functions [58, 59], sub-divided populations [60], and other complexities. Traditional speciation models such as the Yule process [61] and various birth–death models [62, 63] can also provide useful priors for species-level data. Such models generally have a number of hyperparameters (for example, effective population size, growth rate, or speciation and extinction rates), which, under a Bayesian framework, can be sampled to provide a posterior distribution of these potentially interesting biological quantities.
In some cases, the choice of prior on the phylogenetic tree can exert a strong influence on inferences made from a given dataset [64]. The sensitivity of inference results to the prior chosen will be largely dependent on the data analyzed, and few general recommendations can be made. It is, however, good practice to perform the MCMC analysis without any data in order to sample wholly from the prior distribution. This distribution can be compared to the posterior distribution for parameters of interest in order to examine the relative influence of the data and the prior (
η is a function of the factors that relate 24-hr dry aerosol mass to satellite observations of ambient AOD: aerosol size, aerosol type, diurnal variation, relative humidity, and the vertical structure of aerosol extinction (van Donkelaar et al. 2006). Following the methods of Liu et al. (2004, 2007) and van Donkelaar et al. (2006), we used a global 3-D CTM [GEOS-Chem; geos-chem.org; see Supplemental Material (doi:10.1289/ehp.0901623)] to calculate the daily global distribution of η.
The GEOS-Chem model solves for the temporal and spatial evolution of aerosol (sulfate, nitrate, ammonium, carbonaceous, mineral dust, and sea salt) and gaseous compounds using meteorological data sets, emission inventories, and equations that represent the physics and chemistry of atmospheric constituents. The model calculates the global 3-D distribution of aerosol mass and AOD with a transport time step of 15 min. We applied the modeled relationship between aerosol mass and relative humidity for each aerosol type to calculate PM2.5 for relative humidity values that correspond to surface measurement standards [European Committee for Standardization (CEN) 1998; U.S. Environmental Protection Agency 1997] (35% for the United States and Canada; 50% for Europe). We calculated daily values of η as the ratio of 24-hr ground-level PM2.5 for a relative humidity of 35% (U.S. and Canadian surface measurement gravimetric analysis standard) and of 50% (European surface measurement standard) to total-column AOD at ambient relative humidity. We averaged the AOD between 1000 hours and 1200 hours local solar time, which corresponded to the Terra overpass period. We interpolated values of η from 2° × 2.5°, the resolution of the GEOS-Chem simulation, to 0.1° × 0.1° for application to satellite AOD values.
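The core of the η step is a unit-consistent ratio: for each day and grid cell, η = (modelled 24-hr ground-level PM2.5 at the measurement-standard relative humidity) / (modelled ambient total-column AOD), and the satellite estimate is then η × satellite AOD. A minimal sketch of that scaling (names are illustrative; the real calculation runs on GEOS-Chem fields with regridding from 2° × 2.5° to 0.1° × 0.1°, which is omitted here):

```python
def estimate_surface_pm25(aod_satellite, pm25_model, aod_model):
    """Scale satellite AOD by the modelled ratio eta = PM2.5 / AOD
    (both from the CTM, at matched time and location) to estimate
    ground-level PM2.5 in ug/m^3."""
    eta = pm25_model / aod_model
    return eta * aod_satellite
```

For example, a modelled PM2.5 of 15 μg/m³ against a modelled AOD of 0.25 gives η = 60, so a satellite AOD of 0.2 maps to an estimated 12 μg/m³.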
We compared the original MODIS and MISR total-column AOD with coincident ground-based measurements of daily mean PM2.5. Canadian sites are part of the National Air Pollution Surveillance Network (NAPS) and are maintained by Environment Canada (
I also considered a model where a pure-birth diversification process shifts to an exponential change process at some point in time (model exp2; 2 processes). Finally, I considered four variants of diversity-dependent multi-process models. In each case, I assumed that a pure-birth process at the root of the tree underwent multiple (1, 2, 3, or 4) shifts to independent and decoupled diversity-dependent speciation-extinction processes (models DD2, DD3, DD4, DD5). I conducted 500 simulations per scenario.
Each multiprocess simulation was conducted by first simulating a pure-birth phylogeny for 100 time units with λ = 0.032. I then randomly chose a time Ts on the interval (40, 95) for the occurrence of a rate shift. A shift was then assigned randomly to one of the lineages that existed at time Ts. I then sampled parameters for the new process (see below). The tree was then broken at the shift point, and a new subtree was simulated forward in time from the shift point under the new process parameters. For trees with more than two processes, this procedure was repeated until the target number of processes had been added. For the exp2 model, this consisted of sampling λ, z, and μ for the shift process uniformly on the following intervals: λ, (0.05, 0.50); z, (−0.10, 0.05); μ, (0.0, 0.45). Thus, the addition of an exponential change process could have resulted in either an increase in rates through time (if z > 0) or a decrease (if z < 0). For all simulations, I required that subtrees contained at least 25 and fewer than 1000 terminal taxa; any simulations failing to meet this criterion were automatically rejected.
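The loop above can be summarized as: sample a shift time Ts ~ Uniform(40, 95), pick a lineage alive at Ts, sample the new process's parameters, graft a forward simulation onto the shift point, and reject subtrees outside the 25–999 taxon window. A sketch of the sampling and rejection pieces (helper names are mine; the full tree-grafting machinery was implemented in C++ and is not reproduced here):

```python
import random

def sample_exp2_shift(rng):
    """Draw exp2 shift-process parameters on the intervals given in
    the text: shift time, initial speciation rate, rate-change
    exponent z (z > 0 gives increasing rates through time, z < 0
    decreasing), and extinction rate."""
    return {
        "t_shift": rng.uniform(40.0, 95.0),
        "lam": rng.uniform(0.05, 0.50),
        "z": rng.uniform(-0.10, 0.05),
        "mu": rng.uniform(0.0, 0.45),
    }

def subtree_ok(n_taxa):
    """Rejection rule: at least 25 and fewer than 1000 terminal taxa."""
    return 25 <= n_taxa < 1000
```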
For the diversity-dependent models, diversification dynamics followed a linear diversity-dependent model [48]. The rate of speciation was thus a function of the number of coeval lineages in the subclade, λt = λ0(1 − nt/K), where K is the clade-specific carrying capacity and nt is the number of lineages in the subclade at time t. Note that the occurrence of a shift event results in a decoupling of dynamics from the parent process. To parameterize the diversity-dependent processes, I sampled λ0 from a uniform (0.05, 0.40) distribution, K from a uniform (25, 250) distribution, and μ from a uniform (0, 0.05) distribution. For the constant-rate birth-death simulations, I sampled λ from a uniform (0, 0.1) distribution and chose a corresponding relative extinction rate (μ/λ) from a uniform (0, 0.99) distribution.
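Under linear diversity dependence the speciation rate declines from λ0 toward zero as the subclade approaches its carrying capacity. A one-line sketch (the floor at zero for clades that overshoot K is my guard, not stated in the text):

```python
def dd_speciation_rate(lam0, n_t, K):
    """Linear diversity-dependent speciation rate,
    lambda(n_t) = lam0 * (1 - n_t / K), floored at zero."""
    return max(lam0 * (1.0 - n_t / K), 0.0)
```

With λ0 = 0.4 and K = 100, the rate is 0.4 for an empty clade, 0.2 at 50 lineages, and 0 at or above the carrying capacity.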
Each of the 500 simulations for each of 6 simulation scenarios was thus conducted under a potentially unique speciation-extinction parameterization. The number of taxa in each simulated tree also varied among datasets. I recorded the mean rate of speciation and extinction across each branch in each simulated tree. All simulations were conducted in C++; simulated trees are available through the Dryad data repository (doi:10.5061/dryad.hn1vn).
I analyzed each of the 3000 simulated datasets using BAMM with 3 million generations of MCMC sampling. I discarded the first half of samples from each simulation of the posterior as “burn-in” and estimated the overall “best model” as the model that was sampled most frequently by the Markov chain. I computed the mean of the posterior distribution of speciation and extinction rates on each branch for each tree. I then used OLS regression to assess the relationship between branch-specific rate estimates obtained using BAMM and the true underlying evolutionary rates. As an additional estimate of bias, I computed the proportional error [37] in the estimated rates as a function of the true rates. This metric is computed as the weighted average of proportional rate differences across all N branches in the phylogeny, where rEST and rTRUE are the estimated and true values of rates along a particular branch. A value of 2 would imply that estimated rates are, on average, equal to twice the true rate in the generating model.
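The formula for this metric did not survive extraction; consistent with the description (a weighted average over the N branches, where a value of 2 means estimates average twice the truth), one natural reading weights the per-branch ratio rEST/rTRUE by branch length. A sketch under that assumption:

```python
def proportional_error(r_est, r_true, branch_lengths):
    """Weighted mean of per-branch estimated/true rate ratios; a
    value of 2 means estimates average twice the true rates.
    Branch-length weighting is an assumption of this sketch, not
    confirmed by the excerpt."""
    total = sum(branch_lengths)
    return sum(b * (e / t)
               for b, e, t in zip(branch_lengths, r_est, r_true)) / total
```

Perfect estimation gives 1.0 regardless of the weights; uniformly doubled estimates give 2.0.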
Most recent protocols related to «Extinction, Psychological»
Example 1
This example describes an exemplary nanostructure (i.e. nanocomposite tecton) and formation of a material using the nanostructure.
A nanocomposite tecton consists of a nanoparticle grafted with polymer chains that terminate in functional groups capable of supramolecular binding, where supramolecular interactions between polymers grafted to different particles enable programmable bonding that drives particle assembly (
Once synthesized, nanocomposite tectons functionalized with either diaminopyridine-polystyrene or thymine-polystyrene were readily dispersed in common organic solvents such as tetrahydrofuran, chloroform, toluene, and N,N′-dimethylformamide, with a typical plasmonic resonance extinction peak at 530-540 nm (
A key feature of the nanocomposite tectons is that the sizes of their particle and polymer components can be easily modified independent of the supramolecular binding group's molecular structure. However, because this assembly process is driven via the collective interaction of multiple diaminopyridine and thymine-terminated polymer chains, alterations that affect the absolute number and relative density of diaminopyridine or thymine groups on the nanocomposite tecton surface impact the net thermodynamic stability of the assemblies. In other words, while all constructs should be thermally reversible, the temperature range over which particle assembly and disassembly occurs should be affected by these variables. To better understand how differences in nanocomposite tecton composition impact the assembly process, nanostructures were synthesized using different nanoparticle core diameters (10-40 nm) and polymer spacer molecular weights (3.7-11.0 kDa), and allowed to fully assemble at room temperature (˜22° C.) (
From these data, two clear trends can be observed. First, when holding polymer molecular weight constant, Tm increases with increasing particle size (
All nanocomposite tectons possess similar polymer grafting densities (i.e. equivalent areal density of polymer chains at the inorganic nanoparticle surface,
Conversely, for a fixed inorganic particle diameter (and thus constant number of polymer chains per particle), increasing polymer length decreases the areal density of diaminopyridine and thymine groups at the nanocomposite tecton periphery due to the “splaying” of polymers as they extend off of the particle surface, thereby decreasing Tm in a manner consistent with the trend in
Importantly, because the nanocomposite tecton assembly process is based on dynamic, reversible supramolecular binding, it should be possible to drive the system to an ordered equilibrium state where the maximum number of binding events can occur. The particle cores and polymer ligands are polydisperse (
To test this hypothesis, multiple sets of assembled nanocomposite tectons were thermally annealed at a temperature just below their Tm, allowing particles to reorganize via a series of binding and unbinding events until they reached the thermodynamically most stable conformation. The resulting structures were analyzed with small angle X-ray scattering, revealing the formation of highly ordered mesoscale structures where the nanoparticles were arranged in body-centered cubic superlattices (
Example 8
In order to elucidate the particle size dependence of the optical properties of nanomaterials, simulations have been performed using two models: the effective medium model (EMM) and finite element analysis + geometrical optics (FEA+GO). The simple EMM approach assumes that the nanoparticle has a single refractive index (n) and extinction coefficient (k), and assumes that the particles are uniformly distributed throughout a low-refractive-index medium (
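The excerpt does not name the EMM closure used; Maxwell-Garnett is one common effective-medium mixing rule for spherical particles dispersed uniformly in a host, shown here purely as an illustrative stand-in:

```python
def maxwell_garnett(eps_particle, eps_medium, f):
    """Maxwell-Garnett effective permittivity for spherical
    inclusions at volume fraction f in a host medium. The effective
    complex refractive index follows as n + ik = sqrt(eps_eff).
    This closure is an assumption, not necessarily the EMM the
    source used."""
    ep, em = eps_particle, eps_medium
    return em * (ep + 2 * em + 2 * f * (ep - em)) / (ep + 2 * em - f * (ep - em))
```

The rule interpolates sensibly between its limits: f = 0 returns the host permittivity and f = 1 the particle permittivity.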
Example 4
Antibody solutions containing romosozumab PARG (SEQ ID NO: 8) C-terminal variant or wild-type romosozumab are measured using a cone and plate. The solutions are concentrated up to 120 mg/mL according to approximate volume depletion, and final concentrations are determined (±10%) using the protein's absorbance at 280 nm (after dilution to end up within 0.1-1 absorbance units (AU)) and a protein-specific extinction coefficient. Viscosity analysis is performed on a Brookfield LV-DVIII cone and plate instrument (Brookfield Engineering, Middleboro, MA, USA) using a CP-40 spindle and sample cup or an ARES-G2 rheometer (TA Instruments, New Castle, DE, USA) using a TA Smart Swap 2 degree cone/plate spindle. All measurements are performed at 25° C. and controlled by a water bath attached to the sample cup. Multiple viscosity measurements are collected manually within a defined torque range (10-90%) by increasing the RPM of the spindle. Measurements are averaged in order to report one viscosity value per sample to simplify the resulting comparison chart.
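The concentration determination described (A280 after dilution into the 0.1-1 AU range, with a protein-specific extinction coefficient) is a direct Beer-Lambert calculation. A sketch with illustrative parameter names (the path length and dilution values below are examples, not from the text):

```python
def protein_conc_mg_per_ml(a280, ext_ml_per_mg_cm, dilution_factor,
                           path_cm=1.0):
    """Beer-Lambert: concentration = A / (epsilon * path length),
    multiplied back by the dilution applied to land within the
    0.1-1 AU measurement range."""
    return a280 / (ext_ml_per_mg_cm * path_cm) * dilution_factor
```

For instance, a 300-fold diluted sample reading A280 = 0.6 with ε = 1.5 mL mg⁻¹ cm⁻¹ corresponds to 120 mg/mL in the concentrate.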
Enzyme activity was determined spectrophotometrically by measuring the hydrolysis of 4-nitrophenyl butyrate (pNPB) as a substrate [26]. The pNPB dissolved in acetonitrile (50 mM) was added to 200 μL of Na phosphate buffer (100 mM, pH 7.5) with 0.5% (v/v) Triton X-100 to give a final pNPB concentration of 0.5 mM; 10 μL of crude extract was used as the enzyme source. The enzymatic reaction was carried out at 25 °C for 15 min, and the release of pNP was measured at 405 nm. Enzyme activity was calculated using the extinction coefficient of pNP, 18.5 mM⁻¹ cm⁻¹. One unit is defined as the amount of enzyme that catalyzes the conversion of one micromole of substrate per minute under the specified conditions of the assay method, normalized by grams of fermented substrate (U/g). The statistical significance of differences between groups was analyzed by Student’s t-test.
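The unit calculation in the assay above follows Beer-Lambert: with the extinction coefficient in mM⁻¹ cm⁻¹, and since 1 mM × 1 mL = 1 μmol, the pNP released is ΔA405 / (ε × path) × reaction volume (mL). A sketch (the reaction volume and the 1 cm path length are parameters here because the excerpt does not state them):

```python
def esterase_units_per_gram(delta_a405, vol_ml, minutes, grams,
                            ext_mM=18.5, path_cm=1.0):
    """One unit = 1 umol pNP released per minute, normalized per
    gram of fermented substrate. umol = dA / (eps * path) * vol_mL,
    valid because mM * mL = umol."""
    umol_released = delta_a405 / (ext_mM * path_cm) * vol_ml
    return umol_released / minutes / grams
```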
For the single-layer single-pixel diffractive model used for the experimental demonstration (Fig.
To overcome potential mechanical misalignments during the experimental testing, the network was “vaccinated” with deliberate random displacements during the training stage [53]. Specifically, a random lateral displacement (Δx, Δy) was added to the diffractive layer, where Δx and Δy were randomly and independently sampled, i.e., Δx and Δy are not necessarily equal to each other in each misalignment step.
A random axial displacement was also added to the axial separations between any two consecutive planes. Accordingly, the axial distance between any two consecutive planes was set to 20 mm + Δz, where Δz was randomly sampled in each step.
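The vaccination scheme amounts to drawing fresh lateral (Δx, Δy) and axial (Δz) displacements at every training step. A sketch assuming uniform sampling ranges, which the excerpt does not specify:

```python
import random

def sample_misalignment(rng, lateral_max_mm, axial_max_mm):
    """Independent random displacements for one training step:
    lateral (dx, dy) applied to the diffractive layer and axial dz
    added to the nominal 20 mm plane separation. Uniform, symmetric
    ranges are an assumption of this sketch."""
    dx = rng.uniform(-lateral_max_mm, lateral_max_mm)
    dy = rng.uniform(-lateral_max_mm, lateral_max_mm)
    dz = rng.uniform(-axial_max_mm, axial_max_mm)
    return dx, dy, dz
```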
In our experiments, we also measured the power spectrum of the pulsed terahertz source with only the input and output apertures present, which served as an experimental reference spectrum. Based on this, the experimentally measured power spectrum at the output single-pixel aperture of a diffractive network can be written as:
The binary objects and apertures were all 3D-printed (Form 3B, Formlabs) and coated with aluminum foil to define the transmission areas. Apertures, objects, the diffuser, and the diffractive layer were assembled using a 3D-printed holder (Objet30 Pro, Stratasys). The setup of the THz-TDS system is illustrated in Fig.
Top products related to «Extinction, Psychological»
More about "Extinction, Psychological"
This process, also known as Extinction Learning or Response Extinction, is a fundamental principle in the field of learning and behavior modification, with applications in various areas of psychology and neuroscience.
The study of psychological extinction helps researchers understand the mechanisms of memory, emotion regulation, and the treatment of anxiety disorders, phobias, and addiction.
By leveraging extinction-based therapies, clinicians can help patients unlearn maladaptive behaviors and responses, leading to improved mental health and well-being.
Researchers investigating extinction processes may utilize a range of analytical tools and techniques, such as the NanoDrop 2000 and NanoDrop 1000 spectrophotometers for biomolecular quantification, the UV-1800 spectrophotometer for absorbance measurements, and the Cary Eclipse Fluorescence Spectrophotometer for fluorescence-based assays.
These instruments, combined with powerful data analysis software like Prism 6, can provide valuable insights into the underlying neurobiological and biochemical mechanisms of extinction.
PubCompare.ai's AI-driven tools can optimize your extinction research protocols by quickly identifying the best methods from published literature, preprints, and patents, while leveraging comparisons to maximize the impact of your studies.
Streamline your workflow and advance your research with PubCompare.ai's cutting-edge tools, empowering you to uncover new discoveries in the field of psychological extinction.