We also tested the flexible sidechain method for docking covalent ligands. In this case, a coordinate file is created with the ligand attached to the appropriate sidechain of the protein, by superimposing ideal ligand coordinates onto the corresponding bond in the protein. This sidechain–ligand unit is then treated as flexible during the docking simulation, and its torsional degrees of freedom are searched to optimize the interaction with the rest of the protein.
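The geometric core of that setup step, superimposing the ligand's attachment bond onto the corresponding protein bond, can be sketched with plain NumPy. The helper below is hypothetical (the function name, inputs, and the use of Rodrigues' rotation formula are our own illustration, not the docking program's actual routine); it rigidly places idealized ligand coordinates so that a chosen ligand bond overlaps a target sidechain bond.

```python
import numpy as np

def overlap_ligand_on_sidechain(lig_xyz, lig_bond, target_bond_xyz):
    """Rigidly move ligand coordinates so that the ligand bond (i, j)
    coincides with a target bond given by two points in the protein
    sidechain. Hypothetical helper, for illustration only."""
    i, j = lig_bond
    a = lig_xyz[j] - lig_xyz[i]
    b = target_bond_xyz[1] - target_bond_xyz[0]
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):                      # anti-parallel: rotate 180 degrees
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        rot = 2.0 * np.outer(axis, axis) - np.eye(3)
    else:                                        # Rodrigues' rotation formula
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        rot = np.eye(3) + vx + vx @ vx / (1.0 + c)
    # rotate about the ligand attachment atom, then translate it onto the sidechain atom
    return (lig_xyz - lig_xyz[i]) @ rot.T + target_bond_xyz[0]
```

After this rigid placement, the torsions of the combined sidechain–ligand unit are what the docking search samples.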
Single bond
This type of bond is commonly found in organic compounds and is essential for the stability and structure of many molecules.
Understanding the properties and behavior of single bonds is crucial for research in fields such as chemistry, materials science, and drug design.
PubCompare.ai's AI-driven platform can help researchers locate and compare the best protocols from literature, preprints, and patents related to single bond optimization, enhancing reproducibility and research accuracy.
This intuitive tool allows users to improve their research outcomes by accessing data-driven comparisons and insights on single bond optimization methodologies.
Most cited protocols related to «Single bond»
(i) Structural model: coordinates, displacement parameters, occupancies, atom types, f′ and f′′ for anomalous scatterers (if present).
(ii) Reflection data: pre-processed observed intensities or amplitudes of structure factors and, optionally, experimental phases.
(iii) Parameters determining the refinement protocol.
(iv) Empirical geometry restraints: bond lengths, bond angles, dihedral angles, chiralities and planarities (Engh & Huber, 1991 ▶ ; Grosse-Kunstleve, Afonine et al., 2004 ▶ ).
(v) Optionally, a restraint library file (CIF) may be provided to define the stereochemistry of entities in the input model (for example, ligands) that do not have corresponding restraints in the library included in the PHENIX distribution.
The PDB format (Bernstein et al., 1977 ▶ ; Berman et al., 2000 ▶ ) is the most commonly used format for exchanging macromolecular model data and is therefore available as the input format for refinement in PHENIX.
The experimental data can be provided in many commonly used formats. Multiple input files can be given simultaneously, e.g. a SCALEPACK file (Otwinowski & Minor, 1997 ▶ ) with observed intensities, a CNS (Brünger et al., 1998 ▶ ) file with Rfree flags (Brünger, 1992 ▶ , 1993 ▶ ) and an MTZ file (Winn et al., 2011 ▶ ) with phase information. A comprehensive procedure aims to extract the data most suitable for refinement without user intervention. A preliminary crystallographic data analysis is performed in order to detect and ignore potential reflection outliers (Read, 1999 ▶ ). If twinning (for a review, see Parsons, 2003 ▶ ; Helliwell, 2008 ▶ ) is suspected, a user can run phenix.xtriage (Zwart et al., 2005 ▶ ) to obtain a twin-law operator to be used by the twin-refinement target in phenix.refine.
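As a hedged illustration of this multi-file input (file names are placeholders, and exact command-line syntax can vary between PHENIX versions), a typical invocation simply lists the model, data, and restraint files together:

```
phenix.refine model.pdb native.sca cns_flags.hkl phases.mtz ligand.cif
```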
A number of automatic adjustments to the refinement strategy are considered at this point. These adjustments include automatic choice of refinement target if necessary (based on the number of test reflections, the presence of twinning and the availability of experimental phase information as Hendrickson–Lattman coefficients; Hendrickson & Lattman, 1970 ▶ ), specifying the atomic displacement parameters (isotropic or anisotropic), determining whether or not to add ordered solvent (if the resolution is sufficient), automatic detection or adjustment of user-provided NCS selections, determining the set of atoms that should have their occupancies refined and automatic determination of occupancy constraints for atoms in alternative conformations. When joint refinement is performed using both X-ray and neutron data (Coppens et al., 1981 ▶ ; Wlodawer & Hendrickson, 1981 ▶ , 1982 ▶ ; Adams et al., 2009 ▶ ; Afonine, Mustyakimov et al., 2010 ▶ ), it is important to ensure that the cross-validation reflections are consistent between data sets. This check is performed automatically. If a mismatch is detected, phenix.refine will terminate and offer to generate a new set of flags consistent with both data sets.
The large set of configurable refinement parameters is presented to the user in a novel hierarchical organization.
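That hierarchy is expressed in PHENIX's PHIL parameter syntax. A minimal sketch is shown below; the nesting is representative, but the specific parameter paths and values are typical examples rather than a definitive listing, and they may differ between versions.

```
refinement {
  main {
    number_of_macro_cycles = 3
  }
  refine {
    strategy = individual_sites individual_adp
  }
}
```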
Taxonomy of implementation outcomes
| Implementation outcome | Level of analysis | Theoretical basis | Other terms in literature | Salience by implementation stage | Available measurement |
|---|---|---|---|---|---|
| Acceptability | Individual provider; individual consumer | Rogers: “complexity” and, to a certain extent, “relative advantage” | Satisfaction with various aspects of the innovation (e.g. content, complexity, comfort, delivery, and credibility) | Early for adoption; ongoing for penetration; late for sustainability | Survey; qualitative or semi-structured interviews; administrative data; refused/blank |
| Adoption | Individual provider; organization or setting | RE-AIM: “adoption”; Rogers: “trialability” (particularly for early adopters) | Uptake; utilization; initial implementation; intention to try | Early to mid | Administrative data; observation; qualitative or semi-structured interviews; survey |
| Appropriateness | Individual provider; individual consumer; organization or setting | Rogers: “compatibility” | Perceived fit; relevance; compatibility; suitability; usefulness; practicability | Early (prior to adoption) | Survey; qualitative or semi-structured interviews; focus groups |
| Feasibility | Individual providers; organization or setting | Rogers: “compatibility” and “trialability” | Actual fit or utility; suitability for everyday use; practicability | Early (during adoption) | Survey; administrative data |
| Fidelity | Individual provider | RE-AIM: part of “implementation” | Delivered as intended; adherence; integrity; quality of program delivery | Early to mid | Observation; checklists; self-report |
| Implementation cost | Provider or providing institution | TCU Program Change Model: “costs” and “resources” | Marginal cost; cost-effectiveness; cost-benefit | Early for adoption and feasibility; mid for penetration; late for sustainability | Administrative data |
| Penetration | Organization or setting | RE-AIM: necessary for “reach” | Level of institutionalization? Spread? Service access? | Mid to late | Case audit; checklists |
| Sustainability | Administrators; organization or setting | RE-AIM: “maintenance”; Rogers: “confirmation” | Maintenance; continuation; durability; incorporation; integration; institutionalization; sustained use; routinization | Late | Case audit; semi-structured interviews; questionnaires; checklists |
Adoption is defined as the intention, initial decision, or action to try or employ an innovation or evidence-based practice. Adoption also may be referred to as “uptake.” Our definition is consistent with those proposed by Rabin et al. (2008 (link)) and Rye and Kimberly (2007 (link)). Adoption could be measured from the perspective of provider or organization. Haug et al. (2008 (link)) used pre-post items to capture substance abuse providers’ adoption of evidence-based practices, while Henggeler et al. (2008 (link)) report interview techniques to measure therapists’ adoption of contingency management.
Appropriateness is the perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer; and/or the perceived fit of the innovation to address a particular issue or problem. “Appropriateness” is conceptually similar to “acceptability,” and the literature reflects overlapping and sometimes inconsistent terms when discussing these constructs. We preserve a distinction because a given treatment may be perceived as appropriate but not acceptable, and vice versa. For example, a treatment might be considered a good fit for treating a given condition but its features (for example, a rigid protocol) may render it unacceptable to the provider. The construct “appropriateness” is deemed important for its potential to capture some “pushback” to implementation efforts, as is seen when providers feel a new program is a “stretch” from the mission of the health care setting, or is not consistent with providers’ skill set, role, or job expectations. For example, providers may vary in their perceptions of the appropriateness of programs that co-locate mental health services within primary medical, social service, or school settings. Again, a variety of stakeholders will likely have perceptions about a new treatment’s or program’s appropriateness to a particular service setting, mission, providers, and clientele. These perceptions may be a function of the organization’s culture or climate (Klein and Sorra 1996 (link)). Bartholomew et al. (2007 (link)) describe a rating scale for capturing the appropriateness of training among substance abuse counselors who attended training in dual diagnosis and therapeutic alliance.
Cost (incremental or implementation cost) is defined as the cost impact of an implementation effort. Implementation costs vary according to three components. First, because treatments vary widely in their complexity, the costs of delivering them will also vary. Second, the costs of implementation will vary depending upon the complexity of the particular implementation strategy used. Finally, because treatments are delivered in settings of varying complexity and overheads (ranging from a solo practitioner’s office to a tertiary care facility), the overall costs of delivery will vary by the setting. The true cost of implementing a treatment, therefore, depends upon the costs of the particular intervention, the implementation strategy used, and the location of service delivery.
Much of the work to date has focused on quantifying intervention costs, e.g., identifying the components of a community-based heart health program and attaching costs to these components (Ronckers et al. 2006 (link)). These cost estimations are combined with patient outcomes and used in cost-effectiveness studies (McHugh et al. 2007 (link)). A review of literature on guideline implementation in professions allied to medicine notes that few studies report anything about the costs of guideline implementation (Callum et al. 2010 (link)). Implementing processes that do not require ongoing supervision or consultation, such as computerized medical record systems, may carry lower costs than implementing new psychosocial treatments. Direct measures of implementation cost are essential for studies comparing the costs of implementing alternative treatments and of various implementation strategies.
Feasibility is defined as the extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting (Karsh 2004 (link)). Typically, the concept of feasibility is invoked retrospectively as a potential explanation of an initiative’s success or failure, as reflected in poor recruitment, retention, or participation rates. While feasibility is related to appropriateness, the two constructs are conceptually distinct. For example, a program may be appropriate for a service setting, in that it is compatible with the setting’s mission or service mandate, but may not be feasible due to resource or training requirements. Hides et al. (2007 (link)) tapped aspects of the feasibility of using a screening tool for co-occurring mental health and substance use disorders.
Fidelity is defined as the degree to which an intervention was implemented as it was prescribed in the original protocol or as it was intended by the program developers (Dusenbury et al. 2003 (link); Rabin et al. 2008 (link)). Fidelity has been measured more often than the other implementation outcomes, typically by comparing the original evidence-based intervention and the disseminated/implemented intervention in terms of (1) adherence to the program protocol, (2) dose or amount of program delivered, and (3) quality of program delivery. Fidelity has been the overriding concern of treatment researchers who strive to move their treatments from the clinical lab (efficacy studies) to real-world delivery systems. The literature identifies five implementation fidelity dimensions including adherence, quality of delivery, program component differentiation, exposure to the intervention, and participant responsiveness or involvement (Mihalic 2004 ; Dane and Schneider 1998 (link)). Adherence, or the extent to which the therapy occurred as intended, is frequently examined in psychotherapy process and outcomes research and is distinguished from other potentially pertinent implementation factors such as provider skill or competence (Hogue et al. 1996 ). Fidelity is measured through self-report, ratings, and direct observation and coding of audio- and videotapes of actual encounters, or provider-client/patient interaction. Achieving and measuring fidelity in usual care is beset by a number of challenges (Proctor et al. 2009 (link); Mihalic 2004 ; Schoenwald et al. 2005 (link)). The foremost challenge may be measuring implementation fidelity quickly and efficiently (Hayes 1998 ).
Schoenwald and colleagues (2005 (link)) have developed three 26–45-item measures of adherence at the therapist, supervisor, and consultant levels of implementation (available from the MST Institute).
Penetration is defined as the integration of a practice within a service setting and its subsystems. This definition is similar to Stiles et al.’s (2002 (link)) notion of service penetration and to Rabin et al.’s (2008 (link)) notion of niche saturation. Studying services for persons with severe mental illness, Stiles et al. (2002 (link)) apply the concept of service penetration to service recipients (the number of eligible persons who use a service, divided by the total number of persons eligible for the service). Penetration can also be calculated in terms of the number of providers who deliver a given service or treatment, divided by the total number of providers trained in or expected to deliver the service. From a service system perspective, the construct is also similar to “reach” in the RE-AIM framework (Glasgow 2007b). We found infrequent use of the term penetration in the implementation literature, though studies seemed to tap into this construct with terms such as a given treatment’s level of institutionalization.
Sustainability is defined as the extent to which a newly implemented treatment is maintained or institutionalized within a service setting’s ongoing, stable operations. The literature reflects quite varied uses of the term “sustainability,” but our proposed definition incorporates aspects of those offered by Johnson et al. (2004 (link)), Turner and Sanders (2006 (link)), Glasgow et al. (1999 (link)), Goodman et al. (1993 (link)), and Rabin et al. (2008 (link)). Rabin et al. (2008 (link)) emphasizes the integration of a given program within an organization’s culture through policies and practices, and distinguishes three stages that determine institutionalization: (1) passage (a single event such as transition from temporary to permanent funding), (2) cycle or routine (i.e., repetitive reinforcement of the importance of the evidence-based intervention through including it into organizational or community procedures and behaviors, such as the annual budget and evaluation criteria), and (3) niche saturation (the extent to which an evidence-based intervention is integrated into all subsystems of an organization). Thus the outcomes of “penetration” and “sustainability” may be related conceptually and empirically, in that higher penetration may contribute to long-term sustainability. Such relationships require empirical test, as we elaborate below. Indeed Steckler et al. (1992 (link)) emphasize sustainability in terms of attaining long-term viability, as the final stage of the diffusion process during which innovations settle into organizations. To date, the term sustainability appears more frequently in conceptual papers than actual empirical articles measuring sustainability of innovations. As we discuss below, the literature often uses the same term (niche saturation, for example) to reference multiple implementation outcomes, underscoring the need for conceptual clarity as we seek to advance in this paper.
Most recent protocols related to «Single bond»
Two interaction energies, both based on the Universal Force Field (UFF) [27 (link)], are calculated using RDKit (version 2023.09.5). Note that one of these energies includes the distortion energy from the added covalent bond and the non-bonded interaction energy between atoms within the single-molecule structure composed of the two fragments and the added covalent bond. The distortion energy originates from the bond length, bond angle, and dihedral terms. To accept some distortion and clashes, the energy tolerance level is set to a high value of 500 kcal/mol.
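A minimal sketch of such an energy evaluation with RDKit's UFF implementation is shown below. The SMILES string is a placeholder for the merged two-fragment structure, and the simple total-energy screen stands in for the protocol's decomposition into distortion and non-bonded terms.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

ENERGY_TOLERANCE = 500.0  # kcal/mol, the high tolerance quoted above

# Placeholder for the single-molecule structure built from the two
# fragments joined by the added covalent bond.
mol = Chem.AddHs(Chem.MolFromSmiles("c1ccccc1CC(=O)Nc1ccncc1"))
AllChem.EmbedMolecule(mol, randomSeed=42)   # generate a 3D conformer

ff = AllChem.UFFGetMoleculeForceField(mol)  # Universal Force Field
energy = ff.CalcEnergy()                    # kcal/mol
print(f"UFF energy: {energy:.1f} kcal/mol ->",
      "accept" if energy <= ENERGY_TOLERANCE else "reject")
```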
The bonded interactions comprise bond, angle, dihedral, and improper terms. The bonded parameters in Gao’s paper are simplified MARTINI bonded interactions, in which the equilibrium distances and angles are preserved but the various force constants are collapsed into a single value. In this work, the force constants are instead converted directly from MARTINI for better accuracy. The constraint interactions in MARTINI are converted to bond interactions with a large force constant of 30,000 kJ/(mol·nm²). All force constants are converted into DPD units.
The bond, angle, dihedral, and improper interactions are simulated in LAMMPS with the bond style set to “harmonic”, the angle style set to “cosine/squared”, the dihedral style set to “charmm”, and the improper style set to “harmonic”, respectively.
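A sketch of the corresponding LAMMPS input lines follows. The style names are those stated above, while the coefficient values are placeholders rather than the actual converted MARTINI parameters.

```
# Bonded styles for the converted MARTINI model (coefficients are placeholders)
bond_style      harmonic
angle_style     cosine/squared
dihedral_style  charmm
improper_style  harmonic

bond_coeff      1  25.0  0.59        # K (energy/length^2), r0 -- in DPD units
angle_coeff     1  10.0  120.0       # K, theta0 (degrees)
dihedral_coeff  1  2.0  1  180  0.0  # K, n, d, weighting factor
improper_coeff  1  5.0  0.0          # K, chi0
```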
For this exercise, we chose to preserve the aromaticity of the enumerated and drug structures. When a molecule is loaded into RDKit, the user can choose to encode aromatic systems as conjugated single and double bonds [76 (link)]. While we initially chose to remove aromaticity markers, we found that a high proportion of substructure matches lay along aromatic rings, pointing to disconnections that would fragment those rings. To remove these energetically undesirable matches, the aromaticity of all molecular structures was subsequently preserved (Fig.
Code for substructure search is contained in 002_substructure_search.ipynb. The notebook takes as input smiles_min_dist_natoms.csv, appends the search result as an additional column in the spreadsheet, and saves the output as smiles_mindist_dbank.csv.
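A minimal sketch of that search step is given below, assuming a hypothetical input column name ("smiles"), a hypothetical result column name, and an arbitrary SMARTS query; aromaticity is kept as perceived by RDKit rather than kekulized, as described above. The file names follow the notebook description.

```python
import pandas as pd
from rdkit import Chem

query = Chem.MolFromSmarts("c1ccccc1C(=O)N")    # placeholder substructure query

df = pd.read_csv("smiles_min_dist_natoms.csv")  # input spreadsheet from the workflow

def has_match(smi: str) -> bool:
    mol = Chem.MolFromSmiles(smi)               # aromaticity perceived, not kekulized
    return mol is not None and mol.HasSubstructMatch(query)

df["substructure_match"] = df["smiles"].apply(has_match)  # appended result column (name assumed)
df.to_csv("smiles_mindist_dbank.csv", index=False)
```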
Top products related to «Single bond»
More about "Single bond"
This type of bond is essential for the stability and structure of many organic compounds, making it a crucial topic in fields like chemistry, materials science, and drug design.
Understanding the properties and behavior of single bonds is vital for researchers to optimize these bonds and enhance their research outcomes.
PubCompare.ai's AI-driven platform provides an innovative approach to helping researchers locate and compare the best protocols from literature, preprints, and patents related to single bond optimization.
This intuitive tool allows users to access data-driven comparisons and insights, improving reproducibility and research accuracy.
Products like Adper Single Bond 2, Single Bond Universal, Topspin 3.2, Bond RX, Avance III, Filtek Z350 XT, Single Bond Universal Adhesive, Adper Single Bond, Single Bond 2, and Maestro are examples of tools and materials that can be used in single bond research and optimization.
By leveraging the power of AI and data-driven insights, PubCompare.ai empowers researchers to enhance their understanding of single bonds and optimize their research processes, leading to more reliable and impactful discoveries.