
Training Support

Discover the power of Training Support, the AI-driven platform that helps researchers optimize their training protocols and enhance reproducibility.
Easily locate training methods from literature, preprints, and patents, and leverage AI-driven comparisons to identify the best approaches and products.
Take your research training to new heights with the support of this innovative tool.

Most cited protocols related to «Training Support»

This work is part of a larger study in Zambia known as Better Health Outcomes through Mentoring and Assessment (BHOMA), which is a stepped wedge community randomised controlled trial that aims to strengthen the health system in three rural districts of Zambia. The BHOMA intervention is being implemented in Chongwe, Luangwa and Kafue Districts, all in Lusaka Province, Zambia. The combined population for the 3 districts is 306,000, with a total of 48 health facilities and 4 general hospitals. Two separate but complementary packages are being applied in the BHOMA intervention: the health facility package (which targets the health workers and their support staff through training, mentoring and support) and the community-based package (which works within the community to improve access to health services and improve data and referral systems).
The BHOMA intervention is complex and labor intensive, and is therefore being rolled out gradually from one health facility to the next over a period of 3 years using a stepped wedge design [23,24]. The full intervention and the evaluation design are described elsewhere (Mutale et al., unpublished) [19]. A baseline health facility survey was conducted in 42 of the 48 health facilities in the 3 BHOMA districts between January and April 2011. This constituted 87.5% of the total health facilities, with the rest being used as pilot sites for the BHOMA intervention.
In this study, we interviewed 1 to 3 health workers at each of the 42 health facilities who were present at the time of baseline data collection, depending on the available staff. Most health facilities had just one eligible health worker. Where there were more than three, up to three health workers were randomly selected to take part in the study. Health workers were eligible if they had been working in the facility for at least 1 month and were attending to patients. All participants were given instructions about the tool, which was self-administered, though respondents were free to ask for clarification of questions they did not understand. Before being used in the Zambian setting, the tool was pretested and questions were adapted to suit the lower-level health facilities, but the content remained essentially the same as described by Mbindyo et al. [22].
The data collection tool was selected because it is easy to use and no other suitable tool had previously been used in Zambia. It is hoped that the assessment will be repeated after 12 months in the same health facilities to determine any changes. The tool had 23 items, with answers given on a scale of 1 to 5 (strongly agree to strongly disagree) (Table 1). Items with negative statements were reverse coded when calculating scores.
Data were entered into a Microsoft Access database (Microsoft, Redmond, WA, USA) and exported to SPSS version 19 (SPSS, Chicago, IL, USA) for analysis. Factor analysis was used to confirm the latent factors described by Mbindyo et al. [22]. The scores were standardized to 100 to allow comparison between subscores. Overall scores were calculated as the sum of all subscores of the latent factors described. Linear regression was used to identify determinants of motivation.
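As a minimal sketch of the scoring steps described above (in Python rather than SPSS, with hypothetical item responses), reverse coding of negatively worded items and standardization of a subscore to 100 can be expressed as:

```python
# Sketch of the scoring steps: reverse-code negatively worded items on
# the 1-5 scale, then rescale a subscore sum to a 0-100 range so that
# subscores with different numbers of items are comparable.
# The item responses and the set of negative items are hypothetical.

def reverse_code(response, scale_min=1, scale_max=5):
    """Map 1<->5, 2<->4, etc. for negatively worded items."""
    return scale_max + scale_min - response

def standardize_to_100(raw_sum, n_items, scale_min=1, scale_max=5):
    """Rescale a subscore sum onto 0-100."""
    lo, hi = n_items * scale_min, n_items * scale_max
    return 100.0 * (raw_sum - lo) / (hi - lo)

responses = {1: 4, 2: 2, 3: 5}   # item number -> response (1-5), hypothetical
negative_items = {2}             # hypothetical reverse-coded item

coded = {i: reverse_code(r) if i in negative_items else r
         for i, r in responses.items()}
subscore = standardize_to_100(sum(coded.values()), n_items=len(coded))
print(round(subscore, 1))        # standardized subscore on a 0-100 scale
```

Standardizing each subscore onto the same 0-100 range is what makes the subscores of the different latent factors directly comparable before summing them into an overall score.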
Publication 2013
Health Personnel Motivation Obstetric Labor Patients Training Support Workers
Based on the outcome of the Adoption Decision/Preparation phase, training with adaptation support begins in the Implementation phase. In contrast to the IAU condition, in which the curriculum is set, the DAP training supports changes deemed necessary by the IRT. One prominent difference between the IAU and DAP conditions is the explicit inclusion and discussion of adaptation during provider training, including why one might adapt, what one might adapt, what one might not adapt, when to seek guidance on adaptation, and how to use the ongoing coaches and the IRT for tailoring SC. In addition to intervention adaptation, the need for adaptation at the system and/or organizational levels is also an ongoing target for change. The research team, in conjunction with the intervention developers, will also refine the assessment of fidelity. Departures from fidelity to core elements will be considered drift.
Publication 2012
Acclimatization Training Support
We assessed the performance, stability, and reproducibility of the PSO-enhanced thromboSeq platform using multiple training, evaluation, and independent validation cohorts. All classification experiments were performed with the PSO-enhanced thromboSeq algorithm, using parameters optimized by particle swarm intelligence. We assigned for the matched cohort 133 samples for training-evaluation, of which 93 were used for RUV-correction, gene panel selection, and SVM training (training cohort), and 40 were used for gene panel optimization (evaluation cohort). The full cohort contained 208 samples for training-evaluation, of which 120 were used for RUV-correction, gene panel selection, and SVM training (training cohort), and 88 were used for gene panel optimization (evaluation cohort). All random selection procedures were performed using the sample-function as implemented in R. For assignment of samples per cohort to the training and evaluation cohorts, only the number of samples per clinical group was balanced, whereas other potentially contributing variables were not stratified at this stage (assuming random distribution among the groups). Subsequently, an SVM model was trained using the training samples, and the samples assigned to the independent validation cohort were predicted. The late-stage NSCLC samples and early-stage locally advanced NSCLC samples were validated separately, resulting in two ROC curves. The 53 locally advanced NSCLC samples were age-matched with 53 non-cancer individuals selected from the non-cancer samples of the independent validation cohort. Performance of the training cohort was assessed by a leave-one-out cross validation approach (LOOCV; see also Best et al., 2015). During a LOOCV procedure, all samples minus one (the 'left-out sample') are used for training of the algorithm. Each sample is predicted once, resulting in the same number of predictions as samples in the training cohort.
The list of stable genes among the initial training cohort, the RUV-factors determined for removal, and the final gene panel determined by swarm-optimization of the training-evaluation cohort were used as input for the LOOCV procedure. As a control for internal reproducibility, we randomly sampled training and evaluation cohorts, while maintaining the validation cohorts and the swarm-guided gene panel of the original classifier, and performed 1000 (matched and full cohort NSCLC/non-cancer) training and classification procedures. As a control for random classification, class labels of the samples used by the SVM algorithm for training of the support vectors were randomly permuted, while maintaining the swarm-guided gene list of the original classifier. This process was performed 1000 times for the matched and full NSCLC/non-cancer cohort classifiers. P values were calculated accordingly, as described previously (Best et al., 2015). Results were presented in receiver operating characteristics (ROC) curves and summarized using area under the curve (AUC) values, as determined by the ROCR package in R. AUC 95% confidence intervals were calculated according to the method of DeLong using the ci.auc function of the pROC package in R.
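The leave-one-out loop described above can be sketched generically as follows (in Python rather than R; a trivial nearest-centroid classifier stands in for the SVM, and the RUV correction and swarm-optimized gene panel are not reproduced here):

```python
# Generic leave-one-out cross-validation (LOOCV): each sample is held
# out once, the model is fit on the remaining samples, and the held-out
# sample is predicted, yielding one prediction per cohort sample.
# The nearest-centroid "model" and the toy data are hypothetical
# stand-ins for the SVM and the platelet RNA profiles.

def nearest_centroid_predict(train, train_labels, test_point):
    """Assign test_point to the class with the nearest mean profile."""
    groups = {}
    for x, y in zip(train, train_labels):
        groups.setdefault(y, []).append(x)
    centroids = {y: [sum(col) / len(col) for col in zip(*xs)]
                 for y, xs in groups.items()}
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: sq_dist(centroids[y], test_point))

def loocv(samples, labels):
    """One prediction per sample, each made with that sample held out."""
    preds = []
    for i in range(len(samples)):
        train = samples[:i] + samples[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        preds.append(nearest_centroid_predict(train, train_labels, samples[i]))
    return preds

samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)]  # toy profiles
labels = ["non-cancer", "non-cancer", "NSCLC", "NSCLC"]
preds = loocv(samples, labels)
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(preds, accuracy)
```

The permutation control described in the text follows the same loop but shuffles `labels` before training, which breaks the sample-label association and yields the null distribution against which P values are computed.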
Publication 2017
Cloning Vectors Genes Malignant Neoplasms Non-Small Cell Lung Carcinoma Prognosis Training Support
We conducted a stakeholder-driven, evidence-based, modified-Delphi process to develop recommendations for displaying PRO data in three different applications: individual patient data for monitoring/management, research results presented to patients in educational materials/decision aids, and research results presented to clinicians in peer-reviewed publications. We used a standard modified-Delphi approach, consisting of a pre-meeting survey relevant to the application of interest, a face-to-face meeting, and a post-meeting survey. The first two applications were addressed during an in-person meeting in February 2017, and the third application was addressed during an in-person meeting in October 2017. For simplicity, we refer to these as Meeting #1 and #2. The meetings addressed different applications; issues that were relevant across applications were handled in the context of each application separately.
Because much of the evidence base guiding this process emerged from studies in oncology, we focused specifically on the cancer context. In addition to the project team and this project’s SAB, we purposefully invited representatives from key stakeholder groups: cancer patients/caregivers, oncology clinicians, PRO researchers, and stakeholders specific to particular applications (e.g., electronic health record vendors for individual patient data, decision aid experts for research data presented to patients, journal editors for research data presented to clinicians).
Prior to each in-person meeting, we held a webinar during which we oriented participants to the purpose of the project, the specific data display issues that we were addressing for the relevant applications (Table 1, column 1), and the evidence base regarding the options for those data display issues. The following parameters informed the considerations: (1) recommendations should work on paper (static presentation); (2) presentation in color is possible (but should be interpretable in grayscale); and (3) additional functionality in electronic presentation is possible (but not part of the standards). Notably, during the meeting discussions, additional guiding principles were established: (1) displays should be as simple and intuitively interpretable as possible; (2) it is reasonable to expect that clinicians will need to explain the data to patients; and (3) the availability of education and training support should be encouraged.
After the pre-meeting webinar, we surveyed participants' initial perspectives using Qualtrics, an enterprise survey platform with protections for sensitive data that is used by colleges and universities around the world [18]. Specifically, for each issue, we first asked participants to rate whether there ought to be a standard on that topic. Response options were Important to Present Consistently, Consistency Desirable, Variation Acceptable, and Important to Tailor to Personal Preferences. Regardless of their response to this question, we asked participants to indicate what the standard should be, with alternative approaches for addressing that particular issue as the response options. For example, for data presented to patients, the options for presenting proportions included pie charts, bar charts, and icon arrays, based on the available evidence base [16]. Following each question, participants were asked to indicate the rationale behind their responses in text boxes. A summary of the pre-meeting survey results and comments was circulated prior to the meeting.
At each in-person meeting, we addressed each of the data display issues, briefly summarizing the evidence base and the feedback from the pre-meeting survey before opening up the topic for discussion. At Meeting #1, the participants aimed to be consistent across the two applications, when possible. For Meeting #1 topics also addressed during Meeting #2, after an initial discussion, the consensus statements from Meeting #1 were shared for the Meeting #2 group’s consideration, with the possibility of accepting the statement unchanged, modifying it, discarding it, or developing a new statement.
Following the discussion, participants voted using an audience response system (to ensure anonymity) on whether there should be a standard, and in cases where a standard was supported, what that standard should be. Issues that were not considered appropriate for a standard, and topics for further research, were also noted. After the meeting, the consensus statements were circulated to participants via Qualtrics. Each participant was asked whether each consensus statement was “acceptable” or “not acceptable,” and if the latter, to indicate why in a text box. The funders had no role in the project design; data collection, analysis, or interpretation; writing; or decision to submit this manuscript for publication.
Publication 2018
ARID1A protein, human Face Malignant Neoplasms Neoplasms Patients Training Support
NHSBT collects whole blood from donors attending either static donor centres or temporary ‘mobile’ donation sessions set up at community venues such as village halls. Recruitment in INTERVAL has been restricted to donors attending the static donor centres (which are open daily during the working week), principally because ‘mobile’ sessions do not typically visit locations often enough to accommodate donors who would be allocated to the more frequent intervals being evaluated in INTERVAL. The static donor centres of NHSBT are located in Birmingham, Bradford, Brentwood, Bristol, Cambridge, Edgware, Gloucester, Lancaster, Leeds (2 sites), Leicester, Liverpool, Luton, Manchester (2 sites), Newcastle, Nottingham, Oxford, Plymouth, Poole, Sheffield, Southampton, Stoke-on-Trent, Tooting (South London), and West End London. To facilitate the provision of adequate training support during each site’s first week of participant recruitment, we commenced recruitment in one new centre per week. At each centre, designated trained members of staff adopted the roles of clinical and/or operational experts to supervise the work of the trial.
The overall approach used in INTERVAL has been to embed research activity within the existing operational framework of NHSBT. To support additional functions required in the trial, we have established an academic trial coordinating centre at the Department of Public Health and Primary Care, University of Cambridge. In addition to supporting the trial’s core scientific activities, the coordinating centre provides a helpdesk to respond to queries from participants about the trial, and maintains a study website [15 ]. The academic coordinating centre has worked closely with the INTERVAL study administration team (ISAT) based within NHSBT. For example, ISAT has supported the trial to enable participants to make appointments to give blood at intervals that are more frequent than current NHSBT practice (which is not possible through NHSBT’s routine appointment system). To enhance adherence of trial participants to their allocated donation intervals, ISAT has used more intensive and systematic efforts than used in routine NHSBT practice to remind participants about their blood donation appointments, including a systematic three-step telephone and email reminder process.
Publication 2014
BLOOD Blood Donation Donor, Blood Donors Primary Health Care Tissue Donors Training Support

Most recent protocols related to «Training Support»

Wage cost per hour was multiplied by the number of hours per day (7.6 h for physicians and 7.5 h for nurses). Wage cost per day was then divided by the average number of injections per day, giving the personnel costs per injection (Table 4).

Hospital costs per injection for physicians and nurses

| Cost category | Physician cost | Physician % | Nurse cost | Nurse % |
|---|---|---|---|---|
| Training costs | 1.4 | 0.5 | 0.6 | 0.2 |
| Injection personnel time costs | 20.0 | 7.1 | 15.2 | 5.5 |
| Clinical support costs | 0.3 | 0.1 | 0.4 | 0.2 |
| Support personnel time costs | 36.3 | 12.9 | 36.3 | 13.1 |
| Medicine | 198.4 | 70.4 | 198.4 | 71.8 |
| Equipment excluding medicine | 19.1 | 6.8 | 19.1 | 6.9 |
| Running expenses of operation premises | 6.2 | 2.2 | 6.2 | 2.2 |
| Sum | 281.6 | 100.0 | 276.1 | 100.0 |
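The per-injection personnel cost computation described above can be sketched as follows (the hourly wages and injection volumes below are hypothetical; the hours per day follow the text):

```python
# Personnel cost per injection = (hourly wage * hours per day) / injections per day.
# Wage and injection-volume figures here are hypothetical illustrations;
# hours per day follow the text (7.6 h for physicians, 7.5 h for nurses).

def personnel_cost_per_injection(wage_per_hour, hours_per_day, injections_per_day):
    wage_per_day = wage_per_hour * hours_per_day
    return wage_per_day / injections_per_day

physician = personnel_cost_per_injection(50.0, 7.6, 19)  # hypothetical inputs
nurse = personnel_cost_per_injection(30.0, 7.5, 15)      # hypothetical inputs
print(round(physician, 2), round(nurse, 2))
```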
Publication 2023
Nurses Physicians Training Support

Protocol full text hidden due to copyright restrictions


Publication 2023
COVID 19 Health Care Professionals Mental Health Movement Obstetric Delivery Ocular Accommodation Patients Pharmaceutical Preparations Physical Examination Radius Respiratory Diaphragm Safety Supervision Tablet Therapeutics Training Programs Training Support
Eight attributes reflecting treatment benefits were selected:
1. Information delivery (two levels: health facility, individualized). We asked participants whether they would like information on how to use facility services around SRHR and MCH that could assist with depression management, or individualized information sheets on general care, including nutritional care for mother and baby.
2. Additional participants (co-participate with caregivers, provide information sheets for caregivers). We asked whether adolescents would like group sessions with their caregivers, including partners, versus an information sheet on depression care for their partners.
3. Treatment duration (4 sessions of 1.5 hours, 8 sessions of 1.5 hours) for the entire depression treatment.
4. Intervention delivery agents (CHV or facility nurses), probing whether the therapy for depression should be delivered by lay workers or facility-based nurses.
5. Additional training support, exploring links to vocational training versus more formalized learning needs (i.e., returning to school).
6. Further support needs, asking whether greater peer support or parenting-skills support was preferred.
7. Rethinking the health services (adolescent-friendly services, combined with older mothers): the choice between using and strengthening adolescent-friendly services or using routine MCH services and being served alongside older adult mothers.
8. Incentives for improving health care uptake: transport allowance, refreshment provision, or both.
For this analysis we applied dummy coding where the baseline attribute category for each attribute is omitted from estimations, and used as a reference category. The reference categories applied in this study are indicated in Table 1 as reference.
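As a minimal sketch of the dummy-coding scheme (pure Python, with an illustrative attribute; the actual attributes and reference categories are those in Table 1), omitting the reference level works as follows:

```python
# Dummy coding for a DCE attribute: the reference (baseline) level is
# omitted from estimation, so each remaining level gets a 0/1 indicator
# and its coefficient is interpreted relative to the reference level.
# The attribute and levels below are illustrative only.

def dummy_code(value, levels, reference):
    """Return 0/1 indicators for every non-reference level."""
    return {lvl: (1 if value == lvl else 0)
            for lvl in levels if lvl != reference}

levels = ["4 sessions", "8 sessions"]  # illustrative treatment-duration attribute
print(dummy_code("8 sessions", levels, reference="4 sessions"))
print(dummy_code("4 sessions", levels, reference="4 sessions"))
```

A profile at the reference level produces all-zero indicators, which is exactly what makes the reference category the implicit baseline of the model.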
The design was pilot-tested with a selection of the pregnant adolescents who had participated in the qualitative interviews, to refine the survey and to assess the salience of the attributes to the treatment decision. Participants completed DCE questionnaires and took part in a personal cognitive interview as part of the pilot testing. To determine the burden on participants, the number of completed items and the time taken to complete them were recorded. The personal cognitive interviews were used to assess participants’ understanding of the questionnaire’s levels and its face validity. The final set of attributes and levels is presented in Table 1.
We tested multiple-choice elicitation formats and chose to use full-profile tasks between two treatment profiles, in which participants indicated which treatment they would prefer to take. This setup allowed for the elicitation of acceptable tradeoffs people were willing to make between different treatment attributes. If the number of attributes is low enough that participants can reasonably complete a full-profile task, this maximizes information about trade-offs [38]. We allowed the participants to select an opt-out option. An example choice task with decision scenario is shown in Fig 2.
Publication 2023
Adolescent Aged Cognition Infant Mothers Nurses Nutritional Support Obstetric Delivery Training Support Workers

Protocol full text hidden due to copyright restrictions


Publication 2023
Cloning Vectors Microtubule-Associated Proteins Seizures Speech Speech Sounds Training Support
We associated genes with Bphs-related functions as described in Tyler et al.34. Briefly, we used the connectivity weights in the Functional Network of Tissues in Mouse (FNTM) as features for training support vector machines75. Each feature consisted of the connection weights from a given gene to genes in the functional module. To improve classification and reduce over-generalization, we clustered each functional gene set into modules of <400 genes each45. For each of these modules, we trained 100 SVMs to classify the module genes from a balanced set of randomly chosen genes from outside the module. We used 10-fold cross validation and a linear kernel. We also trained each SVM over a series of cost parameters, iteratively narrowing the cost parameter window to identify a series of eight cost parameters that maximized classification accuracy. We then used the trained modules to score each positional candidate gene in the Bphse locus. To compare scores across multiple trained models, we converted SVM scores to false positive rates.
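The final score-to-FPR conversion step can be sketched as follows (pure Python; one common way to do this is to compare a candidate's score against the empirical distribution of scores for genes outside the module, which is an assumption here, and the scores themselves are hypothetical):

```python
# Converting a classifier score to an empirical false positive rate:
# the FPR of a candidate gene's score is the fraction of negative
# (non-module) gene scores that are at least as large. This puts scores
# from differently trained models on a common scale.
# All score values below are hypothetical.

def score_to_fpr(score, negative_scores):
    """Fraction of negative-set scores >= the candidate's score."""
    hits = sum(1 for s in negative_scores if s >= score)
    return hits / len(negative_scores)

negative_scores = [-1.2, -0.8, -0.5, -0.1, 0.3, 0.6, 0.9, 1.4]  # null SVM scores
print(score_to_fpr(0.9, negative_scores))  # smaller FPR = stronger candidate
```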
Publication 2023
Benign Prostatic Hyperplasia Cloning Vectors Connective Tissue Gene Modules Generalization, Psychological Genes Mice, Laboratory Operator, Genetic Training Support

Top products related to «Training Support»

Sourced in United States, Germany, United Kingdom, Italy, Poland, Spain, France, Sao Tome and Principe, Australia, China, Canada, Ireland, Japan, Macao, Switzerland, India, Belgium, Denmark, Israel, Brazil, Austria, Portugal, Sweden, Singapore, Czechia, Malaysia, Hungary
PBS (Phosphate-Buffered Saline) is a widely used buffer solution in biological and medical research. It is a balanced salt solution that maintains a stable pH and osmotic pressure, making it suitable for a variety of applications. PBS is primarily used for washing, diluting, and suspending cells and biological samples.
Sourced in Japan
The DS-5700 is a multi-parameter patient monitor that provides continuous monitoring of various vital signs. It is designed to display and record patient data such as ECG, SpO2, NIBP, respiratory rate, and temperature.
Sourced in Finland, United States
The Polar H10 is a wireless heart rate sensor. It is designed to accurately measure and transmit heart rate data.
Sourced in United States, United Kingdom, Austria, Denmark
Stata v14 is a statistical software package developed by StataCorp. It provides a comprehensive set of tools for data management, analysis, and visualization. Stata v14 offers a range of features, including data manipulation, statistical modeling, and graphics capabilities. The software is designed to be user-friendly and is widely used in various fields, including academia, research, and industry.
Sourced in United States, United Kingdom, Spain, Germany, Austria
SPSS v24 is a software application for statistical analysis. It provides tools for data management, analysis, and visualization. The core function of SPSS v24 is to assist users in processing and analyzing data, including the ability to perform various statistical tests and generate reports.
Sourced in Norway
MiniAnne is a portable, low-cost manikin designed for basic life support training. It features a realistic airway and chest rise to provide trainees with hands-on experience in administering rescue breaths and chest compressions.
Sourced in United States
ANSYS Workbench 17.0 is a comprehensive engineering simulation platform that provides a unified, extensible, and flexible environment for automated engineering analysis. It enables seamless parametric, integrated multiphysics, and multidisciplinary analysis.
Sourced in United States, Japan, United Kingdom, Germany, Belgium, Austria, Spain, France, Denmark, Switzerland, Ireland
SPSS version 20 is a statistical software package developed by IBM. It provides a range of data analysis and management tools. The core function of SPSS version 20 is to assist users in conducting statistical analysis on data.

More about "Training Support"

Discover the power of Training Support, the groundbreaking AI-driven platform that empowers researchers to optimize their training protocols and enhance the reproducibility of their studies.
This innovative tool seamlessly integrates with existing research workflows, making it easy to locate relevant training methods from a vast repository of literature, preprints, and patents.
Leveraging the latest advancements in machine learning, Training Support provides AI-driven comparisons that help researchers identify the most effective approaches and products, such as the Polar H10 heart rate sensor, Stata v14 statistical software, and SPSS v24 data analysis tools.
Whether you're running basic life support training with the MiniAnne manikin or engineering simulations in ANSYS Workbench 17.0, Training Support is the perfect companion to take your research training to new heights.
With its intuitive interface and coverage of products ranging from PBS buffer solution to the DS-5700 patient monitor, this platform empowers you to streamline your workflows, boost productivity, and enhance the reproducibility of your findings.
Discover the transformative power of Training Support and elevate your research to new levels of success.