
Population at Risk

Population at Risk refers to the identification and analysis of subgroups within a population that are particularly vulnerable or susceptible to specific health risks, illnesses, or negative outcomes.
These groups may be defined by demographic, socioeconomic, environmental, or behavioral factors that increase their exposure or susceptibility.
Accurately determining and understanding the Population at Risk is crucial for public health interventions, resource allocation, and the development of targeted preventive strategies.
Researchers can utilize AI-driven tools like PubCompare.ai to optimize protocols for assessing Population at Risk, enhancing reproducibility and accuracy of their findings.

Most cited protocols related to «Population at Risk»

Carcinogenic and mutagenic risk assessments [15,60–63,67–69] for inhalation of PM2.5-bound compounds, enriched with selected nitro-PAHs (1-NPYR, 2-NPYR, 2-NFLT, 3-NFLT, 2-NBA, and 3-NBA) and PAHs (PYR, FLT, BaP, and BaA), were estimated for the bus station and coastal site samples following the calculations of Wang et al. [60], Nascimento et al. [61], and Schneider et al. [67]. Risk for PAHs and PAH derivatives is assessed in terms of BaP toxicity, which is well established [67–73]. The daily inhalation exposure (EI) was calculated as:

EI = BaP_eq × IR = (Σ_i C_i × TEF_i) × IR    (1)

where EI (ng person−1 day−1) is the daily inhalation exposure, IR (m³ day−1) is the inhalation rate, BaP_eq (ng m−3) is the benzo[a]pyrene equivalent concentration (BaP_eq = Σ_i C_i × TEF_i), C_i is the PM2.5-bound concentration of target compound i, and TEF_i is the toxic equivalency factor of compound i. TEF values were taken from Tomaz et al. [15], Nisbet and LaGoy [69], OEHHA [72], Durant et al. [73], and references therein. EI in terms of mutagenicity was calculated with equation (1) by replacing the TEF data with the mutagenic potency factor (MEF) data published by Durant et al. [73]. Individual TEF and MEF values and other data used in this study are described in SI, Table S4.
The incremental lifetime cancer risk (ILCR) was used to assess the inhalation risk for the population of Greater Salvador, where the bus station and the coastal site are located. ILCR is calculated as:

ILCR = (EI × SF × ED × cf × EF) / (AT × BW)    (2)

where SF is the cancer slope factor of BaP, 3.14 (mg kg−1 day−1)−1 for inhalation exposure [60], EF (days year−1) is the exposure frequency (365 days year−1), ED (years) is the exposure duration to air particles, cf is a conversion factor (1 × 10−6, ng to mg), AT (days) is the averaging time over a 70-year lifespan (70 × 365 = 25,550 days) [70,72], and BW (kg) is the body weight of a subject in the target population [71].
The risk assessment was performed for four target groups in the population: adults (>21 years), adolescents (11–16 years), children (1–11 years), and infants (<1 year). The IR values for adults, adolescents, children, and infants were 16.4, 21.9, 13.3, and 6.8 m³ day−1, respectively. The BW was taken as 80 kg for adults, 56.8 kg for adolescents, 26.5 kg for children, and 6.8 kg for infants [70].
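A minimal R sketch of equations (1) and (2), assuming illustrative concentrations, TEFs, and exposure durations (the study's actual values are listed in its SI, Table S4); the IR, BW, SF, EF, cf, and AT values are those given above:

```r
## Illustrative PM2.5-bound concentrations (C_i, ng m^-3) and TEFs; the
## study's actual values are listed in its SI, Table S4.
ci  <- c(BaP = 0.50, BaA = 0.30, PYR = 0.80, FLT = 0.90)
tef <- c(BaP = 1.00, BaA = 0.10, PYR = 0.001, FLT = 0.001)

bap_eq <- sum(ci * tef)   # BaP_eq = sum(C_i x TEF_i), ng m^-3

## Group-specific inhalation rates (m^3 day^-1) and body weights (kg) from
## the protocol; exposure durations (ED, years) are hypothetical.
groups <- data.frame(
  group = c("adult", "adolescent", "child", "infant"),
  IR    = c(16.4, 21.9, 13.3, 6.8),
  BW    = c(80, 56.8, 26.5, 6.8),
  ED    = c(30, 6, 10, 1)
)

groups$EI <- bap_eq * groups$IR   # equation (1): ng person^-1 day^-1

SF <- 3.14       # cancer slope factor of BaP, (mg kg^-1 day^-1)^-1
EF <- 365        # exposure frequency, days year^-1
cf <- 1e-6       # conversion factor, ng -> mg
AT <- 70 * 365   # averaging time: 70-year lifespan in days

groups$ILCR <- with(groups, (EI * SF * ED * cf * EF) / (AT * BW))  # equation (2)
groups
```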
Publication 2019
Adolescent Adult Benzo(a)pyrene Body Weight Carcinogens Child derivatives Factor X Fibrinogen fluoromethyl 2,2-difluoro-1-(trifluoromethyl)vinyl ether Health Risk Assessment Infant Inhalation Inhalation Exposure Malignant Neoplasms Mutagens Polycyclic Hydrocarbons, Aromatic Population at Risk Population Group Respiratory Rate
We assume that there is a well-defined baseline time in the cohort and that T denotes the time from baseline until the occurrence of the event of interest. In the absence of competing risks, the survival function, S(t), describes the distribution of event times: S(t) = Pr(T > t). One minus the survival function (ie, the complement of the survival function), F(t) = 1 − S(t) = Pr(T ≤ t), describes the incidence of the event over the duration of follow-up. Two key properties of the survival function are that S(0) = 1 (ie, at the beginning of the study, the event has not yet occurred for any subjects) and lim_{t→∞} S(t) = 0 (ie, eventually the event of interest occurs for all subjects). In practice, the latter assumption may not be required, because the probability of the event over a restricted follow-up period may be <1.
Estimating the incidence of an event as a function of follow-up time provides important information on the absolute risk of an event. In the absence of competing risks, the Kaplan-Meier estimator is frequently used to estimate the survival function. One minus the Kaplan-Meier estimate of the survival function provides an estimate of the cumulative incidence of events over time. In the case study that follows, we examine the incidence of cardiovascular death in patients hospitalized with heart failure. When the complement of the Kaplan-Meier function was used, the estimated incidence of cardiovascular death within 5 years of hospital admission was 43.0%. However, using the Kaplan-Meier estimate of the survival function to estimate the incidence function in the presence of competing risks generally results in upward biases in the estimation of the incidence function [9,10,12]. In particular, the sum of the Kaplan-Meier estimates of the incidence of each individual outcome will exceed the Kaplan-Meier estimate of the incidence of the composite outcome defined as any of the event types. Even when the competing events are independent, the Kaplan-Meier estimator yields biased estimates of the probability of the event of interest. The problem is that the Kaplan-Meier estimator estimates the probability of the event of interest in the hypothetical absence of competing risks, which is generally larger than the probability in their presence. Furthermore, the hypothetical population in which competing risks do not exist may not be the population of greatest interest for clinical and/or policy making [13], as in the cardiovascular setting, where noncardiovascular death may be an important consideration.
The Cumulative Incidence Function (CIF), as distinct from 1 − S(t), allows estimation of the incidence of an event while taking competing risks into account. This allows one to estimate incidence in a population where all competing events must be accounted for in clinical decision making. The cumulative incidence function for the kth cause is defined as CIF_k(t) = Pr(T ≤ t, D = k), where D is a variable denoting the type of event that occurred. A key point is that, in the competing risks setting, only one event type can occur, such that the occurrence of one event precludes the subsequent occurrence of other event types. The function CIF_k(t) denotes the probability of experiencing the kth event before time t and before the occurrence of a different type of event. The CIF has the desirable property that the sum of the CIF estimates of the incidence of each of the individual outcomes equals the CIF estimate of the incidence of the composite outcome consisting of all of the competing events. Unlike the survival function in the absence of competing risks, CIF_k(t) will not necessarily approach unity as time becomes large, because of the occurrence of competing events that preclude the occurrence of events of type k. In the case study that follows, when the CIF was used, the estimated incidence of cardiovascular death within 5 years of hospital admission was 36.8%. This estimate was 6.2 percentage points lower than the estimate obtained using the complement of the Kaplan-Meier function, illustrating the upward bias that can arise when the Kaplan-Meier estimate is used naively in the presence of competing risks.
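The contrast between 1 − KM and the CIF can be illustrated with a short R sketch using the survival package on simulated data; the event rates, censoring, and sample size here are invented for illustration and are not the case-study data:

```r
## Simulated competing-risks data: event 1 = cardiovascular (CV) death
## (the event of interest), 2 = non-CV death (competing), 0 = censored.
library(survival)

set.seed(1)
n     <- 1000
t1    <- rexp(n, 0.10)            # latent time to CV death (years)
t2    <- rexp(n, 0.08)            # latent time to non-CV death
cens  <- runif(n, 0, 10)          # administrative censoring
time  <- pmin(t1, t2, cens)
event <- ifelse(cens <= pmin(t1, t2), 0, ifelse(t1 < t2, 1, 2))

## Naive approach: 1 - KM, treating the competing event as censoring
km <- survfit(Surv(time, event == 1) ~ 1)
one_minus_km_5y <- 1 - summary(km, times = 5)$surv

## CIF via the Aalen-Johansen estimator: a factor status with censoring
## as the first level makes survfit() estimate the multi-state model
st <- factor(event, levels = 0:2, labels = c("censor", "cv", "noncv"))
aj <- survfit(Surv(time, st) ~ 1)
aj$states                                     # "(s0)" "cv" "noncv"
cif_5y <- summary(aj, times = 5)$pstate[, 2]  # column 2 = "cv" state

c(one_minus_KM = one_minus_km_5y, CIF = cif_5y)  # 1 - KM exceeds the CIF
```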
Publication 2016
Cardiovascular System Congestive Heart Failure Patients Population at Risk

Protocol full text hidden due to copyright restrictions.


Publication 2010
Central American People Hispanics Households Hypochondroplasia Latinos PER1 protein, human Population at Risk Puerto Ricans
Prognostic models for time-to-event data that may be censored are often constructed using survival analysis techniques such as the Cox proportional hazards model or parametric survival models. Ideally, the covariates are pre-specified before the modelling process; fitting this full model yields more reliable and less biased prognostic models than data-derived models based on statistical significance testing [13]. Such a model can be as large and complex as the number of observed events permits [13,14].
The parameters of interest in prognostic modelling are summarised in Table 1. These usually include the regression coefficient or hazard ratio for each covariate in the model and its associated significance. Assessments of model performance, for example model fit, predictive accuracy, discrimination, and calibration, are also important issues in prognostic modelling studies.
The likelihood ratio chi-square (χ2) statistic tests the hypothesis of no difference between the null model, given a specified distribution, and the fitted prognostic model with p parameters [15]. Various proportion-of-explained-variance measures have been proposed as measures of goodness of fit and predictive accuracy (e.g. by Schemper and Stare [16], Schemper and Henderson [17], O'Quigley, Xu and Stare [18], and Nagelkerke's R2 [15]). However, no approach is completely satisfactory when applied to censored survival data. Discrimination assesses the ability to distinguish between patients with different prognoses and can be assessed using the concordance index (c-index) [19] or, alternatively, the prognostic separation D statistic [20]. Calibration determines the extent of the bias in the predicted probabilities compared with the observed values. A shrinkage estimator provides a measure of the amount needed to recalibrate the model to correctly predict the outcome of future patients using the fitted model [21]. The prognostic model is often summarised by reporting the predicted survival probabilities at specific time points of interest or quantiles of the survival distribution for each prognostic risk group.
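As a hedged illustration of these measures, the following R sketch fits a pre-specified Cox model to the survival package's built-in lung data and extracts the likelihood ratio test, the c-index, and predicted survival probabilities at specific time points; the covariates are arbitrary examples, not a recommendation:

```r
## Pre-specified Cox model on the survival package's built-in 'lung' data.
library(survival)

fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)

summary(fit)$logtest          # likelihood ratio chi-square, df, p-value
concordance(fit)$concordance  # Harrell's c-index (discrimination)

## Predicted survival probabilities at time points of interest (1 and 2 years)
new_patient <- data.frame(age = 65, sex = 1, ph.ecog = 1)
summary(survfit(fit, newdata = new_patient), times = c(365, 730))$surv
```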
Publication 2009
Discrimination, Psychology Patients Population at Risk Prognosis
As with the Poisson- and Bernoulli-based prospective space–time scan statistics [27], the space–time permutation scan statistic utilizes thousands or millions of overlapping cylinders to define the scanning window, each being a possible candidate for an outbreak. The circular base represents the geographical area of the potential outbreak. A typical approach is to first iterate over a finite number of geographical grid points and then gradually increase the circle radius from zero to some maximum value defined by the user, iterating over the zip codes in the order in which they enter the circle. In this way, both small and large circles are considered, all of which overlap with many other circles. The height of the cylinder represents the number of days, with the requirement that the last day is always included together with a variable number of preceding days, up to some maximum defined by the user. For example, we may consider all cylinders with a height of 1, 2, 3, 4, 5, 6, or 7 d. For each center and radius of the circular cylinder base, the method iterates over all possible temporal cylinder lengths. This means that we will evaluate cylinders that are geographically large and temporally short, forming a flat disk, those that are geographically small and temporally long, forming a pole, and every other combination in between.
What is new with the space–time permutation scan statistic is the probability model. Since we do not have population-at-risk data, the expected counts must be calculated using only the cases. Suppose we have daily case counts for zip-code areas, where c_zd is the observed number of cases in zip-code area z during day d. The total number of observed cases (C) is

C = Σ_z Σ_d c_zd
For each zip code and day, we calculate the expected number of cases μ_zd conditioning on the observed marginals:

μ_zd = (1/C) (Σ_z c_zd) (Σ_d c_zd)
In words, this is the proportion of all cases that occurred in zip-code area z times the total number of cases during day d. The expected number of cases μ_A in a particular cylinder A is the summation of these expectations over all the zip-code-days within that cylinder:

μ_A = Σ_{(z,d)∈A} μ_zd
The underlying assumption when calculating these expected numbers is that the probability of a case being in zip-code area z, given that it was observed on day d, is the same for all days d.
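A minimal R sketch of this expected-count calculation, assuming a hypothetical matrix of daily zip-code counts; the cylinder's zip codes and days are arbitrary choices, and the quantities c_A and μ_A defined here are reused below:

```r
## Simulated daily counts c_zd: rows = zip-code areas, columns = days
set.seed(1)
cases <- matrix(rpois(50 * 30, 2), nrow = 50, ncol = 30)

C  <- sum(cases)                                  # total cases
mu <- outer(rowSums(cases), colSums(cases)) / C   # mu_zd = (1/C) c_z. c_.d

## Cylinder A: a hypothetical circle of zip codes and a 7-day window
## ending on the last day
zips_A <- 1:5
days_A <- 24:30
c_A  <- sum(cases[zips_A, days_A])   # observed cases in A
mu_A <- sum(mu[zips_A, days_A])      # expected cases in A
```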
Let c_A be the observed number of cases in the cylinder. Conditioned on the marginals, and when there is no space–time interaction, c_A is distributed according to the hypergeometric distribution with mean μ_A and probability function

P(c_A) = (Σ_{z∈A} c_zd choose c_A) × (C − Σ_{z∈A} c_zd choose Σ_{d∈A} c_zd − c_A) / (C choose Σ_{d∈A} c_zd)
When both Σ_{z∈A} c_zd and Σ_{d∈A} c_zd are small compared to C, c_A is approximately Poisson distributed with mean μ_A [37]. Based on this approximation, we use the Poisson generalized likelihood ratio (GLR) as a measure of the evidence that cylinder A contains an outbreak:

GLR = (c_A / μ_A)^{c_A} × ((C − c_A) / (C − μ_A))^{C − c_A}
In words, this is the observed divided by the expected to the power of the observed inside the cylinder, multiplied by the observed divided by the expected to the power of the observed outside the cylinder. Among the many cylinders evaluated, the one with the maximum GLR constitutes the space–time cluster of cases that is least likely to be a chance occurrence and, hence, is the primary candidate for a true outbreak. One reason for using the Poisson approximation is that it is much easier to work with this distribution than the hypergeometric when adjusting for space by day-of-week interaction (see below), as the sum of Poisson distributions is still a Poisson distribution.
Since we are evaluating a huge number of outbreak locations, sizes, and time lengths, there is serious multiple testing that we need to adjust for. Since we do not have population-at-risk data, this cannot be done in any of the usual ways for scan statistics. Instead, it is done by creating a large number of random permutations of the spatial and temporal attributes of each case in the dataset. That is, we shuffle the dates/times and assign them to the original set of case locations, ensuring that both the spatial and temporal marginals are unchanged. After that, the most likely cluster is calculated for each simulated dataset in exactly the same way as for the real data. Statistical significance is evaluated using Monte Carlo hypothesis testing [38]. If, for example, the maximum GLR is calculated from 999 simulated datasets, and the maximum GLR for the real data is higher than the 50th highest, then that cluster is statistically significant at the 0.05 level. In general terms, the p-value is p = R/(S + 1), where R is the rank of the maximum GLR from the real dataset and S is the number of simulated datasets [38]. In addition to p-values, we also report null occurrence rates [8], such as once every 45 d or once every 23 mo. The null occurrence rate is the expected time between seeing an outbreak signal with an equal or higher GLR assuming that the null hypothesis is true. For daily analyses, it is defined as once every 1/p d. For example, under the null hypothesis we would at the 0.05 level on average expect one false alarm every 20 d for each syndrome under surveillance.
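Continuing the sketch above, the following R code computes the Poisson GLR and a Monte Carlo p-value by permuting the case dates; the candidate cylinders are a tiny hand-picked set, whereas SaTScan scans an exhaustive collection of circles and window lengths:

```r
## Poisson GLR for a cylinder and a Monte Carlo p-value from random
## permutations of the case dates (both marginals preserved).
glr <- function(c_A, mu_A, C) {
  if (c_A <= mu_A) return(1)   # only an excess of cases signals an outbreak
  (c_A / mu_A)^c_A * ((C - c_A) / (C - mu_A))^(C - c_A)
}

## Maximum GLR over a small, hand-picked set of candidate cylinders
## (SaTScan iterates over an exhaustive set of circles and heights).
max_glr <- function(counts, zip_sets, day_sets) {
  C  <- sum(counts)
  mu <- outer(rowSums(counts), colSums(counts)) / C
  best <- 1
  for (zs in zip_sets) for (ds in day_sets)
    best <- max(best, glr(sum(counts[zs, ds]), sum(mu[zs, ds]), C))
  best
}

zip_sets <- list(1:5, 6:15, 20:25)                # hypothetical circles
day_sets <- lapply(1:7, function(h) (31 - h):30)  # windows ending on the last day

obs <- max_glr(cases, zip_sets, day_sets)         # 'cases' from the sketch above

## Permutation null: each case keeps its zip code, the dates are shuffled
S <- 999
sim <- replicate(S, {
  zip_of_case <- rep(as.vector(row(cases)), as.vector(cases))
  day_of_case <- sample(rep(as.vector(col(cases)), as.vector(cases)))
  perm <- table(factor(zip_of_case, levels = seq_len(nrow(cases))),
                factor(day_of_case, levels = seq_len(ncol(cases))))
  max_glr(perm, zip_sets, day_sets)
})

R <- 1 + sum(sim >= obs)   # rank of the observed maximum among real + simulated
p <- R / (S + 1)           # p = R/(S + 1), as in the text
p
```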
Because of the Monte Carlo hypothesis testing, the method is computer intensive. To facilitate the use of the methods by local, state, and federal health departments, the space–time permutation scan statistic has been implemented as a feature in the free and public domain SaTScan software [36 ].
Publication 2005
MLL protein, human Population at Risk Public Domain Radionuclide Imaging Radius Syndrome

Most recent protocols related to «Population at Risk»

Two separate Markov decision models were developed to compare the long-term costs and health benefits of the IraPEN program (primary CVD prevention) with the status quo (no prevention) in two distinct scenarios. The base case scenario included individuals without diabetes, while the alternative scenario included patients with diabetes. Each Markov model has four health states, with transitions between the states according to age, sex, and the CVD risk characteristics of participants (Figure 1). In contrast to the usual Markov models, which are structured around cohorts with average profiles, we categorized individuals by their CVD risk. Because the intervention (treatment) varied according to CVD risk level, it is logical to model the risk levels separately; in this way, their specific characteristics can be taken into account. Therefore, based on the WHO/ISH CVD risk prediction charts for EMR B, four index cohorts were constructed (5). These hypothetical cohorts represent individuals with low, moderate, high, and very high CVD risk profiles. The CVD risk state is the starting point for all people at 40 years of age. People in this state may remain in the same health state, move to the stroke state or the CHD (coronary heart disease) state, or die. As long as they are event-free, these individuals stay in the healthy state; after a first event, they move to the CHD or stroke state and remain there until death.
In the WHO/ISH CVD risk prediction charts, CVD risk is calculated from individuals' age and risk factors such as blood pressure, lipid profile, diabetes, and smoking status, and is categorized into five groups: below 10% (low risk), 10–19% (moderate risk), 20–29% (high risk), 30–39%, and 40% or above. Because the last two groups are treated the same in the IraPEN program, anyone with a CVD risk of 30% or above is categorized as the very high-risk group.
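A small hypothetical R helper showing this grouping rule; the function name and cut points are ours, following the text above:

```r
## Hypothetical helper mapping a WHO/ISH 10-year CVD risk (%) to the
## IraPEN risk groups (>= 30% collapsed into "very high", as described).
cvd_risk_group <- function(risk_pct) {
  cut(risk_pct,
      breaks = c(-Inf, 10, 20, 30, Inf),
      labels = c("low", "moderate", "high", "very high"),
      right  = FALSE)   # intervals [0,10), [10,20), [20,30), [30,Inf)
}
cvd_risk_group(c(5, 15, 25, 35, 45))
```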
Therefore, all Iranians older than 40 years who had not previously had a CHD or stroke event were eligible for this program. According to the most recent census (2016), 31.16% of Iranians were older than 40 years (6). Adding individuals older than 30 years with the aforementioned risk factors, the program can be expected to screen at least 25 million people yearly.
The healthcare perspective and a 40-year time horizon were adopted for this analysis. Because the analysis compares IraPEN (intervention) with the status quo (no intervention), and both share the same Markov structure and transition probabilities, the half-cycle correction (HCC) approach is not expected to make any difference to the ICER results; therefore, HCC was not applied in this analysis (7).
The hypothetical cohorts were used to represent individuals with low, moderate, high, and very high CVD risk profiles (Table 1). Progressively, a proportion of the cohort can move to the CHD state (survivors of a first CHD event) or to the stroke state (survivors of a first stroke event); fatal CHD and stroke events lead to the death state. In general, people in these two states are at a higher risk of dying from CHD or stroke, but they may also die from any other cause, like the normal population. Table 2 summarizes the assumptions of this analysis.
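A minimal R sketch of the four-state cohort trace described above; the transition probabilities are illustrative placeholders, not the IraPEN estimates:

```r
## Four-state Markov cohort trace; transition probabilities are placeholders.
states <- c("event_free", "chd", "stroke", "dead")
P <- matrix(c(
  0.94, 0.03, 0.02, 0.01,  # event-free: stay, first CHD, first stroke, die
  0.00, 0.90, 0.00, 0.10,  # CHD survivors stay in CHD until death
  0.00, 0.00, 0.88, 0.12,  # stroke survivors stay in stroke until death
  0.00, 0.00, 0.00, 1.00   # death is absorbing
), nrow = 4, byrow = TRUE, dimnames = list(states, states))

horizon <- 40   # 40-year time horizon, annual cycles, cohort starts at age 40
trace <- matrix(0, nrow = horizon + 1, ncol = 4,
                dimnames = list(0:horizon, states))
trace[1, ] <- c(1, 0, 0, 0)   # everyone starts event-free
for (t in 1:horizon) trace[t + 1, ] <- trace[t, ] %*% P

round(tail(trace, 3), 3)      # state occupancy in the final cycles
```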
Publication 2023
Blood Pressure Cerebrovascular Accident Diabetes Mellitus Health Transition Heart Disease, Coronary Lipids Patients Population at Risk Primary Prevention Survivors
This study evaluates the potential cost-effectiveness (CE) of the IraPEN program in comparison with the status quo through a health economic evaluation, with outcomes expressed in terms of QALYs and LYs gained for each CVD risk group. The target group of this analysis is all Iranian people older than 40 years, and the evaluated intervention is the same as the recommended WHO PEN intervention, which includes screening, monitoring, and medications.
Publication 2023
Pharmaceutical Preparations Population at Risk
A list of 35 previously reported cuproptosis-related genes is given in Supplementary file 1 [12]. Of these, 34 cuproptosis-related genes were successfully matched with the training set. Univariate Cox regression analysis was used in the training set to explore the prognostic relevance of these genes for OS, via the coxph function of the “survival” R package [18]. Then, a multivariate Cox regression analysis was conducted to obtain the coefficient of each gene for the risk-score formula. Patients were divided into two risk groups based on the median value of the risk score. Finally, the “timeROC” package was used in three additional datasets to analyze the specificity and sensitivity of 1-, 2-, and 3-year OS predictions, and the area under the curve (AUC) was calculated to assess discriminative performance. Kaplan–Meier (KM) curve analysis [19] was performed with the “survminer” package to compare OS between the high- and low-risk groups in the training, validation, and testing sets.
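A hedged R sketch of this pipeline on simulated data, using the survival and timeROC packages named above; the gene matrix, sample size, and event rates are invented stand-ins for the cuproptosis-gene expression and OS data:

```r
## Simulated stand-ins: 'expr' for the gene expression matrix
## (samples x genes), 'os_time'/'os_event' for overall survival.
library(survival)
library(timeROC)

set.seed(1)
n    <- 200
expr <- matrix(rnorm(n * 5), ncol = 5,
               dimnames = list(NULL, paste0("gene", 1:5)))
dat  <- data.frame(expr,
                   os_time  = rexp(n, 0.3),      # years
                   os_event = rbinom(n, 1, 0.7))

## Multivariate Cox model; its coefficients define the risk score
fit <- coxph(Surv(os_time, os_event) ~ gene1 + gene2 + gene3 + gene4 + gene5,
             data = dat)
dat$risk_score <- as.vector(expr %*% coef(fit))
dat$risk_group <- ifelse(dat$risk_score > median(dat$risk_score),
                         "high", "low")

## Time-dependent ROC/AUC for 1-, 2-, and 3-year OS
roc <- timeROC(T = dat$os_time, delta = dat$os_event,
               marker = dat$risk_score, cause = 1, times = c(1, 2, 3))
roc$AUC

## Log-rank comparison of the risk groups (survminer would plot the KM curves)
survdiff(Surv(os_time, os_event) ~ risk_group, data = dat)
```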
Publication 2023
Genes Hypersensitivity Patients Population at Risk
The “DESeq2” package was used to identify the differentially expressed genes (DEGs) between the high- and low-risk groups in the TCGA dataset. |Log2FoldChange| > 1 and a false discovery rate (FDR)-adjusted P-value < 0.01 were set as the cutoffs for the DEGs. Results were visualized in a volcano plot using the “ggplot2” package. Gene ontology (GO) functional enrichment analyses were performed on these DEGs with the “clusterProfiler” package [21]. Data on genetic alterations were downloaded from the TCGA data portal (https://portal.gdc.cancer.gov). The MutSig2.0 approach [22] was used to identify significantly mutated genes, and the top mutated genes in the two risk groups were visualized with the “Maftools” package [23]. Correlation analysis was conducted between the risk score and total mutation burden (TMB). Differences in TMB and microsatellite instability (MSI) scores between the two risk groups were also visualized as boxplots using the “ggplot2” package. Moreover, twenty-two subpopulations of tumor-infiltrating immune cells were analyzed using the CIBERSORT algorithm (https://cibersort.stanford.edu/) [24]. Differences in the expression of several immune checkpoint genes, such as CD274, CD276, CD44, and CD40, between the high- and low-risk groups were also examined.
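The following R sketch illustrates the DESeq2 → volcano → enrichGO sequence described above on simulated counts; the simulated "g..." gene IDs will not map to real Entrez IDs, so the enrichGO call is shown only for the interface:

```r
## Simulated raw counts (genes x samples) and risk-group labels; with real
## data, rows would be Entrez gene IDs and 'risk' the median-split groups.
library(DESeq2)
library(ggplot2)
library(clusterProfiler)
library(org.Hs.eg.db)

set.seed(1)
counts <- matrix(rnbinom(2000 * 20, mu = 100, size = 1), nrow = 2000,
                 dimnames = list(paste0("g", 1:2000), paste0("s", 1:20)))
coldata <- data.frame(risk = factor(rep(c("low", "high"), each = 10),
                                    levels = c("low", "high")),
                      row.names = colnames(counts))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ risk)
dds <- DESeq(dds)
res <- as.data.frame(results(dds))

## DEG cutoffs used above: |log2FC| > 1 and FDR-adjusted P < 0.01
deg <- subset(res, abs(log2FoldChange) > 1 & padj < 0.01)

## Volcano plot
ggplot(res, aes(log2FoldChange, -log10(padj))) +
  geom_point(aes(colour = abs(log2FoldChange) > 1 & padj < 0.01))

## GO enrichment on the DEGs; use real Entrez IDs in practice
ego <- enrichGO(gene = rownames(deg), OrgDb = org.Hs.eg.db,
                keyType = "ENTREZID", ont = "BP")
```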
Publication 2023
CD44 protein, human CD274 protein, human Cell Cycle Checkpoints Cells Genes Malignant Neoplasms Microsatellite Instability Mutation Neoplasms Population at Risk Population Group
In this study, comprehensive enrichment analyses covering four aspects were conducted. First, the “clusterProfiler” R package was utilised to perform KEGG and GO enrichment analyses of the RBPs containing different RBDs (canonical or non-canonical). Next, KEGG and GO analyses were also performed for the distinct modules, identified by the WGCNA, that were significantly correlated with prognosis. Third, to elucidate the mechanism underlying our prognostic model, GSEA (V.4.1.0, http://software.broadinstitute.org/gsea/) was employed to assess BP, CC, MF, and KEGG enrichment based on differentially expressed genes between the risk groups predicted by our novel prognostic models (FDR < 0.001, |NES| > 2). Finally, since emerging literature has demonstrated a relationship between RBPs and immune status, we further used ssGSEA to quantify the enrichment scores of diverse immune cell subpopulations and related functions or pathways. The infiltration scores of 16 immune cell types and the activity of 13 immune-related functions or pathways were calculated with ssGSEA in the “gsva” R package, and the NES scores of the different risk groups were compared using the Wilcoxon test.
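A minimal R sketch of the ssGSEA scoring and Wilcoxon comparison, with a placeholder expression matrix and three hypothetical marker sets standing in for the 16 immune-cell signatures:

```r
## Placeholder expression matrix and hypothetical marker sets.
library(GSVA)

set.seed(1)
expr <- matrix(rnorm(1000 * 30), nrow = 1000,
               dimnames = list(paste0("gene", 1:1000), paste0("s", 1:30)))
immune_sets <- list(
  T_cells  = sample(rownames(expr), 25),
  B_cells  = sample(rownames(expr), 25),
  NK_cells = sample(rownames(expr), 25)
)

## Classic GSVA interface; newer GSVA releases wrap this as
## gsva(ssgseaParam(expr, immune_sets)) instead
scores <- gsva(expr, immune_sets, method = "ssgsea")

## Wilcoxon comparison of enrichment scores between risk groups
risk_group <- rep(c("low", "high"), each = 15)
apply(scores, 1, function(s) wilcox.test(s ~ risk_group)$p.value)
```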
Publication 2023
Genes Immune System Processes Population at Risk Population Group Prognosis RNA Recognition Motif

Top products related to «Population at Risk»

Sourced in United States, Austria, Japan, Cameroon, Germany, United Kingdom, Canada, Belgium, Israel, Denmark, Australia, New Caledonia, France, Argentina, Sweden, Ireland, India
SAS version 9.4 is a statistical software package. It provides tools for data management, analysis, and reporting. The software is designed to help users extract insights from data and make informed decisions.
Sourced in United States, Austria, Japan, Belgium, United Kingdom, Cameroon, China, Denmark, Canada, Israel, New Caledonia, Germany, Poland, India, France, Ireland, Australia
SAS 9.4 is an integrated software suite for advanced analytics, data management, and business intelligence. It provides a comprehensive platform for data analysis, modeling, and reporting. SAS 9.4 offers a wide range of capabilities, including data manipulation, statistical analysis, predictive modeling, and visual data exploration.
Sourced in United States, Austria, Canada, Belgium, United Kingdom, Germany, China, Japan, Poland, Israel, Switzerland, New Zealand, Australia, Spain, Sweden
Prism 8 is a data analysis and graphing software developed by GraphPad. It is designed for researchers to visualize, analyze, and present scientific data.
Sourced in United States, Austria, Japan, Belgium, New Zealand, United Kingdom, France
R is a free, open-source software environment for statistical computing and graphics. It provides a wide variety of statistical and graphical techniques, including linear and nonlinear modeling, classical statistical tests, time-series analysis, classification, clustering, and others.
Sourced in United States, Austria, Japan, Belgium, New Zealand, United Kingdom, Germany, Denmark, Australia, France
R version 3.6.1 is a statistical computing and graphics software package. It is an open-source implementation of the S programming language and environment. R version 3.6.1 provides a wide range of statistical and graphical techniques, including linear and nonlinear modeling, classical statistical tests, time-series analysis, classification, clustering, and more.
Sourced in United States, Japan, United Kingdom, Austria, Canada, Germany, Poland, Belgium, Lao People's Democratic Republic, China, Switzerland, Sweden, Finland, Spain, France
GraphPad Prism 7 is a data analysis and graphing software. It provides tools for data organization, curve fitting, statistical analysis, and visualization. Prism 7 supports a variety of data types and file formats, enabling users to create high-quality scientific graphs and publications.
Sourced in United States, Japan, Austria, Germany, United Kingdom, France, Cameroon, Denmark, Israel, Sweden, Belgium, Italy, China, New Zealand, India, Brazil, Canada
SAS software is a comprehensive analytical platform designed for data management, statistical analysis, and business intelligence. It provides a suite of tools and applications for collecting, processing, analyzing, and visualizing data from various sources. SAS software is widely used across industries for its robust data handling capabilities, advanced statistical modeling, and reporting functionalities.
Sourced in United States, Germany, United Kingdom, Israel, Canada, Austria, Belgium, Poland, Lao People's Democratic Republic, Japan, China, France, Brazil, New Zealand, Switzerland, Sweden, Australia
GraphPad Prism 5 is a data analysis and graphing software. It provides tools for data organization, statistical analysis, and visual representation of results.
Sourced in United States, United Kingdom, Canada, China, Germany, Japan, Belgium, Israel, Lao People's Democratic Republic, Italy, France, Austria, Sweden, Switzerland, Ireland, Finland
Prism 6 is a data analysis and graphing software developed by GraphPad. It provides tools for curve fitting, statistical analysis, and data visualization.
Sourced in United States, Japan, United Kingdom, Austria, Germany, Czechia, Belgium, Denmark, Canada
SPSS version 22.0 is a statistical software package developed by IBM. It is designed to analyze and manipulate data for research and business purposes. The software provides a range of statistical analysis tools and techniques, including regression analysis, hypothesis testing, and data visualization.

More about "Population at Risk"

Population at Risk, also known as Vulnerable Population or High-Risk Group, refers to the identification and analysis of subgroups within a population that are particularly susceptible or exposed to specific health risks, illnesses, or negative outcomes.
These groups may be defined by various factors such as demographic, socioeconomic, environmental, or behavioral characteristics that increase their vulnerability.
Accurately determining and understanding the Population at Risk is crucial for public health interventions, resource allocation, and the development of targeted preventive strategies.
Researchers can utilize AI-driven tools like PubCompare.ai to optimize protocols for assessing Population at Risk, enhancing the reproducibility and accuracy of their findings.
PubCompare.ai is an innovative solution that empowers researchers to locate the best protocols and products from literature, pre-prints, and patents, enabling them to enhance their research efforts and improve the assessment of Population at Risk.
The tool's AI-driven comparisons can help identify the most effective approaches and products, ensuring that researchers can develop more accurate and reproducible studies.
Additionally, researchers may find it helpful to utilize statistical software like SAS version 9.4, R version 3.6.1, SPSS version 22.0, or data visualization tools such as GraphPad Prism 5, 6, 7, or 8 to analyze and present their findings on Population at Risk.
These tools can provide valuable insights and enhance the overall quality and impact of the research.