
Functional Performance

Functional performance refers to an individual's ability to engage in daily activities and tasks that are essential for independent living and well-being.
It encompasses physical, cognitive, and psychosocial domains, and can be influenced by various factors such as age, health status, environment, and personal preferences.
Optimizing functional performance is crucial for maintaining quality of life and promoting successful aging.
Researchers and clinicians may utilize standardized assessment tools and interventions to evaluate and enhance functional performance in diverse populations.

Most cited protocols related to «Functional Performance»

To perform function prediction in yeast with the GeneMANIA algorithm, we used five yeast functional association networks (used in [9 (link)] and [17 (link)] and available from [33 ]), constructed an extended yeast benchmark consisting of 15 association networks and 400 GO functional classes, and used the bioPIXIE network (obtained from [36 ]).
We constructed 15 yeast association networks from various genomics data sources (supplementary Table 1 in Additional data file 2). We also downloaded the GO association file from the Saccharomyces Genome Database (1 June 2006) and obtained a set of 400 functional classes that we grouped according to their specificity level and GO branch. In particular, we randomly selected 100 functional classes from each specificity level (3 to 10, 11 to 30, 31 to 100, and 101 to 300) and organized them based on their GO branch. In addition, the 5 yeast network benchmark consists of 13 Munich Information Center for Protein Sequences (MIPS) functional classes. We have included the prediction performance on these functional classes in supplementary Figures 3 and 5 in Additional data file 1.
In addition to the above, we investigated the prediction performance of GeneMANIA using the bioPIXIE [14 (link)] network (we refer to this method as GeneMANIAbioPIXIE). bioPIXIE is a composite association network that has been constructed from 925 referenced data sources. It contains 15,551,081 functional associations between 7,034 yeast genes.
Publication 2008
Amino Acid Sequence Functional Performance Genes Genome Saccharomyces Saccharomyces cerevisiae
We set out to parameterize an energy function based on experimental thermodynamic data of small molecules, and high-resolution structural data of macromolecules (hereafter “structural data”), with the broader aim of better recapitulating the large-scale energy landscape of protein folding or complex formation, high-resolution structural features, and the balance between natural amino acid preferences. The experimental thermodynamic data consists of the liquid properties of small molecules containing functional groups from natural amino acids12 and vapor-to-water transfer free energies of protein side-chain analogs25 . The structural data consists of large numbers (> 1000 cluster centers) of alternative conformations (decoys) for protein structures and complexes of known structure, and high-resolution crystallographic data. The agreement of an energy function with these data is represented by a target function Ftotal:
F_total[E(Θ)] = w_thermodynamic · F_thermodynamic[E(Θ)] + w_structural · F_structural[E(Θ)]

where the target functions F_thermodynamic and F_structural are functionals of a biomolecular energy function E(Θ), which is a function of a set of parameters Θ (see the sections below for the details of E(Θ) in this study), and their relative contributions are adjusted by weights w. F_thermodynamic[E(Θ)] and F_structural[E(Θ)] are themselves weighted linear sums of target functions evaluating performance on specific tasks; their exact composition varies depending on the aim of optimization, and is described in the following paragraphs and sections.
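As a rough illustration of how such a composite target function can be evaluated, the sketch below combines per-task scores into the weighted sum above. The task names, weights, and scoring callables are hypothetical placeholders, not the actual dualOptE components.

```python
# Minimal sketch of a weighted composite target function F_total[E(Theta)].
# Task names, weights, and scoring callables are hypothetical placeholders.

def f_total(theta, tasks, weights):
    """Weighted linear sum of per-task target functions evaluated at theta."""
    return sum(weights[name] * score(theta) for name, score in tasks.items())

# Example usage with dummy scoring functions (lower is better).
tasks = {
    "thermodynamic": lambda theta: (theta["lj_radius"] - 1.7) ** 2,
    "structural": lambda theta: abs(theta["lj_well_depth"] - 0.2),
}
weights = {"thermodynamic": 1.0, "structural": 0.5}

theta = {"lj_radius": 1.8, "lj_well_depth": 0.25}
print(f_total(theta, tasks, weights))
```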
The energy parameters Θ subject to optimization consist of atom-type-dependent parameters, for example, the Lennard-Jones (LJ) radius and well-depth of each atom type. The total number of parameters simultaneously optimized in a single run is on the order of 100. A key advantage of the ability to simultaneously optimize a large number of parameters is that the introduction of significant changes to the physical models of energy terms (for example, an anisotropic solvation model, or change in LJ model of hydrogen atoms) may considerably shift the balance between the energy terms and require large-scale re-parameterization. Optimization of these large parameter sets, with respect to a wide range of thermodynamic and structural data, is performed by a newly developed parameter optimization protocol named dualOptE that uses Nelder-Mead simplex optimization26 (Figure 1, details in following section).
We found several factors to be critical for energy function training to be robustly transferable to independent datasets. First, the training data need to be diverse; consequently, energy function performance is trained on a wide variety of sub-tasks, including recapitulation of sequence and side-chain rotamers, native monomeric structure discrimination, protein-protein docking, and the aforementioned thermodynamic recapitulation tasks. Second, the structure discrimination training sets must be dynamic; it is all too easy to train an energy function to consistently recognize the native structure in a sea of static decoys, but much more challenging when all structures are relaxed in the new energy function27 (link). In dualOptE, all tests involve some reoptimization against the current energy function: for example, the test measuring the ability to discriminate near-native monomeric conformations or protein-protein interfaces first optimizes a pre-generated set of structures against the current parameterization before assessing discrimination quality. Third, each cycle of parameter optimization must be carried out in a limited amount of computer time. Since we need to assess hundreds or thousands of parameterizations in the course of an optimization trajectory, each test has to run on the order of several minutes. For example, during parameter optimization, a full liquid MC simulation to estimate liquid phase properties of small molecules at each step is not computationally tractable; we instead use static sets of snapshots from MC simulations. Following completion of a given parameter optimization run, we carried out full liquid MC simulations and found that the static approximation was fairly accurate as long as there were not large changes in the energy function.
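A minimal sketch of the kind of derivative-free simplex optimization described here, using SciPy's Nelder-Mead implementation on a toy target function; the two parameters and the objective are illustrative stand-ins rather than the actual dualOptE protocol, which re-relaxes decoys and re-scores each sub-task at every parameter setting.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for F_total: in dualOptE this would re-relax decoy sets and
# re-evaluate thermodynamic/structural sub-tasks at each parameter setting.
def target_function(theta):
    lj_radius, lj_well_depth = theta
    return (lj_radius - 1.75) ** 2 + 10.0 * (lj_well_depth - 0.21) ** 2

theta0 = np.array([1.9, 0.15])  # initial guess for the two toy parameters
result = minimize(target_function, theta0, method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 2000})
print(result.x, result.fun)
```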
We employed multiple iterations of this dual energy function optimization approach. The first iteration, yielding the energy function opt-july15, introduced a new anisotropic implicit solvent model into the Rosetta energy function. Rosetta has previously used the Lazaridis-Karplus (LK) isotropic occlusion-based implicit solvation model28 (link), where the occluded volume of each atom is proportional to the fractional desolvation energy. The new anisotropic solvation model combines the isotropic part from the original LK model with a newly introduced anisotropic polar term29 (link), which accounts for anisotropic interactions between polar heavy atoms and solvent: occlusion of water binding sites is made more energetically disfavorable than occlusion away from such sites. A second series of optimizations followed the introduction of attractive dispersion forces to hydrogens (originally pseudo-united-atom) as well as a reworked electrostatic model, yielding the energy function opt-nov15. For both energy function “snapshots”, following optimization, the resulting energy functions were validated on a set of independent structure prediction tasks too computationally intensive to be used in optimization30 (link). Details of the energy functions (opt-july15 and opt-nov15) and energy terms, and a list of the tests used for optimization, are described in the following sections; a full list of atomic parameters determined by dualOptE appears in Supplementary Tables S3–4, and details of the tests and datasets for optimization or independent validation in the Supplementary Materials.
The resulting next-generation Rosetta energy function (opt-nov15) outperforms the previous energy function (talaris2014)13 (link) on a wide range of structure prediction tests independent of the training set data. In contrast to opt-nov15, talaris2014 had been optimized solely on a set of structural data similar to that incorporated in this study, without the use of small-molecule data. We briefly summarize the energy function changes; full details are again provided below. First, there are changes in the physical models, notably the new anisotropic solvation model, a sigmoidal dielectric model, and explicit modeling of the effects of proline on the backbone torsion angles of the preceding residue. Second, there are changes in the representation; in previous Rosetta energy functions hydrogens are purely repulsive to speed computation (much shorter range distance cutoffs were required), whereas in the new energy function hydrogens make attractive LJ interactions. Third, there are changes in the overall balance of forces: compared to talaris2014, both solvation and electrostatic forces are considerably stronger relative to other non-bonded interactions. Fourth, there are changes in many energy function parameters: the attractive interactions of sulfur and aliphatic carbons are stronger (bringing better agreement with small-molecule liquid phase data), and the partial charges of charged chemical groups are more evenly distributed (rather than being primarily on the tip atoms).
Publication 2016
Amino Acids Anisotropy Binding Sites Carbon Crystallography Dental Occlusion Discrimination, Psychology Disgust Electrostatics Functional Performance Hydrogen Physical Examination Proline Proteins Radius Solvents Sulfur Task Performance Vertebral Column
Brain networks can be derived from anatomical or physiological observations, resulting in structural and functional networks, respectively. When interpreting brain network data sets, it is important to respect this fundamental distinction.7 ,13 (link)
Structural connectivity describes anatomical connections linking a set of neural elements. At the scale of the human brain, these connections generally refer to white matter projections linking cortical and subcortical regions. Structural connectivity of this kind is thought to be relatively stable on shorter time scales (seconds to minutes) but may be subject to plastic experience-dependent changes at longer time scales (hours to days). In human neuroimaging studies, structural brain connectivity is commonly measured as a set of undirected links, since the directionality of projections currently cannot be discerned.
Functional connectivity is generally derived from time series observations, and describes patterns of statistical dependence among neural elements.12 (link) Time series data may be derived with a variety of techniques, including electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI), and can be computed in a number of ways, including as cross-correlation, mutual information, or spectral coherence. While the presence of a statistical relationship between two neural elements is often taken as a sign of functional coupling, it must be noted that the presence of such coupling does not imply a causal relationship.14 (link) Functional connectivity is highly time-dependent, often changing in a matter of tens or hundreds of milliseconds as functional connections are continually modulated by sensory stimuli and task context. Even when measured with techniques that operate with a slow sampling rate such as fMRI, functional connectivity may exhibit non-stationary fluctuations (see below).
Effective connectivity represents a third and increasingly important mode of representing and analyzing brain networks.11 (link),15 (link) Effective connectivity attempts to capture a network of directed causal effects between neural elements. As such it represents a generative and mechanistic model that accounts for the observed data, selected from a range of possible models using objective criteria like the model evidence. Recent developments in this area include approaches towards “network discovery”16 (link),17 (link) involving the identification of graph models for effective connectivity that best explain empirical data. While effective connectivity bears much promise for the future, most current studies of brain networks are still carried out on either structural or functional connectivity data sets, and hence these two modes of connectivity will form the main focus of this review.
Within the formal framework of graph theory, a graph or network comprises a set of nodes (neural elements) and edges (their mutual connections). Structural and/or functional brain connectivity data recorded from the human brain can be processed into network form by following several steps, starting with the definition of the network's nodes and edges (Figure 1). This first step is fundamental for deriving compact and meaningful descriptions of brain networks.18 (link),19 (link) Nodes are generally derived by parcellating cortical and subcortical gray matter regions according to anatomical borders or landmarks, or by defining a random parcellation into evenly spaced and sized voxel clusters. Once nodes are defined, their structural or functional couplings can be estimated, and the full set of all pairwise couplings can then be aggregated into a connection matrix. To remove inconsistent or weak interactions, connection matrices can be subjected to averaging across imaging runs or individuals, or to thresholding.
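As a simple illustration of these steps, the sketch below aggregates pairwise couplings between parcellated nodes into a connection matrix and applies an absolute threshold; the synthetic node time series, the choice of Pearson correlation as the coupling measure, and the threshold value are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_timepoints = 8, 200
timeseries = rng.standard_normal((n_nodes, n_timepoints))  # one row per parcellated node

# Pairwise functional couplings aggregated into a connection matrix.
connection_matrix = np.corrcoef(timeseries)
np.fill_diagonal(connection_matrix, 0.0)

# Remove weak interactions with a simple absolute threshold (illustrative value).
threshold = 0.2
thresholded = np.where(np.abs(connection_matrix) >= threshold, connection_matrix, 0.0)
print(thresholded.shape)
```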
The resulting networks can be examined with the tools and methods of network science. One approach is based on graph theory and offers a particularly large set of tools for detecting, analyzing, and visualizing network architecture. A number of surveys on the application of graph theory methods in neuroscience are available.13 (link),20 -25 (link) An important part of any graph-theoretical analysis is the comparison of measures obtained from empirical networks to appropriately configured populations of networks representing a “null hypothesis.” A commonly used random null model is generated by randomizing the global topology of a network while preserving local node statistics, most importantly the graph's degree sequence.
Figure 2 illustrates a selection of graph measures that are widely used in studies of human brain networks. Based on the insights they deliver, they can be classified into measures reporting on aspects of segregation, integration, and influence.13 (link) Segregation (or specialization) refers to the degree to which a network's elements form separate cliques or clusters. Integration refers to the capacity of the network as a whole to become interconnected and exchange information. Influence measures report on how individual nodes or edges are embedded in the network and the extent to which they contribute to the network's structural integrity and information flow.
An important measure of segregation is the clustering coefficient of a given node, essentially measuring the density of connections among a node's topological neighbors. If these neighbors are densely interconnected they can be said to form a cluster or clique, and they are likely to share specialized information. The average of clustering coefficients over all nodes is the clustering coefficient of the network, often used as a global metric of the network's level of segregation. Another aspect of connectivity within local (ie, topologically connected) sets of network nodes is provided by the analysis of network motifs, constituting subgraphs or “building blocks” of the network as a whole.26 (link) Every network can be uniquely decomposed into a set of motifs of a given size, and the distribution of different motifs can reveal which subgraphs occur more frequently than expected, relative to an appropriate null model.
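The node and network clustering coefficients described here can be computed directly with standard graph libraries; the sketch below uses NetworkX on a small random graph purely as an illustration.

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=30, p=0.2, seed=1)  # toy binary network

node_clustering = nx.clustering(G)             # density of links among each node's neighbors
network_clustering = nx.average_clustering(G)  # network-level segregation measure

print(node_clustering[0], network_clustering)
```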
Measures of integration are generally based on the concept of communication paths and their path lengths. A path is any unique sequence of edges that connects two nodes with one another, and its length is given by the number of steps (in a binary graph) or the sum of the edge lengths (in a weighted graph). The length of the shortest path between each pair of nodes corresponds to their distance (also often referred to as the “shortest path length”), and the global average of all distances across the entire network is called the network's characteristic path length. Closely related to this measure is the global network efficiency, which is computed as the average of the inverse of all distances.27 (link) One can see easily that the global efficiency of a fully connected network would be maximal (equal to one) while the global efficiency of a completely disconnected network would be minimal (equal to zero). Short path lengths promote functional integration since they allow communication with few intermediate steps, and thus minimize effects of noise or signal degradation.
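A short NetworkX sketch of the integration measures defined above: the characteristic path length as the average shortest-path distance, and the global efficiency as the average inverse distance, computed here on a toy connected small-world graph.

```python
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.1, seed=2)  # toy connected network

char_path_length = nx.average_shortest_path_length(G)  # mean distance over node pairs
global_efficiency = nx.global_efficiency(G)            # mean of inverse distances

print(char_path_length, global_efficiency)
```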
Measures of influence attempt to quantify the “importance” of a given node or edge for the structural integrity or functional performance of a network. The simplest index of influence is the node degree, and in many (but not all) cases the degree of a node will be highly correlated with other more complex influence measures. Many of these measures capture the “centrality” of network elements, for example expressed as the number of short communication paths that travel through each node or edge.28 This measure of “betweenness centrality” is related to communication processes, but is also often found to be highly correlated with the related measure of “closeness,” quantifying the proximity of each node to the rest of the network. Another class of influence measures is based on the effect of node or edge deletion on short communication paths or network dynamics. For example, vulnerability measures the decrease (or, in some cases, the increase) in global efficiency due to the deletion of a single node or edge.29 The most central or influential nodes in a network are often referred to as “hubs,” but it should be noted that there is no unique way of detecting these hubs with graph theory tools. Instead, a conjunction of multiple influence measures (eg, degree, betweenness, vulnerability) should be used when attempting to identify hub nodes.30 (link)
While measures of segregation, integration, and influence can express structural characteristics of a network from different perspectives, recent developments in characterizing network communities or modules can potentially unify these different perspectives into a more coherent account of how a given network can be decomposed into modules (segregation), how these modules are interconnected (integration), and which nodes or edges are important for linking modules together (influence). Community detection is an extremely active field in network science.31 A number of new community detection techniques have found applications in the analysis of structural and functional brain networks. One of the most commonly used community detection algorithms is based on Newman's Q-metric32 coupled with an efficient optimization approach.33 Another approach called Infomap34 (link) identifies communities on the basis of a model of a diffusive random walk, essentially utilizing the fact that a modular network restricts diffusion between communities. In contrast, the Q-metric essentially captures the difference between the actually encountered within-module density of connections compared with what is expected based on a corresponding random model, given a particular partitioning of the network into modules. Since combinatorics makes it impractical to examine all possible module partitions, an optimization algorithm is needed to identify the single partition for which the Q-metric is maximized.
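A minimal sketch of modularity-based community detection of the kind discussed here, using NetworkX's greedy maximization of Newman's Q on a toy graph; this stands in for, and is not identical to, the specific optimization approaches cited in the text.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()  # classic toy network

communities = greedy_modularity_communities(G)  # near-optimal partition into modules
q = modularity(G, communities)                  # Newman's Q for that partition

print(len(communities), round(q, 3))
```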
Several methodological issues have arisen in recent years that impact the way community detection is carried out in brain networks, particularly in networks describing functional connectivity (Figure 3). The first issue concerns the widespread practice of thresholding functional networks to retain only a small percentage (often less than 10%) of the strongest functional connections. In addition, the remaining connections are then set to unit strength, resulting in a greatly sparsified binary network which is then subjected to standard graph analysis. Since the appropriate value of the threshold is a free and completely undetermined parameter, most practitioners vary the threshold across a broad range and then compute and compare graph metrics for the resulting networks. The practice of thresholding functional networks has two immediate consequences: a much sparser topology, which tends to result in more and more separate clusters or modules, and a topology that discards all (even strong) negative correlations. While the status of negative correlations in resting fMRI remains controversial,35 (link)-38 (link) it could be argued that the presence of an anticorrelation between two nodes does contribute information about their community membership. Building on this idea, variants of the Q-metric and other related measures that take into account the full weight distribution of a network have been proposed.39 (link) These new metrics can also be applied to functional networks regardless of their density (including fully connected networks), thus eliminating the need for thresholding entirely.
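The sketch below illustrates the thresholding practice described (and criticized) above: keeping only the strongest ~10% of functional connections and binarizing them. The density value and the random correlation matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
ts = rng.standard_normal((20, 300))
fc = np.corrcoef(ts)                      # full, signed functional connectivity matrix
np.fill_diagonal(fc, 0.0)

density = 0.10                            # retain the strongest 10% of connections
upper = fc[np.triu_indices_from(fc, k=1)]
cutoff = np.quantile(upper, 1.0 - density)

binary_fc = (fc >= cutoff).astype(int)    # binarized network; negative weights discarded
print(binary_fc.sum() // 2, "edges retained")
```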
The second issue relates to the optimization of the module partition given a cost or quality metric like Newman's Q. Studies of various real-world networks have shown that identifying the single optimal partition can not only be computationally difficult, but that many real networks can be partitioned at near-optimal levels in a number of different or “degenerate” ways.40 Aggregating these degenerate solutions can provide additional information about the robustness with which a given node pair is affiliated with the same or a different module. This idea has been developed further into a quantitative approach called “consensus clustering.”41 Consensus clustering has not yet been widely applied to brain networks,39 (link),42 but it may soon become a useful tool since it provides information about the strength with which individual neural elements affiliate with their “home community.” An attractive hypothesis is that elements with generally weak affiliation are good candidates to assume functional roles as hub nodes that crosslink diverse communities.
The next three sections of the article will review our current knowledge about the network architecture of structural brain networks, how structural networks relate to functional networks in both rest and task conditions, and what we can learn by applying network approaches to clinical problems.
Publication 2013
AH 31 Bears Body Regions Brain Cortex, Cerebral Debility Deletion Mutation Diffusion Electroencephalography Functional Performance Gray Matter Homo sapiens Nervousness physiology Population Group Toxic Epidermal Necrolysis White Matter
The present study included 304 cognitively normal (CN; mean age = 74.9 years, SD = 5.8; mean education = 16.3 years, SD = 2.7; gender: 150 women/154 men) and 846 MCI (mean age = 73.2 years, SD = 7.7; mean education = 16.0 years, SD = 2.8; gender: 343 women/503 men) participants from ADNI. Criteria for ADNI eligibility and diagnostic classifications are described at http://www.adni-info.org/Scientists/ADNIGrant/ProtocolSummary.aspx. All participants were 55–91 years old, non-depressed, had a modified Hachinski score [32 (link)] of 4 or less, and had a study partner able to provide an independent evaluation of functioning. Individuals with a history of significant neurological or psychiatric disease, substance abuse, or metal in their body other than dental fillings were excluded. The CN group included all who had at least one year of follow-up and who remained classified as normal throughout their participation in ADNI. ADNI criteria for MCI were: 1) subjective memory complaints reported by the participant, study partner, or clinician; 2) objective memory loss defined as scoring below an education-adjusted cut-off score on delayed recall of Story A of the WMS-R Logical Memory Test (score ≤8 for those with ≥16 years of education; score ≤4 for those with 8–15 years of education; score ≤2 for those with 0–7 years of education); 3) global CDR score of 0.5; and 4) general cognitive and functional performance sufficiently preserved such that a diagnosis of dementia could not be made by the site physician at the time of screening.
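A small hypothetical helper encoding the education-adjusted delayed-recall cut-offs as given above; the function name, arguments, and return value are illustrative and not part of the ADNI protocol.

```python
# Hypothetical helper applying the education-adjusted WMS-R Logical Memory
# delayed-recall cut-offs described above (names and return value illustrative).
def meets_mci_memory_criterion(delayed_recall_score, years_of_education):
    if years_of_education >= 16:
        cutoff = 8
    elif years_of_education >= 8:
        cutoff = 4
    else:
        cutoff = 2
    return delayed_recall_score <= cutoff

print(meets_mci_memory_criterion(delayed_recall_score=6, years_of_education=16))  # True
```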
Publication 2014
Cognition Dementia Dental Health Services Diagnosis Eligibility Determination Functional Performance Gender Human Body Memory Memory Loss Mental Disorders Mental Recall Metals Physicians Substance Abuse Woman
Prior to any gain calculations, each individual head and eye velocity record was checked to ensure that an acceptable head impulse movement was achieved, free of any artifact (12 (link)). In the head velocity component, this meant no head movement prior to the impulse, and as abrupt a stop as possible with a maximum “bounce-back” velocity of <25% of peak head velocity. For the eye velocity, it meant no eye movement prior to the onset of the head movement, and no obvious goggle movement or eyelid artifacts. Any impulse with these artifacts was eliminated prior to gain calculations. The remaining average number of impulses in each direction after removing the traces with artifacts ranged from a low of 32 impulses per subject (right anterior, 50–59 age group) to a maximum of 57 impulses per subject (right horizontal, 80–89 age group).
Vestibulo-ocular reflex gain was calculated as follows. The time of peak onset head acceleration was determined for each impulse (2 (link)), and head impulse onset was defined as occurring 60 ms before this time. Head impulse offset was defined as the moment when head velocity crossed zero velocity again (1 (link), 3 (link)). Following the methods previously described (1 (link), 3 (link)), the eye velocity time series were first desaccaded: saccades were identified by an eye acceleration criterion, and linear interpolation was used to replace the removed saccade. Then, the area under the desaccaded eye velocity curve from the start to the end of the head impulse was calculated and compared to the area under the head velocity curve during the same interval. VOR gain was defined as the ratio of these two areas. This is a position gain rather than the traditional slope gain (velocity) calculation, because our measurements and simulations (3 (link)) have shown that this method of calculating VOR gain with vHIT is more resistant to artifact – due, e.g., to slippage of the goggles – than the instantaneous VOR gain calculations using velocity. It is also a more functional measure of vestibulo-ocular performance, as it is the eye position error at the end of the head impulse (how far fixation is from the fixation target), which is the driver for the corrective saccade in the case of vestibular loss. VOR position gain and the corrective saccades are complementary (5 (link)).
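A minimal numerical sketch of the position-gain calculation described above: integrate the desaccaded eye velocity and the head velocity over the impulse interval and take the ratio of the two areas. The synthetic velocity traces and the sampling rate are assumptions for illustration.

```python
import numpy as np

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.2, 1.0 / fs)              # impulse interval from onset to offset

head_velocity = 150.0 * np.sin(np.pi * t / 0.2)   # synthetic head impulse (deg/s)
eye_velocity = -0.85 * head_velocity              # synthetic desaccaded eye response

# VOR position gain: ratio of area under eye velocity to area under head velocity.
vor_gain = abs(np.trapz(eye_velocity, t) / np.trapz(head_velocity, t))
print(round(vor_gain, 2))
```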
For each subject, the VOR gain of each impulse was plotted as a function of peak head velocity (Figure 2A). A line of best fit [using the lowess procedure (13 ) to perform a robust locally weighted regression] was fitted to those values using a smoothing fraction f, which depended on the range in head velocity covered, being chosen to correspond to an interval of 50°/s in peak head velocity. A cubic spline interpolation (using natural splines, where the second derivative was equal to zero at the endpoints) applied to the lowess-fitted data then provided a vector of VOR gain values at each 0.2°/s increment of peak head velocity (Figure 2A). The VOR gain vectors across all the subjects in that decade age group (Figure 2B) were averaged to form a vector of average VOR gain as a function of peak head velocity for that decade age group together with a band of ±95% confidence intervals of the mean (Figure 2C) and a band of the mean ± 2 SDs (Figure 2D) to show the band across velocities in which the results of 95% of the healthy population in that decade age group are expected to fall (8 ). This data processing procedure was repeated for each decade band for horizontal, anterior, and posterior canals. Plots of two-tailed 95% confidence intervals were important for showing whether the VOR data for an age band included the VOR gain of 1.0.
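The sketch below mirrors the fitting pipeline described here: robust lowess of gain versus peak head velocity, followed by natural cubic-spline resampling onto a fine velocity grid, using statsmodels and SciPy on synthetic data. The smoothing fraction and the synthetic gain values are illustrative, not those of the cited study.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
peak_head_velocity = np.sort(rng.uniform(60, 250, 80))            # deg/s
vor_gain = 0.95 - 0.0005 * peak_head_velocity + rng.normal(0, 0.03, 80)

# Robust locally weighted regression; fraction chosen here only for illustration.
fit = lowess(vor_gain, peak_head_velocity, frac=0.3, return_sorted=True)

# Natural cubic spline (second derivative zero at the endpoints) on the lowess fit,
# resampled at 0.2 deg/s increments of peak head velocity.
spline = CubicSpline(fit[:, 0], fit[:, 1], bc_type="natural")
velocity_grid = np.arange(fit[0, 0], fit[-1, 0], 0.2)
gain_vector = spline(velocity_grid)
print(gain_vector.shape)
```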
Publication 2015
Acceleration Age Groups Cloning Vectors Cuboid Bone Eyelids Eye Movements Functional Performance Head Head Movements Movement Population Health Pulp Canals Vestibular Labyrinth Vision

Most recent protocols related to «Functional Performance»

To measure performance, in Figs. 2I, 3 B and D, and 4 G and H, we computed the circular average distance (72 ) of the estimate μT from the true HD ϕT at the end of a simulation of length T = 20, across P = 5,000 simulated trajectories, by

m_1 = (1/P) Σ_{k=1..P} exp( i ( μ_T^(k) − ϕ_T^(k) ) ).

The absolute value of this complex-valued circular average, 0 ≤ |m_1| ≤ 1, is unitless, denotes an empirical accuracy (or “inference accuracy”), and thus measures how well the estimate μT matches the true HD ϕT. Here, a value of 1 denotes an exact match. The reported inference accuracy is related to the circular variance via Var_circ = 1 − |m_1|. In SI Appendix, Fig. S5, we provide histograms of samples μT − ϕT for different numerical values of |m_1| to provide some intuition for the spread of estimates at a given value of the performance measure.
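A small NumPy sketch of the inference-accuracy computation defined above: average the unit phasors of the estimation errors over trajectories and take the absolute value. The simulated true headings and estimates are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
P = 5000
true_hd = rng.uniform(-np.pi, np.pi, P)                  # true head directions phi_T
estimates = true_hd + rng.vonmises(0.0, 4.0, P)          # noisy estimates mu_T

# Circular average of the estimation errors; |m1| = 1 means an exact match.
m1 = np.mean(np.exp(1j * (estimates - true_hd)))
inference_accuracy = np.abs(m1)
circular_variance = 1.0 - inference_accuracy
print(round(inference_accuracy, 3), round(circular_variance, 3))
```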
We estimated performance through such averages for a range of HD observation information rates in Figs. 2I, 3B and 4G. This information rate is a simulation time-step size-independent quantity, which measures the Fisher information that HD observations provide about true HD per unit time. For individual HD observations of duration dt, Eq. 6, this Fisher information approaches I_zt(ϕt) → (κ_z dt)^2 / 2 as dt → 0 (31 , Theorem 2). Per unit time, we observe 1/dt independent observations, leading to a total Fisher information (or information rate) of γ_z = κ_z^2 dt / 2. As γ_z needs to remain constant with changing Δt in simulations, to avoid increasing the amount of provided information, the HD observation reliability κ_z needs to change with the simulation time-step size Δt. To keep our plots independent of this time-step size, we thus plot performance as a function of the HD observation information rate rather than κ_z. For the inset of Fig. 3B, and for Figs. 3 D and F, we additionally performed a grid search over the fixed-point κ* (Fig. 3B, inset) or over both the fixed-point κ* and the decay speed β (Figs. 3 D and F). For each setting of κ* and β, we assessed the performance by computing an average over this performance for a range of HD observation information rates, weighted by how likely each observation reliability is assumed to be a priori. The latter was specified by a log-normal prior, p(γ_z) = Lognormal(γ_z | μ_γz, σ_γz^2), favoring intermediate reliability levels. We chose μ_γz = 0.5 and σ_γz^2 = 1 for the prior parameters, but our results did not strongly depend on this parameter choice. The performance loss shown in Fig. 3D also relied on such a weighted average across information rates γ_z for a particle filter benchmark (PF, SI for details). The loss itself was then defined as 1 − Performance / Performance_PF.
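An illustrative sketch of the prior-weighted averaging step described above: integrate a performance curve over information rates under the stated log-normal prior, then form the loss relative to a benchmark. The performance curve and the benchmark value are placeholders, not results from the study.

```python
import numpy as np

# Weighted average of a performance curve over HD observation information rates,
# using the log-normal prior described above (mu = 0.5, sigma^2 = 1).
gamma_z = np.linspace(0.05, 5.0, 200)
mu_gamma, sigma_gamma = 0.5, 1.0
prior = np.exp(-(np.log(gamma_z) - mu_gamma) ** 2 / (2 * sigma_gamma ** 2)) \
        / (gamma_z * sigma_gamma * np.sqrt(2 * np.pi))

performance = gamma_z / (1.0 + gamma_z)   # placeholder accuracy-vs-rate curve

weighted_performance = np.trapz(performance * prior, gamma_z) / np.trapz(prior, gamma_z)
loss_vs_pf = 1.0 - weighted_performance / 0.95   # assumed particle-filter benchmark of 0.95
print(round(weighted_performance, 3), round(loss_vs_pf, 3))
```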
Publication 2023
Functional Performance Intuition Strains
All patients aged ≥ 18 years (yrs) who had undergone surgery for BM between 01/2013 and 12/2018 at the authors’ neuro-oncological center were registered in a computerized database. Only patients with histopathological proven BM were included in this study. Patients with lost follow-up information regarding the day of death were excluded from further analysis. The study was conducted in accordance with the Declaration of Helsinki and the protocol was approved by the Ethics Committee of the University Hospital Bonn (No. 250/19). Informed consent was not sought as a retrospective study design was chosen.
Pertinent clinical information such as preoperative functional neurological status, comorbidities, radiological features, primary site of cancer and time of diagnosis was assessed. The Karnofsky Performance Score (KPS) was used to classify the patients according to their functional status at admission. Patients were evaluated at admission according to their clinical–functional constitution with KPS ≥ 70% or KPS < 70%, as described previously [20 (link)]. The Charlson Comorbidity Index (CCI) was used to evaluate the comorbidity burden of patients prior to surgery. After age adjustment, patients with BM were divided into two groups with CCI < 10 and CCI ≥ 10 as previously described [19 ]. A weekly tumor board meeting was held at initial presentation and during follow-up to discuss treatment strategies for each patient. Decisions were made by interdisciplinary consensus and, when appropriate, coordinated with the referring physician’s previous therapies [18 (link)]. In case of multiple BM, the indication for resection concerned the clinically manifest lesion, the prevention of mass effects by resection of the most prevailing BM, and/or the prevention of acute tumor-related hydrocephalus.
All patients were divided into two groups for further investigations: Patients with BM as manifestation of a known cancer (metachronous situation) and patients with diagnosis of BM as the first manifestation of an unknown cancer disease (synchronous situation).
OS was defined as the time period from the day of surgery for BM until death, or until the last observation if the date of death was not known.
Publication 2023
ARID1A protein, human Diagnosis Ethics Committees, Clinical Functional Performance Hydrocephalus Malignant Neoplasms Neoplasms Neoplasms, Unknown Primary Operative Surgical Procedures Patients Physicians Surgery, Day X-Rays, Diagnostic
Sample characteristics were compared using Fisher’s exact test (categorical variables) or t-test (continuous variables). Associations between PRSs and past-year passive/active suicidal ideation were investigated using generalized estimating equations (GEE) adjusted for age, sex, and 10 principal components (PCs) to correct for population stratification. The use of GEE analyses enables repeated measurements to be taken into account, and data based on examinations when individuals fulfilled the criteria for a dementia diagnosis were excluded. All analyses were then repeated after also excluding data based on examinations when individuals were diagnosed with MDD. Further, sensitivity analyses were performed based on passive suicidal ideation only (Paykel questions 1 and 2). Correction for multiple testing was performed using the Bonferroni method. Due to overlap between many of the PRSs, not all tests performed could be considered independent of each other, and we therefore corrected for the number of domains included, i.e., psychiatric conditions (depression and suicidality), personality (neuroticism), cognitive function/performance (general cognitive performance, Alzheimer’s disease, educational attainment), loneliness, and vascular disease (stroke, hypertension, atherosclerotic heart disease, and angina), generating a corrected p-value threshold of p = 0.01. The statistical analyses were performed in IBM SPSS Statistics v28.
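A hedged sketch of a GEE model of the kind described here, using statsmodels with a binomial family and repeated measurements grouped by participant. The analyses above were run in SPSS; the variable names, synthetic data, number of PCs, and exchangeable covariance structure below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant-examination, grouped by 'id'
# (the real analysis adjusted for 10 PCs; only 2 are shown here for brevity).
rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame({
    "id": np.repeat(np.arange(n // 3), 3),
    "ideation": rng.integers(0, 2, n),
    "prs": rng.standard_normal(n),
    "age": rng.uniform(70, 90, n),
    "sex": rng.integers(0, 2, n),
    "pc1": rng.standard_normal(n),
    "pc2": rng.standard_normal(n),
})

model = smf.gee("ideation ~ prs + age + sex + pc1 + pc2",
                groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.pvalues["prs"])

# Bonferroni-style correction across the five tested domains: 0.05 / 5 = 0.01.
```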
Publication 2023
Angina Pectoris Cerebrovascular Accident Cognition Coronary Arteriosclerosis Dementia Functional Performance High Blood Pressures Hypersensitivity Mental Disorders Neuroticism Physical Examination Prieto syndrome Vascular Diseases
The Bayer Activities of Daily Living Scale (BAYER-S) [35 ] was used to evaluate the functional capacity of the patients. The scale consists of 25 items, each scored from 1 to 10, answered by the patient’s family caregiver. Lower scores indicate better functional performance.
Publication 2023
Family Caregivers Functional Performance Patients
The modified short physical performance battery (SPPB) test was used to assess functional performance. The following tests were performed under the SPPB based on the recommendations of Ilich et al. (9 (link)): one-leg stance (to evaluate balance), gait speed (to measure endurance) and the sit-to-stand chair test (to assess lower extremity strength). The cut-off values for these tests were ≤ 16 s, < 0.8 m/s and ≤ 20 times, respectively. The SPPB has an internal consistency of 0.76 and predictive validity for mortality, nursing home admission and disability risk (10 (link)).
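A small helper illustrating how the three SPPB cut-offs listed above could be applied to flag impaired performance; the function name, argument names, and return format are illustrative, not part of the cited protocol.

```python
# Hypothetical helper applying the modified SPPB cut-offs from the text:
# one-leg stance <= 16 s, gait speed < 0.8 m/s, sit-to-stand <= 20 times.
def sppb_flags(one_leg_stance_s, gait_speed_m_s, sit_to_stand_times):
    return {
        "balance_impaired": one_leg_stance_s <= 16,
        "endurance_impaired": gait_speed_m_s < 0.8,
        "strength_impaired": sit_to_stand_times <= 20,
    }

print(sppb_flags(one_leg_stance_s=12.0, gait_speed_m_s=1.1, sit_to_stand_times=18))
```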
Publication 2023
Disabled Persons Functional Performance Lower Extremity Performance, Physical

Top products related to «Functional Performance»

Sourced in United States, Austria, Japan, Belgium, United Kingdom, Cameroon, China, Denmark, Canada, Israel, New Caledonia, Germany, Poland, India, France, Ireland, Australia
SAS 9.4 is an integrated software suite for advanced analytics, data management, and business intelligence. It provides a comprehensive platform for data analysis, modeling, and reporting. SAS 9.4 offers a wide range of capabilities, including data manipulation, statistical analysis, predictive modeling, and visual data exploration.
Sourced in United States
Lipofectamine 2000 CD is a transfection reagent developed by Thermo Fisher Scientific for the efficient delivery of DNA, RNA, and other nucleic acids into a variety of cell types. It is a cationic lipid-based formulation designed to facilitate the uptake of genetic material into the target cells.
Sourced in United States, China, Germany, Japan, United Kingdom, Italy, Switzerland, Canada, France
Lipofectamine LTX is a transfection reagent used for the delivery of nucleic acids, such as plasmid DNA or RNA, into mammalian cells. It is designed to efficiently and gently introduce these molecules into the cells, enabling their expression or functional studies.
Sourced in United States, Japan, United Kingdom, Germany, Belgium, Austria, Italy, Poland, India, Canada, Switzerland, Spain, China, Sweden, Brazil, Australia, Hong Kong
SPSS Statistics is a software package used for interactive or batched statistical analysis. It provides data access and management, analytical reporting, graphics, and modeling capabilities.
Sourced in United States, Japan, Italy
JMP 10 is a data analysis software product developed by SAS Institute. It provides a range of statistical and analytical tools for data exploration, visualization, and modeling. JMP 10 is designed to help users gain insights from their data through interactive and intuitive interfaces.
Sourced in United States, United Kingdom
SPSS Statistics is a software package used for interactive or batched statistical analysis. It is capable of handling a variety of data formats and can perform a wide range of statistical procedures including, but not limited to, data management, statistical modeling, and reporting. Version 20.0 is a stable release of the software.
Sourced in United States, United Kingdom, Germany
SPSS Statistics for Windows, Version 20.0 is a software application for statistical analysis. It provides tools for data management, visualization, and advanced statistical modeling. The software is designed to work on the Windows operating system.
Sourced in United States, Japan, United Kingdom, Austria, Germany, Czechia, Belgium, Denmark, Canada
SPSS version 22.0 is a statistical software package developed by IBM. It is designed to analyze and manipulate data for research and business purposes. The software provides a range of statistical analysis tools and techniques, including regression analysis, hypothesis testing, and data visualization.
Sourced in Germany, United States, Australia, United Kingdom, Canada
The 32-channel head coil is a key component in magnetic resonance imaging (MRI) systems. It is designed to acquire high-quality images of the human head, enabling detailed visualization and analysis of brain structure and function.
The FLIPR® Tetra cellular screening system is a high-throughput, automated fluorescence imaging platform designed for cell-based assays. It enables the simultaneous measurement and analysis of real-time cellular responses in multi-well microplates. The system is capable of rapid, high-resolution kinetic imaging to support a variety of cell-based applications.

More about "Functional Performance"

Functional performance, also known as functional ability or functional capacity, refers to an individual's capacity to engage in daily activities and tasks essential for independent living and overall well-being.
This multifaceted concept encompasses physical, cognitive, and psychosocial domains, and can be influenced by various factors such as age, health status, environment, and personal preferences.
Optimizing functional performance is crucial for maintaining quality of life and promoting successful aging.
Researchers and clinicians may utilize standardized assessment tools, together with statistical software such as SAS 9.4, SPSS Statistics, JMP 10, and SPSS version 22.0, to evaluate and enhance functional performance in diverse populations.
These assessment tools may include measures of activities of daily living (ADLs), instrumental activities of daily living (IADLs), cognitive function, and physical fitness.
Interventions to improve functional performance may involve physical therapy, occupational therapy, cognitive training, and the use of assistive technologies.
For example, the FLIPR® Tetra cellular screening system can be used to assess cellular function and identify potential interventions.
Additionally, transfection agents like Lipofectamine 2000 CD and Lipofectamine LTX may be used to deliver gene therapies or other interventions that can impact functional performance.
By understanding the factors that influence functional performance and leveraging the appropriate assessment tools and interventions, researchers and clinicians can work to optimize an individual's ability to engage in daily activities and tasks, ultimately enhancing their quality of life and promoting successful aging.
The PubCompare.ai platform can help identify the most accurate and reproducible methods to enhance research productivity and accuracy in this important area of study.