R², using the most general definition [5,8]:

Formula 10:
R^2 = 1 - \frac{RSS}{TSS} = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}

with RSS = residual sum-of-squares, TSS = total sum-of-squares, y_i = response values, \hat{y}_i = fitted values and \bar{y} = the mean of the response values. For a more detailed description see Remarks 1-6 in Additional File.
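As a minimal numerical sketch of this definition (in Python with NumPy; the function name and any example data are illustrative, not taken from the original analysis):

```python
import numpy as np

def r_squared(y, y_fit):
    """General R^2 = 1 - RSS/TSS from observed and fitted values."""
    y, y_fit = np.asarray(y, float), np.asarray(y_fit, float)
    rss = np.sum((y - y_fit) ** 2)        # residual sum-of-squares
    tss = np.sum((y - y.mean()) ** 2)     # total sum-of-squares
    return 1.0 - rss / tss
```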
We chose to use the adjusted R² to compensate for possible bias due to different numbers of parameters:

Formula 11:
R^2_{adj} = 1 - (1 - R^2)\,\frac{n-1}{n-p}

with n = sample size and p = number of parameters.
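A sketch of this adjustment, assuming the common variant that rescales by (n - 1)/(n - p); the exact correction term in a given implementation may differ:

```python
def r_squared_adjusted(r2, n, p):
    """Adjusted R^2: penalizes R^2 for the number of fitted parameters p given sample size n."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)
```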
The Akaike Information Criterion (AIC, [10-12]) is a measure that is widely accepted for assessing the validity of models within a cohort of nonlinear models and is frequently used for model selection [13]:

Formula 12:
AIC = 2p - 2\ln(L)

with p = number of parameters and ln(L) = maximum log-likelihood of the estimated model. The latter, in the case of a nonlinear fit with normally distributed errors [13], is calculated by

Formula 13:
\ln(L) = \frac{1}{2}\left[-N\left(\ln(2\pi) + 1 - \ln(N) + \ln\!\left(\sum_{i=1}^{N} x_i^2\right)\right)\right]

with x_1, ..., x_N = the residuals from the nonlinear least-squares fit and N = their number.
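A short sketch of both quantities computed directly from the residuals of a fit (function names are illustrative):

```python
import numpy as np

def log_likelihood(residuals):
    """Maximum log-likelihood of a least-squares fit assuming normally distributed errors."""
    x = np.asarray(residuals, float)
    n = x.size
    return 0.5 * (-n * (np.log(2 * np.pi) + 1 - np.log(n) + np.log(np.sum(x ** 2))))

def aic(residuals, p):
    """AIC = 2p - 2 ln(L) for a fit with p parameters."""
    return 2 * p - 2 * log_likelihood(residuals)
```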
To provide a level playing field, we employed an AIC variant that corrects for small sample sizes, the bias-corrected AIC (AICc):

Formula 14:
AICc = AIC + \frac{2p(p+1)}{n-p-1}

with n = sample size and p = number of parameters.
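The correction term itself is a one-liner; a sketch building on the AIC value from the previous snippet:

```python
def aicc(aic_value, n, p):
    """Bias-corrected AIC for small sample size n and p fitted parameters."""
    return aic_value + 2 * p * (p + 1) / (n - p - 1)
```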
In order to quantify the validity of a fit, we used Akaike weights, which express the weight of evidence for each model within the cohort of models in question [12-14]:

Formula 15:
w_i(AIC) = \frac{\exp\left(-\frac{1}{2}\Delta_i(AIC)\right)}{\sum_{k=1}^{K}\exp\left(-\frac{1}{2}\Delta_k(AIC)\right)}

with i, k = model indices, K = number of models in the cohort and \Delta_i(AIC) = the difference in AIC between model i and the model with the lowest AIC; the exponentiated differences are normalized to their sum (denominator).
Here, too, we used the bias-corrected AICc for calculating the Akaike weights.
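A compact sketch of this normalization over a cohort of models (the input may be AIC or AICc values; names are illustrative):

```python
import numpy as np

def akaike_weights(ic_values):
    """Akaike weights: weight of evidence for each model in a cohort, from AIC (or AICc) values."""
    ic = np.asarray(ic_values, float)
    delta = ic - ic.min()            # Delta_i relative to the best model
    rel = np.exp(-0.5 * delta)       # relative likelihood of each model
    return rel / rel.sum()           # normalize so the weights sum to 1
```

For example, akaike_weights([102.3, 100.1, 107.8]) assigns the largest weight to the second model, the one with the lowest criterion value.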
We also chose to employ the Bayesian Information Criterion (BIC), which places a higher penalty on the number of parameters [15]:

Formula 16:
BIC = -2\ln(L) + p\,\ln(n)

with p = number of parameters, n = sample size and L = maximum likelihood of the estimated model.
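A sketch using the log-likelihood from the earlier snippet; note that ln(n) exceeds 2 for n ≥ 8, which is where the BIC penalty becomes stricter than that of the AIC:

```python
import numpy as np

def bic(log_lik, n, p):
    """BIC = -2 ln(L) + p ln(n) for a fit with p parameters and sample size n."""
    return -2.0 * log_lik + p * np.log(n)
```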
Furthermore, we used the residual variance, i.e. the part of the variance that cannot be accounted for by the model:

Formula 17:
\sigma^2_{res} = \frac{RSS}{n-p}

with RSS = residual sum-of-squares, n = sample size and p = number of parameters.
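A minimal sketch computing this from the residuals of a fit:

```python
import numpy as np

def residual_variance(residuals, p):
    """Residual variance = RSS / (n - p) for a fit with p parameters."""
    x = np.asarray(residuals, float)
    return np.sum(x ** 2) / (x.size - p)
```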
The variance of a least-squares fit is also characterized by the chi-square statistic, defined as

Formula 18:
\chi^2 = \sum_{i=1}^{n}\left(\frac{y_i - f(x_i)}{\sigma_i}\right)^2

where y_i = response values, f(x_i) = the fitted values and \sigma_i = the uncertainty in the individual measurements y_i. We further define the reduced chi-square as a useful measure [16] by
Formula 19:
\chi^2_\nu = \frac{\chi^2}{\nu} = \frac{1}{n-p}\sum_{i=1}^{n}\left(\frac{y_i - f(x_i)}{\sigma_i}\right)^2

with ν = n - p (degrees of freedom). If the fitting function is a good approximation to the parent function, then the variances of both should agree well and the reduced chi-square should be approximately unity. If the reduced chi-square is much larger than 1 (e.g. 10 or 100), either the measurement errors were estimated too optimistically or an inappropriate fitting function was selected. If the reduced chi-square is much smaller than 1 (e.g. 0.1 or 0.01), the measurement errors may have been estimated too pessimistically. For this work, models were selected on the basis of the reduced chi-square being closest to 1.
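A sketch of the reduced chi-square given per-point uncertainties (function and argument names are illustrative):

```python
import numpy as np

def reduced_chi_square(y, y_fit, sigma, p):
    """Reduced chi-square: chi^2 divided by nu = n - p degrees of freedom."""
    y, y_fit, sigma = (np.asarray(a, float) for a in (y, y_fit, sigma))
    chi2 = np.sum(((y - y_fit) / sigma) ** 2)
    return chi2 / (y.size - p)
```

Values close to 1 then indicate that the scatter of the residuals is consistent with the stated measurement uncertainties, which is the selection criterion described above.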