## Monte Carlo-based prediction intervals for nonlinear regression

May 11, 2018

Calculation of the propagated uncertainty $\sigma_y$ using $\nabla \Sigma \nabla^T$ (1), where $\nabla$ is the gradient and $\Sigma$ the covariance matrix of the coefficients $\beta_i$, is called the “Delta Method” and is widely applied in nonlinear least-squares (NLS) fitting. However, this method is based on a first-order Taylor expansion and thus assumes linearity around $\hat{y} = f(x, \hat{\beta})$. The second-order approach can partially correct for this restriction by using a second-order polynomial around $\hat{y}$, giving $\nabla \Sigma \nabla^T + \frac{1}{2} \mathrm{tr}(\mathrm{H} \Sigma \mathrm{H} \Sigma)$ (2), where $\mathrm{tr}(\cdot)$ is the matrix trace and $\mathrm{H}$ is the Hessian.
Confidence and prediction intervals for NLS models are calculated using $t(1 - \frac{\alpha}{2}, \nu) \cdot \sigma_y$ (3) or $t(1 - \frac{\alpha}{2}, \nu) \cdot \sqrt{\sigma_y^2 + \sigma_r^2}$ (4), respectively, where the residual variance $\sigma_r^2 = \frac{\sum_{i=1}^n (y_i - \hat{y}_i)^2}{\nu}$ (5).
Now, how can we employ the matrix notation of error propagation for creating Taylor expansion- and Monte Carlo-based prediction intervals?
The inclusion of $\sigma_r^2$ in the prediction interval can be implemented as an extended gradient and “augmented” covariance matrix. So instead of using $\hat{y} = f(x, \hat{\beta})$ (6), we take $\hat{y} = f(x, \hat{\beta}) + \sigma_r^2$ (7) as the expression and augment the $p \times p$ covariance matrix $\Sigma$ of the $p$ coefficients to a $(p+1) \times (p+1)$ covariance matrix, where $\Sigma_{p+1, p+1} = \sigma_r^2$. Partial differentiation and matrix multiplication will then yield, for example with two coefficients $\beta_1$ and $\beta_2$ and their corresponding covariance matrix $\Sigma$:
$\left[\begin{array}{ccc} \frac{\partial f}{\partial \beta_1} & \frac{\partial f}{\partial \beta_2} & 1 \end{array}\right] \left[\begin{array}{ccc} \sigma_1^2 & \sigma_1\sigma_2 & 0 \\ \sigma_2\sigma_1 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_r^2 \end{array}\right] \left[\begin{array}{c} \frac{\partial f}{\partial \beta_1} \\ \frac{\partial f}{\partial \beta_2} \\ 1 \end{array}\right]$ (8)
$= \left(\frac{\partial f}{\partial \beta_1}\right)^2\sigma_1^2 + 2 \frac{\partial f}{\partial \beta_1} \frac{\partial f}{\partial \beta_2} \sigma_1 \sigma_2 + \left(\frac{\partial f}{\partial \beta_2}\right)^2\sigma_2^2 + \sigma_r^2$ (9)
$\equiv \sigma_y^2 + \sigma_r^2$, which then goes into (4).
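A minimal R sketch of (8)/(9), with made-up gradient and covariance values (the actual values come from the fitted model), shows that the augmented form reproduces $\sigma_y^2 + \sigma_r^2$:

```r
## illustrative values: partial derivatives df/dbeta1, df/dbeta2
grad <- c(0.8, -0.3)
## illustrative covariance matrix of beta1, beta2
Sigma <- matrix(c(0.04, 0.01,
                  0.01, 0.09), nrow = 2)
sigma.r2 <- 0.02                          # residual variance

## augment: extend the gradient by 1, Sigma by a row/column holding sigma.r2
gradA <- c(grad, 1)
SigmaA <- rbind(cbind(Sigma, 0), c(0, 0, sigma.r2))

var.conf <- grad %*% Sigma %*% grad       # sigma_y^2, for the confidence interval
var.pred <- gradA %*% SigmaA %*% gradA    # sigma_y^2 + sigma_r^2, as in (9)
var.pred - var.conf                       # equals sigma.r2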
The advantage of the augmented covariance matrix is that it can be exploited for creating Monte Carlo-based prediction intervals. This is new in ‘propagate’ version 1.0-6: we add another dimension by employing the augmented covariance matrix of (8) in the multivariate t-distribution random number generator (in our case rtmvt), with $\mu = 0$. All $n$ simulations are then evaluated with (7), and the usual $[\frac{\alpha}{2}, 1 - \frac{\alpha}{2}]$ quantiles are calculated for the prediction interval. Using the original covariance matrix with (6) delivers the MC-based confidence interval.
Application of second-order Taylor expansion or the MC-based approach demonstrates nicely that for the majority of nonlinear models, the confidence/prediction intervals around $\hat{y}$ are quite asymmetric, which the classical Delta Method does not capture:
```r
library(propagate)

DNase1 <- subset(DNase, Run == 1)
fm3DNase1 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
                 data = DNase1, start = list(Asym = 3, xmid = 0, scal = 1))

## first-order prediction interval
set.seed(123)
PROP1 <- predictNLS(fm3DNase1, newdata = data.frame(conc = 2), nsim = 1000000,
                    second.order = FALSE, interval = "prediction")
t(PROP1$summary)
```

```
Prop.Mean.1  0.74804722
Prop.sd.1    0.02081131
Prop.2.5     0.70308712
Prop.97.5    0.79300731
```

```r
## second-order prediction interval and MC
set.seed(123)
PROP2 <- predictNLS(fm3DNase1, newdata = data.frame(conc = 2), nsim = 1000000,
                    second.order = TRUE, interval = "prediction")
t(PROP2$summary)
```

```
Prop.Mean.1  0.74804722
Prop.Mean.2  0.74815136
Prop.sd.1    0.02081131
Prop.sd.2    0.02081520
Prop.2.5     0.70318286
Prop.97.5    0.79311987
Sim.Mean     0.74815598
Sim.sd       0.02261884
Sim.2.5      0.70317629
Sim.97.5     0.79309874
```

What we see here is that

i) the first-order prediction interval [0.70308712; 0.79300731] is symmetric and slightly down-biased compared to the second-order one [0.70318286; 0.79311987], and

ii) the second-order prediction interval tallies nicely up to the 4th decimal with the new MC-based interval (0.70318286 vs. 0.70317629; 0.79311987 vs. 0.79309874).
I believe this clearly demonstrates the usefulness of the MC-based approach for NLS prediction interval estimation…

## Linear regression with random error giving EXACT predefined parameter estimates

January 26, 2016

When simulating linear models from a defined slope/intercept with added Gaussian noise, the parameter estimates vary after least-squares fitting. Here is some code I developed that applies a double transform to these models so as to obtain a fitted model with EXACT predefined parameter estimates a (intercept) and b (slope).

It does so by:
1) Fitting a linear model #1 $Y_i = \beta_0 + \beta_1X_i + \varepsilon_i$ to the x,y data.
2) Correcting y by $\beta_1$: $Y_i = Y_i \cdot (\mathrm{b}/\beta_1)$.
3) Refitting linear model #2: $Y_i = \beta_0 + \beta_1X_i + \varepsilon_i$.
4) Correcting y by $\beta_0$: $Y_i = Y_i + (\mathrm{a} - \beta_0)$.
5) Refitting linear model #3: $Y_i = \beta_0 + \beta_1X_i + \varepsilon_i$, which is the final model with parameter estimates a and b.

Below is the code:

```r
exactLM <- function(
  x = 1:100,       ## predictor values
  b = 0.01,        ## slope
  a = 3,           ## intercept
  error = NULL,    ## homoscedastic error
  n = 1,           ## number of replicates
  weights = NULL,  ## possible weights, i.e. when heteroscedastic
  plot = TRUE,     ## plot data and regression
  ...              ## other parameters to 'lm'
)
{
  if (is.null(error)) stop("Please define some error!")

  ## create x and y-values
  x <- rep(x, n)
  y <- a + b * x

  if (!is.null(error) & length(error) != length(x))
    stop("'x' and 'error' must be of same length!")
  if (!is.null(weights) & length(weights) != length(x))
    stop("'x' and 'weights' must be of same length!")

  ## add error
  y <- y + error

  ## create linear model #1
  LM1 <- lm(y ~ x, weights = weights, ...)
  COEF1 <- coef(LM1)

  ## correct slope and create linear model #2
  y <- y * (b/COEF1[2])
  LM2 <- lm(y ~ x, weights = weights, ...)
  COEF2 <- coef(LM2)

  ## correct intercept and create linear model #3
  y <- y + (a - COEF2[1])
  LM3 <- lm(y ~ x, weights = weights, ...)

  ## plot data and regression
  if (plot) {
    plot(x, y, pch = 16)
    abline(LM3, lwd = 2, col = "darkred")
  }

  return(list(model = LM3, x = x, y = y))
}
```

Here are some applications using replicates and weighted fitting:
```r
############ Examples #################
## n = 1
exactLM(x = 1:100, a = 0.5, b = 0.2, error = rnorm(100, 0, 2))

## n = 3
exactLM(x = 1:100, a = 0.5, b = 0.2, error = rnorm(300, 0, 2), n = 3)

## weighted by exact 1/var
x <- 1:100
error <- rnorm(100, 0, 0.1 * x)
weights <- 1/(0.1 * x)^2
exactLM(x = x, a = 0.5, b = 0.2, error = error, weights = weights)

## weighted by empirical 1/var
x <- rep(1:100, 3)
error <- rnorm(300, 0, 0.1 * x)
weights <- rep(1/tapply(error, x, var), 3)
exactLM(x = x, a = 0.5, b = 0.2, error = error, weights = weights)
```

I am curious about comments concerning simplification and, more importantly, applications (other than cheating data…)!

Cheers,
Andrej

## Introducing: Orthogonal Nonlinear Least-Squares Regression in R

January 18, 2015

With this post I want to introduce my newly bred ‘onls’ package which conducts Orthogonal Nonlinear Least-Squares Regression (ONLS):
http://cran.r-project.org/web/packages/onls/index.html.
Orthogonal nonlinear least squares (ONLS) is a not-so-frequently applied and maybe overlooked regression technique that comes into play when one encounters an “error-in-variables” problem. While classical nonlinear least squares (NLS) aims to minimize the sum of squared vertical residuals, ONLS minimizes the sum of squared orthogonal residuals. The method is based on finding points on the fitted curve that are orthogonal to the data by minimizing, for each $(x_i, y_i)$, the Euclidean distance $\|D_i\|$ to some point $(x_{0i}, y_{0i})$ on the fitted curve. There is a 25-year-old FORTRAN implementation for ONLS available (ODRPACK, http://www.netlib.org/toms/869.zip), which has been included in the ‘scipy’ package for Python (http://docs.scipy.org/doc/scipy-0.14.0/reference/odr.html). Here, onls has been developed for easy future algorithm tweaking in R. The results obtained from onls are essentially identical to those of the original implementation [1, 2]. It is based on an inner loop using optimize for each $(x_i, y_i)$ to find $\min \|D_i\|$ within some window $[x_{i-w}, x_{i+w}]$, and an outer loop for the fit parameters using nls.lm of the ‘minpack.lm’ package. Sensible starting parameters for onls are obtained by prior fitting with standard nls, as parameter values for ONLS are usually fairly similar to those from NLS.
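To make the inner-loop idea concrete, here is a minimal sketch (not the actual onls code) of the inner step for a single data point: for fixed parameters, optimize searches a window around $x_i$ for the point on the curve that minimizes the squared Euclidean distance. Model, parameter values and data point are illustrative:

```r
## logistic model as used in the examples below; beta values are illustrative
f <- function(x, beta) beta[1]/(1 + exp((beta[2] - log(x))/beta[3]))
beta <- c(Asym = 2.4, xmid = 1.6, scal = 1.1)

xi <- 2; yi <- 0.8                        # one observed point (x_i, y_i)

## squared Euclidean distance from (xi, yi) to a candidate curve point (x0, f(x0))
D2 <- function(x0) (xi - x0)^2 + (yi - f(x0, beta))^2

## inner loop: minimize within a window [x_i - w, x_i + w], here w = 1
opt <- optimize(D2, interval = c(xi - 1, xi + 1))
x0 <- opt$minimum                         # foot point x_{0i}
y0 <- f(x0, beta)                         # corresponding y_{0i}
```

The outer loop would then sum these minimal squared distances over all points and hand that objective to nls.lm for parameter optimization.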

There is a package vignette available with more details in the “/onls/inst” folder, especially on what to do if fitting fails or not all points are orthogonal. I will work through one example here, the famous DNase 1 dataset of the nls documentation, with 10% added error. The semantics are exactly as in nls, albeit with a (somewhat) different output:

```r
> DNase1 <- subset(DNase, Run == 1)
> DNase1$density <- sapply(DNase1$density, function(x) rnorm(1, x, 0.1 * x))
> mod1 <- onls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
+              data = DNase1, start = list(Asym = 3, xmid = 0, scal = 1))
Obtaining starting parameters from ordinary NLS...
Passed... Relative error in the sum of squares is at most `ftol'.
Optimizing orthogonal NLS...
Passed... Relative error in the sum of squares is at most `ftol'.
```

The print.onls method gives, as in nls, the parameter values and the vertical residual sum-of-squares. However, the orthogonal residual sum-of-squares is also returned and MOST IMPORTANTLY, information on how many points $(x_{0i}, y_{0i})$ are actually orthogonal to $(x_i, y_i)$ after fitting:
```r
> print(mod1)
Nonlinear orthogonal regression model
  model: density ~ Asym/(1 + exp((xmid - log(conc))/scal))
   data: DNase1
 Asym  xmid  scal
2.422 1.568 1.099
 vertical residual sum-of-squares: 0.2282
 orthogonal residual sum-of-squares: 0.2234
 PASSED: 16 out of 16 fitted points are orthogonal.

Number of iterations to convergence: 2
Achieved convergence tolerance: 1.49e-08
```

Checking all points for orthogonality is accomplished using the independent checking routine check_o which calculates the angle between the slope $\mathrm{m}_i$ of the tangent obtained from the first derivative at $(x_{0i}, y_{0i})$ and the slope $\mathrm{n}_i$ of the onls-minimized Euclidean distance between $(x_{0i}, y_{0i})$ and $(x_i, y_i)$:

$\tan(\alpha_i) = \left|\frac{\mathrm{m}_i - \mathrm{n}_i}{1 + \mathrm{m}_i \cdot \mathrm{n}_i}\right|$
$\mathrm{m}_i = \frac{df(x, \beta)}{dx_{0i}}, \,\, \mathrm{n}_i = \frac{y_i - y_{0i}}{x_i - x_{0i}}$
=> $\alpha_i[^{\circ}] = \tan^{-1} \left( \left|\frac{\mathrm{m}_i - \mathrm{n}_i}{1 + \mathrm{m}_i \cdot \mathrm{n}_i}\right| \right) \cdot \frac{360}{2\pi}$ which should be $90^{\circ}$, if the Euclidean distance has been minimized.
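A minimal sketch of such an angle check (illustrative values, not the actual check_o code): the tangent slope is approximated numerically and compared with the slope of the connecting segment.

```r
## illustrative model and points; in check_o, (x0, y0) is the fitted foot point
f <- function(x) 2.4/(1 + exp((1.6 - log(x))/1.1))
x0 <- 1.8; y0 <- f(x0)

## numerical tangent slope m at (x0, y0) via central differences
m <- (f(x0 + 1e-6) - f(x0 - 1e-6))/2e-6

xi <- 2; yi <- 0.8                        # observed point (x_i, y_i)
n <- (yi - y0)/(xi - x0)                  # slope of the connecting segment

## angle in degrees; approaches 90 only if (x0, y0) is the true foot point
alpha <- atan(abs((m - n)/(1 + m * n))) * 360/(2 * pi)
```

Since the values here are arbitrary, alpha will generally not be 90°; check_o applies this calculation to the optimized $(x_{0i}, y_{0i})$.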

When plotting an ONLS model with the plot.onls function, it is important to know that orthogonality is only evident with equal scaling of both axes:
```r
> plot(mod1, xlim = c(0, 0.5), ylim = c(0, 0.5))
```

As with nls, all generics work:
print(mod1), plot(mod1), summary(mod1), predict(mod1, newdata = data.frame(conc = 6)), logLik(mod1), deviance(mod1), formula(mod1), weights(mod1), df.residual(mod1), fitted(mod1), residuals(mod1), vcov(mod1), coef(mod1), confint(mod1).
However, deviance and residuals deliver the vertical, standard NLS values. To calculate orthogonal deviance and obtain orthogonal residuals, use deviance_o and residuals_o.

[1] ALGORITHM 676 ODRPACK: Software for Weighted Orthogonal Distance Regression.
Boggs PT, Donaldson JR, Byrd RH and Schnabel RB.
ACM Trans Math Soft (1989), 15: 348-364.
[2] User’s Reference Guide for ODRPACK Version 2.01.
Software for Weighted Orthogonal Distance Regression.
Boggs PT, Byrd RH, Rogers JE and Schnabel RB.
NISTIR (1992), 4834: 1-113.

Cheers,
Andrej

## Error propagation based on interval arithmetic

September 27, 2014

I added an interval function to my ‘propagate’ package (now on CRAN) that conducts error propagation based on interval arithmetic. It calculates the uncertainty of a model by using interval arithmetic based on (what I call) a “combinatorial sequence grid evaluation” approach, thereby avoiding the classical dependency problem that often inflates the result interval.
This is how it works:
For two variables $x, y$ with intervals $[x_1, x_2]$ and $[y_1, y_2]$, the four basic arithmetic operations $\langle \mathrm{op} \rangle \in \{+, -, \cdot, /\}$ are
$[x_1, x_2] \,\langle\!\mathrm{op}\!\rangle\, [y_1, y_2] =$
$\left[ \min(x_1 \langle\!\mathrm{op}\!\rangle y_1, x_1 \langle\!\mathrm{op}\!\rangle y_2, x_2 \langle\!\mathrm{op}\!\rangle y_1, x_2 \langle\!\mathrm{op}\!\rangle y_2) \right.,$
$\left. \max(x_1 \langle\!\mathrm{op}\!\rangle y_1, x_1 \langle\!\mathrm{op}\!\rangle y_2, x_2 \langle\!\mathrm{op}\!\rangle y_1, x_2 \langle\!\mathrm{op}\!\rangle y_2)\right]$

So for a function $f([x_1, x_2], [y_1, y_2], [z_1, z_2], \ldots)$ with $k$ variables, we have to create all $2^k$ combinations $C_i \in \{x_1, x_2\} \times \{y_1, y_2\} \times \{z_1, z_2\} \times \cdots$, evaluate their function values $R_i = f(C_i)$ and select $R = [\min R_i, \max R_i]$.
The so-called dependency problem is a major obstacle to the application of interval arithmetic and arises when the same variable exists in several terms of a complicated and often nonlinear function. In these cases, over-estimation can cover a range that is significantly larger than the true one, i.e. $\min R_i \ll \min f(x, y, z, \ldots)$ and $\max R_i \gg \max f(x, y, z, \ldots)$. For an example, see Example 5 below. A partial solution to this problem is to refine $R_i$ by dividing $[x_1, x_2]$ into smaller subranges to obtain the sequence $(x_1, x_{1.1}, x_{1.2}, \ldots, x_2)$. Again, all combinations are evaluated as described above, resulting in a larger number of $R_i$ in which $\min R_i$ and $\max R_i$ may be closer to $\min f(x, y, z, \ldots)$ and $\max f(x, y, z, \ldots)$, respectively. This is the “combinatorial sequence grid evaluation” approach, which works quite well even in scenarios where monotonicity changes direction, obviating the need to create multivariate derivatives (Hessians) or to use some multivariate minimization algorithm.
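The idea can be sketched from scratch for a toy function $f(x, y) = x/(x+y)$ with $x, y \in [1, 2]$ (the package’s interval function does this internally): naive interval division gives $[1, 2]/[2, 4] = [0.25, 1]$, although the true range is $[1/3, 2/3]$ because $x$ appears in both numerator and denominator.

```r
## combinatorial sequence grid evaluation for f(x, y) = x/(x + y)
f <- function(x, y) x/(x + y)

## subdivide each input interval into a sequence
xseq <- seq(1, 2, length.out = 100)
yseq <- seq(1, 2, length.out = 100)

## evaluate f on all combinations and take min/max
grid <- expand.grid(x = xseq, y = yseq)
R <- with(grid, f(x, y))
range(R)    # close to the exact interval [1/3, 2/3], not the inflated [0.25, 1]
```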
If the interval is of type $[x_1 < 0, x_2 > 0]$, a zero is included into the middle of the sequence to avoid wrong results in case of even powers, i.e. $[-1, 1]^2 = [-1, 1][-1, 1] = [-1, 1]$ when actually the correct interval is $[0, 1]$, as exemplified by curve(x^2, -1, 1). Some examples to illustrate:

```r
## Example 2: A complicated nonlinear model.
## Reduce sequence length to 2 => original interval
## for quicker evaluation.
EXPR2 <- expression(C * sqrt((520 * H * P)/(M * (t + 460))))
H <- c(64, 65)
M <- c(16, 16.2)
P <- c(361, 365)
t <- c(165, 170)
C <- c(38.4, 38.5)
DAT2 <- makeDat(EXPR2)
interval(DAT2, EXPR2, seq = 2)
[1317.494, 1352.277]

## Example 5: Overestimation from dependency problem.
## Original interval with seq = 2 => [1, 7]
EXPR5 <- expression(x^2 - x + 1)
x <- c(-2, 1)
DAT5 <- makeDat(EXPR5)
interval(DAT5, EXPR5, seq = 2)
[1, 7]

## Refine with large sequence => [0.75, 7]
interval(DAT5, EXPR5, seq = 100)
[0.7502296, 7]

## Tallies with the 'curve' function.
curve(x^2 - x + 1, -2, 1)
```

Have fun!
Cheers,
-ans

## I’ll take my NLS with weights, please…

January 13, 2014

Today I want to advocate weighted nonlinear regression. Why so?
Minimum-variance estimation of the adjustable parameters in linear and nonlinear least squares requires that the data be weighted inversely to their variances, $w_i \propto \sigma_i^{-2}$. Only then is $\hat{\beta}$ the BLUE (Best Linear Unbiased Estimator) for linear regression and for nonlinear regression with small errors (http://en.wikipedia.org/wiki/Weighted_least_squares#Weighted_least_squares) — an important fact frequently neglected, especially in scenarios with heteroscedasticity.
The variance of a fit $s^2$ is also characterized by the statistic $\chi^2$, defined as follows:
$\chi^2 \equiv \sum_{i=1}^n \frac{(y_i - f(x_i))^2}{\sigma_i^2}$
The relationship between $s^2$ and $\chi^2$ can be seen most easily by comparison with the reduced $\chi^2$:

$\chi_\nu^2 = \frac{\chi^2}{\nu} = \frac{s^2}{\langle \sigma_i^2 \rangle}$

where $\nu$ = degrees of freedom ($n - p$) and $\langle \sigma_i^2 \rangle$ is the weighted average of the individual variances. If the fitting function is a good approximation to the parent function, the value of the reduced chi-square should be approximately unity, $\chi_\nu^2 = 1$. If the fitting function is not appropriate for describing the data, the deviations will be larger and the estimated variance will be too large, yielding a value greater than 1. A value less than 1 can be a consequence of the uncertainty in the determination of $s^2$, and the observed values of $\chi_\nu^2$ will fluctuate from experiment to experiment. To assign significance to the $\chi^2$ value, one can use the integral probability $P_\chi(\chi^2;\nu) = \int_{\chi^2}^\infty P_\chi(x^2, \nu)\,dx^2$, which describes the probability that a random set of $n$ data points sampled from the parent distribution would yield a value of $\chi^2$ equal to or greater than the calculated one. This can be calculated by 1 - pchisq(chi^2, nu) in R.
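In R notation, the statistics above can be sketched with illustrative data (in practice the residuals come from the fit and $\sigma_i$ from the known or estimated per-point variances):

```r
## illustrative setup: n points, p fitted parameters, known per-point sd
set.seed(1)
n <- 50; p <- 3
sigma <- rep(0.1, n)                  # sigma_i of each observation
resid <- rnorm(n, 0, 0.1)             # residuals y_i - f(x_i) of a hypothetical fit

chi2 <- sum(resid^2/sigma^2)          # chi-square statistic
nu <- n - p                           # degrees of freedom
chi2.red <- chi2/nu                   # reduced chi-square, ~1 for a good fit
p.value <- 1 - pchisq(chi2, nu)       # probability of chi2 this large or larger
```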

To see that this actually works, we can Monte Carlo simulate some heteroscedastic data with defined variance as a function of $y$-magnitude and compare unweighted and weighted NLS.
First we take the example from the documentation to nls and fit an enzyme kinetic model:
```r
DNase1 <- subset(DNase, Run == 1)
fm3DNase1 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
                 data = DNase1, start = list(Asym = 3, xmid = 0, scal = 1))
```
Then we take the fitted values $\hat{y}$ (which are duplicated because of the initial replicates), create a new unique dataset and, for each concentration $x$, sample 20 response values $y_i$ from a normal distribution with 2% heteroscedastic Gaussian noise proportional to the value’s magnitude, $y_i = \hat{y} + \mathcal{N}(0, 0.02 \cdot \hat{y})$:
```r
FITTED <- unique(fitted(fm3DNase1))
DAT <- sapply(FITTED, function(x) rnorm(20, mean = x, sd = 0.02 * x))
matplot(t(DAT), type = "p", pch = 16, lty = 1, col = 1)
lines(FITTED, col = 2)
```

Now we create the new dataframe to be fitted. For this we have to stack the unique $x$– and $y_i$-values into a 2-column dataframe:
```r
CONC <- unique(DNase1$conc)
fitDAT <- data.frame(conc = rep(CONC, each = 20), density = matrix(DAT))
```

First we create the unweighted fit:

```r
FIT1 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
            data = fitDAT, start = list(Asym = 3, xmid = 0, scal = 1))
```

Then we fit the data with weights $w = 1/\mathrm{var}(y)$. IMPORTANT: we need to replicate the weight values 20-fold in order to match the data length.

```r
VAR <- tapply(fitDAT$density, fitDAT$conc, var)
VAR <- rep(VAR, each = 20)
FIT2 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)),
            data = fitDAT, weights = 1/VAR,
            start = list(Asym = 3, xmid = 0, scal = 1))
```

For calculation of $\chi^2_\nu$ and its corresponding p-value, we use the fitchisq function of my ‘qpcR’ package:

```r
> library(qpcR)
> fitchisq(FIT1)
$chi2
[1] 191.7566
$chi2.red
[1] 1.22138
$p.value
[1] 0.03074883

> fitchisq(FIT2)
$chi2
[1] 156.7153
$chi2.red
[1] 0.9981866
$p.value
[1] 0.4913983
```

Now we see the benefit of weighted fitting: only the weighted model, with its reduced chi-square value of almost exactly 1 and its high p-value, shows that our fitted model approximates the parent model. And of course it does, because we simulated our data from it…

Cheers,
Andrej

## Introducing ‘propagate’

August 31, 2013

With this post, I want to introduce the new ‘propagate’ package on CRAN. It has one single purpose: propagation of uncertainties (“error propagation”). There is already one package on CRAN available for this task, named ‘metRology’ (http://cran.r-project.org/web/packages/metRology/index.html). ‘propagate’ has some additional functionality that some may find useful. The most important functions are:

* propagate: A general function for the calculation of uncertainty propagation by first-/second-order Taylor expansion and Monte Carlo simulation including covariances. Input data can be any symbolic/numeric differentiable expression and data based on replicates, summaries (mean & s.d.) or sampled from a distribution. Uncertainty propagation is based completely on matrix calculus accounting for full covariance structure. Monte Carlo simulation is conducted using multivariate normal or t-distributions with covariance structure. The second-order Taylor approximation is the new aspect, because it is not based on the assumption of linearity around $f(x)$ but uses a second-order polynomial to account for nonlinearities, making heavy use of numerical or symbolical Hessian matrices. Interestingly, the second-order approximation gives results quite similar to the MC simulations!
* plot.propagate: Graphing error propagation with the histograms of the MC simulations and MC/Taylor-based confidence intervals.
* predictNLS: The propagate function is used to calculate the propagated error to the fitted values of a nonlinear model of type nls or nlsLM.
Please refer to my post here: https://rmazing.wordpress.com/2013/08/26/predictnls-part-2-taylor-approximation-confidence-intervals-for-nls-models/.

* makeGrad, makeHess, numGrad, numHess: functions to create symbolical or numerical gradient and Hessian matrices from an expression containing first-/second-order partial derivatives. These can then be evaluated in an environment with evalDerivs.
* fitDistr: This function fits 21 different continuous distributions by (weighted) NLS to the histogram or kernel density of the Monte Carlo simulation results as obtained by propagate, or of any other vector containing large-scale observations. Finally, the fits are sorted by ascending AIC.
* Random samplers for 15 continuous distributions under one hood, some of them previously unavailable: skewed-normal distribution, generalized normal distribution, scaled and shifted t-distribution, Gumbel distribution, Johnson SU distribution, Johnson SB distribution, 3P Weibull distribution, 4P beta distribution, triangular distribution, trapezoidal distribution, curvilinear trapezoidal distribution, generalized trapezoidal distribution, Laplace distribution, arcsine distribution and von Mises distribution. Most of them sample from the inverse cumulative distribution function, but 11, 12 and 15 use a vectorized version of “Rejection Sampling”, giving roughly 100000 random numbers/s.
An example (without covariance for simplicity), with $\mu_a = 5, \sigma_a = 0.1, \mu_b = 10, \sigma_b = 0.1, \mu_x = 1, \sigma_x = 0.1$ and $f(x) = a^{bx}$:

```r
> DAT <- data.frame(a = c(5, 0.1), b = c(10, 0.1), x = c(1, 0.1))
> EXPR <- expression(a^b*x)
> res <- propagate(EXPR, DAT)

Results from error propagation:
  Mean.1   Mean.2     sd.1     sd.2     2.5%    97.5%
 9765625 10067885  2690477  2739850  4677411 15414333

Results from Monte Carlo simulation:
    Mean       sd   Median      MAD     2.5%    97.5%
10072640  2826027  9713207  2657217  5635222 16594123
```

The plot reveals the resulting distribution obtained from Monte Carlo simulation:

```r
> plot(res)
```

Seems like a skewed distribution. We can now use fitDistr to find out which comes closest:

```r
> fitDistr(res$resSIM)
Fitting Normal distribution... Done.
Fitting Skewed-normal distribution... Done.
Fitting Generalized normal distribution... Done.
Fitting Log-normal distribution... Done.
Fitting Scaled/shifted t-distribution... Done.
Fitting Logistic distribution... Done.
Fitting Uniform distribution... Done.
Fitting Triangular distribution... Done.
Fitting Trapezoidal distribution... Done.
Fitting Curvilinear Trapezoidal distribution... Done.
Fitting Generalized Trapezoidal distribution... Done.
Fitting Gamma distribution... Done.
Fitting Cauchy distribution... Done.
Fitting Laplace distribution... Done.
Fitting Gumbel distribution... Done.
Fitting Johnson SU distribution... Done.
Fitting Johnson SB distribution... Done.
Fitting 3P Weibull distribution... Done.
Fitting 4P Beta distribution... Done.
Fitting Arcsine distribution... Done.
Fitting von Mises distribution... Done.
```
```
$aic
               Distribution       AIC
4                Log-normal -4917.823
16               Johnson SU -4861.960
15                   Gumbel -4595.917
19                  4P Beta -4509.716
12                    Gamma -4469.780
9               Trapezoidal -4340.195
1                    Normal -4284.706
5         Scaled/shifted t- -4283.070
6                  Logistic -4266.171
3        Generalized normal -4264.102
14                  Laplace -4144.870
13                   Cauchy -4099.405
2             Skewed-normal -4060.936
11  Generalized Trapezoidal -4032.484
10  Curvilinear Trapezoidal -3996.495
8                Triangular -3970.993
7                   Uniform -3933.513
20                  Arcsine -3793.793
18               3P Weibull -3783.041
21                von Mises -3715.034
17               Johnson SB -3711.034
```

Log-normal wins, which makes perfect sense after using an exponentiation function...

Have fun with the package. Comments welcome!

Cheers,
Andrej

## predictNLS (Part 2, Taylor approximation): confidence intervals for ‘nls’ models

August 26, 2013

As promised, here is the second part on how to obtain confidence intervals for fitted values obtained from nonlinear regression via nls or nlsLM (package ‘minpack.lm’). I covered a Monte Carlo approach in https://rmazing.wordpress.com/2013/08/14/predictnls-part-1-monte-carlo-simulation-confidence-intervals-for-nls-models/, but here we will take a different approach: first- and second-order Taylor approximation around $f(x)$:

$f(x) \approx f(a)+\frac{f'(a)}{1!}(x-a)+\frac{f''(a)}{2!}(x-a)^2$

Using Taylor approximation for calculating confidence intervals is a matter of propagating the uncertainties of the parameter estimates obtained from vcov(model) to the fitted value. When using first-order Taylor approximation, this is also known as the “Delta method”. Those familiar with error propagation will know the formula

$\displaystyle \sum_{i=1}^n \mathrm{j}_i^2 \sigma_i^2 + 2\sum_{i=1\atop i \neq k}^n\sum_{k=1\atop k \neq i}^n \mathrm{j}_i \mathrm{j}_k \sigma_{ik}$
Heavily underused is the matrix notation of the famous formula above, for which a good derivation can be found at http://www.nada.kth.se/~kai-a/papers/arrasTR-9801-R3.pdf:

$\sigma_y^2 = \nabla_x\mathbf{C}_x\nabla_x^T$

where $\nabla_x$ is the gradient vector of first-order partial derivatives and $\mathbf{C}_x$ is the variance-covariance matrix. This formula corresponds to the first-order Taylor approximation. Now the problem with first-order approximations is that they assume linearity around $f(x)$. Using the “Delta method” for nonlinear confidence intervals in R has been discussed in http://thebiobucket.blogspot.de/2011/04/fit-sigmoid-curve-with-confidence.html or http://finzi.psych.upenn.edu/R/Rhelp02a/archive/42932.html. For highly nonlinear functions we need (at least) a second-order polynomial around $f(x)$ to realistically estimate the surrounding interval (in the original post’s figure: red is the linear approximation, blue the second-order polynomial on a sine function around $x = 5$).

Interestingly, there are also matrix-like notations for the second-order mean and variance in the literature (see http://dml.cz/dmlcz/141418 or http://iopscience.iop.org/0026-1394/44/3/012/pdf/0026-1394_44_3_012.pdf):

Second-order mean: $\mathrm{E}[y] = f(\bar{x}_i) + \frac{1}{2}\mathrm{tr}(\mathbf{H}_{xx}\mathbf{C}_x)$

Second-order variance: $\sigma_y^2 = \nabla_x\mathbf{C}_x\nabla_x^T + \frac{1}{2}\mathrm{tr}(\mathbf{H}_{xx}\mathbf{C}_x\mathbf{H}_{xx}\mathbf{C}_x)$

where $\mathbf{H}_{xx}$ is the Hessian matrix of second-order partial derivatives and $\mathrm{tr}(\cdot)$ is the matrix trace (sum of diagonals).

Enough theory; for wrapping this all up we need three utility functions:

1) numGrad for calculating numerical first-order partial derivatives.
For this, we need the following functions:

1) numGrad for calculating numerical first-order partial derivatives (the gradient).

numGrad <- function(expr, envir = .GlobalEnv)
{
  f0 <- eval(expr, envir)
  vars <- all.vars(expr)
  p <- length(vars)
  x <- sapply(vars, function(a) get(a, envir))
  eps <- 1e-04
  d <- 0.1
  r <- 4
  v <- 2
  zero.tol <- sqrt(.Machine$double.eps/7e-07)
  h0 <- abs(d * x) + eps * (abs(x) < zero.tol)
  D <- matrix(0, length(f0), p)
  Daprox <- matrix(0, length(f0), r)
  for (i in 1:p) {
    h <- h0
    for (k in 1:r) {
      x1 <- x2 <- x
      x1 <- x1 + (i == (1:p)) * h
      f1 <- eval(expr, as.list(x1))
      x2 <- x2 - (i == (1:p)) * h
      f2 <- eval(expr, envir = as.list(x2))
      Daprox[, k] <- (f1 - f2)/(2 * h[i])
      h <- h/v
    }
    ## Richardson extrapolation over the r central-difference approximations
    for (m in 1:(r - 1)) for (k in 1:(r - m)) {
      Daprox[, k] <- (Daprox[, k + 1] * (4^m) - Daprox[, k])/(4^m - 1)
    }
    D[, i] <- Daprox[, 1]
  }
  return(D)
}

2) numHess for calculating numerical second-order partial derivatives.

numHess <- function(expr, envir = .GlobalEnv)
{
  f0 <- eval(expr, envir)
  vars <- all.vars(expr)
  p <- length(vars)
  x <- sapply(vars, function(a) get(a, envir))
  eps <- 1e-04
  d <- 0.1
  r <- 4
  v <- 2
  zero.tol <- sqrt(.Machine$double.eps/7e-07)
  h0 <- abs(d * x) + eps * (abs(x) < zero.tol)
  Daprox <- matrix(0, length(f0), r)
  Hdiag <- matrix(0, length(f0), p)
  Haprox <- matrix(0, length(f0), r)
  H <- matrix(NA, p, p)
  ## diagonal elements of the Hessian
  for (i in 1:p) {
    h <- h0
    for (k in 1:r) {
      x1 <- x2 <- x
      x1 <- x1 + (i == (1:p)) * h
      f1 <- eval(expr, as.list(x1))
      x2 <- x2 - (i == (1:p)) * h
      f2 <- eval(expr, envir = as.list(x2))
      Haprox[, k] <- (f1 - 2 * f0 + f2)/h[i]^2
      h <- h/v
    }
    for (m in 1:(r - 1)) for (k in 1:(r - m)) {
      Haprox[, k] <- (Haprox[, k + 1] * (4^m) - Haprox[, k])/(4^m - 1)
    }
    Hdiag[, i] <- Haprox[, 1]
  }
  ## off-diagonal elements of the (symmetric) Hessian
  for (i in 1:p) {
    for (j in 1:i) {
      if (i == j) {
        H[i, j] <- Hdiag[, i]
      } else {
        h <- h0
        for (k in 1:r) {
          x1 <- x2 <- x
          x1 <- x1 + (i == (1:p)) * h + (j == (1:p)) * h
          f1 <- eval(expr, as.list(x1))
          x2 <- x2 - (i == (1:p)) * h - (j == (1:p)) * h
          f2 <- eval(expr, envir = as.list(x2))
          Daprox[, k] <- (f1 - 2 * f0 + f2 - Hdiag[, i] * h[i]^2 -
                          Hdiag[, j] * h[j]^2)/(2 * h[i] * h[j])
          h <- h/v
        }
        for (m in 1:(r - 1)) for (k in 1:(r - m)) {
          Daprox[, k] <- (Daprox[, k + 1] * (4^m) - Daprox[, k])/(4^m - 1)
        }
        H[i, j] <- H[j, i] <- Daprox[, 1]
      }
    }
  }
  return(H)
}

And a small function for the matrix trace:

tr <- function(mat) sum(diag(mat), na.rm = TRUE)

1) and 2) are modified versions of the genD function in the “numDeriv” package that can handle expressions. Now we need the predictNLS function that wraps it all up:

predictNLS <- function(
  object,
  newdata,
  interval = c("none", "confidence", "prediction"),
  level = 0.95,
  ...
)
{
  require(MASS, quietly = TRUE)
  interval <- match.arg(interval)
  ## get right-hand side of formula
  RHS <- as.list(object$call$formula)[[3]]
  EXPR <- as.expression(RHS)
  ## all variables in model
  VARS <- all.vars(EXPR)
  ## get parameter coefficients
  COEF <- coef(object)
  ## extract predictor variable
  predNAME <- setdiff(VARS, names(COEF))
  ## take fitted values, if 'newdata' is missing
  if (missing(newdata)) {
    newdata <- eval(object$data)[predNAME]
    colnames(newdata) <- predNAME
  }
  ## check that 'newdata' has same name as predVAR
  if (names(newdata)[1] != predNAME) stop("newdata should have name '", predNAME, "'!")
  ## get variance-covariance matrix
  VCOV <- vcov(object)
  ## augment variance-covariance matrix for 'mvrnorm'
  ## by adding a column/row for 'error in x'
  NCOL <- ncol(VCOV)
  ADD1 <- c(rep(0, NCOL))
  ADD1 <- matrix(ADD1, ncol = 1)
  colnames(ADD1) <- predNAME
  VCOV <- cbind(VCOV, ADD1)
  ADD2 <- c(rep(0, NCOL + 1))
  ADD2 <- matrix(ADD2, nrow = 1)
  rownames(ADD2) <- predNAME
  VCOV <- rbind(VCOV, ADD2)
  NR <- nrow(newdata)
  respVEC <- numeric(NR)
  seVEC <- numeric(NR)
  varPLACE <- ncol(VCOV)
  outMAT <- NULL
  ## define counter function
  counter <- function(i) {
    if (i %% 10 == 0) cat(i) else cat(".")
    if (i %% 50 == 0) cat("\n")
    flush.console()
  }
  ## calculate residual variance
  r <- residuals(object)
  w <- weights(object)
  rss <- sum(if (is.null(w)) r^2 else r^2 * w)
  df <- df.residual(object)
  res.var <- rss/df
  ## iterate over all entries in 'newdata' as in usual 'predict.' functions
  for (i in 1:NR) {
    counter(i)
    ## get predictor values and optional errors
    predVAL <- newdata[i, 1]
    if (ncol(newdata) == 2) predERROR <- newdata[i, 2] else predERROR <- 0
    names(predVAL) <- predNAME
    names(predERROR) <- predNAME
    ## create mean vector
    meanVAL <- c(COEF, predVAL)
    ## create augmented variance-covariance matrix
    ## by putting error^2 in lower-right position of VCOV
    newVCOV <- VCOV
    newVCOV[varPLACE, varPLACE] <- predERROR^2
    SIGMA <- newVCOV
    ## first-order mean: eval(EXPR), first-order variance: G.S.t(G)
    MEAN1 <- try(eval(EXPR, envir = as.list(meanVAL)), silent = TRUE)
    if (inherits(MEAN1, "try-error")) stop("There was an error in evaluating the first-order mean!")
    GRAD <- try(numGrad(EXPR, as.list(meanVAL)), silent = TRUE)
    if (inherits(GRAD, "try-error")) stop("There was an error in creating the numeric gradient!")
    VAR1 <- GRAD %*% SIGMA %*% matrix(GRAD)
    ## second-order mean: firstMEAN + 0.5 * tr(H.S),
    ## second-order variance: firstVAR + 0.5 * tr(H.S.H.S)
    HESS <- try(numHess(EXPR, as.list(meanVAL)), silent = TRUE)
    if (inherits(HESS, "try-error")) stop("There was an error in creating the numeric Hessian!")
    valMEAN2 <- 0.5 * tr(HESS %*% SIGMA)
    valVAR2 <- 0.5 * tr(HESS %*% SIGMA %*% HESS %*% SIGMA)
    MEAN2 <- MEAN1 + valMEAN2
    VAR2 <- VAR1 + valVAR2
    ## confidence or prediction interval
    if (interval != "none") {
      tfrac <- abs(qt((1 - level)/2, df))
      INTERVAL <- tfrac * switch(interval,
                                 confidence = sqrt(VAR2),
                                 prediction = sqrt(VAR2 + res.var))
      LOWER <- MEAN2 - INTERVAL
      UPPER <- MEAN2 + INTERVAL
      names(LOWER) <- paste((1 - level)/2 * 100, "%", sep = "")
      names(UPPER) <- paste((1 - (1 - level)/2) * 100, "%", sep = "")
    } else {
      LOWER <- NULL
      UPPER <- NULL
    }
    RES <- c(mu.1 = MEAN1, mu.2 = MEAN2, sd.1 = sqrt(VAR1), sd.2 = sqrt(VAR2), LOWER, UPPER)
    outMAT <- rbind(outMAT, RES)
  }
  cat("\n")
  rownames(outMAT) <- NULL
  return(outMAT)
}

With all functions at hand, we can now go through the same example as used in the Monte Carlo post:

DNase1 <- subset(DNase, Run == 1)
fm1DNase1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1)

> predictNLS(fm1DNase1, newdata = data.frame(conc = 5), interval = "confidence")
.
         mu.1     mu.2       sd.1       sd.2     2.5%    97.5%
[1,] 1.243631 1.243288 0.03620415 0.03620833 1.165064 1.321511

The errors/confidence intervals are larger than with the MC approach (who knows why?), but it is very interesting to see how close the second-order corrected mean (1.243288) comes to the mean of the simulated values from the Monte Carlo approach (1.243293)!

Both approaches (MC/Taylor) will be found in the predictNLS function that will be part of the “propagate” package, arriving at CRAN in a few days…

Cheers,
Andrej
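PS: the Monte Carlo approach compared against above can be sketched in a few lines (a minimal sketch, not the exact code of the MC post): draw coefficient vectors from a multivariate normal with mean coef(fm1DNase1) and covariance vcov(fm1DNase1), evaluate the model at conc = 5 for each draw, and take the empirical mean and quantiles.

```r
library(MASS)  # for mvrnorm

set.seed(123)
DNase1 <- subset(DNase, Run == 1)
fm1DNase1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1)

## draw coefficient vectors from the multivariate normal defined by the
## fitted coefficients and their variance-covariance matrix
nsim <- 10000
BETA <- mvrnorm(nsim, mu = coef(fm1DNase1), Sigma = vcov(fm1DNase1))

## evaluate the logistic model at conc = 5 for every simulated coefficient set
## (SSlogis: Asym / (1 + exp((xmid - input)/scal)), with input = log(conc))
ySIM <- BETA[, "Asym"] / (1 + exp((BETA[, "xmid"] - log(5)) / BETA[, "scal"]))

## Monte Carlo mean and 95% confidence interval from empirical quantiles
mean(ySIM)
quantile(ySIM, c(0.025, 0.975))
```

With this many draws, the mean of ySIM should land very close to the 1.243293 reported above for the Monte Carlo approach.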