My model is based on the following growth function: x_t+1 = x_t + r * x_t * (1 - x_t/k). The growth rate r is redrawn each year from a random distribution (mean = 0.2, standard deviation = 0.2). I'm looking at the probability density function of the stock x after 70 years, over 10,000 runs. When I calculate the area under the probability density function, it equals 1 for a standard deviation of 0.1, but not for a standard deviation of 0.2 or 0.3. I created a histogram of 100 area values: it shows a high peak at a = 2, and the area values go up to 8. Why does the area not equal 1?
runs = 10000
mean = 0.2
sd = 0.2
k = 1
x_0 = 0.4
v = 0.1
t = 70
y = c()
x = x_0
for (j in 1:runs) {
  rand = rnorm(t, mean, sd)
  for (i in 1:t) {
    x = max(x + rand[i] * x * (x - v) * (1 - x / k), 0)
    if (i == t) y[j] = x
  }
  x = x_0
}

library(sfsmisc)
dens = density(y)
f = approxfun(dens$x, dens$y)
h = c()
i = seq(0.9, 1, length.out = 100000)
for (e in 1:length(i)) {
  h[e] = f(i[e])
}
options(max.print = 1000000)
h[is.na(h)] = 0
area = sum(abs(h[-1] + h[-length(h)]) / 2 * diff(i))
Thank you!
Your numerical integration is flawed.
First of all, look at a plot of the density:
Almost all of the area lies underneath an incredibly narrow, incredibly high spike. The overwhelming majority of your sample points fall outside that spike, and hence don't do a good job of estimating its area.
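To see why, here is a toy sketch (a stand-in narrow Gaussian, not the question's data): a trapezoidal grid that is coarse relative to the spike's width captures essentially none of its area, while a grid that actually resolves the spike recovers it.

```r
# Toy stand-in for the narrow spike: a Gaussian with sd = 2e-6,
# roughly the bandwidth reported by density() below.
spike <- function(x) dnorm(x, mean = 0.996, sd = 2e-6)

# Trapezoidal rule on an arbitrary grid g:
trap <- function(g) {
  h <- spike(g)
  sum((h[-1] + h[-length(h)]) / 2 * diff(g))
}

coarse <- seq(0.9, 1, length.out = 100)             # spacing ~1e-3 >> 2e-6
fine   <- seq(0.99599, 0.99601, length.out = 10000) # spacing 2e-9 << 2e-6

trap(coarse)  # essentially 0: every grid point misses the spike
trap(fine)    # close to 1: the spike is resolved
```

The lesson carries over directly: the reliability of a trapezoidal estimate depends on the grid spacing relative to the narrowest feature of the integrand.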
If you evaluate print(dens), you'll see this:
Call:
        density.default(x = y)

Data: y (10000 obs.);   Bandwidth 'bw' = 1.734e-06

       x                y
 Min.   :0.9922   Min.   :    0.0
 1st Qu.:0.9942   1st Qu.:    0.0
 Median :0.9961   Median :    0.0
 Mean   :0.9961   Mean   :  439.1
 3rd Qu.:0.9981   3rd Qu.:    0.0
 Max.   :1.0000   Max.   :91131.2
Note in particular that all of the x values are greater than 0.99, yet you are doing your numerical integration on the interval [0.9, 1]. Thus, most of your numerical integration lies outside the range of the density. On roughly [0.9, 0.992] you are relying on approxfun to supply values, but that is far outside (relatively speaking) the range of the data: with the default rule, approxfun returns NA there, which you then set to 0, and just inside the edge everything rests on the first two almost-zero y values. About 90% of your integration interval therefore tells you nothing reliable about the density.
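A minimal sketch of the same point, with toy data standing in for the simulated y (the real y behaves the same way): approxfun with the default rule returns NA outside range(dens$x), so after the NA-to-0 step most of the grid on [0.9, 1] contributes nothing at all.

```r
set.seed(1)
# Toy stand-in for y: a tight cluster just below 1.
y <- pmin(1, 0.996 + 0.002 * rnorm(10000))
dens <- density(y)
f <- approxfun(dens$x, dens$y)   # default rule: NA outside range(dens$x)
g <- seq(0.9, 1, length.out = 100000)
mean(is.na(f(g)))  # the large fraction of grid points outside the density's support
```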
You need to find a way to rescale your data. Your call to density is also problematic: by default it uses 512 points, and if you look at them, only a few dozen have y values greater than 0. You are trying to get a picture of the distribution by sampling its tails rather than the region where the values are, where everything significant is happening.
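As a sanity check, here is a sketch (again with toy data standing in for y; the n = 4096 choice is an assumption, not the original code): request enough evaluation points that the kernel is resolved, then integrate over the grid density() itself returned instead of a hand-picked interval.

```r
set.seed(42)
# Toy stand-in for the simulated stock values: a tight cluster near 1.
y <- pmin(1, 0.996 + 0.002 * rnorm(10000))

# More evaluation points so the kernel estimate is well resolved,
# then trapezoidal integration over density()'s own grid:
dens <- density(y, n = 4096)
area <- sum((dens$y[-1] + dens$y[-length(dens$y)]) / 2 * diff(dens$x))
area  # close to 1
```

Integrating over dens$x guarantees every grid point lies in the density's support, so no NA-patching is needed.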