Estimation of Fitted Parameter Errors: The Details

Hoc opus, hic labor est (Here is the work, and the labor; this is the really tough one)

This appendix describes useful techniques for estimating the errors associated with the determined parameters in EB analysis. The Kolmogorov-Smirnov test, a procedure that checks whether the residuals follow a normal distribution, is reviewed. We also present the sensitivity analysis approach and the grid approach, as they provide realistic estimates of the parameter errors.

B.1 The Kolmogorov-Smirnov Test

With the Kolmogorov-Smirnov test [cf. Ostle (1963)] it is possible to check whether the residuals of a least-squares solution are normally distributed around the mean value 0. An alternative is the χ²-test. As Linnell's program (Linnell 1989) uses the Kolmogorov-Smirnov test we prefer this method, which works as follows:

1. let M := {x1, x2, ..., xn} be a set of observations for which a given hypothesis should be tested;

2. let G : M → ℝ, x ↦ G(x), be the corresponding cumulative distribution function;

3. for each observation x ∈ M define S_n(x) := k/n, where k is the number of observations less than or equal to x;

4. determine the maximum D := max{|G(x) − S_n(x)| : x ∈ M};

5. D_crit denotes the maximum deviation allowed for a given significance level and a set of n elements. D_crit is tabulated in the literature, e.g., Ostle (1963, Appendix 2, p. 560); and

6. if D ≤ D_crit, the hypothesis is accepted at the given significance level; otherwise, it is rejected.
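As a sketch, the steps above can be implemented as follows (the function names are illustrative and not taken from Linnell's program; the standard normal CDF, expressed through `math.erf`, serves as G, and the asymptotic 5% critical value 1.36/√n stands in for a table lookup):

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution function G(x)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(observations, cdf=normal_cdf):
    """Steps 1-4: return D = max |G(x) - S_n(x)| over the sample M."""
    xs = sorted(observations)
    n = len(xs)
    d_max = 0.0
    for k, x in enumerate(xs, start=1):
        # S_n(x) = k/n, with k the number of observations <= x;
        # the empirical CDF is compared just below and at each step.
        d_max = max(d_max, abs(cdf(x) - k / n), abs(cdf(x) - (k - 1) / n))
    return d_max

# Step 5/6: compare D with the critical value; for large n and a 5%
# significance level, D_crit is approximately 1.36 / sqrt(n).
residuals = [-0.012, -0.006, -0.001, 0.002, 0.004, 0.009]
scaled = [r / 0.006 for r in residuals]   # scaled by an assumed sigma
D = ks_statistic(scaled)
print(D, D <= 1.36 / math.sqrt(len(scaled)))
```

The small sample here is purely illustrative; for so few points the asymptotic critical value is not strictly valid and the tabulated D_crit should be used.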

In our case the hypothesis is "The residuals x_ν := l_ν^o − l_ν^c are normally distributed around the mean value 0." Therefore, the cumulative distribution function G(x) takes the form

$$G(x) = \int_{-\infty}^{x} g(z)\,\mathrm{d}z = \int_{-\infty}^{-x_0} g(z)\,\mathrm{d}z + \int_{-x_0}^{x} g(z)\,\mathrm{d}z, \qquad g(z) := \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}. \tag{B.1.1}$$

In a typical light curve analysis, the absolute values of the residuals are usually smaller than x0 = 0.025 light units assuming unit light at maximum. If we take G(—x0) = 0.490 from a table, it is no problem to compute the second part of the integral numerically. Unfortunately, in many cases the errors in EB observations do not follow the normal distribution.
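The second part of the integral in (B.1.1) can indeed be evaluated numerically without difficulty, e.g., with Simpson's rule. The following sketch (function names are illustrative) combines the tabulated value G(−x₀) = 0.490 with a numerical quadrature of the second term:

```python
import math

def g(z):
    # Standard normal density from (B.1.1)
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

x0 = 0.025
G_left = 0.490                      # tabulated value of G(-x0)
# second part of the integral in (B.1.1), evaluated here up to x = x0
G_right = simpson(g, -x0, x0)
print(round(G_left + G_right, 3))   # ~ G(0.025)
```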

B.2 Sensitivity Analysis and the Use of Simulated Light Curves

Let us assume that a light curve solution x⁺ has been derived and the corresponding calculated light curve O_cal is available. If we add some noise Δl_ν to the computed light l_ν^c(x⁺), we get the simulated light curve O_sim. The values Δl_ν follow from (4.1.21):

$$\Delta l_\nu = \sigma \varepsilon, \tag{B.2.1}$$

where σ is a characteristic measure for the noise of the data. Photoelectric light curves of high quality may have σ = 0.005 light units assuming unit light at maximum; less good observations rather have σ ≈ 0.01 light units. The variable ε denotes a normalized random variable obeying a normal distribution; indeed, observations usually produce residuals ε which follow a normal distribution around the mean value ε̄ = 0. A set of normally distributed values ε can be generated as follows (Brandt 1976, p. 57):

1. Assume that the mean of the distribution function is a and that the standard deviation is σ. In addition, we ask for the biased values

$$a - 5\sigma \le \varepsilon \le a + 5\sigma. \tag{B.2.2}$$

The reason for this bias is that in light curve analysis we usually do not observe outliers beyond a range of 5σ.

2. Let p(i) be a function that produces uniformly distributed random numbers within the interval [0, 1]. Such functions are usually provided with any Fortran compiler; i is an arbitrary number.

3. Let

$$g(z) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(z-a)^2/(2\sigma^2)}$$

be the Gaussian function. In addition, g_max = g(a) denotes the amplitude of the Gaussian function.

4. By means of uniform random numbers x = p(i), 0 ≤ x ≤ 1, a test value ε = a + 5σ(2x − 1) and g(ε) are computed. Furthermore, a value ε_t = p(i)·g_max is computed to compare it with g(ε).

5. If ε_t ≤ g(ε), then ε is accepted as an additional random number. Otherwise, go back to 4. Due to this rejection procedure the set of accepted elements ε is normally distributed and has the desired properties. In our case, of course, we have a = 0 and σ = 1.
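The five steps can be sketched as a small rejection sampler (the function names are illustrative; Python's `random.random` plays the role of p(i)):

```python
import math
import random

def gaussian(z, a=0.0, sigma=1.0):
    # Gaussian function g(z) with mean a and standard deviation sigma
    return math.exp(-(z - a) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def normal_by_rejection(n, a=0.0, sigma=1.0, rng=random.random):
    """Generate n normally distributed values by the rejection scheme above."""
    g_max = gaussian(a, a, sigma)                 # amplitude g_max = g(a)
    out = []
    while len(out) < n:
        x = rng()                                 # uniform random number in [0, 1]
        eps = a + 5.0 * sigma * (2.0 * x - 1.0)   # candidate in [a-5s, a+5s]
        eps_t = rng() * g_max                     # comparison value
        if eps_t <= gaussian(eps, a, sigma):
            out.append(eps)                       # accept; else try again
    return out

random.seed(42)
sample = normal_by_rejection(10000)               # a = 0, sigma = 1
mean = sum(sample) / len(sample)
var = sum((e - mean) ** 2 for e in sample) / len(sample)
print(mean, math.sqrt(var))
```

With 10,000 samples the mean and standard deviation come out close to 0 and 1, as expected for the standardized case a = 0, σ = 1.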

Now the simulated light curve O_sim can be reanalyzed and the parameters derived. The results show what effect errors in the observational data have on the uncertainty of the parameters. For sufficiently small noise, in well-behaved regions of the parameter space, the original parameter vector x⁺ is recovered within small error bounds. But if the noise increases, uniqueness problems may arise and the recovery of the parameters may be jeopardized. Note that this kind of analysis is a local one: it holds only for the parameter set of interest.

Although this analysis can be used to investigate parameter uncertainties, we should bear in mind that in many cases the residuals of EB observations may not follow a normal distribution.
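A minimal sketch of the procedure, with a simple two-parameter toy model standing in for a full EB light curve code (the model and all names are illustrative assumptions, not the method of any particular program):

```python
import math
import random

def model(phase, a, b):
    # Toy "light curve": constant level minus a cosine dip, a stand-in
    # for the real computed light l_c(x+)
    return a - b * math.cos(2.0 * math.pi * phase)

def fit(phases, lights):
    # Closed-form linear least squares for the two toy parameters a, b
    v = [-math.cos(2.0 * math.pi * p) for p in phases]
    n = len(phases)
    sv = sum(v)
    svv = sum(x * x for x in v)
    sl = sum(lights)
    svl = sum(x * l for x, l in zip(v, lights))
    det = n * svv - sv * sv
    a = (svv * sl - sv * svl) / det
    b = (n * svl - sv * sl) / det
    return a, b

sigma = 0.005                                     # high-quality photoelectric data
random.seed(1)
phases = [i / 100.0 for i in range(100)]
l_calc = [model(p, 1.0, 0.3) for p in phases]     # O_cal for x+ = (1.0, 0.3)
l_sim = [l + sigma * random.gauss(0.0, 1.0) for l in l_calc]   # O_sim = O_cal + sigma*eps
a_fit, b_fit = fit(phases, l_sim)
print(a_fit, b_fit)
```

Reanalyzing O_sim recovers the original parameters within small error bounds, illustrating the well-behaved, small-noise case; repeating the experiment with larger σ shows the growth of the parameter uncertainties.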

B.3 Deriving Bounds for Parameters: The Grid Approach

The grid approach is very useful if some of the parameters are correlated or can only be determined with great uncertainty. Usually, the mass ratio q or the inclination i are such parameters. A parameter x may be constrained to, or is likely to be located in, a given interval.

Fig. B.1 Standard deviation of the fit versus mass ratio. This plot, part of Fig. 4 in Kallrath & Kamper (1992), shows the standard deviation of the fit, σ_fit (between about 0.007 and 0.009), versus the mass ratio, q

Now, for equidistant values x_i in this interval, x is fixed to x_i and the inverse problem is solved, yielding the other light curve parameters and σ_fit = σ_fit(x_i). If σ_fit is plotted versus x_i as shown in Fig. B.1, very often the result is a curve with a flat region, as in Kallrath & Kamper (1992). The limits x⁻ and x⁺ of that flat region yield realistic bounds on the uncertainty of the parameter x:

$$x^- \le x \le x^+.$$
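A sketch of the grid approach with a toy model (all names and the model itself are illustrative assumptions; a real application would solve the full light curve inverse problem at each q_i):

```python
import math
import random

def grid_scan(phases, lights, q_grid):
    """For each fixed q, fit the remaining toy parameter a in closed form
    and record sigma_fit(q); a stand-in for solving the full inverse problem."""
    sigmas = []
    for q in q_grid:
        m = [1.0 - q * math.cos(2.0 * math.pi * p) for p in phases]
        a = sum(l * x for l, x in zip(lights, m)) / sum(x * x for x in m)
        res = [l - a * x for l, x in zip(lights, m)]
        sigmas.append(math.sqrt(sum(r * r for r in res) / len(res)))
    return sigmas

# Simulated observations generated with q = 0.3, a = 1.0, sigma = 0.005
random.seed(2)
phases = [i / 200.0 for i in range(200)]
lights = [1.0 - 0.3 * math.cos(2.0 * math.pi * p)
          + 0.005 * random.gauss(0.0, 1.0) for p in phases]

q_grid = [round(0.10 + 0.02 * i, 2) for i in range(21)]   # 0.10 ... 0.50
sigmas = grid_scan(phases, lights, q_grid)

# The minimum of sigma_fit(q) marks the best q; the near-flat region
# around it (here: within 10% of the minimum) gives the bounds x-, x+
s_min = min(sigmas)
flat = [q for q, s in zip(q_grid, sigmas) if s <= 1.1 * s_min]
q_minus, q_plus = min(flat), max(flat)
print(q_minus, q_plus)
```

In this toy case q is well determined and the flat region is narrow; for a strongly correlated parameter, as in Fig. B.1, the flat region widens and its limits give the realistic error bounds.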
