
4. Estimation of the inclination i: The higher the inclination i, the wider and deeper the eclipses. For i ≈ 90° there is only a weak dependence of the eclipse structure on the inclination. However, when i becomes low enough that the eclipses are far from central, larger decreases in width occur, and larger decreases in depth appear once the eclipses become partial. We may use an atlas of eclipses such as that of Terrell et al. (1992) or make a series of runs with a light curve program, keeping all parameters fixed except i and observing the effect. Binary Maker, cf. Bradstreet (1993); Bradstreet & Steelman (2004), provides a convenient and quick way to make such assessments.
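As a quick geometric check (a sketch of our own, not part of the text): for a circular orbit the projected minimum separation of the star centers at conjunction is approximately cos i in units of the semi-major axis, and comparing it with the sum and difference of the relative radii tells whether eclipses are absent, partial, or central enough to be complete:

```python
import numpy as np

def eclipse_type(i_deg, r1, r2):
    """Classify the eclipse geometry for a circular orbit; r1, r2 are
    relative radii in units of the separation a (illustrative names)."""
    b = np.cos(np.radians(i_deg))   # projected minimum separation at conjunction
    if b >= r1 + r2:
        return "no eclipse"
    if b <= abs(r1 - r2):
        return "total/annular"      # central enough for a complete eclipse
    return "partial"
```

For example, with r1 = 0.3 and r2 = 0.2, eclipses vanish below i ≈ 60° (where cos i = r1 + r2), which mirrors the qualitative behavior described above.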

5. Estimation of (mean) volume radii for lobe-filling stars with known spectroscopic mass ratio can be found from the relation

$$ r = \frac{0.49\,q^{2/3}}{0.6\,q^{2/3} + \ln\left(1 + q^{1/3}\right)} \qquad (4.4.5) $$

derived by Eggleton (1983), which is accurate to about ±1% for all values of the mass ratio q. To compute r₂ = r for the lower mass star, one has to set q = M₂/M₁ < 1, whereas for the higher mass star (r₁ = r) one uses q = M₁/M₂ > 1 as input to (4.4.5). Except for Algol-type binaries, the problem with (4.4.5) is, however, to know whether a star fills its lobe. More generally, one can often estimate a reasonable r from astrophysical considerations or from published solutions. For well-detached binaries, one can expect relative radii of 0.1 or less.
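Eggleton's approximation is a one-liner in practice; a direct transcription (function name is our own choice):

```python
import numpy as np

def roche_lobe_radius(q):
    """Eggleton (1983) mean volume radius of the Roche lobe, in units
    of the separation a; q is the mass ratio of the lobe-filling star
    to its companion."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))
```

For the secondary one would call `roche_lobe_radius(M2 / M1)`, for the primary `roche_lobe_radius(M1 / M2)`. For q = 1 the formula gives the familiar value r ≈ 0.38.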

Estimating the Roche potential Ω in the circular orbit and synchronous rotation case: If estimated relative radii are available, one can apply (3.1.65) to derive estimates of Ω₁ and Ω₂. If the star is well inside its Roche lobe (detached systems), the relation Ω ≈ 1/r applies. There are also look-up tables [see, for instance, Limber (1963, p. 1119)] listing the relative radius r versus Ω. Alternatively, we might use the method described in Appendix E.30 or just use Binary Maker (see Appendix 8.1).
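To see where Ω ≈ 1/r comes from, consider the dimensionless potential of star 1 evaluated toward its pole (0, 0, r), where the centrifugal term vanishes on the rotation axis. The following sketch (our own illustration for the synchronous, circular-orbit case) shows that for a star well inside its lobe, Ω differs from 1/r only by a term of order q:

```python
import numpy as np

def omega_pole(r, q):
    """Dimensionless Roche potential at the pole (0, 0, r) of star 1,
    synchronous rotation, circular orbit; r is the polar radius in
    units of the separation, q = M2/M1.  The companion sits at unit
    distance, so its contribution is q / sqrt(1 + r^2)."""
    return 1.0 / r + q / np.sqrt(1.0 + r * r)
```

For r = 0.05 and q = 0.5, the potential is ≈ 20.5 while 1/r = 20, so the approximation Ω ≈ 1/r (or, slightly better, Ω ≈ 1/r + q) is adequate for rough initial estimates.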

In any case, we strongly recommend producing a synthetic light curve with the initial parameters and plotting the calculated against the observed light curves. If the minima in the computed light curve are shallower than in the observed light curves, i could be increased. The idea is to avoid spending inordinate amounts of time on preparation, yet begin iterations somewhere near the correct minimum.

### 4.4.2 Criteria for Terminating Iterations

To avoid landing in a relative minimum of parameter hyperspace and for the quantitative determination of light curve parameters in general, we recommend the use of both the Simplex algorithm and (damped) Differential Corrections or another derivative-based method. The former should be used for any initial search. We have found it a helpful tool to explore solution uniqueness and to perform experiments on parameter sets, for example, establishing the directions of convergence of all the parameters by systematically varying the initial mass ratio. Once a parameter set is close to a solution, switching to a derivative-based least-squares solver can increase the rate of convergence and also yield the formal or statistical errors of the parameters.
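The two-stage strategy can be sketched on a toy problem (our own illustration, not the Wilson-Devinney code): a Gaussian dip stands in for an eclipse, a coarse derivative-free search stands in for the Simplex stage, and Gauss-Newton differential corrections with a numerical Jacobian stand in for the derivative-based stage.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(phase, depth, width):
    # toy "eclipse": a Gaussian dip in an otherwise flat light curve
    return 1.0 - depth * np.exp(-0.5 * (phase / width) ** 2)

phase = np.linspace(-0.1, 0.1, 201)
flux = model(phase, 0.30, 0.02) + rng.normal(0.0, 0.005, phase.size)

def chi2(p):
    return np.sum((flux - model(phase, *p)) ** 2)

# stage 1: derivative-free coarse search (stand-in for Simplex)
grid = [(d, w) for d in np.linspace(0.1, 0.5, 9)
               for w in np.linspace(0.005, 0.05, 10)]
p = np.array(min(grid, key=chi2))

# stage 2: Gauss-Newton differential corrections, numerical Jacobian
for _ in range(20):
    d = flux - model(phase, *p)                 # residual vector
    J = np.empty((phase.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p); dp[j] = 1e-6
        J[:, j] = (model(phase, *(p + dp)) - model(phase, *(p - dp))) / 2e-6
    dx = np.linalg.lstsq(J, d, rcond=None)[0]   # corrections Δx
    p += dx
    if np.all(np.abs(dx) < 1e-10):              # simple halting rule
        break
```

The derivative-based stage converges in a handful of iterations once the coarse search has placed the start near the correct minimum, recovering depth ≈ 0.30 and width ≈ 0.02.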

Independent of the method, the question arises of when to halt iteration. Let us first list some intuitive stopping criteria and discuss whether they make sense:

• by number of iterations — in case no convergence is achieved with the initial parameters (it gives us a chance to monitor the solutions and to interpret the physics);

• by comparison of the standard deviation σ_fit of the fit with the noise σ_data in the data;

• by comparison of the parameter corrections with the parameters themselves;

• by comparison of the parameter corrections with the probable or mean standard errors;

• by inspection of the damping constant; and

• by confinement of the adjustments to a limited region (say, an ellipsoid) of parameter space.

All these criteria might appear arbitrary, and it is questionable to set up general conventions. The number of iterations needed to converge may be different for different problems. Certainly, σ_fit cannot become smaller than σ_data. If the model suits the data, we would expect to have σ_fit ≈ σ_data. Although this certainly guarantees a nice-looking fit, in a flat valley convergence might not have been achieved. On the other hand, if σ_fit > σ_data, either convergence has not been achieved, or the model is deficient.

Comparing the parameter corrections Δx_j with the value of the parameter x_j and requiring for all j

$$ \frac{|\Delta x_j|}{|x_j|} \le \varepsilon_j \qquad (4.4.6) $$

with a reasonable relative error ε_j does not work for parameters which can take the value 0, and it causes problems for cyclic parameters (angles allowed to take any value between 0° and 360°). Hence, this criterion is not recommended.

So, instead, we might consider defining absolute limits ΔX_j in those cases, and halt iterations once

$$ |\Delta x_j| \le \Delta X_j \qquad (4.4.7) $$

holds for all j. This idea is again problematic because it introduces some arbitrariness into the problem if it is not possible to scale the parameters.

Very often in the literature on light curve analysis, the criterion

$$ |\Delta x_j| \le \frac{\varepsilon_p^j}{100} \quad \text{for all } j \qquad (4.4.8) $$

is used; that is, iteration is stopped when the parameter corrections are substantially, say a factor of 100, smaller than the statistical errors ε_p. This criterion has the advantage that it is consistently scaled.
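This scaled criterion is trivial to implement; a minimal sketch (names and the factor are illustrative, following the conventional choice of 100 mentioned above):

```python
import numpy as np

def corrections_negligible(dx, sigma, factor=100.0):
    """Stop when every parameter correction is much smaller than its
    statistical error, here by the conventional factor of 100."""
    dx, sigma = np.asarray(dx), np.asarray(sigma)
    return bool(np.all(np.abs(dx) <= sigma / factor))
```

The test is applied after each differential-corrections pass; because corrections and errors carry the same units parameter by parameter, no manual scaling is needed.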

Finally, using a Levenberg-Marquardt scheme, the damping constant λ can be used to terminate the iteration if the following holds: for a set of parameters x⁺ at least one of the conditions (4.4.6), (4.4.7), or (4.4.8) is satisfied. If, for sufficiently small λ, the damping factor increases continuously, then x⁺ can be accepted as the true solution. In well-defined test cases (Kallrath et al. 1998), λ went down to machine accuracy, i.e., 10⁻¹⁵. Again, this is an empirical rule.
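For readers unfamiliar with the role of λ, a minimal sketch of one damped correction step (the Marquardt variant that scales the diagonal; names are our own): when a step lowers the weighted sum of squares it is accepted and λ is decreased, e.g., by a factor of 10; otherwise λ is increased and the step recomputed. As λ → 0 the step approaches the undamped differential correction.

```python
import numpy as np

def lm_step(J, d, lam):
    """One damped differential correction: solve
    (JᵀJ + λ diag(JᵀJ)) Δx = Jᵀ d for the correction vector Δx."""
    JtJ = J.T @ J
    A = JtJ + lam * np.diag(np.diag(JtJ))
    return np.linalg.solve(A, J.T @ d)
```

With λ = 0 this reduces to the ordinary Gauss-Newton step; large λ shortens the step and turns it toward steepest descent, which is what makes the behavior of λ a useful convergence diagnostic.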

If, for a reasonable number of iterations, the estimated parameter vectors x stay within a predefined region, then the center of all vectors in that region may define the solution (Wilson 1996). This criterion certainly can detect some secular trends.

Bock (1987) gave a criterion which checks the statistical stability of the solution. This criterion is based on some Lipschitz conditions related to the generalized inverse and considers the quality of the solution, the nonlinearity of the problem, and the number of degrees of freedom. The formalism involved in this criterion is beyond the scope of this book and thus the reader is referred to Bock (1987, pp. 59-73).

It is important to accompany any analysis with a plot of the light curve and the residuals. A careful inspection of the residual plot can reveal systematic trends. Unfortunately, in most cases the residuals are not normally distributed about zero; long tails dominate the distribution. The problems associated with such distributions are discussed in the context of robustness and its statistical foundations. For an overview of robustness and robust estimation, we refer the reader to Chap. 15.7 in Press et al. (1992) and references therein.

### 4.4.3 The Interpretation of Errors Derived from Fitting

A solution x⁺ derived from a least-squares problem becomes properly meaningful only if uncertainties are attached. The uncertainty is usually specified by upper and lower bounds x_i⁺ and x_i⁻ which for each parameter lead to the relation

$$ x_i^- \le x_i \le x_i^+, \qquad x_i^\pm := x_i + \delta x_i^\pm. \qquad (4.4.9) $$

The errors δx_i include contributions from at least four sources:

• ε_m, the error from approximations and other deficiencies in the light curve model;

• ε_obs, the error due to systematic error within the observational data;

• ε_n, the error due to the numerical representation;

• ε_s, the error due to accidental (statistical) error in the observations.

Thus, in combination, if these were random and independent errors, we would get

$$ (\delta x_i)^2 = \varepsilon_m^2 + \varepsilon_{\mathrm{obs}}^2 + \varepsilon_n^2 + \varepsilon_s^2; \qquad (4.4.10) $$

in practice, we have to expect an unknown functional relationship

$$ \delta x_i = f\left(\varepsilon_m, \varepsilon_{\mathrm{obs}}, \varepsilon_n, \varepsilon_s\right). \qquad (4.4.11) $$

Typically, of these sources of error, only the statistical error ε_s is specified in the program output, e.g., derived from the covariance matrix, and the parameters to be determined are given with associated uncertainties in the symmetric form x_i ± Δx_i. To be sure, it is very difficult to specify the systematic error ε_m made by approximations in the model. Systematic errors ε_obs in the observations are also assumed to be zero. Errors ε_n due to the numerical representation (integration, matrix inversion, round-off, etc.) are usually not discussed.

Within light curve analysis, and in the Wilson-Devinney model in particular, the errors Δx_i of dependent parameters can be determined by applying the Gaussian law of error propagation or by considering total differentials. For instance, in all modes above 0, this relation holds:

If the Differential Correction method is used to determine parameters, then Δx_i follows from the diagonal elements of the covariance matrix (4.3.12), i.e.,

$$ S = d^{\mathrm T} W d, \qquad S' = \left(A^{\mathrm T} W\, d(x_0)\right)^{\mathrm T} \Delta x, \qquad (4.4.13) $$

and for the standard deviations,⁹ ε_s, of the estimated parameters:

S' can be interpreted as the contribution by which the residuals S corresponding to x₀ are reduced by applying differential corrections to x₀. The errors calculated by (4.4.14) give only the statistical error. Often these statistical errors are much smaller than the realistic uncertainties. The analysis of V836 Cygni by Breinhorst et al. (1989), for example, gave a statistical error of the mass ratio of ε_p ≈ 5·10⁻³, but inspection of σ_fit(q) shows that the true error in q is at least of order 0.1. Most often this occurs because the correlations among parameters are taken into account to only first order. The off-diagonal elements of the covariance matrix give a measure of the pairwise correlations between parameters but not of higher-order combinations of parameters. Therefore, one should not only specify the errors or uncertainties of the fitted parameters but also add some comments on the existence or non-existence of correlations.
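The mechanics behind such statistical errors and pairwise correlations can be sketched as follows (our own illustration, assuming unit weights; function and variable names are not from the text):

```python
import numpy as np

def errors_and_correlations(J, resid):
    """Statistical 1-sigma errors and correlation matrix from a
    least-squares Jacobian J (n observations x m parameters) and the
    residual vector at the solution; unit weights assumed."""
    n, m = J.shape
    s2 = np.sum(resid ** 2) / (n - m)     # residual variance estimate
    cov = s2 * np.linalg.inv(J.T @ J)     # covariance matrix
    err = np.sqrt(np.diag(cov))           # statistical errors
    corr = cov / np.outer(err, err)       # pairwise correlations
    return err, corr
```

A correlation coefficient near ±1 between two parameters warns that their quoted individual errors understate the joint uncertainty, which is exactly the caveat raised above.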

Derivative-free least-squares solvers do not produce statistical errors ε_s. Therefore, they should only be used to produce good initial guesses for a derivative-based method.

A completely different approach to estimating Δx is the sensitivity analysis approach (Appendix B.2). Many simulated observables O_sim(x⁺) and the corresponding solutions are investigated. This method provides information on how the model reacts to small changes of a model parameter. If a large parameter change leads to only a small or negligible change of the model response, the parameter is weakly defined and cannot be determined accurately. For weakly defined parameters, the grid approach (Appendix B.3) gives reliable information on the bounds of such parameters.
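The core of the sensitivity idea fits in a few lines (a sketch of our own; the helper name and interface are illustrative): perturb one parameter, measure the RMS change of the model output, and flag parameters whose perturbation barely moves the model.

```python
import numpy as np

def sensitivity(model, p0, j, delta, x):
    """RMS change of the model output when parameter j is perturbed by
    +/- delta; near-zero values flag a weakly defined parameter."""
    p_plus, p_minus = np.array(p0, float), np.array(p0, float)
    p_plus[j] += delta
    p_minus[j] -= delta
    return np.sqrt(np.mean((model(x, *p_plus) - model(x, *p_minus)) ** 2))
```

Comparing such sensitivities across parameters, at comparable relative perturbations, indicates which parameters the grid approach should bracket.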

### 4.4.4 Calculating Absolute Stellar Parameters from a Light Curve Solution

Curiosa felicitas (Painstaking felicity)

Once all the formal aspects of light curve solution have been reviewed, it is time to discuss the interpretation of the solution. What is the astrophysical significance of the obtained parameter set? Before interpreting the physical state of the binary, we need to compute its absolute parameters, i.e., the temperatures, radii, luminosities, and distance. The radial velocities are necessary to find the semi-major axis

⁹ The Wilson-Devinney program before 1998 used the probable errors ε_p = 0.6745 ε_s instead of the standard deviation ε_s used in the 1998 version (Wilson, 1998). Note that the factor 0.6745 is justified only if the errors are normally distributed.

and the absolute masses. In some cases, the semi-major axis can be obtained from astrometry but none of those binaries are known at present to eclipse. From the radii and masses, the surface accelerations (in solar units) are derived. Specific details of the computation depend slightly on the particular light curve parameter set, but there is a general approach to compute stellar parameters from light curve solutions. In order to discuss this issue and interpret the light curve solution, we distinguish between two scenarios: Output obtained from complete data and output obtained from incomplete data.

### 4.4.4.1 The Complete Data Case

Let us assume that a light curve solution¹⁰ provides the inclination i and the length scale, i.e., the relative orbital semi-major axis, a, in physical units. At first, this enables us to compute the total mass, M, of the binary system,

$$ M = M_1 + M_2 = \frac{4\pi^2}{G} \frac{a^3}{P^2}. \qquad (4.4.15) $$

For some favored cases, the light curve solution also provides the mass ratio

$$ q = \frac{M_2}{M_1}. \qquad (4.4.16) $$

Combining (4.4.15) and (4.4.16) yields the masses

$$ M_1 = \frac{M}{1 + q}, \qquad M_2 = \frac{q\,M}{1 + q} \qquad (4.4.17) $$

for the stars separately. From (3.1.17) and (3.1.18), we derive the semi-major axes of the component orbits,

$$ a_1 = \frac{q}{1 + q}\, a, \qquad a_2 = \frac{1}{1 + q}\, a. \qquad (4.4.18) $$
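In convenient units (a in AU, P in years, masses in solar masses) G drops out of Kepler's third law (4.4.15), and the split by mass ratio follows directly; a minimal sketch (function name is our own):

```python
def binary_masses(a_au, p_yr, q):
    """Total and component masses in solar masses from Kepler's third
    law, M = a^3 / P^2 (a in AU, P in years), and q = M2/M1."""
    m_total = a_au ** 3 / p_yr ** 2
    m1 = m_total / (1.0 + q)
    m2 = q * m_total / (1.0 + q)
    return m_total, m1, m2
```

As a sanity check, a = 1 AU and P = 1 yr with q = 1 returns a total of one solar mass split equally between the components.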

Usually, light curve programs return dimensionless values for each star's surface area, A, and mean surface brightness; we neglect the index indicating the component. The mean radius relative to the semi-major axis can be defined either from the surface area, A, or from the volume, V:

$$ r_A = \sqrt{\frac{A}{4\pi}}, \qquad r_V = \left(\frac{3V}{4\pi}\right)^{1/3}. $$

10 Here, light curve analysis and light curve solution are used in the extended sense, i.e., each might include photometric data, radial velocities, and other observables.

The definitions give slightly different values for r but usually the differences are very small. The WD program also provides specific radii for each axis of the star. The arithmetic, geometric, or harmonic mean may be used to average them yielding a mean radius. To the requisite precision, any of these possible estimates of mean radius is probably acceptable. The uncertainty in the surface area or volume is not provided, so an indication of uncertainty in the mean radius is computable only from the individual radii described on page 287. A conservative method is to compute the weighted mean of these radii and to adopt the largest derived uncertainty.
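The two equivalent-sphere definitions can be sketched as follows (our own helper names; A and V are the dimensionless area and volume in units of a² and a³):

```python
import numpy as np

def mean_radius_from_area(A):
    # r_A: radius of the sphere with the same (dimensionless) surface area
    return np.sqrt(A / (4.0 * np.pi))

def mean_radius_from_volume(V):
    # r_V: radius of the sphere with the same (dimensionless) volume
    return (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
```

For an exactly spherical star both definitions return the same radius; for a tidally distorted star they differ slightly, which is why, as noted above, the choice rarely matters at the requisite precision.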

The absolute mean radius, R, and the absolute surface area may be computed if the semi-major axis, a, is known from astrometry, spectroscopy, or the light curve solution:

$$ R = r\,a, \qquad A_{\mathrm{abs}} = A\,a^2. $$

The acceleration, g, due to gravity at the surface follows as

$$ g = \frac{G M}{R^2}. $$
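Since stellar surface gravities are conventionally quoted as log g in cgs units, a small sketch (our own, with CODATA-style constants) converts masses and radii in solar units directly:

```python
import math

G = 6.674e-11                        # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8     # kg, m

def log_g_cgs(m_solar, r_solar):
    """log10 of the surface gravity in cgs units (cm s^-2), the form
    in which g is usually quoted for stars."""
    g_si = G * (m_solar * M_SUN) / (r_solar * R_SUN) ** 2
    return math.log10(g_si * 100.0)  # m s^-2 -> cm s^-2
```

For the Sun (m_solar = r_solar = 1) this gives log g ≈ 4.44, the familiar reference value.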
