The photometric mass ratio qph, indicated by ( ), can be determined with confidence only for lobe-filling or over-contact systems. Unfortunately, the incorrect notion that qph derives mainly from ellipsoidal variation has gained moderately widespread acceptance. The actual situation is that qph comes from coupling the Roche configuration to the radii of the stars. Thus, the same light curve characteristics that define the radii⁴ also define qph, but only when the radii can be related to the equipotential configuration. There are two distinct cases. In a semi-detached binary, the radius of the lobe-filling star fixes the lobe radius, and thus the mass ratio; best results are obtained for complete eclipses. In an over-contact binary with complete eclipses, the ratio of the radii is fixed by elementary considerations [see (4.4.1)]. The ratio of radii in turn is essentially a unique function of the mass ratio for over-contact equipotentials, with only minor dependence on the degree of over-contact (Wilson 1978).

Essentially all published values of the photometric rotation parameter F are for rapidly rotating Algols. Two conditions help in determining values of F. First, eclipse circumstances (shape and depth) are altered by rotationally induced oblateness. Second, and more subtly, the proximity effects due to reflection and ellipsoidal variation of the secondary star are effectively enhanced by the reduced brightness of the fast-rotating primary for observers near the orbit plane.

The question mark in parentheses (?) indicates a chance that the temperature(s) can be derived from spectral features. To compute T2 from a light curve solution, T1 has to be known in advance (e.g., derived from color or spectral type); of course, we could also fix T2 and adjust T1. The quantity a1 sin i can be obtained from a single-lined system, and a sin i from a double-lined system, giving lower limits for the orbital size a = a1 + a2. In ideal cases, an eclipsing, double-lined system therefore provides everything needed; whether a given system does so in fact is a matter to be determined. The light curve of an EB depends nonlinearly on the parameters, so solving the inverse problem, and thus minimizing F(x) or f(x) = σfit(x), requires navigating all the pitfalls of nonlinear multiparameter fitting.

### 4.1.1.2 General Problems of Nonlinear Parameter Fitting

Binary parameters can become correlated: Changes in one parameter can be nearly compensated by a combination of changes in other parameters. Unfortunately, this problem is usually present in light curve analysis (Wilson 1983), and it is an intrinsic property of the problem. The problem can be pictured geometrically by considering the hypersurface formed by the sum-of-squares of the residuals, plotted as a function of the parameters (see Figs. 4.1, 4.2, 4.3 and 4.4). The valleys in this hypersurface are long and narrow, so that a small error in the direction taken by the solution vector at each step can cause the algorithm to run up and down the sides of the narrow valley instead of along its axis, which would bring it much faster to the solution. The first problem caused by correlations is numerical difficulty in solving the linear algebra

⁴ In detached systems, correlations of qph with other parameters make it almost impossible to derive accurate photometric mass ratios. In lobe-filling or over-contact binaries, one of the correlated parameters is eliminated from the adjustable parameter list.

problem [the rows of the Jacobian matrix J (see Appendix A.3.3) are nearly linear combinations of each other]. This difficulty may be aggravated if the linearization uses numerical partial derivatives (finite-difference approximations). The situation becomes even worse when the linearized problem is solved using the normal equations (4.3.9): in that case, the condition number of J is squared.
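The squaring of the condition number can be seen directly in a small numerical sketch (an illustrative numpy example; the near-parallel columns stand in for two strongly correlated light curve parameters):

```python
import numpy as np

# Two nearly parallel Jacobian columns, as when two light curve
# parameters are strongly correlated.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
J = np.column_stack([t, t + 1e-6 * rng.standard_normal(50)])

kappa_J = np.linalg.cond(J)          # condition number of J
kappa_N = np.linalg.cond(J.T @ J)    # condition number of the normal matrix

# Forming the normal equations squares the condition number,
# amplifying round-off error in the computed corrections.
print(f"cond(J) = {kappa_J:.2e}, cond(J^T J) = {kappa_N:.2e}")
```

Solving the linear least-squares step via QR or SVD of J avoids this squaring, which is why it is preferred over the normal equations for ill-conditioned problems.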

The second problem is the nonlinearity itself, which bends the long, narrow valleys and prevents an iteration from taking a large step along the valley axis before it begins to run up one side of the valley. Consequently, step-size limiting algorithms are commonly used to prevent Differential Corrections from making the solution worse rather than better.
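One common, simple form of step-size limiting is to clamp the largest relative parameter change. A minimal sketch (the 10% cap, the function name, and the toy parameter values are illustrative assumptions, not a prescription from any particular light curve code):

```python
import numpy as np

def limited_step(delta, x, max_frac=0.1):
    """Scale a raw differential-corrections step so that no parameter
    changes by more than max_frac of its current value.  Illustrative
    limiter; the 10% cap is an arbitrary choice."""
    scale = np.max(np.abs(delta) / (max_frac * np.abs(x)))
    return delta / scale if scale > 1.0 else delta

x = np.array([5000.0, 0.5])        # toy values, e.g., a temperature and a mass ratio
delta = np.array([2000.0, 0.01])   # raw correction wants a 40% jump in x[0]
step = limited_step(delta, x)
# 'step' keeps the direction of 'delta' but caps the largest
# relative change at 10%.
```

Because the direction of the step is preserved, the iteration still follows the valley; it simply cannot overshoot its curved axis in a single stride.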

### 4.1.1.3 Special Problems of Nonlinear Parameter Fitting in Light Curve Analysis

In addition to these frequently encountered problems, a few aspects are especially important in connection with light curve analysis.

First, the influence of parameters on the shape of the light curve is strongly phase dependent. Slight changes of the temperature T2 of the secondary star may show up only near the eclipses. The albedos A1 and A2 have an effect mainly on the shoulders of the minima. Spots affect the light curve only when they appear on a part of the

star visible to the observer. Thus, the partial derivative ∂lcal(x)/∂xp of the calculated light lcal(x) with respect to a spot parameter xp, say, the spot longitude, is zero over a range of phase values.
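The vanishing of such a derivative can be illustrated with a toy model (purely schematic; `toy_light`, its box-shaped "spot" dip, and all numbers are invented for illustration and are not a physical spot model):

```python
import numpy as np

def toy_light(phase, spot_long, width=0.1, depth=0.05):
    """Toy light curve: unit light minus a box-shaped dip while an
    invented 'spot' at longitude spot_long (in phase units) faces the
    observer.  Schematic only, not a physical spot model."""
    d = np.abs(((phase - spot_long + 0.5) % 1.0) - 0.5)  # phase distance to spot
    return 1.0 - depth * (d < width)

phase = np.linspace(0.0, 1.0, 200, endpoint=False)
eps = 1e-3
# Central-difference derivative of the light with respect to spot longitude:
dl = (toy_light(phase, 0.25 + eps) - toy_light(phase, 0.25 - eps)) / (2 * eps)
# dl vanishes identically at phases where the spot is out of view,
# so those observations carry no information about this parameter.
```

Any fitting algorithm using such derivatives must therefore draw its information about spot parameters from the limited phase interval in which the spot is visible.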

The second problem that may occur is the existence of local minima in parameter hyperspace (see Figs. 4.1, 4.2, 4.3 and 4.4). It is very difficult to prove uniqueness in the m-dimensional parameter space S, as that requires initial solutions in all the hyperspace valley regions. Such questions become important when analyzing light curves with primary and secondary minima of similar depths, for example. In the case of BF Aurigae, two well-separated local minima of σfit occur (Kallrath & Kamper 1992). One reason is the dual possibility of a transit or occultation eclipse at primary minimum; the two assumptions lead to solutions with σfit values of comparable size. Such nonuniqueness problems can sometimes (as in the BF Aurigae system) be overcome by additional information, such as spectral line ratios (in an SB2). Milone et al. (1991) dealt with such uniqueness questions by perturbing the parameters of their TY Bootis solution by about 10%, one at a time, and iterating each time to a new solution. This technique still begs the question of the location of the deepest possible minimum in parameter space, because the range of parameters explored in such a procedure is limited. In the case of TY Bootis, confidence in the solution was enhanced somewhat by good radial velocity curves. Different models, though, can arise from different assumptions about the temperature of one of the components. What temperature scale should we use when deciding the temperature of one of the components? Detailed atmospheres for these kinds of systems are sorely needed before color indices can be de-reddened and used in models.
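The perturb-and-reconverge test can be sketched generically (a hedged illustration using a bare-bones Gauss-Newton iteration on a toy residual; `gauss_newton` and `perturbation_check` are hypothetical helpers, not the code used by Milone et al.):

```python
import numpy as np

def gauss_newton(residuals, x0, n_iter=20, h=1e-6):
    """Bare-bones Gauss-Newton with a forward-difference Jacobian.
    Illustrative only; real light curve codes use Differential
    Corrections with damping and step limits."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        r = residuals(x)
        J = np.column_stack([
            (residuals(x + h * np.eye(len(x))[j]) - r) / h
            for j in range(len(x))])
        x += np.linalg.lstsq(J, -r, rcond=None)[0]
    return x

def perturbation_check(residuals, x_best, frac=0.10):
    """Perturb each parameter by +/-frac (10%, in the spirit of the
    perturbation test described in the text), re-converge, and collect
    the endpoints; scattered endpoints flag possible local minima."""
    runs = []
    for k in range(len(x_best)):
        for sign in (+1, -1):
            x0 = x_best.copy()
            x0[k] *= 1.0 + sign * frac
            runs.append(gauss_newton(residuals, x0))
    return runs

# Toy residual with a unique minimum at (2, 3):
res = lambda x: np.array([x[0] - 2.0, (x[1] - 3.0) * x[0]])
runs = perturbation_check(res, np.array([2.0, 3.0]))
# All runs returning to the same vector strengthens (but does not
# prove) confidence in local uniqueness.
```

As the text notes, agreement among such restarts only probes the neighborhood of the adopted solution; a deeper minimum may still exist outside the explored range.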

There are two aspects of analyzing data sets that highlight the problem of strong parameter correlation: the numerical difficulties and the uniqueness problem. The numerical difficulties can be overcome, for instance, by the method of multiple subsets described in Sect. 4.3.2. "Overcoming" in this context means that the algorithm finds a parameter solution with a value of the least-squares function close or almost identical to the minimum value. This formulation brings up the second problem: uniqueness. We sometimes face very flat minima, with similar values of the least-squares function around the solution. If light is scaled to unity, differences in σfit smaller than, say, 10⁻³ are usually not statistically significant.

The correlation or uniqueness problem can often be overcome if it is possible to reduce the number of free parameters. For example, limb-darkening coefficients might be taken from a model atmosphere; albedos and gravity brightening parameters might be fixed. Such decisions have to be made with care because they could bias a solution toward an incorrect model. We have to bear in mind that this

procedure can artificially improve determinacy but introduce the wrong physics. Perhaps the most honest approach is to solve the problem, find a decent fit and a corresponding parameter vector, and clearly state the uniqueness problem. In addition, we should try to establish confidence limits for the parameters.

### 4.1.1.4 On the Use of Constraints

In mathematical optimization problems (Appendix A.2) and constrained least-squares problems (Sect. 4.2.2 and Appendix A.4), constraints are relations among parameters. In optimization theory, constraints are implicit relations connecting several parameters and decreasing the size of the solution space. Sometimes, if constraints are available in explicit form, they can be used to eliminate unknown parameters directly [see, for instance, (4.1.19)]; explicit constraints reduce the dimensionality of the solution space, i.e., the number of adjustable parameters. In some light curve programs, such as the Wilson-Devinney code, explicit constraints are exploited directly in the model. If emission-line activity or secular period change indicates that one component of the binary fills its Roche lobe, the lobe potential can be replaced by the critical value. The eclipse-duration constraint (Wilson 1979) is another example.

As described in Appendix E.11 for the special case of circular orbits and synchronous rotation, the relation

contains the semi-duration Θe of the X-ray eclipse and allows us to eliminate the inclination i, the potential Ω, or the mass ratio q. Usually, (4.1.18) is inverted w.r.t. Ω, i.e.,

which expresses the fact that the X-ray eclipse duration puts a limit on the size of the optical star (the X-ray star has negligible dimension).

### 4.1.1.5 Assignment of Weights

In the context of light curve analysis, a weight assigned to each data point can be regarded as the product of three (hopefully) independent factors

w = wintr · wflux · wc,    (4.1.20)

with the following meaning as in Wilson (1979):

1. wintr is an intrinsic weight. If normal points are used, wintr is often taken to be the number of data points averaged to produce the "normal" (sometimes called "binned data") point;

2. wflux is a flux-dependent weight (see below); and

3. wc is a curve-dependent weight.

We discuss the importance of each factor in turn. For unbinned data, most observers take wintr = 1; however, experience suggests that binning data saves computer time and provides a measure of scatter for each binned point. The calculation of the standard deviation of each binned point obviously requires correct values for the uncertainty in each individual point, if known; alternatively, one can avoid reliance on individual datum weights by applying the factor wflux, as we shall see below. Some practitioners prefer to use merely the number of individual data points in each bin for wintr, irrespective of scatter in those data. If this is done, if the light-dependent weights are selected appropriately, and if the data do indeed scatter according to the flux level, then the effect may be nearly the same as if the weights of the individual data points were calculated from their intrinsic errors and applied directly. But there are no guarantees! Like the issue of dome flats versus sky flats in CCD photometry (see Sect. 2.1.4), the issue of binning or not binning raises the specter of religious warfare among practitioners. If you want to be safe, perform a solution both ways, but compute the weights correctly in each case!
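A minimal sketch of forming normal points, with wintr equal to the number of contributing points and the per-bin scatter retained (the conventions, the function name `bin_normals`, and the tuple layout are illustrative assumptions):

```python
import numpy as np

def bin_normals(phase, flux, n_bins=50):
    """Form normal points on a phase grid: per-bin mean flux, an
    intrinsic weight equal to the number of contributing points, and
    the per-bin standard deviation as a scatter estimate.  One common
    convention; purely illustrative."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    normals = []
    for b in range(n_bins):
        sel = flux[idx == b]
        if sel.size:
            normals.append((edges[b] + 0.5 / n_bins,   # bin center
                            sel.mean(),                # normal point
                            sel.size,                  # w_intr
                            sel.std(ddof=1) if sel.size > 1 else np.nan))
    return normals

# Synthetic data: 500 points of unit light with 1% Gaussian scatter.
rng = np.random.default_rng(1)
phase = rng.random(500)
flux = 1.0 + 0.01 * rng.standard_normal(500)
normals = bin_normals(phase, flux, n_bins=20)
# each entry: (bin center, mean flux, w_intr, scatter estimate)
```

Keeping the per-bin standard deviation alongside the count lets one compare "weight by count" against "weight by measured scatter" for the same data set, in the spirit of the both-ways test recommended above.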

wflux may be set to 1 if and only if the weight factor wintr is computed so as to reflect the actual intrinsic error in each observation. If this is not the case, wflux must be selected carefully. Let σl denote the standard deviation of a single observation in units of normalized light l. The precision of a photometric observation usually depends on the star's brightness, and the dependence of the precision on light level l is governed by the source of the noise. Figure 4.5, adapted from the classical review article by Code & Liller (1962, p. 285), shows the noise contributions in various regimes of star brightness. Note that for the brightest stars, "seeing" or scintillation dominates; for fainter stars, "shot noise" is most important; and for the faintest stars, fluctuations in the sky background contribute significantly to the noise. The sky background and level of seeing vary from site to site and night to night, of course, so the figure should be considered a rough guide only and not a prescription for any specific case. In the context of light curve analysis, Linnell and Proctor (1970) describe the relation between σl and l by an exponent b, b ∈ {0, 0.5, 1}, such that σl scales as l^b.
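Assuming the scaling σl ∝ l^b, the corresponding flux-dependent weights follow from w ∝ 1/σ², i.e., wflux ∝ l^(−2b). A small sketch (the function name and normalization are arbitrary choices for illustration):

```python
import numpy as np

def flux_weights(l, b):
    """Flux-dependent weights under the assumed scaling sigma_l ~ l**b:
    since w ~ 1/sigma**2, w_flux ~ l**(-2b).  b = 0: scatter independent
    of level; b = 0.5: shot noise; b = 1: scintillation-like.
    Normalization is arbitrary (weights are relative)."""
    l = np.asarray(l, dtype=float)
    return l ** (-2.0 * b)

l = np.array([0.25, 0.5, 1.0])   # normalized light levels, e.g., in-eclipse to maximum
w = flux_weights(l, 0.5)
# For shot noise (b = 0.5), fainter phases receive larger weight,
# because their absolute scatter in normalized light is smaller.
```

Note the perhaps counterintuitive consequence: under shot noise, in-eclipse observations are up-weighted relative to those at maximum light.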
