The telescopes and detectors discussed in the previous section are used astronomically for two main purposes: first, direct imaging, usually for photometry (measuring the brightness of individual objects); and second, spectroscopy, determining the energy distribution as a function of wavelength. The following sections outline the fundamental principles involved in these observational measurements.

The apparent brightness of astronomical objects is usually measured in units of magnitude. The system originated with Hipparchus' division of the naked eye stars into six subgroups, with the brightest stars grouped together in the 'first magnitude' and the faintest stars visible to the naked eye described as being of the 'sixth magnitude'. The human brain/eye combination tends to judge brightness differences as ratios, rather than linear differences. If there are three light sources, A, B and C, where B is twice as bright as A and C twice as bright as B, a visual observer will estimate the difference between A and B as the same as that between B and C, although in linear terms, the relative brightnesses are 1, 2 and 4, respectively. The result is that the magnitude scale is logarithmic, rather than linear, and a given difference in magnitude corresponds to a particular brightness ratio.

Pogson [P2] quantified Hipparchus' original qualitative scale into a system where a difference of five magnitudes is equivalent to a factor of 100 in apparent brightness, retaining the convention of numerically-increasing magnitudes with decreasing intensity. Hence, magnitude is defined as m = -2.5 log10(f) + constant (1.15)

where f is the apparent flux (in W m^-2 Hz^-1, erg s^-1 cm^-2 Å^-1 or equivalent units). One of the striking advantages of this convention is that the enormous brightness difference of roughly 10^22 between the apparent magnitude of the Sun (magnitude -26) and the faintest object detectable by the Hubble Space Telescope (magnitude 30) spans only 56 magnitudes. Thus, the magnitude system expresses large brightness differences in a compact, and widely understood, form. It is primarily for this reason that, despite the rumblings of some astrophysicists (for example, [L9]), the system remains in common use today.
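The Pogson relation above can be checked numerically. The sketch below (function name is illustrative) converts a magnitude difference into the corresponding brightness ratio, using the defining convention that five magnitudes correspond to exactly a factor of 100 in flux:

```python
import math

def flux_ratio(delta_m):
    """Brightness ratio corresponding to a magnitude difference delta_m.

    Five magnitudes correspond to a factor of exactly 100 in flux,
    so the ratio is 100**(delta_m / 5) = 10**(0.4 * delta_m).
    """
    return 100.0 ** (delta_m / 5.0)

# One magnitude corresponds to a flux ratio of about 2.512
print(flux_ratio(1.0))

# The 56-magnitude span between the Sun (magnitude -26) and the
# faintest HST sources (magnitude 30) is a ratio of 10**22.4
print(flux_ratio(30 - (-26)))
```

Note that the scale runs "backwards": a numerically larger magnitude means a fainter source, so the ratio returned here is bright-over-faint flux.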

The magnitude scale is defined as m_p = -2.5 log10(f_p / F_0) (1.16)

where f_p is the measured flux emitted by the source within a particular wavelength region (passband), usually defined by optical filters, and F_0 is the flux density produced by a star which has magnitude 0 in that passband. The latter zero-point is arbitrary, but is usually set to give an A0 star (comparable to Vega) equal magnitudes at all wavelengths. The main exception to this convention is the Gunn ugriz system, used with the Sloan Digital Sky Survey (Section 1.6.3), which adopts a uniform zero-point in all passbands. Magnitudes in the Gunn system are defined as m = -2.5 log10(f_ν) - 48.60 (1.17)

where f_ν is the flux density in erg cm^-2 s^-1 Hz^-1. These are known as AB magnitudes.
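Equation (1.17) can be sketched directly in code. The zero-point flux used in the comment below, 3631 Jy = 3.631 x 10^-20 erg cm^-2 s^-1 Hz^-1, is the conventional AB zero-point and is quoted here as an assumption rather than something stated in the text:

```python
import math

def ab_magnitude(f_nu):
    """AB magnitude for a flux density f_nu in erg cm^-2 s^-1 Hz^-1 (Eq. 1.17)."""
    return -2.5 * math.log10(f_nu) - 48.60

# A source with f_nu = 3.631e-20 erg cm^-2 s^-1 Hz^-1 (i.e. 3631 Jy)
# has AB magnitude very close to zero
print(ab_magnitude(3.631e-20))

# A source 100 times fainter is 5 magnitudes fainter, as required
print(ab_magnitude(3.631e-22))
```

Because the AB zero-point is the same in every passband, an AB magnitude is a direct (logarithmic) statement of physical flux density, which is why the Gunn ugriz system adopted it.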

