Colour TV from a black-and-white world

During Apollo, colour television was still in its infancy and was notorious for the complexity of its technology. Conventional colour TV cameras of the time required at least three imaging tubes generating simultaneous images in red, blue and green. The cameras were therefore large, heavy and required constant attention to keep the three images aligned in the final camera output. A simpler system was required, and designers turned to a derivative of one of the earliest methods of generating colour TV, the colour wheel.

CBS, one of the United States' three major TV companies at the time, initially developed the colour wheel camera in the days before a rival system was adopted for general use. The colour wheel camera had one great advantage that lent itself to use in space. Since the colour scans were expressed sequentially instead of simultaneously, only a single imaging tube was required and the camera could be made much smaller than conventional cameras of the time.

The camera on its tripod on the Moon.

The Apollo colour camera produced what was essentially a standard black-and-white signal at 60 fields per second, 262.5 lines per field, with about 200 useful lines per field. Directly in front of the imaging tube, between it and the lens, was the colour wheel. This had six filters, arranged as two sets of red, blue and green. It was spun at 10 revolutions per second such that each field from the camera was an analysis of the image in red, then blue, then green, over and over. If viewed on a black-and-white monitor, the image would display a pronounced 20-Hz flicker, because the field that represented green was brighter than the other two but only came around 20 times per second. The bandwidth given over to this television signal was increased to 2 MHz, which overlapped other components in the Apollo S-band radio signal. Careful filtering was required to remove these from the TV image.
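The field-sequential scheme can be sketched with a few lines of arithmetic (a hypothetical illustration, not flight code): the colour of any field is simply a function of its index in the sequence, and the repeat rate of any one colour falls out of the wheel geometry.

```python
# Hypothetical sketch of the Apollo field-sequential colour scheme.
# Six filters (two sets of red, blue, green) on a wheel turning at
# 10 rev/s present one filter per 1/60-second field: 60 fields/s in
# total, but each individual colour recurs only 20 times per second,
# hence the 20-Hz flicker on a black-and-white monitor.

FIELD_RATE_HZ = 60                      # fields per second
FILTER_ORDER = ("red", "blue", "green") # order seen by the tube

def field_colour(field_index: int) -> str:
    """Colour analysed by a given field in the sequential stream."""
    return FILTER_ORDER[field_index % 3]

def colour_repeat_rate() -> float:
    """How often any one colour (e.g. the bright green field) recurs."""
    return FIELD_RATE_HZ / len(FILTER_ORDER)  # 20 Hz

first_six = [field_colour(i) for i in range(6)]
# red, blue, green, red, blue, green
```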

The flickering black-and-white signal received from the spacecraft, or from the Moon's surface, had to undergo extensive processing at Houston. In television studios of the Apollo era, it was crucial that the timing of the TV signal was stable. In other words, the pulses within the signal that define the start of a line or field should occur with extreme regularity and precision. In addition, all equipment dealing with the signal had to agree when the lines and fields began - that is, they all had to be synchronised. This was a problem for Apollo, because the velocity of the spacecraft, or of the landing site with respect to the receiver on the turning Earth, produced a Doppler shift that constantly altered the timing of the received TV signal.

In the days before mass digital storage made the task easy, engineers used two large videotape recorders to correct the signal's timing. The first machine recorded the pictures coming from space, synchronising itself with the pulses that were built into the television signal. However, instead of the tape going onto a take-up reel, it was passed directly to a second videotape machine which replayed it. This second machine took its timing reference from the local electronics, allowing it to reproduce the signal with its timing pulses occurring in synchronism with the rest of the TV station.
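In effect, the back-to-back tape machines resampled a Doppler-skewed field stream onto a stable local clock. A minimal numerical sketch of that idea (the timings here are made up for illustration) might look like:

```python
# Hypothetical sketch of re-timing: fields arrive slightly fast or
# slow because of Doppler shift; each recorded field is re-emitted on
# the next tick of a stable 60 Hz studio reference, as the playback
# videotape machine did using the station's local electronics.

NOMINAL_FIELD_PERIOD = 1.0 / 60.0  # seconds, local studio reference

def retime(arrival_times):
    """Re-emit each received field on a stable local clock tick."""
    return [i * NOMINAL_FIELD_PERIOD for i in range(len(arrival_times))]

# Simulated reception: fields arriving 0.01% fast (Doppler-compressed).
skewed = [i * (NOMINAL_FIELD_PERIOD * 0.9999) for i in range(5)]
stable = retime(skewed)  # now spaced at exactly 1/60 s
```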

With the timing sorted, a colour signal had to be derived from the three separate, sequential fields that represented red, blue and green. To achieve this, a magnetic disk recorder spinning at 3,600 rpm (one revolution every 60th of a second) recorded the red, blue and green fields separately onto six tracks. From this disk, the appropriate fields could be read out simultaneously using multiple heads and combined conventionally to produce a standard colour television signal.
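The disk recorder's role - holding the most recent field of each colour so that all three could be read out at once - can be sketched as a simple buffer (the field names and data values here are assumptions for illustration):

```python
# Hypothetical sketch of the spinning-disk colour converter: buffer
# the latest red, blue and green fields from the sequential stream
# and read them out together as one simultaneous colour frame.

FILTER_ORDER = ("red", "blue", "green")

def to_colour_frames(fields):
    """fields: sequential per-field image data, in colour-wheel order.
    Yields (red, blue, green) tuples once one of each colour is held;
    each new field updates its colour and produces a fresh frame."""
    latest = {}
    frames = []
    for i, data in enumerate(fields):
        latest[FILTER_ORDER[i % 3]] = data
        if len(latest) == 3:
            frames.append((latest["red"], latest["blue"], latest["green"]))
    return frames

frames = to_colour_frames(["R0", "B0", "G0", "R1"])
# -> [("R0", "B0", "G0"), ("R1", "B0", "G0")]
```

The second output frame reuses the held blue and green fields with the newly arrived red one, which is how a sequential stream becomes a simultaneous colour output.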

Apollo 10 proved that a colour camera worked within the overall Apollo system and, starting with Apollo 12, colour TV was transmitted from the lunar surface - or at least it was until the camera was inadvertently aimed either at the Sun or at one of its reflections from the LM, destroying part of the sensitive imaging tube!

The cameras for Apollos 11, 12 and 14 were merely placed on stands near the LM, which was acceptable as long as activity was centred around the LM, but when the Apollo 14 crew set off for their traverse, the audience was left watching an unchanging scene for several hours. It was clear that when lunar exploration stepped up a gear for the J-missions, the TV camera would have to be mounted on the lunar rover. It could not be operated while driving, but at each stop, Ed Fendell in mission control operated the camera by remote control. In this way, many eyes in Houston could watch what the two crewmen were doing nearby. This also enabled the scientists to build up panoramic views of each site and look around for interesting rocks for the astronauts to inspect.

For the final two missions of the Apollo programme, TV signals were linked to California where a proprietary system enhanced the images before returning them to Houston for distribution.

The changing image from the Moon. Still frames from the TV coverage on Apollo 11 (left) and Apollo 17 (right).