Sky and Telescope
May 1993, Page 83
(used with permission, see note below)
In the early evening of a promising night I start my preparations for an observing run. My interest is in variable-star photometry — the measurement of the changing energy output of stellar systems — and I coordinate my observations with Joseph Patterson, a professor of astronomy at Columbia University who studies cataclysmic binary stars.
About an hour before sunset I turn on the power to my observatory, which sits in a distant corner of my yard in suburban Washington, D.C. I open its clamshell roof and then go through the ritual of reconnecting all my data lines. I disconnect them during thunderstorm season to protect my electronics from surges due to nearby lightning strikes. (This never concerned me until I lost about half the circuit chips in my photomultiplier-based photometer during a storm several years ago!)
I power up the computer that controls the telescope mount and command the telescope to point to the zenith. It will have cooled to roughly air temperature when observing starts. Next I activate the refrigerator that provides the primary cooling of my home-built CCD camera. The system was built around a standard dehumidifier that a friend, Ed Ruitberg, and I modified to produce subzero temperatures. It takes about 30 minutes to cool the detector housing to -20° Celsius.
Returning to my house, I start up the camera-control computer so that it can monitor the detector temperature. After the primary cooling becomes stable, I return to the telescope and switch on the thermoelectric module that provides secondary cooling of my CCD. After another 20 minutes the CCD has reached a final temperature of about -37° C. Even at this modest temperature my bright-sky signal is about the same as the thermal noise from the CCD, so I am not paying a heavy penalty for operating the CCD at this temperature.
I do the cooling in two steps because if I turned on both systems together the detector would always be the coldest element in the housing and any water vapor would settle there. Separate steps cause any vapor to freeze onto the inside of my detector housing instead, out of harm’s way. A continuous purge of dry nitrogen provides moisture protection, but I have some very interesting pictures of frosted detector windows from when I was learning the ins and outs of CCDs. It took several months for me to be confident enough to start routine observing.
I actively control the camera’s temperature so that it varies less than 0.1° during the night. Only recently have commercial cameras been offered with the capability to regulate temperature accurately. This temperature stabilization is critical to photometry since it allows one set of calibration exposures (dark frames) to be applied to the entire night’s data. The temperature is controlled by small resistors near the CCD that are heated by a feedback loop in the camera electronics. I built up the hardware from printed circuit cards (no longer available) produced by a group of Palomar astronomers for the 200-inch telescope’s cameras. I modified these electronics slightly to allow control of my filter wheels and to provide remote readout of the detector temperature.
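The heater feedback described above can be sketched in modern terms. This is only an illustration of the idea, not the author's Palomar-derived electronics; the setpoint, gain value, and function name are assumptions.

```python
def heater_power(temp_c, setpoint_c=-37.0, gain=50.0):
    """Proportional feedback for the heater resistors near the CCD:
    apply power only when the detector drifts below the setpoint
    (the refrigerator and thermoelectric stage supply the cooling),
    clipped to the 0-100 percent range.

    Illustrative sketch only: the setpoint and gain are assumptions,
    not the values used in the author's camera electronics.
    """
    error = setpoint_c - temp_c          # positive when too cold
    return max(0.0, min(100.0, error * gain))
```

Called once a second or so with the sensed temperature, a loop like this holds the detector to within the 0.1° stability that lets one set of dark frames serve the whole night.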
As sunset fades I ready the camera to take the calibration images needed for later data reduction. First come the flats, or short exposures (a few seconds each) of the twilight sky. They are used to correct for any uneven illumination of the CCD chip, including the effect of dust specks. Next come zero-length exposures called zeros (or biases), which allow me to measure the CCD signals that aren’t related to light striking the detector. Finally, I take several long exposures with the shutter closed, to measure the thermal noise in the chip’s picture elements.
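How these frames enter the later reduction can be sketched with NumPy. This is the standard CCD correction, under the usual assumptions that the master dark has been zero-subtracted and scaled to the science exposure time and that the master flat is normalized to unity; it is a sketch, not the author's actual reduction code.

```python
import numpy as np

def calibrate(raw, master_zero, master_dark, master_flat):
    """Standard CCD calibration: remove the bias level and the
    thermal signal, then divide by the normalized flat to take out
    uneven illumination and dust shadows."""
    return (raw - master_zero - master_dark) / master_flat
```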
After these dark frames have been collected, I start the telescope tracking and point it at a bright star whose accurate coordinates are known, to establish the telescope’s initial position. I put the CCD camera into a rapid readout mode and center on the star, then set the telescope encoders to match the star’s catalog position. Next I move to a 5th-magnitude star to set the focus. The secondary mirror in my 12-inch bent Cassegrain is moved by a stepper motor to change the focus setting. All of these activities are done from the basement of my house, and I return to the telescope only if it gets off track or needs to be closed up.
When the last vestiges of twilight are gone, I move to the first variable star. Since I can easily see 15th-magnitude stars in a few seconds of exposure, I use finder charts generated from the Hubble Space Telescope Guide Star Catalog. My field of view is only 4 by 6 arc minutes (256 by 400 pixels), so no other catalog would suffice. I move the telescope to the fields I want by coordinates alone. Using a simple telescope flexure-correction map in the software, my absolute pointing errors are less than an arc minute — a level of accuracy good enough to acquire my target fields.
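A flexure-correction map of this kind can be as simple as a table of pointing offsets measured once against catalog stars and interpolated at slew time. The sketch below corrects one axis against hour angle; the grid and offset values are invented for illustration.

```python
import numpy as np

# Hypothetical flexure map: offsets (arcseconds) measured at a few
# hour angles, linearly interpolated when the telescope is slewed.
HA_GRID = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])  # hour angle, deg
RA_OFFSET = np.array([12.0, 6.0, 0.0, -5.0, -11.0])  # flexure, arcsec

def corrected_ra(ra_deg, ha_deg):
    """Subtract the interpolated flexure offset from the commanded
    right ascension before sending it to the mount."""
    offset_arcsec = np.interp(ha_deg, HA_GRID, RA_OFFSET)
    return ra_deg - offset_arcsec / 3600.0
```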
Usually I will take several hundred 1-minute exposures of the same star fields, back to back, so I typically observe just one or two stars on any given night. When I finished processing all my data files for 1992, I found that I had produced 21,000 independent magnitude measurements of the variables I had studied. Since each comes from its own 1-minute exposure, I had gathered 21,000 frames of data, or about 350 hours of photometry.
The brightness of the variable star is measured relative to the “normal” stars in the field of each exposure. As each raw image is read into the computer and stored, the position of the brightest star in the field is measured and compared to a preselected value. Any differences are relayed to the telescope mount’s control computer so that it can correct the pointing. With this closed-loop procedure the telescope can run all night without losing a star field.
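A minimal sketch of this closed loop, assuming a reference pixel position for the brightest star and a mount that accepts offsets in arcseconds; the function names and plate scale are hypothetical, not the author's actual software.

```python
import numpy as np

def star_centroid(image, box=7):
    """Find the brightest pixel, then refine it with an intensity-
    weighted centroid over a small box around it. Returns (x, y)."""
    y0, x0 = np.unravel_index(int(np.argmax(image)), image.shape)
    h = box // 2
    ys = slice(max(0, y0 - h), min(image.shape[0], y0 + h + 1))
    xs = slice(max(0, x0 - h), min(image.shape[1], x0 + h + 1))
    patch = image[ys, xs].astype(float)
    yy, xx = np.mgrid[ys, xs]
    total = patch.sum()
    return (xx * patch).sum() / total, (yy * patch).sum() / total

def pointing_correction(image, ref_xy, arcsec_per_pixel=1.0):
    """Pixel offset of the brightest star from its preselected
    position, scaled to arcseconds, for relay to the mount's
    control computer."""
    x, y = star_centroid(image)
    return ((ref_xy[0] - x) * arcsec_per_pixel,
            (ref_xy[1] - y) * arcsec_per_pixel)
```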
After the telescope is tracking well on the night’s target, I move to my data-analysis computer and start work on the previous night’s data. To produce the highest quality results you need a very capable data-reduction program, and I chose the Image Reduction and Analysis Facility (IRAF) software since it meets professional standards and is low cost. IRAF is available from the National Optical Astronomy Observatories, P. O. Box 26732, Tucson, AZ 85726. I use a workstation-class computer to run this software since personal computers are just starting to reach the needed performance levels for this package.
The flats, zeros, and darks are combined into master calibration frames, and an automatic routine then calibrates all the images, extracts the star location, and performs the photometry. In the final stage of reduction I calculate the differences in magnitudes and plot the brightness of the star as a function of time.
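The final magnitude arithmetic is simple. In the usual differential form, with fluxes summed in small apertures around each star, it reduces to one line; this is a sketch of the relation, not the IRAF routines themselves.

```python
import math

def differential_magnitude(var_flux, comp_flux):
    """Brightness of the variable relative to a comparison star, in
    magnitudes: delta_m = -2.5 * log10(F_var / F_comp).
    Positive values mean the variable is the fainter of the two."""
    return -2.5 * math.log10(var_flux / comp_flux)
```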
After using my camera for a year, I realize how many improvements could be made. If I could use a CCD with a readout noise of 7 electrons per pixel (rather than mine with 70), I could detect stars 2.5 magnitudes fainter than my present limit. An improvement from my 3-arc-second seeing to 1 arc second would yield about another 2.5 magnitudes. With dark skies and a colder CCD I would reach a further magnitude or so, and additional refinements would probably gain another. The bottom line is that in future years I expect to see small telescopes with long-exposure CCD images reaching 24th magnitude — exciting stuff!
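The readout-noise figure follows directly from the magnitude scale: when detection is limited by read noise, the faintest reachable flux scales with the noise floor, so a tenfold noise reduction gains 2.5 log10(10) = 2.5 magnitudes. A one-line check:

```python
import math

def limiting_magnitude_gain(noise_old, noise_new):
    """Magnitudes gained when the detection limit scales with the
    noise floor: 2.5 * log10(noise_old / noise_new)."""
    return 2.5 * math.log10(noise_old / noise_new)
```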
So there I sit, with a pair of computers on one side of me tracking the stars and taking images, and on the other side a third computer plotting data. Meanwhile, outside, photons rain down and ricochet off aluminum at the speed of light. They are caught by silicon and software and are converted into graphs that faithfully capture the tortured fires of an accretion disk many light-years away. Is this a great hobby or what?
Copyright © 1993 Sky Publishing Corporation; used with permission. This material may not be reproduced in any form, either printed or electronic, without first obtaining permission from Sky Publishing Corp., P.O. Box 9111, Belmont, MA 02178-9111, USA.