Astronomical Techniques - Detectors

Objective: Accurate measurement of the intensity of light from a distant source, perhaps as a function of position, time, or frequency. In principle, we have a detector response of some kind (a voltage or count) that is a function of the input light intensity I:

R = f(I)
but more commonly we have
R = f₁(I·t) + f₂(t) + f₃(T) + (noise)
where t is exposure time and T is detector temperature, for example; we must also deal with extraneous sources of signal. These frequently include background or sky (actually foreground in most astronomical cases) intensity to be measured and subtracted, a zero-level response (bias level), a signal proportional to integration time (dark current), and terms (we hope slowly varying) in time and wavelength (for example). For various purposes, detectors may be zero-dimensional (sampling only a single region), one-dimensional (linear geometry), or two-dimensional (panoramic). Since one can collapse data to lower dimensionality during reduction, perhaps with some gain in rejecting spurious data, one usually prefers a higher dimensionality when other things are equal. However, such a detector may not exist, or may be impractical due to limits on data rate or transfer time. Other things being equal, a big detector covers a wider area than a small one, but the size of the resolution elements (pixels) needs to be more or less matched to the resolution of the optical system to avoid throwing away either information or field of view.
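In reduction, those extraneous terms are removed step by step. As a minimal sketch (in Python, with made-up numbers and hypothetical array names; real reductions build the bias, dark, and flat-field calibrations from many frames):

```python
import numpy as np

def reduce_frame(raw, bias, dark_rate, t_exp, flat):
    """Recover the term proportional to source intensity: subtract the
    zero-level (bias) and the dark current accumulated over the exposure,
    then divide by the normalized pixel-to-pixel response (flat field)."""
    return (raw - bias - dark_rate * t_exp) / flat

# Toy 2x2 frame, 300 s exposure, all numbers invented for illustration:
raw  = np.array([[1100., 1210.], [1050., 1500.]])  # raw counts
bias = np.full((2, 2), 1000.)                      # zero-level response
dark = np.full((2, 2), 0.1)                        # counts/s/pixel
flat = np.array([[1.00, 1.10], [0.95, 1.00]])      # normalized response
print(reduce_frame(raw, bias, dark, 300., flat))
```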

Considerations for detector choice and evaluation

  • Efficiency: Ideally, we want to detect all the incoming radiation, for a detection quantum efficiency (DQE) of 100%. The best we can do at optical wavelengths now is about 90%, but the highest-DQE detector can't be used for all applications because of noise tradeoffs. At very low photon rates, the signal-to-noise ratio may be better for a photon-counting system with a DQE of 2% than for a CCD with DQE = 90%, depending on the noise performance.
  • Dynamic range: Denotes the ratio between the strongest signal falling on the detector and the weakest measurable signal (or noise level); sometimes it instead refers to the strongest signal that can be accurately measured before saturation or deadtime effects set in. Deadtime denotes the time after one photon event during which another one would not be detected, and is a property common to photon-counting devices. For a photomultiplier it may be nanoseconds, while for large-format imaging systems, more than 1 photon/pixel/second may give substantial nonlinearity.
  • Resolution: To be distinguished from sampling. This is the area (typically given as FWHM) over which an infinitely narrow input signal is spread. One wants the limit to be set by diffraction or atmospheric turbulence, not the detector. Here, too, tradeoffs sometimes must be made. The pixel size for the WFPC2 CCDs on HST is larger than the point-source image size, because the gain from a wider field was so large. Very commonly we encounter Nyquist sampling, from a theorem stating that all the information in a signal stream may be recovered if it is sampled with spacing at least as fine as 1/2 the FWHM of the finest structure present (i.e. in imaging, information is lost if the pixels are larger than 1/2 the FWHM of the point-spread function; see the sketch after this list). Periodic signals give spurious results if undersampled - they can generate aliased signals at frequencies given by the sum or difference of the signal and sampling frequencies. The situation is better when the signal is integrated rather than sampled at each pixel location (as in most astronomical use), but can still be a worry.
  • Scattering: Related to resolution, especially important in spectroscopy.
  • Stability and calibration accuracy
  • Background - dark current and readout noise
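A quick numerical check of the sampling criterion from the Resolution item above; a minimal sketch in Python (the scales are illustrative, not tied to any particular instrument):

```python
def is_nyquist_sampled(pixel_scale, psf_fwhm):
    """True if pixels are no larger than half the PSF FWHM
    (both in the same units, e.g. arcseconds)."""
    return pixel_scale <= psf_fwhm / 2.0

# 1.0" seeing: a 0.4"/pixel camera samples it adequately,
# a 0.6"/pixel camera undersamples it (aliasing becomes possible).
print(is_nyquist_sampled(0.4, 1.0))  # True
print(is_nyquist_sampled(0.6, 1.0))  # False
```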

    People actually started out measuring radiation intensity by eye. I'm impressed. But the eye is non-recording and non-integrating.

    Photomultipliers: one- or few-channel systems. Very accurate and sensitive, but there can be only a limited number of channels, so that a slow array beats a fast PMT. Semiconductor basics: valence and conduction bands in the energy-level diagram. DC/pulse-counting regimes. Magnetic shielding. Fabry lenses to overcome cathode inhomogeneities. Sky subtraction, chopping. PMTs excel in time resolution; devices with nanosecond response are available. These detectors currently find their largest astronomical use on the astroparticle frontier, measuring the very short Cerenkov flashes produced by neutrino interactions in ice or by energetic particles and gamma rays producing showers in the upper atmosphere. IceCube, for example, installs giant PMTs in its digital optical modules.

    For atmospheric Cerenkov arrays, PMTs (or arrays of them) are located at the foci of large "light buckets"; the light splashes are large enough that fairly crude imaging quality is acceptable.

    Photographic emulsions (hypersensitized or not) - special spectroscopic emulsions can be optimized for long exposures without reciprocity failure. Easy to use, portable, no support equipment needed, and they came in very large formats. On the other hand, their DQE is low (almost always <1%), they are quite nonlinear in response and painful to calibrate, and grain irregularities cannot be calibrated out. They do store their own results without mounds of tapes, though. Sometimes used with image intensifiers to increase DQE and evenness of spectral response (at the expense of introducing spatial structure due to the internal cathodes, limiting the S/N with any detector). These may also produce temperature-dependent geometric distortions. Details are kept for historical interest, in case you ever need to retrieve information from photographic data. Various artifacts can be produced in photographic images - adjacency effects can alter the appearance of closely spaced images, there may be reflections in the camera or off the film backing, and the nonlinear nature of detection can make decomposing some images very difficult.

    Electronographic emulsions: record photoelectrons from a cathode rather than photons, onto a nuclear emulsion. Could be very sensitive, have higher dynamic range than direct plates. Also fantastically finicky and difficult to use.

    Discrete-aperture systems (one-dimensional use of detectors)

    IIDS (Intensified Image-Dissector Scanner): a hybrid system with three-stage image tubes, the final output phosphor scanned rapidly (on the decay time scale of each photon's output flash) along the spectral traces of two apertures by an image dissector, magnetically scanned into a photomultiplier. Description by Robinson and Wampler 1972, PASP 84, 161. Almost exactly linear (some applications have output = const × input^1.03, for example). S/N limited to 100 or so by instabilities in the exact location of the output spectra, so that the flat-field correction changes. At high count rates, a coincidence correction is needed (as with true photon-counting systems; see the sketch below). These systems (Lick, AAT, KPNO, ESO) have been widely used for surveys of stars, QSOs, and galactic nuclei. The readout is visible in real time. Dual apertures serve for simultaneous sky subtraction or (for large objects) simultaneous measurement at two positions with time-switched sky subtraction. Polarimetric operation is possible by scanning four spectral traces, one from each aperture as split into polarization senses by calcite blocks (Miller, Robinson, and Schmidt 1980, PASP 92, 702). Adjacent pixels are not truly independent, since each photon flash has nonzero width (clever software can improve this), so S/N statistics are not trivial to work out.
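    The coincidence correction has a standard form for a nonparalyzable counter (one that simply ignores events arriving within the deadtime). A minimal sketch, with an invented rate and deadtime rather than IIDS-specific values:

```python
def coincidence_correct(observed_rate, deadtime):
    """Estimated true event rate for a nonparalyzable counter.
    Rates in events/s, deadtime in s; valid while
    observed_rate * deadtime is well below 1."""
    return observed_rate / (1.0 - observed_rate * deadtime)

# 1e5 observed events/s with a 1-microsecond deadtime:
print(coincidence_correct(1.0e5, 1.0e-6))  # ~1.11e5 true events/s
```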

    Reticons (with or without intensifiers): 1-d semiconductor arrays (as used in store checkout lines) typically having 1024 diodes, using internal connections for self-scanning readout. This introduces 2,4,8,16...-channel fixed-pattern noise (removable by bias observations). The readout noise tends to be rather high, but the electron capacity of each diode is huge. This allows observations of bright targets at S/N up to 1000. Relatives of these also exist, such as the HST-FOS Digicons.

    Panoramic detectors:

    SIT (Silicon-Intensified Target) tubes: integrating TV cameras, used in a charge-storage mode or constantly read out in an "equilibrium" mode. The dynamic range in direct mode is quite limited by changes in the flat-field pattern, as is the achievable S/N. Use of photon-counting (event-centroiding) electronics as in the IPCS remedies some of this. The preparation time for each exposure can be long (up to 15 minutes), since each part of the camera tube must be cleared of accumulated charge. Readout can be similarly lengthy, passing an electron beam across the faceplate and recording the resulting current; this may suffer from beam-pulling effects. Vidicons of this kind were used on IUE, whose staff described their properties as being like dirty but reusable photographic plates.

    IPCS (Image Photon-Counting System): uses a TV or related CCD system and fast centroiding electronics to give the position and time of each detected photon, accumulated in real time into a display memory. Has very low (essentially zero) "readout noise", and is thus most effective at high dispersions and low photon rates, where readout noise outweighs the higher DQE of CCDs. Large numbers of pixels (2048 by 100) are possible. Relatives: KPCA, 2-D Frutti, 6-m IPCS, HST FOC.

    MAMA (Multi-Anode Microchannel Array): a microchannel plate feeding a multi-anode array that times individual electron bursts; acts as a photon counter. Like a generalized photomultiplier with many spatial channels. Can be very UV-sensitive, as used in STIS and GALEX. Refinements include curving the electron-avalanche channels to reduce "poisoning" of the detector by ions pulled backwards by the voltage. Charge depletion is an issue - FUSE and HST-COS lost sensitivity at wavelengths of geocoronal emission due to constant exposure.

    CCD (Charge-Coupled Device): the current observers' darling. See C. MacKay 1986, Ann. Rev. 24, 255, and various observatory newsletters. A solid-state array of potential wells (in a fixed pixel array), in which changes in clock voltages can move charge around and eventually through an on-chip amplifier and thence to the outside world. Pixel sensitivities are nonuniform but usually flatten to 1% or better. Excellent stability with time, linearity, and DQE up to 90% in some spectral ranges. Readout noise is usually the limiting factor. Thin/thick chips, red/blue sensitivity, blue enhancements by UV flooding or coatings, cosmic-ray sensitivity. Cooling is required to reduce thermal current enough to allow long exposures - either LN2 or thermoelectric (which is why the HST CCDs run warmer than normal). Much helpful detail appears in Bob O'Connell's notes at UVa and in Jorden's chapter in PSSS1.

    Chip formats up to 8192 by 8192 pixels (about 120 mm square) exist, with pixels typically 15-30 microns. At large formats, readout time becomes an issue; sometimes only a subsection of the chip is actually read out unless the full format is needed. Larger chips often have four amplifiers which can be run simultaneously for different quadrants.

    Fringing in far-red, monochromatic or spectroscopic applications; front-illuminated versus back-illuminated use.

    On-chip binning for readout noise and dynamic range improvement.
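    A quick way to see the readout-noise benefit: binning n pixels on-chip incurs one read per binned superpixel, while summing after readout incurs n reads. A minimal sketch with illustrative numbers:

```python
import math

def faint_source_sn(signal_e, npix, read_noise_e, on_chip_binning):
    """S/N for a faint source spread over npix pixels, assuming the
    noise is photon noise plus read noise; on-chip binning pays for
    one read, summing after readout pays for npix reads."""
    reads = 1 if on_chip_binning else npix
    return signal_e / math.sqrt(signal_e + reads * read_noise_e**2)

# 100 e- spread over a 2x2 block with 10 e- read noise per read:
print(faint_source_sn(100, 4, 10, True))   # ~7.1 binned on-chip
print(faint_source_sn(100, 4, 10, False))  # ~4.5 summed off-chip
```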

    Bias and charge-transfer efficiency, preflashing.

    Cosmetic defects in imaging/spectroscopic applications: bad pixels, dead/hot columns, flat-field features. There are also frequently features from dust particles on the dewar window or filters.

    Can be read in analog mode (TV rates) for guiding or (with intensifier) for a pseudo-photon counter (2D-Frutti, KPCA). This amplifies individual photon events into blobs that can be seen above readout noise even for very short frame times.

    Behavior at/near saturation: we distinguish saturation of the analog-digital converter, which results in loss of intensity information, and saturation of the full-well potential depth of a pixel, which leads to migration of charge into adjacent pixels along the column, migration along rows in the transfer register, and often to deferred charge that may show up on subsequent exposures. Improved circuit design and readout can reduce these (a good thing with large chips since there are stars all over the sky).

    ADUs versus photons and noise calculations; the "CCD equation". If each ADU represents g photoelectrons (the gain) and the readout noise is r in ADU, the noise expected for a pixel with N detected ADU is

    σ = (N/g + r²)^(1/2)
    in ADU (be careful to work consistently in either electrons or ADU). Thus there is a regime in which one is readout-noise limited and, at higher intensity, longer exposure, or with a better-performing chip, a regime in which one is photon-noise limited. The latter is a true physical limitation, and for a given time and intensity one would usually rather be in this limit. This enters especially into calculating exposure times - there are advantages to breaking a long exposure into shorter pieces, and if the shorter pieces are still photon-noise limited there is essentially no noise penalty to such a split (see the sketch below). Be careful to include sky background in the statistics for noise but not for signal.
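    Worked in code, a minimal sketch of the CCD equation and its two regimes (the gain, read noise, and count levels here are illustrative only):

```python
import math

def pixel_noise_adu(n_adu, gain, read_noise_adu):
    """The CCD equation above: noise in ADU for a pixel with n_adu
    detected counts, gain in e-/ADU, read noise in ADU."""
    return math.sqrt(n_adu / gain + read_noise_adu**2)

def source_sn(source_adu, sky_adu, gain, read_noise_adu):
    """S/N for a source: sky counts add to the noise but not the signal."""
    return source_adu / pixel_noise_adu(source_adu + sky_adu, gain, read_noise_adu)

g, r = 2.0, 3.0                       # illustrative gain and read noise
print(source_sn(10.0, 5.0, g, r))     # faint: read noise matters, S/N ~ 2.5
print(source_sn(1000.0, 50.0, g, r))  # bright: photon-noise limited, S/N ~ 43

# Splitting a photon-limited 1000 ADU exposure into two 500 ADU halves
# costs almost nothing: the read-noise variance enters twice but stays
# small next to the photon variance.
var_single = (1000 + 50) / g + r**2
var_split  = 2 * ((500 + 25) / g + r**2)
print(1000 / math.sqrt(var_single))   # ~43.3
print(1000 / math.sqrt(var_split))    # ~42.9
```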

    S/N in instrument comparisons and exposure calculations; sometimes a less sensitive detector will give better results. Some examples have been given in the NOAO Newsletter, in a continuing series on "Which chip is right for you?".

