A CCD / IR Data Reduction Primer

(posted on WordPress.com, 2012-03-12)


A CCD / IR DATA REDUCTION PRIMER

Authors:
Stefano Gallozzi: [email protected]
Testa: [email protected]
Fontana: [email protected]
Grazian: [email protected]
Santini: [email protected]
Boutsia: [email protected]
Paris: [email protected]

Last modified: Rome, 07/10/2010

Table of Contents:
Ch1 : INTRODUCTION
Ch2 : OVERVIEW ON THE CCDs & OTCCDs DETECTORs
Ch3 : ADVANTAGES & DISADVANTAGES OF CCDs
Ch4 : CCD PIXEL SIZE, FIELD OF VIEW & PSF
Ch5 : CCDs' BAD PIXELs & MASKs
Ch6 : READ-OUT SIGNAL & BIAS
Ch7 : THERMAL NOISE & DARK CURRENT
Ch8 : PHOTON NOISE, SNR, GAIN & RON
Ch9 : PIXEL SENSITIVITY & FLAT FIELDING
Ch10: FLAT FIELDS CALIBRATION
Ch11: NATURE OF CCDs' OUTPUT
Ch12: PRE-REDUCTION
Ch13: COSMIC-RAY EVENTS & ANTICOINCIDENCE
Ch14: TRAILs & SATELLITEs
Ch15: SOURCE DETECTION, FWHM EVALUATION & HEADER CORRECTION
Ch16: SUPERFLAT & DELTA CORRECTION FLAT
Ch17: DETAILED PROCEDURES ON FRINGING & GHOSTs
Ch18: CHIPs EQUALIZATION & EXTRACTION
Ch19: ASTROMETRY FIX & COADD
Ch20: SKY-BACKGROUND ESTIMATION & INVERSE ASTROMETRY
Ch21: MORE ON BACKGROUND
Ch22: BASIC REDUCTION STEPs & PIPELINEs
Bibliographic Reference

----------


CH1 - INTRODUCTION

Two-dimensional optical CCDs (Charge-Coupled Devices), infrared arrays and the very similar X-ray CCD arrays are now the detectors usually used to produce direct astronomical images (that is, simple pictures of a region of sky) at optical and infrared wavelengths. These arrays are much more sensitive and have a much larger useful dynamic range than the panoramic detectors used before (principally photographic plates), and it is hardly an overstatement to say that their widespread adoption in the past two decades has effected a revolution in astronomy. However, the un-processed images, as obtained from CCDs, are affected by a number of instrumental effects which must be corrected before useful results can be obtained. We are going to illustrate the right steps to remove these instrumental effects in order to recover an accurate picture of the field of sky observed. In any case, it is not possible to clean images efficiently without taking into account all the physical phenomena that produce them. This cleaning process is normally called CCD Data Reduction though, figuratively at least, it can just as well be thought of as repairing 'degraded images'.

The final product of CCD data reduction is an image which accurately reproduces the brightness distribution in the field of sky observed (subject to the limits on spatial resolution imposed by atmospheric seeing and the instrumental characteristics, of course). This image is in entirely arbitrary units.

With the installation of new instruments and telescopes, in addition to the availability of CCDs offering large pixel areas and higher quantum efficiency, the variety of observing modes has grown and, as an obvious consequence, the amount and the diversity of the data taken have dramatically increased. So a data reduction process should be "intelligent" in the sense of being robust, user friendly and able to operate in a systematic and automatic way, without losing any scientific information of the original raw data. How intelligent the system is depends on the capabilities of the other parts of the data acquisition, archiving, and reduction systems.

The next chapters describe in detail the various steps of CCD data reduction, starting from reading in the science and calibration frames and ending with the final cosmetic fix-up of the calibrated frames. We make use of the understanding we have gained with LBC@LBT (the Large Binocular Camera at the Large Binocular Telescope in Arizona) and with other instruments' data, such as HAWK-I at the VLT and ACS on HST.


CH2 - OVERVIEW ON THE CCDs & OTCCDs DETECTORs

Before the introduction of photography to astronomy the only way of recording images of extended objects seen through a telescope was to sketch them by hand, as seen by eye. This approach worked moderately well for the planets, which are illuminated by reflected light, but was much less successful for nebulae and other objects beyond the solar system, both because they are much fainter and because of the inherent difficulty in reproducing the gradations in brightness of an extended luminous object using drawing techniques. Photographic plates were first used to record images of regions of the sky around the middle of the nineteenth century. The technique proved successful and photographic plates were ubiquitous in astronomy for more than a century. The advantages that they offered were essentially three:

1. unlike the eye they were an integrating detector, so fainter objects could be detected by making longer exposures to accumulate more light

2. the images were objective and reproducible (unlike a sketch)

3. the photographic image constituted a quantitative measure of the light distribution across the luminous object (at least in principle).

Nonetheless there were problems with photographic plates: they had only a limited dynamic range and their response to the brightness of the illuminating light was non-linear (logarithmic), leading to continual calibration problems. In the middle years of the twentieth century Photoelectric Photometers were developed: electronic devices which were more sensitive, accurate and linear, and had a wider dynamic range, than the photographic plate. However, they were collimating devices (not imaging devices): they merely produced a single output corresponding to the brightness of one point on the sky, or scanned across it using a grid plate in a given direction. 1

1 Such imaging devices are sometimes called 'panoramic detectors'.


Illustration 1: CCD simple voltage schema.


In many ways CCDs (Charge-Coupled Devices) combine the advantages of both photographic plates and photoelectric photometers, though their principles of operation are very different from either. They have a high sensitivity, linear response, large dynamic range and are imaging devices which record a picture of the region of sky being viewed.

The CCD was invented in 1969 by W.S. Boyle and G.E. Smith of the Bell Laboratories. They were not interested in astronomical detectors (and were, in fact, investigating techniques for possible use in a 'picture-phone'). Indeed, most applications of CCDs are not astronomical. CCDs were first used in astronomy in 1976 when J. Janesick and B. Smith obtained images of Jupiter, Saturn and Uranus using a CCD detector attached to the 61-inch telescope on Mt Bigelow in Arizona. CCDs were rapidly adopted and are now ubiquitous: they are easily the most popular and widespread imaging devices used at optical and near-infrared wavelengths. Infrared cameras were introduced into astronomy in the mid-to-late 1980s.

A CCD is best described as a semiconductor chip, one face of which is sensitive to light. The light-sensitive face is rectangular in shape and subdivided into a grid of discrete rectangular areas (picture elements, or pixels), each about 10-30 microns across. The CCD is placed in the focal plane of a telescope so that the light-sensitive surface is illuminated and an image of the field of sky being viewed forms on it. The arrival of a photon on a pixel generates a small electrical charge which is stored for later read-out. The size of the charge increases cumulatively as more photons strike the surface: the brighter the illumination, the greater the charge. This description is the merest outline of a complicated and involved subject. CCD pixel grids are usually square and the number of pixels on each side often reflects the computer industry's predilection for powers of two. Early CCDs used in the 1970s often had 64x64 elements; 256x256 or 512x512 element chips were typical in the 1980s, and 1024x1024 or 2048x2048 element chips are common now; for instance FORS1 and FORS2 use chips of 2048x1024 pixels. LBC@LBT consists of four chips of 4096x2048 pixels and operates in binocular mode.


Illustration 2: LBC Blue Camera deep field mosaic (MEF chips); note different equalization levels, illumination trends and bad columns.


The astronomical standard for CCD images is FITS (Flexible Image Transport System); any LBC FITS file consists of four chips integrated into a single data storage file called a MEF (Multi-Extension FITS).

A CCD in isolation is just a semiconductor chip; in order to turn it into a usable astronomical instrument it needs to be connected to electronics to power it, control it and read it out. By using a few clocking circuits, an amplifier and a fast analogue-to-digital converter (ADC), usually of 16-bit accuracy, it is possible to estimate the amount of light that has fallen on each pixel by examining the amount of charge it has stored. So the charge which has accumulated in each pixel is converted into a number. This number is in arbitrary 'units' of so-called 'analogue data units' (ADUs); that is, it is not yet calibrated into physical units. The ADC factor is the constant of proportionality to convert ADUs into the amount of charge (expressed as a number of electrons) stored in each pixel. This factor is needed during the data reduction and is usually included in the documentation for the instrument. The chip will usually be placed in an insulating flask and cooled (often with liquid nitrogen) to reduce the thermal noise level, and there will be the usual appurtenances of astronomical instruments: shutters, filter wheels etc. The whole instrument is often referred to as a CCD camera (LBC is one such camera). A CCD camera is usually placed at the focus of a flux-collecting instrument such as a telescope; the optical collection is best realized using a mirror with a paraboloid profile, aluminized over its reflecting area. Some spectral ranges, such as X-ray photons, need a special reflecting geometry, named Wolter-1, which consists of a double reflection: a paraboloid profile for the first mirror and a hyperboloid profile for the second. X-ray photons can be reflected only within a critical angle, which depends on the atomic weight of the reflecting coating (gold or iridium are preferred).
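The ADU-to-electron conversion described above amounts to a single multiplication by the ADC factor (gain). A minimal sketch in Python; the gain value below is purely illustrative, in practice it comes from the instrument documentation or the FITS header:

```python
import numpy as np

def adu_to_electrons(frame_adu, gain):
    """Convert a raw frame from ADUs to electrons; gain is the ADC
    factor in electrons per ADU from the instrument documentation."""
    return frame_adu * gain

# toy 2x2 frame with an illustrative gain of 2.0 e-/ADU
frame = np.array([[100.0, 200.0], [300.0, 400.0]])
electrons = adu_to_electrons(frame, gain=2.0)
print(electrons[0, 0])  # 200.0
```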


Illustration 3: Typical HAWK-I image with four square chips. Note that, like the LBC chips, they present different equalization and bad-pixel patterns as well as illumination trends to be removed. Note also that chip3 and chip4 have inverted MEF extensions.


The electronics controlling the CCD chip are interfaced to a computer which in turn controls them. Thus the images observed by the CCD are transferred directly to computer memory, with no intermediate analogue stage, whence they can be plotted on an image display device or written to magnetic disk or tape. Normally you will return from an observing run with a magnetic tape cartridge of some sort containing copies of the images that you observed or, as is now more common, with a fast network link to a database repository where all the raw data concerning your observation are stored for later download. LBC cameras have two main controllers that acquire science images and, during the integration, store a sub-set of secondary technical images to refresh guiding, tracking and active optics.

IR CCDs

IR CCDs are very similar to optical CCDs except for the additional cryogenics required to reduce thermal noise (the telescope itself glows at these wavelengths) and for their reduced physical dimensions and quantum efficiency. Moreover, owing to the sky levels and their physical properties, infrared arrays saturate very quickly; consequently they are read out very frequently in order to produce a stack of frames (or co-adds) which are subsequently added. The instrument and telescope of an IR array may also be 'chopping' and 'nodding' during the observation: rapidly switching between observing the target object and the neighbouring sky in order to allow the otherwise dominant contribution from the sky background to be estimated and subtracted in real time. This effect is usually achieved by oscillating some component of the optical system, often the telescope secondary mirror. Chopping and nodding were the usual modes of operation with earlier single-element photometers, but are less common with modern array detectors. The chip system of the HAWK-I instrument is an example of such IR devices. The HAWK-I frames are composed of four square chips with the spatial configuration shown in the figure.


In the last decade the successor of the simple CCD was born: the OTCCD (Orthogonal Transfer CCD), a device capable of shifting charge in all directions. It was developed to compensate for image motion across a sensor in situations where motion of the scene or of the sensor platform during the integration would otherwise lead to confusion and degrade the final image. The OTCCD can perform the so-called tip-tilt correction electronically, improving image resolution and SNR much as hardware adaptive optics compensates atmosphere-induced phase distortion. Using OTCCDs it is possible to build GigaPixel cameras based on OTAs (Orthogonal Transfer Arrays); this new device is designed to provide wide-field tip-tilt correction of astronomical images using arrays of OTCCDs instead of simple CCDs.

It is also possible to perform windowing on very large area images to exclude image strips where no detected sources are present.

----------


Illustration 4: OTCCD pixel schema and windowing facility.


CH3 - ADVANTAGES & DISADVANTAGES OF CCDs

The principal advantages of CCDs are their sensitivity, dynamic range and linearity. The sensitivity, or quantum efficiency, is simply the fraction of photons incident on the chip which are detected. It is common for CCDs to achieve a quantum efficiency of about 80%; compare this value with only a few percent for even the most sensitive photographic plates. CCDs are also sensitive to a broad range of wavelengths and are much more sensitive to red light than either photographic plates or the photomultiplier tubes used in photoelectric photometers. However, they have a poor response to blue and ultra-violet light. One of the aims of the LBC binocular configuration was to optimize one channel for the UV region of the optical spectrum and the other for the IR region of the optical spectrum.

CCDs are sensitive to a wide range of light levels: a typical dynamic range (that is, the ratio of the brightest accurately detectable signal to the faintest) is about 10^5, corresponding to a range of about 12.5 magnitudes. The corresponding values for a photographic plate are a range of less than about 1000, corresponding to 7.5 magnitudes. Furthermore, within this dynamic range the response is essentially linear: the size of the signal is simply proportional to the number of photons detected, which makes calibration straightforward. However, especially in a collimating CCD camera or in a not properly focused instrument, photons may impact the CCD area in different ways, depending both on their inclination and on their energy (see Illustration n.5).
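The conversion between a flux dynamic range and a magnitude range uses Pogson's relation, delta_m = 2.5 log10(flux ratio); for example, a range of 1000 gives 2.5 x 3 = 7.5 magnitudes. A quick check in Python:

```python
import math

def dynamic_range_to_magnitudes(ratio):
    # Pogson's relation: delta_m = 2.5 * log10(flux ratio)
    return 2.5 * math.log10(ratio)

print(dynamic_range_to_magnitudes(1e5))    # 12.5 (typical CCD)
print(dynamic_range_to_magnitudes(1000))   # 7.5 (photographic plate)
```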

The principal disadvantage of CCDs is that they are physically small and consequently can image only a small region of sky. Typical sizes are 1.0 to 7.5 cm across, much smaller than photographic plates. There is a practical limit to the size of CCDs because of the time required to read them out (except for the latest


Illustration 5: Representative schema of the different ways a photon may interact with a pixel mosaic. The detection algorithm reads all pixels in the CCD mosaic and selects those above a threshold; based on the 3x3 grid of possible interactions, every event is assigned a degree of goodness, according to which pixel events may be inserted in the telemetry or filtered (e.g. a single-pixel event has degree 0, while a 9-pixel event has degree 255).


OTCCDs). Thus, in order to image a large area of sky it is usual to place several chips in a grid (or array/mosaic) in the focal plane rather than fabricating a single enormous chip; the output images are stored in a single MEF file (see Illustration n.2).

---------


CH4 – CCD PIXEL SIZE, FIELD OF VIEW & PSF

In images observed close to the optical axis of a well-designed telescope an angular displacement on the sky is simply proportional to a linear displacement in position in the focal plane. The constant of proportionality is usually called the plate scale (a name which betrays its origin in photographic techniques) and is traditionally quoted in units of arcsec/mm. The plate scale is usually called the pixel scale if the proportionality is between arcsec and pixels on the CCD grid. That is:

p = (delta)'' / (delta)mm    (4.1)

where p is the plate scale in arcsec/mm (or arcsec/pixel), (delta)'' is a displacement on the sky in seconds of arc and (delta)mm is the corresponding displacement in the focal plane in mm. If you know the plate scale and the size of either a single pixel in the grid or the linear size of the CCD, then it is trivial to use Eq. 4.1 to work out either the angle on the sky subtended by a single pixel or the field of view of the CCD, respectively. In the following we will consider the astronomical meaning of this quantity as the size of a single pixel and we will name it Pixel Size (or Pixel Scale). The LBC cameras' pixel scale is not constant over the entire CCD area; the mean value is 0.2244 arcsec/pix.

The manual for the instrument or telescope that you are using will usually quote a value for the pixel scale. However, if necessary it can be calculated from other parameters for the telescope. By simple geometry the pixel scale is the reciprocal of the effective focal length of the system:

p' = 1 / f    (4.2)

where f is the effective focal length of the system and p' is the pixel scale in units depending on the way f is measured. Thus, for f in metres and applying the factor for converting radians to arcsec, the plate scale in arcsec/mm is p = 206.265/f. Moreover, f is itself related to the diameter of the primary mirror, D, and the focal ratio, F:

f = F ⋅ D    (4.4)
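Putting Eqs. 4.2-4.4 together, the pixel scale follows from the mirror diameter, focal ratio and physical pixel size. A sketch in Python; the numbers below are illustrative assumptions, not official instrument values:

```python
# Plate scale p = 206265 / f (arcsec per mm, with f in mm), per Eqs. 4.2-4.4
def pixel_scale_arcsec(D_m, F, pixel_size_um):
    f_mm = F * D_m * 1000.0           # effective focal length, f = F*D
    plate_scale = 206265.0 / f_mm     # arcsec per mm in the focal plane
    return plate_scale * pixel_size_um / 1000.0   # arcsec per pixel

# illustrative numbers only (8.4 m mirror, F/1.48, 13.5 micron pixels)
print(round(pixel_scale_arcsec(8.4, 1.48, 13.5), 3))  # ~0.224
```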

At larger distances from the optical axis there is no longer a simple linear relation between angular displacement on the sky and displacement in position in the focal surface. That is, p varies as a function of position in the focal surface. This effect is usually not important in instruments containing a single chip because of


the small size of individual CCDs. However, it may be important if a grid of chips is used.

For instance, in the LBC chips the distortion at the edge of the FOV is very large (up to 15 arcsec) due to the heavy optical system adopted for the camera. This value depends on the filter used, and the mean value is about 1.7%. These distortions must be taken into account to get a stable astrometric solution; as a result the FWHM of stars is not constant over the entire FOV (see Illustration n.6).

The PSF (Point Spread Function) of a detector is a measure of the resolution of an instrument; the PSF is the diffraction profile of the real distribution of photons focused by any optical system. The sharper the PSF, the higher the spatial (or angular) resolution of the optical system. The PSF is usually characterized by its FWHM (Full Width at Half Maximum), the angular width between the intercepts of the PSF at half the maximum value of the profile (see Illustration n.7).

The FWHM of the LBC PSF varies across the FOV following a general radial 2nd-order power law; the median variation is ~5% but can reach 20% at the edge of the chips.
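For reference, if the PSF is approximated by a Gaussian profile (real PSFs only approximate this), the FWHM follows directly from the Gaussian sigma:

```python
import math

# For a Gaussian PSF the profile falls to half its peak where
# exp(-x^2 / (2 sigma^2)) = 1/2, i.e. x = sigma * sqrt(2 ln 2),
# so FWHM = 2 * sqrt(2 ln 2) * sigma ~= 2.3548 * sigma.
def gaussian_fwhm(sigma):
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

print(round(gaussian_fwhm(1.0), 4))  # 2.3548
```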


Illustration 6: Average distortion of LBC FOV with chip projections. The two circles indicate the radii where the linear distortion is respectively of 2% and 3%.


This effect, together with the pixel size variation with radius, is important especially for surface photometry. The raw images returned by a CCD contain a number of instrumental effects which must be removed before the image can be used for scientific purposes. The next chapters will describe some of these effects and the way we can correct them, taking care to discriminate additive contributions (which should be subtracted) from multiplicative contributions (which should be divided out). The instrumental effects are usually corrected by taking various sorts of calibration frames in addition to the images of the astronomical objects observed. The objects observed are called "target objects" and the observations of them are correspondingly called "target images" or "target frames".

---------


Illustration 7: FWHM of a generic function f(x).


CH5 – CCDs' BAD PIXELs & MASKs

Some of the pixels making up the light sensitive grid may be faulty and return signals which are grossly inaccurate. Such pixels are often referred to as being hot, cold or simply bad. Because of the way that CCDs are read-out, in some circumstances a bad pixel will contaminate all the pixels in its row or column in the grid, leading to entire bad rows or columns. Fabrication techniques have improved markedly in recent years, though bad pixels are still regularly encountered.

The software to process CCD images must contain facilities to handle individual bad pixels, bad rows and bad columns. Typically it will contain options either to select and ignore them or to replace them directly with artificial but reasonable values, usually computed from neighbouring pixels. To detect and create an image mask of bad pixels, many automatic procedures are available; such a procedure should detect the anomalous response of the bad-pixel pattern and flag it. The final flag image is referred to as the static Cosmetic Bad Pixels Mask (BPM) of the CCD array; with time the cosmetic BPM can change, but only for the worse, since electronics ageing or some mechanical shock may corrupt the array further. Though infrared arrays contain individual bad pixels, the way they are read out means that they are unlikely to contain bad rows or bad columns (see figure).
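The replacement of bad pixels with values computed from neighbouring pixels can be sketched as below; this is a toy version, real pipelines use more careful interpolation:

```python
import numpy as np

def repair_bad_pixels(image, bpm):
    """Replace pixels flagged in a bad-pixel mask (bpm == 1) with the
    median of their valid 3x3 neighbours (a toy sketch)."""
    fixed = image.astype(float).copy()
    for y, x in zip(*np.nonzero(bpm)):
        y0, y1 = max(0, y - 1), min(image.shape[0], y + 2)
        x0, x1 = max(0, x - 1), min(image.shape[1], x + 2)
        window = image[y0:y1, x0:x1]
        good = window[bpm[y0:y1, x0:x1] == 0]   # unflagged neighbours
        if good.size:
            fixed[y, x] = np.median(good)
    return fixed

img = np.full((3, 3), 10.0)
img[1, 1] = 9999.0                   # a hot pixel
bpm = np.zeros((3, 3), dtype=int)
bpm[1, 1] = 1
print(repair_bad_pixels(img, bpm)[1, 1])  # 10.0
```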


Illustration 8: LBC Cosmetic Bad Pixels Mask superimposed on a science RAW image.


To produce a BPM mask there are two different procedures:

1) clipping a median image
   • create a good bias/dark image list
   • perform a clipped median to detect pixel values deviating from the averaged median level
   • dilate the obtained mask

2) detect artifacts
   • create a good bias/dark image list
   • detect sources on the positive image
   • convert the positive image to negative
   • perform a second detection on the negative image
   • sum the two segmentations with the "||" OR operator
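Procedure 1 (clipping a median image) can be sketched as follows, omitting the final dilation step for brevity; the 5-sigma threshold is an illustrative choice:

```python
import numpy as np

def make_bpm(frames, nsigma=5.0):
    """Flag pixels whose median value deviates by more than nsigma
    from the global median level of a stack of bias/dark frames
    (procedure 1 above, without the final dilation step)."""
    stack = np.stack(frames).astype(float)
    med = np.median(stack, axis=0)     # per-pixel median image
    level = np.median(med)             # global median level
    sigma = np.std(med)                # scatter of the median image
    return (np.abs(med - level) > nsigma * sigma).astype(int)

# three synthetic dark frames sharing one hot pixel
frames = [np.full((10, 10), 100.0) for _ in range(3)]
for fr in frames:
    fr[2, 3] = 1000.0
bpm = make_bpm(frames)
print(int(bpm.sum()))  # 1
```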

N.B. The source detection should be optimized using a retina file, that is, a neural network trained to recognize such artifacts (see the Crmask/Trails chapter for more).

----------


CH6 - READ-OUT SIGNAL & BIAS

Usually the amplifier which boosts the signal prior to its digitization by the ADC will also generate an offset, or false signal, called the bias, which is imposed in addition to the real signal generated by the illuminating light (there are sound reasons for doing this). This bias varies slightly with position on the chip, can vary slowly with time (though this is minimized if the chip is kept at a constant temperature) and inevitably has noise associated with it. There are two techniques for estimating and correcting the bias.

● Bias Strips / Prescan / Overscan Regions
Here the CCD controller software is written in such a way that the generated images contain regions (usually two narrow strips on either side of the chip) that are created by reading out the CCD without sampling any of its stored charge. These regions are called bias strips or pre/over-scan regions. The values of pixels within these strips consist only of the bias and its noise. Usually, for each row in the image, the pixels in the corresponding row of the bias strips are averaged and the resulting value is subtracted from all the pixels in the row. The bias strips serve no further purpose and can then be discarded, thus reducing the size of the images. The operation of excluding the bias strip is called trimming and consists of removing the bias section from a science image. The following pixel regions are the scientific chip section and the bias strip section:

TRIMSEC => [Xmin:Xmax,Ymin:Ymax] = [51:2098,1:4608]
BIASSEC => [Xmin:Xmax,Ymin:Ymax] = [2099:2304,1:4608]
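The row-by-row overscan subtraction and trimming described above can be sketched as follows; the slices assume a science section spanning columns 51-2098 and a bias strip spanning columns 2099-2304 (1-based), purely for illustration:

```python
import numpy as np

def overscan_correct(raw):
    """Row-by-row overscan subtraction followed by trimming.
    Assumes the science section spans columns 51-2098 and the bias
    strip columns 2099-2304 (1-based); 0-based slices below."""
    science = raw[:, 50:2098].astype(float)     # TRIMSEC
    overscan = raw[:, 2098:2304].astype(float)  # BIASSEC
    bias_per_row = np.median(overscan, axis=1, keepdims=True)
    return science - bias_per_row               # trimmed, debiased frame

# synthetic raw frame: 1000 ADU of signal on top of a 500 ADU bias
raw = np.full((4608, 2304), 1500.0)
raw[:, 2098:] = 500.0          # the bias strip records only the bias
corrected = overscan_correct(raw)
print(corrected.shape)  # (4608, 2048)
```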

● Bias Frames In this situation the entire CCD array is read-out without sampling any stored charge (that is, no light is incident on the detector, with the shutter closed) so that any small scale structure in the noise is detected and can subsequently be corrected for. Such frames are called bias frames. In practice bias frames are acquired by taking short exposures with the shutter closed before or after each night of observing. Typically in order to reduce read-out noise several frames are taken and averaged. The resulting `master' bias frame is then simply subtracted from the genuine image frames.

Which method is preferable depends on the quality and stability of the chip. If the chip and amplifier are stable during the observing session then


observing separate bias frames is straightforward and gives satisfactory results. Conversely, using overscan regions can be more convenient because you do not have to acquire, store and process separate bias frames. Of course, if the CCD controller software does not generate bias strips then you must use separate bias frames.

However you make the bias correction, you need to apply it to all the other frames acquired: target objects, flat fields (see below) etc . Often making the bias correction is the first stage of CCD data reduction.

● Master Bias
Taking a set of bias frames it is possible to create a master bias image, that is, a median stack of all the frames with a sigma-rejection clipping to discard highly discrepant pixel values. For all these bias situations we developed scripts and programs to subtract bias and create master bias calibration frames (see the pipeline documentation for details). It is possible to create a bias file list and check the stability of a list of dark images using the "imstat" program; this is necessary to exclude from the list all images above a threshold given by the MEDIAN:

imstat -H 1 -l 10 -u 30 `cat darkfilelist.txt`

or:

for f in `cat darkfilelist.txt`; do
    imstat -H 1 -l 10 -u 30 $f
done

The final step is to combine all the remaining image files in the list into a MasterBias image, using the mkmasterbias.sh script, which makes use of the "imcombine" program to stack FITS images; the program also generates a BPM mask.
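A minimal stand-in for such a master-bias combination (not the actual mkmasterbias.sh logic, just the idea of a sigma-clipped median stack):

```python
import numpy as np

def master_bias(frames, nsigma=3.0):
    """Median-stack bias frames with per-pixel sigma rejection;
    a minimal sketch of a sigma-clipped median combination."""
    stack = np.stack(frames).astype(float)
    med = np.median(stack, axis=0)          # per-pixel median
    std = np.std(stack, axis=0)             # per-pixel scatter
    clipped = np.where(np.abs(stack - med) > nsigma * std, np.nan, stack)
    return np.nanmedian(clipped, axis=0)

# five synthetic bias frames at level 100, one hit by a cosmic ray
frames = [np.full((4, 4), 100.0) for _ in range(5)]
frames[0][0, 0] = 10000.0
print(master_bias(frames)[0, 0])  # 100.0
```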

Infrared array data may appear to have no bias strips or bias frames. This absence may be due to the bias having already been automatically subtracted or it may be because the bias correction is subsumed into the dark current correction (see below).

Sometimes, in the LBC-Blue channel, the amplifiers do not operate well near the saturation limit, so when a star exceeds the saturation point a trail arises, centred on the star, in the


Illustration 9: The bad trail response of the Blue LBC amplifiers near saturated stars.


read-out direction, and this is very difficult to correct (see Illustration n.9).

As mentioned above, CCD chips have a wide dynamic range within which their response is essentially linear. However, if the illuminating light is sufficiently bright the response will become non-linear and will ultimately saturate (that is, an increase in the intensity of the illumination produces no change in the recorded signal). In principle the response in the non-linear region can be calibrated.

However, in practice, the onset of saturation is sufficiently rapid that it is more sensible to limit exposures to the linear region. In order to prevent saturation it is usual to take a series of short exposures rather than a single long exposure of equivalent duration. The individual short exposures can then simply be added during the data reduction. This technique offers other advantages, for example in the detection and removal of cosmic-ray events (see below). Usually the documentation for the instrumentation that you are using will include the range of intensities over which the response is linear.

The nominal saturation limit for CCDs is about 65536 ADU = 2^16 ADU (16 bit = 2 bytes), but at ~40000 ADU the chips lose linearity between the number of incident photons and the output signal, so the saturation limit should always be set to a value lower than the 65536 ceiling.

The right image acquisition procedure, when very luminous sources are present in a CCD's FOV, is to take several 'short' exposures to reduce (or remove) saturation of the bright sources and 'long' exposures to reach the detection limit of the faint sources. With this procedure the over-threshold bad pixels generated by saturated objects in the long exposures can be replaced (especially if the exposures are dithered) by the correct values from the short, unsaturated images.
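As an illustrative sketch of this strategy (not part of the pipeline; the function name, threshold and arrays are made up for the example), saturated pixels in a long exposure can be replaced by exposure-time-scaled values from a short exposure:

```python
import numpy as np

# Illustrative sketch (not pipeline code): replace pixels above a conservative
# saturation threshold in a long exposure with short-exposure values scaled
# by the exposure-time ratio.
SATURATION_ADU = 40000.0   # linearity limit, below the 65536 ADU digital ceiling

def patch_saturated(long_img, short_img, t_long, t_short, sat=SATURATION_ADU):
    scale = t_long / t_short               # flux ratio between the two exposures
    patched = long_img.astype(float).copy()
    bad = long_img >= sat                  # pixels beyond the linear regime
    patched[bad] = short_img[bad] * scale  # substitute scaled short-exposure values
    return patched

long_img = np.array([[100.0, 65535.0], [200.0, 300.0]])
short_img = np.array([[10.0, 5000.0], [20.0, 30.0]])
out = patch_saturated(long_img, short_img, t_long=300.0, t_short=30.0)
```

In a real reduction the substitution is done on registered, dithered frames; the sketch only shows the threshold-and-scale idea.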

----------

CH7 - THERMAL NOISE & DARK CURRENT

Another effect which is sometimes present is an offset from zero that is generated thermally within the CCD, even when no light is present. This offset is termed the dark current because it is present whether the shutter is open or closed. It varies somewhat from pixel to pixel and slowly with time (as long as the chip is kept at a constant temperature). It is usually minimized by cooling the CCD to the temperature of liquid nitrogen.

If necessary, the dark current can be measured by taking long exposures with the shutter closed (and usually with a narrow blue filter, to reduce any entering light to a minimal level); the dark images must then be bias-subtracted, corrected for cosmic-ray events (see below) and divided by the exposure time. The electronics of a CCD usually remains unchanged or, at most, degrades slowly, so it is better to choose dark images with a date close to the scientific observation or, alternatively, later. The dark current response is then scaled to the exposure time of each target image and subtracted from the target image. However, the dark current is usually insignificant (and ignored) for visible-light CCDs (e.g. theoretical studies for LBC@LBT predict that the dark current should contribute only ~3 ADU every 15 minutes of integration, and the experimental measurement found ~0.01 ADU/s, i.e. about ~6 ADU every 10 minutes), but it is important for infrared arrays, as in the HAWK-I@VLT instrument. Dark frames should be taken frequently throughout the observing session.

Removing DARK current
A good way to remove the spurious counts that depend on the exposure time is to take a set of DARK images having the same exposure time as the observation we want to correct. For IR instruments it is very frequent that science images are distributed as mosaics (often created with the "nodding" technique), so each science image is already a coadd of a certain number of RAW images. In these files a keyword named "NDIT" is always present, which represents the number of input RAW images used to produce the science mosaic frame.

########
THIS STEP IS CRUCIAL: to produce a usable "masterdark" image it is necessary to get dark frames having the same exposure time as the science images we want to correct and, in the case that science images are already mosaics of a particular number (NDIT) of RAW images, the NDIT parameter should be the same.
########

Normally the spurious counts should rise linearly with the exposure time:

Dark(T) ≈ Bias + a·T    (7.1)
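The linear law of Eq. 7.1 can be checked directly. The sketch below (synthetic numbers, not real LBC/HAWK-I measurements) fits a straight line to the median dark level at several exposure times, so the slope estimates the dark rate a and the intercept the bias:

```python
import numpy as np

# Synthetic check of Eq. 7.1 (numbers are made up, not instrument data):
# median dark level [ADU] measured at several exposure times [s].
t = np.array([60.0, 300.0, 600.0, 900.0])
dark_level = np.array([200.6, 203.0, 206.0, 209.0])

# Least-squares line: slope = dark rate "a" [ADU/s], intercept = bias [ADU]
a, bias = np.polyfit(t, dark_level, 1)
```

With the values above the fit returns a dark rate of 0.01 ADU/s over a 200 ADU bias, the same order as the LBC measurement quoted earlier.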

It is possible to check the stability of a list of dark images using the "imstat" program; this is necessary to exclude from the list all images whose level deviates from the MEDIAN by more than a given threshold:

imstat -H 1 -l 10 -u 30 `cat darkfilelist.txt`

or:

#bash
for dark in `cat darkfilelist.txt`; do
    imstat -H 1 -l 10 -u 30 $dark | sed -n 2,2p
done

The final step is to combine all remaining images in the list into a MasterDark image, using the mkmasterdark.sh script, which makes use of the "imcombine" program to stack FITS images; the program also generates a BPM mask (for memory reasons we recommend not to exceed 12 simultaneously processed images!).

for i in `seq 1 $numdarklist`; do
    IMCOMBINE --blank 0 -r sigclip -n 3.0 -T '[1:2048,1:2048]' -o $masterdark -v ${filenames}
done

### DEFAULT OPTIONS USED IN THE ALGORITHM ###
# -r sigclip => reject pixels using a sigma-clipping algorithm; available choices {none|minmax|sigclip}
# -n 3.0 => sigma factor for rejecting deviant points from the clipped median [default 3.00 in HAWK-I]
# -T '[1:2040,1:2048]' => image section defining the trimmed output (this is the default for HAWK-I, while we use '[51:2098,1:4608]' for LBC)

IMCOMBINE ALGORITHM:
The IMCOMBINE program executes the pixel-to-pixel combining using a computed weight. The algorithm works as follows, looping over all input images:

1) for each image, stack pixels applying a rejection algorithm
2) calculate a scale given as MEDIAN(STACK[i])
3) get the inverse as INVSCALE[i] = 1/MEDIAN(STACK[i])
4) multiply STACK[i] = DATA[i]*INVSCALE[i]
5) update the BPM according to the stacked BPMs[i] and the rejected pixels.
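A minimal numpy sketch of this combining scheme (not the actual IMCOMBINE source; the frame values are made up) might look like:

```python
import numpy as np

# Minimal sketch of the combining scheme (not the IMCOMBINE source):
# scale every frame by the inverse of its median, then take a sigma-clipped
# median through the stack, pixel by pixel.
def combine(frames, nsigma=3.0):
    frames = np.asarray(frames, dtype=float)
    scales = np.median(frames, axis=(1, 2))        # MEDIAN(STACK[i])
    stack = frames / scales[:, None, None]         # STACK[i] = DATA[i] * INVSCALE[i]
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    reject = np.abs(stack - med) > nsigma * sig    # sigma-clipping rejection
    clipped = np.ma.masked_array(stack, mask=reject)
    return np.ma.median(clipped, axis=0).filled(np.nan)

# Three frames at different overall levels combine to a unit-level master frame
frames = [np.full((2, 2), 10.0), np.full((2, 2), 20.0), np.full((2, 2), 40.0)]
master = combine(frames)
```

The inverse-median scaling is what allows frames taken at different levels to be stacked together; the rejected pixels would also feed the BPM in the real program.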

Correcting for Dark Current
To correct for the dark current in the chips, the masterdark image should be subtracted from the science images.

---------

CH8 - PHOTON NOISE, SNR, GAIN & RON

The final, irreducible, source of noise is the photon noise due to the poissonian nature of counting photons. The error in the signal is proportional to the square root of the signal.

err(X) = √X ,  ⟨X⟩ ≡ X ± √X    (8.1)

Any measurement made by a CCD is in units of counts; the conversion factor between counts and electrons is called the "gain" of the CCD. The gain value is set by the electronics that read out the chips and is expressed in units of [electrons/count], though the number is an average, since it is not possible to split an electron charge into fractional parts. In any case "counts" do not provide an absolute comparison between different cameras, since the gain varies from chip to chip.

Since the photons recorded by the CCD arrive in random packets (courtesy of nature), observing the same source many times records a different number of electrons every time. This variation is a random error, or "noise", that is added to the true "signal". The concepts of "signal" and "noise" can be described in terms of their ratio (SNR, or Signal-to-Noise Ratio). The signal is defined as the quantity of information you measure in the image; in other words, the signal is the number of electrons recorded by the CCD or the number of counts present in the CCD image. The noise is the uncertainty in the signal. So it is possible to measure the gain of a CCD by comparing the signal level to the amount of variation in the signal (i.e. measuring the signal and its variation within the same region of pixels at different intensity levels or, alternatively, pixel-by-pixel).

Signal (SE for electrons and SC for counts) and Noise (NE for electrons and NC for counts) are simply related through the “gain” factor (g):

S_E = g × S_C    S_C = (1/g) × S_E
N_E = g × N_C    N_C = (1/g) × N_E    (8.2)

here the contributions to the noise (N_E) depend on various sources; defining the ReadOutNoise (RON), we have:

N_E² = RON_E² + err_E² + err_Evar²

or, in counts:

g²·N_C² = g²·RON_C² + g²·err_C² + g²·err_Cvar²    (8.3)

Where, for the poissonian nature of light, err_E² ≡ S_E (i.e. err_C² = S_C/g), and err_Cvar² is the variance of the ExtraNoise term; so

N_C² = (1/g)·S_C + RON_C² + err_Cvar²

where 1/g is the slope of the linear regression of N_C² against S_C, and the intercept at S_C = 0 gives the RON term:

N_C²(S_C = 0) = RON_C² + err_Cvar²

All other quantities can be calculated from the gain and RON values.

---------

CH9 - PIXEL SENSITIVITY & FLAT FIELDING

Due to imperfections in the manufacturing process the sensitivity of the pixels will vary slightly (usually by a few percent) across the grid. This effect is essentially random, and is not a function of, for example, position on the grid. The relative sensitivities of the pixels can be calibrated by imaging an evenly illuminated source, such as the twilight sky, and examining the variation in values recorded. Once this calibration is known, astronomical images of the sky can be corrected to the values they would have had if all the pixels had been uniformly sensitive. This correction is known as flat fielding, and images of evenly illuminated sources, such as the twilight sky, are known as flat fields. The pixel-to-pixel sensitivity variations change with wavelength, so the flat fields should always be acquired using the same filter as the observations of the target objects. The flat fielding procedure also corrects for several other effects:

1. small sharp dark features with the same percentage absorption on all flat fields. These come from dust particles on the CCD chip.

2. vague ring- or torus-shaped features. These come from dust on the filters, which are out of focus as seen from the chip. They are the same on all exposures with the same filter, but obviously differ from filter to filter, and can differ from time to time.

3. vignetting, the dimming of objects observed towards the edge of the telescope field of view. Vignetting is caused by various out-of-focus obstructions in the light path, such as the support for the secondary mirror.

Two types of flat fields are usually used: "dome flats" and "sky flats". A brief description of these two kinds of flat field follows.

- Dome flats are images of the inside of the telescope dome, illuminated by a bright continuum source free of emission lines. The interior surface of the dome is usually a smooth, diffuse reflector and is completely out of focus for the telescope optics. Consequently the image recorded is completely featureless. Dome flats are convenient because they can be taken in unlimited numbers during the day, rather than at night or during twilight when time is short. However, they have two disadvantages:

● light reflected from the dome is incident on the telescope at a slightly different angle to light from the sky. This difference does not affect the pixel-to-pixel sensitivity variations but can affect the vignetting and the shape of the images caused by dust particles
● the color (that is, the wavelength distribution) of the lamp is not the same as that of the night sky. This effect is more important for observations made through a broad-band filter than a narrow-band one and can lead to fringing (see below).

- Sky flats are images of the sky taken during twilight when it is relatively bright. To obtain a good flat field in this situation the sky should be much brighter than any stars, which happen to be in the field of view, but not bright enough to saturate the chip.

The optimum time to acquire the flat field depends on the filter: a narrow filter, a filter corresponding to a wavelength for which the chip is insensitive, or to a wavelength range where the Sun emits little light (such as the U band), can be used nearer to sunrise or sunset than a broadband filter at the peak of the chip's sensitivity. In an optimally exposed flat field the photon noise (see below) is negligible but the image is not saturated. However, it can sometimes be difficult to judge the exposure time correctly, particularly for frames acquired close to sunrise or sunset. At sunset the sky temperature becomes cold, and the cooling atmospheric particles recombine and emit (recombination lines may introduce strange features in the flat fields); at sunrise instead the sky temperature increases, so its spectrum is in absorption. The number of counts in the flat field therefore depends on the time of day when the image is taken, and to get the most stable flat fields it is recommended to take images during sunrise. Also, in such frames the interior of the dome is illuminated by sunlight and this light reaches the chip by internal reflections in the telescope. Thus sky flats show some of the vignetting and dust effects seen in dome flats. De-focussing the telescope to make any star images present less prominent is usually not viable because it may change the vignetting function.

An alternative to taking flat fields during twilight is to take them during the night. This approach is particularly common for infrared observations because at these wavelengths the sky is relatively bright. It is possible to combine different sorts of flat fields to obtain the advantages of each. For example, you could use dome flats to correct pixel-to-pixel sensitivity variations and twilight flats to correct large-scale effects such as vignetting.

In outline, you use the flat fields to correct the target exposures as follows. Choose several correctly exposed flat fields, de-bias them and combine them into a single Master FlatField image.

The de-biassed images of the target objects are simply divided by this master flat field. You should always calibrate target images using flat fields obtained through the same filter (that is, in the same color) and on the same night. Flat fields acquired with a 16-bit camera should ideally have a mean pixel count of around 20,000 in order to allow high accuracy to be obtained.
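The de-bias and divide step can be sketched as follows (illustrative arrays and function name; the master flat is normalized to unit median before the division):

```python
import numpy as np

# Schematic calibration (illustrative values, not pipeline code): de-bias the
# raw frame, then divide by a master flat normalized to unit median.
def calibrate(raw, masterbias, masterflat):
    flat_norm = masterflat / np.median(masterflat)   # unit-median flat
    return (raw - masterbias) / flat_norm

raw = np.array([[1100.0, 1050.0], [1100.0, 1050.0]])
masterbias = np.full((2, 2), 100.0)
masterflat = np.array([[1.0, 0.95], [1.0, 0.95]])    # one column 5% less sensitive
sci = calibrate(raw, masterbias, masterflat)         # sensitivities equalized
```

After the division the two columns, which differed by 5% in the raw frame, carry the same calibrated value.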

At infrared wavelengths the night sky is sufficiently bright that it can be used to construct flat field frames, and this is the usual procedure, rather than acquiring twilight or dome flat fields. A particular disadvantage of dome flats is that they may contain a blurred image of the telescope reflected off the dome, because the telescope glows at these wavelengths.

---------

CH10 - FLAT FIELDS CALIBRATION

We can now investigate more deeply the way to obtain a good calibration using flat fields calibration images.

The starting point for determining the flat field calibration is an observation of something which should have uniform response at all points on the detector. In addition, the color of the light falling on each pixel should be the same as in an observation, so the same filter must be used when determining the flat field (the issue of matching the color of the objects observed at the appropriate pixels is ignored here). The best calibration observation is of a blank sky. If an accurate blank sky observation can be obtained then this is all that is needed for a flat field calibration. This type of flat field might be called a "SKY FLAT", though this term is more often used for a type of flat field described below. There are two difficulties with this type of calibration: finding a really blank sky, and getting a sufficiently accurate measurement without using all the observing time.

It is usually not possible to get a blank sky observation accurate enough to calibrate the individual pixels without introducing undesirable noise. What is generally done is to use a lamp to either uniformly illuminate a part of the dome or directly illuminate the field of view. The first type of observation is called a "DOME FLAT" and the second is called a "PROJECTIVE FLAT". We shall call both of these types of observations "LAMP FLAT FIELDS".

If the illumination is truly uniform then these types of observations are sufficient for flat field calibration. To get a very accurate flat field many observations are made and then combined.

Unfortunately, it is sometimes the case that the lamp flat fields do not illuminate the telescope/detector in the same way as the actual observations. Calibrating with these flat fields will introduce a residual large scale illumination pattern, though it will correctly calibrate the relative pixel responses locally. There are two ways to correct for this effect:

1. correcting the flat field observation;
2. applying the uncorrected flat field to the observations and then applying an "Illumination Correction" as a separate operation.

The first is more efficient, since it consists of a single correction applied to each observation, but in some cases the approximate correction is desired immediately, the observation needed to make the correction has not been taken yet, or the residual illumination error is not discovered until later.

For the two methods there are two types of correction. One is to use a "Blank Sky Observation" to correct for the residual illumination pattern. This differs from using the sky observation directly as a flat field calibration in that only the large-scale pattern is needed. Determining the large-scale illumination does not require high signal-to-noise at each pixel, and faint objects in the image can be either eliminated or ignored. The second method is to remove the large-scale shape from the lamp flat field. This is not as good as using a blank sky observation but, if there is no such observation and the illumination pattern is essentially only in the lamp flat field, this may be sufficient.

For "Spectroscopic Flat Fields" there are some differences: they differ from direct imaging in that the spectrum of the sky or lamp, and transmission variations with wavelength, are part of the observation. Applying such images will introduce the inverse of the spectrum and transmission into the observation. It also distorts the observed counts, making signal-to-noise estimates invalid. This, and the low signal in the dispersed light, makes it difficult to use blank sky observations directly as flat fields. As with direct imaging, sky observations may be used to correct for illumination errors if necessary. At sufficiently high dispersion the continuous lamp spectrum may be flat enough that the spectral signature of the lamp is not a problem. Alternatively, flux calibrating the spectra will also remove the flat field spectral signature. The spectroscopic flat fields also have to be corrected for regions outside of the slit or apertures, to avoid bad response effects when applying the flat field calibration to the observations.

The basic scheme for removing the spectral signature is to average all the lines or columns across the dispersion and within the aperture to form an estimate of the spectrum. In addition to the averaging, a smooth curve is fit to the lamp spectrum to remove noise. This smooth shape is then divided back into each line or column, eliminating the shape of the spectrum without changing the shape in the spatial direction or the small-scale response variations. Regions outside of the apertures are replaced by unity. This method requires that the dispersion be aligned fairly closely with either the CCD lines or columns.
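Assuming the dispersion runs along the columns, this scheme can be sketched as below (the function name and the low-order polynomial fit are illustrative choices, not the actual smoothing used by any particular package):

```python
import numpy as np

# Sketch of the spectral-signature removal: average along the spatial axis to
# estimate the lamp spectrum, fit a smooth curve to it, and divide that shape
# back out of every spatial line. Dispersion is assumed along axis 1 (columns).
def remove_spectral_signature(flat, order=3):
    spectrum = flat.mean(axis=0)                            # mean lamp spectrum
    x = np.arange(spectrum.size)
    smooth = np.polyval(np.polyfit(x, spectrum, order), x)  # smooth fit to the spectrum
    return flat / smooth[None, :]                           # divide the shape back out

# A purely linear lamp spectrum, identical on every spatial line,
# is flattened to unity:
flat = np.tile(np.arange(1.0, 6.0), (3, 1))
resp = remove_spectral_signature(flat, order=1)
```

In a real spectroscopic flat only the pixels within the aperture would enter the average, and regions outside the aperture would be set to unity.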

All pixels in a flat field have a different efficiency (ε_i), so they must be normalized to the same unitary efficiency:

I_i = ε_i · I    (10.1)

To reduce noise it is better to stack a set of flat images, and it is better to use images with the same filter and a longer exposure time (at least ten seconds).

In any case, any flat field image always contains the additive BIAS level (or DARK, with the same exposure time as the observation), which must be subtracted before stacking the images to create the Master Flat Field. It is possible to check the stability of the levels in a list of flat images using the "imstat" program; this is necessary to exclude from the list all saturated images and any image whose MEDLEVEL is not in the range 15000-20000 ADUs:

imstat -H 1 -l 10 -u 30 `cat flatfilelist.txt`

or:

#bash
for flat in `cat flatfilelist.txt`; do
    imstat -H 1 -l 10 -u 30 $flat | sed -n 2,2p
done

The final step is to combine all remaining images in the list into a MasterFlat image, using the "mkmasterflat.sh" script, which makes use of the "mkmasterflat" program to stack FITS images and normalize them to unity at the center of the FOV, using a sigma-clipping algorithm; the program also generates a BPM mask. For some detectors each chip has a different value of the GAIN (g^j), so any flat image will be:

F_i^j ≈ ε_i^j / g^j    (10.2)

where j = 1,2,3,4 is the chip number (LBC and HAWK-I have the same number of chips), and i = 1,...,2048 is the pixel index along a row. The scientific image will then have a contribution B_i from the BACKGROUND and a contribution S_i from the SOURCE:

(ε_i^j / g^j) × (B_i + S_i)    (10.3)

Using a global normalization factor (c) it is possible to substitute and obtain the final flat-fielded image, where FF_i^j is the Master Flat field:

I_i^j ≈ [(ε_i^j / g^j)·(B_i + S_i)] / FF_i^j = [(ε_i^j / g^j)·(B_i + S_i)] / [(ε_i^j / g^j)·c] = (B_i + S_i) / c    (10.4)

Creating a MasterFlat image
Here is the algorithm used to compute the masterflat image and its BadPixelMask.

for i in `seq 1 $numflatlist`; do
    MKMASTERFLAT --blank 1.0 -r sigclip -n 3.0 -B '[1:2048,1:2048]' -T '[1:2048,1:2048]' -I -M $masterdark -l 0.01 -u 1.5 -o $masterflat --outbpm $masterflatbpm -v ${filenames}
done

### DEFAULT OPTIONS USED IN THE ALGORITHM ###
# -r sigclip => reject pixels using a sigma-clipping algorithm; available choices {none|minmax|sigclip}
# -n 3.0 => sigma factor for rejecting deviant points from the clipped median [default 3.00 in HAWK-I]
# -T '[1:2040,1:2048]' => image section defining the trimmed output (this is the default for HAWK-I, while we use '[51:2098,1:4608]' for LBC)
# -B '[1:2040,1:2048]' => overscan bias image section (this is the default for HAWK-I, while we use '[2099:2304,1:4608]' for LBC)
# -I => disable debiasing along the y-axis
# -M $masterdark => include a masterdark image in the computation of the masterflat
# -l 0.1 => low threshold to be applied to the output pixels; pixels below this threshold are set to the blank value (if INDEF the threshold is not used [INDEF])
# -u 1.5 => high threshold to be applied to the output pixels; pixels above this threshold are set to the blank value (if INDEF the threshold is not used [INDEF])
# --blank 1.0 => value to use for undefined pixels; allowed values are NaN (Not a Number), inf (infinity) and any real number {NaN|inf|real}

MKMASTERFLAT ALGORITHM:

Stack pixel-to-pixel from the FLAT RAW images (the maximum number depends on CPU/memory, normally <= 12 images). For each chip extension "i" of the FLAT RAW:

1. STACK[i] = (HDU[i] - MBIAS[i])
   # subtract the masterbias (if the option is selected)
2. debias the STACK[i] matrix
   # debias frames (if the option "no-debias" is not selected);
   # the debias algorithm works as follows:
   a) perform a clipped median on the BIASSEC, returning a one-dimensional vector (using reshape and interpolate functions)
   b) subtract the one-dimensional vector from the input data
3. create a similar BPMSTACK[i]
   # stack created with a pixel-to-pixel LOGICAL-OR, with all bad pixels derived from pixels rejected outside the selected sigma or outside the --lower / --upper limits
4. normalize the flat (using the STACK vector, CORRDATA, BPMSTACK):
   a) if a correction image (CORR) is selected: STACK = STACK*CORR
   b) create a normalization matrix (NORM): NORM = MEDIAN(STACK)
   c) invert the normalization matrix (INVNORM): INVNORM = 1/NORM
   d) multiply to obtain the output (NORMSTACK): NORMSTACK = STACK*INVNORM

Equalizing pixel-2-pixels sensitivities (apply MasterFlat image)

The operation of applying the masterflat images to scientific data images is performed by the “prereduce operation”, see chapter n. 12 for a detailed description.

N.B.: autoflat is not a particular flat field but a script designed for taking twilight flat field frames automatically, with minimal intervention from the observer. autoflat is an automatic procedure for determining exposure times for flats and taking a series of dome or sky flats. Once called, the script will wait until official twilight (evening) or nautical twilight (morning) before starting to take test exposures at regular intervals to monitor the sky brightness. Once the sky is sufficiently dark (evening) or bright (morning), the telescope scratch mode will be disabled and the script will start to take real flat field frames.

----------

CH11 - NATURE OF CCDs' OUTPUT

Imagine looking at a single CCD chip as a matrix of (i,j) pixel elements; we then have to study what happens when a single pixel of the chip is illuminated by electromagnetic radiation, in order to learn what kind of output it will generate.

The nominal output “Xij” of a CCD-element to a quantum of light “Iij” can be given as

X_ij = M_ij · I_ij + A_ij + F_ij(I_ij)    (11.1)

where the additive contribution "A_ij" is caused by the dark current, by pre-flashing, by charge that may have skimmed from columns having a deferred charge (skim), and by a bias added to the output electronically to avoid problems with digitizing values near zero. The quantum and transfer efficiency of the optical system enter into the multiplicative term "M_ij". The term "I_ij" consists of various components:

● object photons● sky photons● photons emitted by telescope structure

It is known that the response of a CCD can show non-linear effects that can be as large as 5-10%. These effects are represented by the term “Fij”.

In the following we ignore the pre-flash and skim term, and we only take the bias and dark frames into account. The objective in reducing CCD frames is to determine the relative intensity “Iij” of a science data frame. In order to do this, at least two more frames are required in addition to the science frame, namely:

● dark frames to describe the term “Aij”● flat frames to determine the term “Mij”.

As is known, the dark current is measured in the absence of any external input signal; by considering a number of dark exposures, a mean ⟨dark_ij⟩ can be determined:

⟨dark_ij⟩ = dark_ij + bias    (11.2)

The method to correct the frame for multiplicative spatial systematics is known as flat fielding. Flat fields are made by illuminating the CCD with a uniformly emitting source.

The flat field then describes the sensitivity over the CCD, which is not uniform. A mean flat field frame with a higher S/N ratio can be obtained by combining a number of flat exposures. The mean flat field ⟨flat_ij⟩ can be described by:

⟨flat_ij⟩ = M_ij · icons + dark_ij + bias    (11.3)

So the science frame ⟨science_ij⟩ is:

⟨science_ij⟩ = M_ij · intens_ij + dark_ij + bias    (11.4)

where intens_ij represents the intensity distribution on the sky, and icons the brightness distribution from a uniform source. If icons is set to the average signal of the dark-subtracted flat, the term takes the form:

icons = ⟨flat − dark⟩    (11.5)

and the reduced intensity frame intens_ij will have data values similar to the original science frame.

Combining Eqs. (11.2), (11.3) and (11.4), we can isolate the intens_ij term:

⟨intens_ij⟩ = [science_ij − ⟨dark_ij⟩] / [⟨flat_ij⟩ − ⟨dark_ij⟩] · icons    (11.6)

Here icons can be any number, and ⟨dark_ij⟩ now denotes a dark frame obtained by e.g. applying a local median over a stack of single dark frames. The dark exposure used for the flat may not necessarily be the same frame used to subtract the additive spatial systematics from the raw science frame.
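Equation 11.6 can be exercised on synthetic frames (all values made up) to check that the intensity term is recovered:

```python
import numpy as np

# Synthetic check of Eq. 11.6: build science and flat frames from a known
# response M, an additive dark+bias term and a known intensity, then verify
# that the reduction recovers the intensity (icons = 1).
def reduce_frame(science, mean_dark, mean_flat, icons=1.0):
    return (science - mean_dark) / (mean_flat - mean_dark) * icons

M = np.array([[1.0, 1.2], [0.8, 1.0]])     # multiplicative response M_ij
dark = np.full((2, 2), 50.0)               # additive term (dark + bias)
intens = np.full((2, 2), 100.0)            # true sky intensity

science = M * intens + dark                # Eq. 11.4
flat = M * 1.0 + dark                      # Eq. 11.3 with icons = 1
recovered = reduce_frame(science, dark, flat)
```

Because the same M and dark enter both frames, the ratio cancels the multiplicative and additive systematics exactly, leaving the input intensity.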

The mean absolute error of intens(i,j), with icons = 1 (only the first letter of each term is used for abbreviation: I = intens, S = science, D = dark, F = flat), is:

∂I² = (∂I/∂S)²·∂S² + (∂I/∂D)²·∂D² + (∂I/∂F)²·∂F²    (11.7)

Computing the partial derivatives we get:

∂I² = [(F−D)²·∂S² + (S−F)²·∂D² + (S−D)²·∂F²] / (F−D)⁴    (11.8)

A small error ∂I is obtained if ∂S, ∂D and ∂F are kept small.

This is achieved by averaging the Dark, Flat and Science frames. ∂I is further reduced if S = F; then Eq. 11.8 simplifies to:

∂I² = (∂S² + ∂F²) / (F−D)²    (11.9)

This equation holds only at levels near the sky-background and is relevant for detection of low-brightness emission. In practice however it is difficult to get a similar exposure level for the flat frame and science since the flats are usually measured inside the dome. From this point of view it is desirable to measure the empty sky (adjacent to the object) just before or after the object observations. In the case of infrared observations this is certainly advisable because of variations of the sky on short time scales.

----------

CH12 – PRE-REDUCTION

Once the MasterBias (or MasterDark) and MasterFlat images have been obtained, it is possible to run the preliminary standard reduction procedure, called "pre-reduction". The pre-reduction of LBC can be performed with the "prereduce.sh" script, which, for each image in the input RAW file list, performs the following reduction steps:

1. crosstalk: when a pair of CCDs is read out simultaneously, one chip (the "killer") affects the counts in the other chip (the "victim"), adding or subtracting ADUs from the real value; it is necessary to equalize and correct for this effect (use the xtalk routine).

The cross-talk algorithm performs the following operations:
a) loop over all image chips and calculate the corrected chip matrix:

corrected = victim + xtcoeff * (killer1 + ... + killerN)
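As an illustration of the formula above (the coefficient and the counts below are invented, not LBC calibration values), the correction can be sketched per pixel as:

```python
# Hypothetical cross-talk correction: corrected = victim + xtcoeff * sum(killers)

def correct_crosstalk(victim, killers, xtcoeff):
    # victim: list of pixel values; killers: list of equally-sized chip lists
    return [v + xtcoeff * sum(ks) for v, ks in zip(victim, zip(*killers))]

victim  = [100.0, 100.0]
killer1 = [50000.0, 0.0]      # e.g. a bright star read out on a killer chip
killer2 = [0.0, 0.0]
corrected = correct_crosstalk(victim, [killer1, killer2], xtcoeff=-1.0e-4)
# a negative xtcoeff depresses the victim counts under the killer star
```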

2. prereduce (debias/flat-fielding): subtract the masterBias (or masterDark) from the images, after first subtracting the bias trend of the overscan region (using a Chebyshev function of order 10 and binsize 10), and divide them by the masterFlat field; it also searches for saturated pixels and updates any input BPM provided.

The prereduce algorithm should be run with the following options:

PREREDUCE -N -M masterbias_or_masterdark -F $masterflat
          -B '[1:2048,1:2048]' -T '[1:2048,1:2048]' -s 50000

where a masterbias should be preferred if the dark current is negligible; otherwise a masterdark image should be used, especially for IR detectors.

### DEFAULT OPTIONS USED IN THE ALGORITHM ###
# -s 50000 => Pixels with values equal to or greater than this value in the input data are identified as saturated, using the mask value set by the --saturation-flag option.
# -N => Disable input image debiasing.
# -n 3.0 => Sigma factor for rejecting deviant points from the clipped median [default 3.00 in HAWK-I].
# -T '[1:2040,1:2048]' => Image section defining the trimmed output (this is the default for HAWK-I, while we use '[51:2098,1:4608]' for LBC).
# -B '[1:2040,1:2048]' => Overscan bias image section (this is the default for HAWK-I, while we use '[2099:2304,1:4608]' for LBC).
# -M masterbias => Subtract masterbias from each input image [for IR detectors prefer the MASTERDARK].
# -F masterflat => Apply flat-field correction to each input image.


PREREDUCE ALGORITHM: perform the basic reduction by debiasing, subtracting a MASTERBIAS and dividing by a MASTERFLAT image. For each chip extension "i" of the FITS file:

1. DBIMAGE = debias(image) → (see mkmasterflat for the detailed algorithm)
2. FLATTEDIMAGE = (DBIMAGE / MASTERFLATFIELD)
3. update PREREDUCTION_MASK: identify saturated (or undefined) pixels and all pixels outside the rejection sigma threshold, also performing a LOGICAL_OR with the input BADPIXELS_MASK and, if provided, the MFLAT_BADPIXELs_MASK

The input MasterBias (or MasterDark) must be created before running the pre-reduction; moreover, the masterFlat is normalized to 1, so it does not depend on the integration time of the images. The output BPM is called the pre-reduction mask and can be used as the input BPM for the following reduction steps.
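A minimal numerical sketch of steps 1-3 above (the pixel values, mask bit and helper names are illustrative assumptions, not the pipeline's actual implementation):

```python
SATURATION = 50000.0   # corresponds to the -s 50000 option
SAT_FLAG = 4           # hypothetical mask bit for saturated pixels

def prereduce_pixels(raw, masterbias, masterflat, bpm):
    """Debias, flat-field and update the pre-reduction mask, pixel by pixel."""
    out, mask = [], []
    for r, b, f, m in zip(raw, masterbias, masterflat, bpm):
        out.append((r - b) / f)               # masterflat is normalized to 1
        sat = SAT_FLAG if r >= SATURATION else 0
        mask.append(m | sat)                  # LOGICAL_OR with the input BPM
    return out, mask

# first pixel is saturated; second sits in a low-sensitivity (flat=0.5) region
pix, mask = prereduce_pixels([60000.0, 1100.0], [500.0, 500.0], [1.0, 0.5], [0, 1])
```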

N.B.: For IR detectors like the HAWK-I instrument, the first x-talk step is usually skipped, in order to equalize and subtract ghost sources in the following reduction steps. For HAWK-I, if old images are present (date < October 2008), it is necessary to correct the header for the bad angle notation before running prereduce. You must run the correctheader.sh script, which runs the "fitsedit" program to update and adjust the CD1_2 and CD2_1 keywords inside the RAW FITS file. You can perform the operation by hand with a loop of this form:

for j in `seq 1 4`; do
    cd1_2=`get_keyword_value $filename CD1_2 $j`
    FITSEDIT -e $j -x $cd1_2 -t f -v -u CD1_2 $filename
    cd2_1=`get_keyword_value $filename CD2_1 $j`
    FITSEDIT -e $j -x $cd2_1 -t f -v -u CD2_1 $filename
done

For HAWK-I images it is also necessary to add an FWHM keyword (script addfwhm.sh) to be used during the reduction; the telescope's FWHM value is not well calculated and should be set to the mean of the "FWHM START" and "FWHM END" values in the HIERARCH section of the header.

----------


CH13 - COSMIC-RAY EVENTS & ANTICOINCIDENCE

Astronomers usually refer to spurious signals in CCD frames caused by ionizing radiation as cosmic-ray hits, or cosmic-ray events. However, these terms are slightly misleading, as the ionizing events are as likely to be due to background terrestrial radiation as to cosmic rays. When a cosmic-ray particle hits a CCD pixel it causes an increase in charge which is indistinguishable from the arrival of focused photons. These spurious signals are usually (though not always) confined to a single pixel. Cosmic-ray hits appear as a set of pixels with intense values sparsely scattered over the CCD frame. Typically an exposure of a few minutes might have hundreds (or thousands) of cosmic-ray hits, depending on the chip dimensions. The location of the hits within the chip is random. If several frames of the same target object or flat field have been acquired (for example to avoid saturation, see above), then the cosmic-ray hits will occur at different positions in each frame, and it is possible to detect and remove them by comparing corresponding pixels in the different images and rejecting those with aberrantly large values.

The algorithms used to flag cosmic-ray events are so-called "derivative functions", which act on the chip pixels and flag objects whose derivative explodes on the scale of 1 or 2 pixels; a Laplacian filter is usually used for this task. To select an event as a cosmic ray, it is important that the detected FWHM be significantly smaller than the average PSF of the detector or the seeing of the exposure. The algorithm may be run for more than one iteration on the same image (two iterations with a 4.5 sigma threshold are a good compromise). The standard LBC pipeline package has a script to perform these tasks, called "crmask.sh"; the output consists of an updated version of the BPM provided in input (i.e. the one resulting from the prereduce algorithm).

The CRMASK algorithm should be run with the following options:

CRMASK -c -m $prereducemask -g 1.705 -g 1.870 -g 2.110 -g 1.735 -r 5 -N 2 -n 4.5 $filename

###DEFAULT OPTIONS USED IN THE ALGORITHM###

# -g 1.705 -g 1.870 -g 2.110 -g 1.735 => Gain in electrons/ADU; multiple insertions apply different values to each chip extension (these are the defaults for HAWK-I; for LBC detectors the gain is ~2.0, see table).

#-m $prereducedmask => It is the input badpixels_mask to be used.

#-n 4.5 => Sigma threshold for rejecting deviant pixels from the median (in LBC-RED n ~ 6-8).


#-N 2 => Number of iterations default 2.

#-r 5 => Read-out noise in electrons. ron ≈ 5 ADU is the default in HAWK-I and LBC (~10-11 e-) (see table).

#-c 8 => Mask value to flag cosmic rays default=8.

CRMASK ALGORITHM:

We briefly describe the derivative algorithm used in the crmask program to flag pixels whose ADU derivative explodes within a few pixels.

Given a Laplacian kernel k =[[0,-1,0],[-1,4,-1],[0,-1,0]] and a Growth kernel gk = [[1,1,1],[1,1,1],[1,1,1]]

1) loop over all iterations and all chips
2) L = convolve(data) with the kernel (k); choose positive values
3) create a filtered image (med5) using the filter2D routine
   # the filter2D routine executes a bi-dimensional filtering.
   # Create a noise model based on this median image, knowing
   # the gain and read noise of the image.
   # Note that this step assumes that no sky background subtraction
   # has been done.
4) compute noise = ( sqrt(med5*gain + ron*ron) ) / gain
5) compute sigmap = ( (data/noise) - median(data/noise) )
6) identify the firstsel matrix where sigmap > sigclip
7) create filtered images (med3 and med7) as done for med5 using
   filter2D; compute (med3-med7)/noise and choose pixels > 0.01
8) compute the rejection matrix and combine it with the firstsel matrix
9) define gfirstsel = convolve(firstsel) using the gk kernel, combine
   with sigmap and flag those pixels > sigclip
10) define finalsel = convolve(gfirstsel) using the gk kernel, combine
    with sigmap and flag those pixels > sigclip
11) finally create a logical_or, iterating with the finalsel mask and
    combining with the input badpixels_mask
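The core of steps 2-6 can be sketched in a few lines of numpy. This is illustrative only: the frame, gain and RON values are invented, np.roll's edge wrapping stands in for a proper boundary-aware convolution, and the median filtering and growth iterations of the real crmask program are omitted.

```python
import numpy as np

def laplacian_cr_flags(data, gain, ron, sigclip):
    # Laplacian kernel k = [[0,-1,0],[-1,4,-1],[0,-1,0]], positive part only
    lap = (4 * data
           - np.roll(data, 1, 0) - np.roll(data, -1, 0)
           - np.roll(data, 1, 1) - np.roll(data, -1, 1))
    lap = np.clip(lap, 0, None)
    # noise model from counts, gain and read-out noise (no sky subtracted)
    noise = np.sqrt(np.clip(data, 0, None) * gain + ron * ron) / gain
    sigmap = lap / noise
    return sigmap > sigclip

frame = np.full((5, 5), 100.0)     # flat 100-ADU background
frame[2, 2] = 5000.0               # a single-pixel "cosmic ray"
flags = laplacian_cr_flags(frame, gain=2.0, ron=5.0, sigclip=4.5)
```

The single hot pixel has an enormous positive Laplacian relative to the photon noise, while a real star (several pixels wide) would spread its derivative over the PSF and stay below the threshold.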

X-ray CCDs and detectors are usually equipped with a special device surrounding the CCD area that acts as a filter against spurious cosmic-ray events.

Such satellites usually carry magnetic traps, which establish a magnetic field to deflect a substantial fraction of charged particles and avoid spurious counts


on the detector; there is also an anti-coincidence device, whose aim is to detect and reject spurious counts of neutrons that hit first the CCD surface and then the anti-coincidence device.

N.B.: Before proceeding, all BPMs should be converted into relative weight images by the "flag2weight" script included in the LBC standard pipeline package.

To convert them into weight images you can use a bash for loop:

for flag in `cat bpm.list`; do
    WeightImage=`echo $flag | $SED 's/.bpm.fits/.weight.fits/'`
    FLAG2WEIGHT -v -o $WeightImage $flag
    # if you prefer weight images containing only 1/0 values, run
    # this second command:
    IMARITH -v -o $WeightImage $WeightImage '&' $WeightImage
done

FLAG2WEIGHT algorithm (works on data types):
1) get the input flag data matrix [flag > 0 → BAD; flag = 0 → GOOD]
2) transform the flags into weights [weight > 0 → GOOD; weight = 0 → BAD]
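In code, the transform is essentially a one-liner (a sketch; the real program also handles FITS I/O and data types):

```python
def flag_to_weight(flags):
    # flag > 0 marks a bad pixel -> weight 0; flag == 0 -> weight 1 (good)
    return [0 if f > 0 else 1 for f in flags]

# 8 and 16 are the cosmic-ray and trail mask bits used elsewhere in the text
weights = flag_to_weight([0, 8, 0, 16, 1])
```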

The LBC gain is slightly different from chip to chip:

================ Blue Channel ===================
CHIP   Gain(e-/ADU)   RON(ADUs)   BiasLevel(ADU)
1      1.96           5.2         675
2      2.09           4.8         745
3      2.06           4.8         675
4      1.98           5.0         660
================= Red Channel ===================
CHIP   Gain(e-/ADU)   RON(ADUs)   BiasLevel(ADU)
1      2.08           5.00        310
2      2.14           5.00        305
3      2.13           5.30        290
4      2.09           4.80        290
=================================================

CRMASK LBC EXAMPLES:

LBC-BLUE: CRMASK -v -g 1.96 -g 2.09 -g 2.06 -g 1.98 -r 10.19 -r 10.03 -r 9.89 -r 9.9 -N 2 -n 4.5 -o BLUECRMASK.fits -m BLUINBPM.fits INPUTFILE.fits

LBC-RED: CRMASK -v -g 2.08 -g 2.14 -g 2.13 -g 2.09 -r 10.4 -r 10.7 -r 11.29 -r 10.032 -N 2 -n 12.0 -n 8.8 -n 11.5 -n 8.5 -o REDCRMASK.fits -m REDINBPM.fits INPUTFILE.fits

N.B.: To avoid spurious crmask detections along fringes, for RED LBC filters it is better to run the crmask algorithm after the superflat and fringe subtraction.

The CRMASK van Dokkum algorithm is highly time/CPU expensive; here are the elapsed times for different combinations on a single LBC MEF (4 chips):


1 iteration  (N) and sigma=4.0 => 2m:20s
2 iterations (N) and sigma=4.0 => 4m:36s
3 iterations (N) and sigma=4.0 => 6m:54s

The best performance of the algorithm is reached with Niter=2 and sigma=4.5, but we suggest using a single iteration with a lower sigma threshold, i.e. sigma=4.0.

Due to the higher bias level of the Red Channel, the sigma threshold must be larger.

----------


CH14 – TRAILs & SATELLITEs

Astronomical observations usually present spurious tracks and trails caused by satellites in orbit reflecting the incident sunlight. Such trails may be detected and excluded from the image using algorithms similar to those for cosmic-ray events. The main function selects sources with a particular trail profile and flags them. You can iterate to refine the process, selecting different input parameters such as the SIGMA THRESHOLD, but the default values are always a good compromise. The standard LBC pipeline package has a script to perform these tasks, called "trailmask.sh"; the output consists of an updated version of the BPM (see illustration).

The TRAILMASK algorithm should be run with the following options:

TRAILMASK -m $inBPM -g 10.0 --high=0.975 -s 4.0 -w 19 -v $filename

###DEFAULT OPTIONS USED IN THE ALGORITHM###

#-m $inBPM => Name of the input bad pixel mask (i.e. CRMASK).

In case of multiple input images a file name for each

input image must be given

#-g 10.0 => Maximum gap between line segments lying on the same

line to treat them as the single line segment {def 20.0}.

#--high=0.975 => High threshold for hysteresis {def 0.900}.

#-s 4.0 => Sigma value of the gaussian smoothing filter.

#-w 19 => Width of the gaussian smoothing filter {def 13}.

####other options:


Illustration 10: reduced image on the left and updated version of the BPM with trails on the right.


#-d DILATION => Edge dilation in pixel units {def 5}.
#-f TRAILFLAG => Mask value to flag trails [1-256] {def 16}.
#-t THETA => Angle resolution measured in degrees {def 1.0}.
#-r RHO => Distance resolution in pixel-related units {def 1.0}.
#-n NBIN => Number of histogram bins for nonmaxima suppression {def 100000}.

### TRAILMASK ALGORITHM ###
The trailmask algorithm uses Canny functions to detect trails:

a) define a default gaussian kernel (canny_gaussian) with its sigma and convolve the image with the kernel
b) find the gradient of the image using Sobel convolution masks (convolve2D function)
c) find the edge direction and magnitude and calculate the angles of the gradient
d) apply nonmaxima suppression, creating arrays of x/y
e) where the magnitude equals 0, no offset... else:
   - get the points immediately up and down the gradient;
   - only accept points that are a local maximum in the direction of the gradient;
   - get a special mask to account for smooth steps; this sort of edge fails the above test but it is useful to return them as edges;
   - get the suppressed magnitudes;
   - update the mask with step edges (the suppressed magnitude array is not yet updated because the original values are needed below when calculating thigh and tlow);
   - if no edges are found, return an empty array;
   - calculate the high and low thresholds based on the input low and high, taking the cumulative histogram of the nonzero magnitude values;
   - finding the histogram values may cause floating-point errors if all input values are the same;
   - the high value is a percentage point in the histogram, and the low value is a percentage of the high value;
f) edge thresholding (hysteresis) and track detection
g) dilate the track mask
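Steps b)-c) can be illustrated with a simplified central-difference gradient in numpy (a stand-in for the real Sobel masks; the trail geometry is invented, and the smoothing, nonmaxima-suppression and hysteresis stages are omitted):

```python
import numpy as np

def gradient_magnitude_direction(img):
    gx = np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)   # d/dx
    gy = np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)   # d/dy
    mag = np.hypot(gx, gy)                    # edge magnitude
    ang = np.degrees(np.arctan2(gy, gx))      # edge direction in degrees
    return mag, ang

img = np.zeros((7, 7))
img[3, :] = 100.0                 # a bright horizontal "trail"
mag, ang = gradient_magnitude_direction(img)
# the two rows bordering the trail carry all the gradient signal
```

The constant gradient direction along the whole trail is what lets the later Hough-like line-grouping stage distinguish a satellite track from point-like cosmic-ray hits.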

N.B.: Before proceeding, all BPMs should be converted into relative weight images by the "flag2weight" script included in the LBC standard pipeline package (see chapter 13 for a detailed description of the FLAG2WEIGHT algorithm).

N.B.: For the HAWK-I instrument we do not run the trailmask algorithm; we prefer to run the imreplace program by hand to replace values in a FITS image. In particular, we can use it to set the weight image to zero where a trail is present in the corresponding science FITS; you can use the imreplace program in the following way:


If P1=(X1,Y1) and P2=(X2,Y2) are the extremities of the trail and the trail has a width of nwidth pixels, it is possible to mask the weight image (whtIMAGE) of IMAGE with the following command:

IMREPLACE -v -o whtIMAGE -T '[X1:Y1,X2:Y2,nwidth]' whtIMAGE 0

----------

These procedures are highly time/CPU consuming and in some circumstances it is preferable to skip them; it is then possible to proceed with alternative computations:

1st ALTERNATIVE (MEDIAN CLIPPING)
An alternative procedure to take into account deviant pixels such as cosmic rays and trails (or even new bad pixels and hot/cold columns/rows) is to skip the CR/trail mask detection and proceed with the reduction so as to produce a MEDIAN coadd. This coadd can be used to clip each input image, rejecting deviant pixels outside an appropriate sigma-threshold interval. The new masks then account for cosmic-ray events and satellite trails, but also for supernovae and asteroids. Moreover, with such a procedure it is necessary to perform a new background estimation if a median sky (produced with source-detection masking) has not been subtracted beforehand.
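The median-clipping idea can be sketched with numpy as follows (the stack values and the 4.5-sigma threshold are illustrative; a robust MAD-based sigma is used here so the outlier does not inflate its own rejection threshold):

```python
import numpy as np

def clip_against_median_coadd(stack, nsigma=4.5):
    """stack: (Nframes, ny, nx) array. Returns a boolean cube of rejected pixels."""
    med = np.median(stack, axis=0)
    # robust sigma from the median absolute deviation (MAD), so a cosmic ray
    # or trail does not inflate the scatter estimate at its own position
    sig = 1.4826 * np.median(np.abs(stack - med), axis=0)
    sig = np.where(sig == 0, 1.0, sig)        # guard against zero scatter
    return np.abs(stack - med) > nsigma * sig

stack = np.full((5, 3, 3), 100.0) + np.arange(5.0).reshape(5, 1, 1)
stack[2, 1, 1] = 9000.0                       # cosmic ray / trail pixel in frame 2
bad = clip_against_median_coadd(stack)
```

Note that this rejection also flags anything that varies between frames, which is why supernovae and asteroids end up in the masks as well.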

2nd ALTERNATIVE (NEURAL NETWORK)
Create a neural-network retina with the EyE software (E. Bertin et al.) to take into account cosmic-ray events, satellite trails, bad rows/columns, and saturation artifacts near bright sources. Passing the FILTER.ret retina filter to the SExtractor program as -FILTER Y -FILTER_NAME FILTER.ret, and using -DETECT_MINAREA 1 and -CHECKIMAGE_TYPE SEGMENTATION, the detection software will produce a check-image (default=check.fits) containing only such artifacts, to be used as an additive image mask. The steps performed by the relative script, "maskall.sh", are:
a) take one or more generic science images (LBC or so on)
b) crop by hand any image where artifacts are found (CR events, trails, bad columns/rows, drop-outs or saturation trails)
c) run SExtractor with a high threshold, creating a segmentation image for each input artifact; set undesired objects to zero and convert them into a flag image (1=artifact; 0=background)
d) run EyE with -LEARNING_TYPE NEW -i input*.fits -o flag*.fits and produce the DEFAULT.ret file
e) run SExtractor on the science image with -FILTER_NAME DEFAULT.ret, -DETECT_MINAREA 1 and -CHECKIMAGE_TYPE SEGMENTATION (a good compromise between fast and reliable detection may be sigma=10-20, so det_thresh=10-20)
f) perform an OR ('||') imarith operation on the segmentation images to produce flags and update the BPM of the input science image.


CH15 – SOURCE DETECTION, FWHM EVALUATION & HEADER CORRECTION

Source detection is not a science, it is an art! We make use of SExtractor, which usually runs with a convolution filter. The convolution filter is usually a gaussian, whose sigma should be optimally selected to adapt to the seeing conditions of the observation. In other words, you must adapt the filter to the width of the sources you are going to detect, according to the image pixel scale and the observation seeing. To create a gaussian kernel you can use the "mkkernel" utility in the LBC pipeline tools, selecting the mean FWHM (in pixels) scale of the filter; mkkernel can be used as follows:

MKKERNEL -v -o kernel.conv FWHM[pixels]

Once the right kernel has been selected in the SExtractor configuration file, or on the command line using "-FILTER Y -FILTER_NAME kernel.conv", there are two more important parameters to configure: the "detect minarea" and the "detect threshold":

DetMinarea = ROUND[ π ⋅ (FWHM/2)² ]    (15.1)

DetThresh = n⋅σ = n / √DetMinarea    (15.2)

So if the number of sigmas ('n') is high, only a few sources will be found (those with high signal-to-noise); e.g. n=10 is an optimal sigma number for a good compromise between reliable detections and a fast run of the SExtractor program.
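Eqs. (15.1)-(15.2) in code (the FWHM and n values below are just example inputs):

```python
import math

def detection_params(fwhm_pix, nsigma):
    # Eq. (15.1): minimum detection area from the seeing disk
    minarea = round(math.pi * (fwhm_pix / 2.0) ** 2)
    # Eq. (15.2): per-pixel threshold that keeps the total cut at nsigma
    thresh = nsigma / math.sqrt(minarea)
    return minarea, thresh

minarea, thresh = detection_params(fwhm_pix=4.5, nsigma=10.0)
```

Dividing n by the square root of the area is what keeps the signal-to-noise cut on the whole seeing disk constant as the seeing (and hence the minimum area) changes.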

For a great number of operations it is necessary to obtain a good estimate of the FWHM, because the one measured by the telescope is only a raw estimate and is not accurate enough for our purposes (both for LBC and HAWK-I). The HAWK-I standard pipeline library contains "mksexcat.sh" and "estimatefwhm.sh", which create catalogs and cross-correlate in RA and DEC a local input catalog with the one produced by SExtractor.

The mksexcat algorithm performs the following operations:
1) create the kernel ($filterconv) with the mkkernel program
2) run SExtractor on ($inputfile) and its weight image ($weight), creating a catalog ($cat) using the calculated detection minarea ($detminarea) and a detection and analysis threshold ($detthresh); here is a typical command line:

SEX $inputfile -c $sexconf -PARAMETERS_NAME $sexparam -STARNNW_NAME $sexnnw -CATALOG_NAME $cat -DETECT_MINAREA $detminarea -DETECT_THRESH $detthresh -ANALYSIS_THRESH $detthresh -FILTER_NAME $filterconv -WEIGHT_TYPE MAP_WEIGHT -WEIGHT_IMAGE $weight


The estimatefwhm.py algorithm performs the following operations:
1) define a SELSTAR.dat input star file (RA DEC star barycenters in degrees)
2) execute a sigma clipping by measuring the median and the mean value of the FWHM and rejecting all sources for which the two values differ too much
3) the median FWHM and its STDEV are then computed from the median of the FWHM of all clipped sources having a match in the input catalog
4) run fitsedit to insert the FWHM in the FWHM keyword:
   FITSEDIT -x $FWHM -t f -v -u FWHM $filename
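A toy version of the clipping in steps 2-3 (the 20% tolerance and the FWHM list are assumptions for illustration; the real script clips on the mean/median difference and cross-matches against SELSTAR.dat):

```python
import statistics

def robust_fwhm(fwhms, tol=0.2):
    # hypothetical clip: keep sources within tol*median of the median FWHM
    med = statistics.median(fwhms)
    kept = [f for f in fwhms if abs(f - med) <= tol * med]
    return statistics.median(kept)

# pixel FWHMs of matched sources; 12.0 is e.g. a galaxy or a blend
fwhm = robust_fwhm([4.4, 4.5, 4.6, 4.5, 12.0, 4.3])
```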

N.B.: For LBC we do not have an RA/DEC input (SELSTAR.dat) catalog, so this step may be skipped or performed after the astrometry program. In the last period of 2006 the LBC Blue channel had some errors in the header values of the following keywords: CD1_1, CD1_2, CD2_1, CD2_2, CRPIX. These keywords depend on the position angle (i.e. the PA_PNT keyword value). The LBC pipeline package has the "correctheader.sh" script to read and correct such values; the correctheader script executes the "fitsedit" program on all involved keywords (CRPIX1, CRPIX2, CD1_1, CD1_2, CD2_1, CD2_2) according to the position angle (PA_PNT); see the table for PA_PNT=0.0:

FITSEDIT -x RIGHTCRPIX -t f -u CRPIX1 $filetocorrect

Channel   Chip#1            Chip#2           Chip#3           Chip#4
LBCB      CRPIX1=-1098.0    CRPIX1=1024.0    CRPIX1=3146.0    CRPIX1=-1687.0
LBCB      CRPIX2=2946.0     CRPIX2=2946.0    CRPIX2=2946.0    CRPIX2=2284.0
LBCR      CRPIX1=-1043      CRPIX1=1079      CRPIX1=3201      CRPIX1=-1740
LBCR      CRPIX2=2893       CRPIX2=2893      CRPIX2=2893      CRPIX2=2239

PA_PNT==0
           CHIP#1..CHIP#3    CHIP#4
CD1_1      -6.222E-05        0.0
CD1_2      0.0               6.222E-05
CD2_1      0.0               6.222E-05
CD2_2      6.222E-05         0.0

PA_PNT==45
           CHIP#1..CHIP#3    CHIP#4
CD1_1      -4.400E-05        4.400E-05
CD1_2      0.0               4.400E-05
CD2_1      0.0               4.400E-05
CD2_2      4.400E-05         -4.400E-05


PA_PNT==90
           CHIP#1..CHIP#3    CHIP#4
CD1_1      0.0               6.222E-05
CD1_2      6.222E-05         0.0
CD2_1      6.222E-05         0.0
CD2_2      0.0               -6.222E-05

PA_PNT==135
           CHIP#1..CHIP#3    CHIP#4
CD1_1      4.400E-05         4.400E-05
CD1_2      0.0               -4.400E-05
CD2_1      0.0               -4.400E-05
CD2_2      -4.400E-05        -4.400E-05

PA_PNT==180
           CHIP#1..CHIP#3    CHIP#4
CD1_1      6.222E-05         0.0
CD1_2      0.0               -6.222E-05
CD2_1      0.0               -6.222E-05
CD2_2      -6.222E-05        0.0

This script is also usable for GOODS images, which must have negative cross-diagonal roto-translation keywords (CD1_2 and CD2_1). For HAWK-I images you can use three scripts in sequence:

a) mksexcat.sh; b) estimatefwhm.sh; c) addsexfwhm.sh.

----------


CH16 – SUPERFLAT & DELTA CORRECTION FLAT

The superflat depends on the night and is often produced by an illumination gradient over the sky, due to the moon phase or the light of a nearby city. Without any luminous sources the superflat image is always flat and accounts for a gradient of <1% over several minutes. If the master flat-field is not good, there may remain a component which should be compensated by the superflat correction.

We should use temporal sequences of about 10-12 minutes (for HAWK-I these sequences are too wide; we can use 5 minutes), because some sort of dithering is needed during the 10 minutes in order to compensate BPMs and saturation, so at least 3 images are needed to perform a good stack. Because the fringing pattern is constant during a night, it is strictly recommended to use sky images from the same night; if necessary, it is possible to use different lists of files taken at the beginning and at the end of the night, to take into account the motion of the moon during the night. If the moon phase is "0" (which means there is no moon in the sky), the superflat correction should take care only of masterflat residuals.

The LBC/HAWK-I common pipeline package provides a script to stack the images and create the superflat (starting from a previous object-masking step), called "mksuperflat.sh"; once the superflat FITS file has been obtained, it is possible to apply the correction by dividing each single image by the superflat.

So the super-sky-flat is created by combining multiple object frames, effectively (or ideally) rejecting all of the real objects in the frame and leaving the image with a high SNR image of the "blank" sky.

Unfortunately, any pupil-ghost image or fringing (present in at least I-BESSEL, i-SLOAN, z-SLOAN, Y-FAN and most red narrow-band filters in the LBCR camera) that might be present in your input images is still in this combined frame. So before you can apply a sky-flat to your images, and assuming you want to remove the additive pupil-ghost and fringing components from all of your images, you must first create template pupil-ghost and fringe-correction frames and subtract scaled versions of these frames from all of your object frames.

Sometimes the resulting images present bad columns or bad pixels, mainly caused by too few dithering steps or too few stacked images. The superflat is essentially an improvement of the flat-field procedure, erasing the residuals of a non-optimal master flat field; in this case the superflat file is often called a "delta correction flat".


● Making the Pupil Template

The main "trick" we use in generating our "super-sky" flat is that we have many exposures in a given filter of many different regions of the sky. We combine these images, excluding objects, to get an image of the sky, identifying the "objects" and defects (via BPMs) that we want to exclude from the combined super-sky-flat.

To be able to subtract the pupil-ghost from the object frames we will need a template image that can be scaled and then subtracted from these frames. The "input image" can be a combined dome-flat from a narrow-band image close in wavelength to your observed band. We find, however, that the best results come from using an input image combined from many science frames of different (or dithered) regions of the sky, an initial "dark-sky" or "super-sky-flat" image.

Now you need to create a mask for the pupil template so that only the relevant parts of it are used later when we use it to remove the pupil ghost from our science frames. This only has to be done once for all runs and filters with a given instrument.

● Removing the Pupil-Ghost

This routine scales the pupil template generated previously, and subtracts it out of the input image.

Inspect the corrected images to verify that the correction was done correctly.

● Making the Fringe Correction Frame

Just as we had to remove the pupil-ghost, we might have to remove the contribution of fringing from our frames. In a manner similar to that followed to correct for the pupil-ghost, we combine multiple object frames (this time those that have had the pupil-ghost removed) to produce a new image from which we will construct a fringe template. This is also a good time to check the accuracy of our subtraction of the pupil-ghost, since if all the images have been handled properly there will be no pupil in this new combined sky-flat. Note that the removal of the pupil-ghost will have partially (in some cases completely) corrected the fringing in the region of the pupil-ghost.

As we did for the pupil template, we now need to make a mask indicating the regions to be used later for the subtraction. For this mask, we want to use only those extensions not affected by the pupil pattern for estimating the fringe scaling.


● Subtracting the Fringe Template from Your Frames and Generating Your Final Sky-Flat

Now that you have versions of all your object frames that are free of pupil-ghost images and the effects of fringing, you can finally combine and stack your resulting images and create your super- or dark-sky-flat.

The very final step is to apply the sky-flat to all the object frames that have had the pupil-ghost and fringe components subtracted.

The superflat algorithm works with file lists; its conceptual steps may be summarized as follows:

1) create an object mask (you can use the “mkobjmask.sh” script)

# the mkobjmask algorithm defaults are:

-dilation=5 (for LBC and HAWK-I)

-detect_sigma=5.0 for HAWK-I (and 3.0 for LBC)

-fwhm_default=9.0 for HAWK-I (and 4.5 for LBC)

and performs the following steps:

a) given a default FWHM and detect_sigma it calculates the

DETECT_MINAREA=round(math.pi * (${FWHM}/2.0)**2)

DETECT_THRESH=${detect_sigma}/math.sqrt(${DETECT_MINAREA})

ANALYSIS_THRESH=DETECT_THRESH

b) create a proper kernel according to default FWHM

c) run SExtractor and create segmentation images

d) convert segmentation images to flag images

e) dilate flag images to obtain mask images (uses “dilate” program):

DILATE -d $dilation -o $mask -v $flag

f) create the objmask as mask LOGICAL_OR bad pixel mask (uses the "imarith" program):

IMARITH -o $obm -v $mask '|' $bpm

2) create a pupil/ghost map. The procedure to correct for pupils and ghosts is collected in two scripts:

a) first run “mkoblist_gh.sh”, which sorts all files by DATE-OBS and ObservingBlock name;

b) then run “mkghostbpm.sh”, which performs the following operations:


the script starts a for cycle on all files in the input list:

-compute GHOSTBPM[i] as the LOGICAL_OR of BPM[i] and the previous object mask OBM[i-1] (using IMARITH program):

IMARITH -o $ghsbpm[i] -v $bpm[i] '|' $obm[i-1]

-transform into WEIGHT image the GHOSTBPM[i] (using FLAG2WEIGHT program):

FLAG2WEIGHT -o $ghsweight[i] -v $ghsbpm[i]

-link the input image mask (.msk.fits) as ghostmask:

LN -s $mask[i] $ghsmask[i]

-obtain the GHOSTOBM[i] performing a LOGICAL_OR between the GHOSTMASK[i] and the GHOSTBPM[i] (using IMARITH program):

IMARITH -o $ghsobm[i] -v $ghsbpm[i] '|' $ghsmask[i]

The resulting GHOSTOBMs will be the OBMs used to compute the SUPERFLAT.
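The mask combinations above reduce to per-pixel logical ORs; a toy sketch with small arrays standing in for the FITS mask planes (all values illustrative):

```python
import numpy as np

bpm_i = np.array([[0, 1], [0, 0]], dtype=np.int16)      # BPM[i]: bad pixels of frame i
obm_prev = np.array([[1, 0], [0, 0]], dtype=np.int16)   # OBM[i-1]: objects of the previous frame
ghsmask_i = np.array([[0, 0], [1, 0]], dtype=np.int16)  # GHOSTMASK[i]: the input image mask

ghostbpm_i = bpm_i | obm_prev                        # GHOSTBPM[i] = BPM[i] | OBM[i-1]
ghostweight_i = np.where(ghostbpm_i > 0, 0.0, 1.0)   # flag2weight: flagged pixels get weight 0
ghostobm_i = ghostbpm_i | ghsmask_i                  # GHOSTOBM[i], used for the superflat
```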

3) create the superflat (you can use the “mksuperflat.sh” script).

# the mksuperflat algorithm defaults are:

-blank=1 (for LBC and HAWK-I)

-filtersize=4 for HAWK-I (and 16 for LBC)

-dilation=5 for HAWK-I (and 4.5 for LBC)

-detect_sigma=3 (for LBC and HAWK-I)

and performs the following steps:

a) creating median super-flat $medsuperflat (uses the “imcombine” program):

IMCOMBINE --blank $blank -o $medsuperflat --outbpm $medsuperbpm -m $bpms -S '$STATSEC' -r sigclip -n 3.0 {$FILENAMES}

b) filtering $medsuperflat to create $superflat (uses the filter2D program):

FILTER2D --blank $blank -f $filtersize -m $medsuperbpm -o $superflat --outbpm $superbpm -n 3.0 -v $medsuperflat

4) apply the calculated superflat to the images (you can use the applysuperflat.sh script).

# the applysuperflat algorithm performs the following operations:

a) divide the input images by the created superflat (uses the “imarith” program):

IMARITH -o $sflfilename -v $filename '/' $superflat

b) update masks, weights and bpm linking to files.
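Step a) is a plain pixel-by-pixel division; a minimal numeric sketch with illustrative values (not pipeline data):

```python
import numpy as np

# A frame whose only structure is the superflat pattern itself:
frame = np.array([[102.0, 100.0], [98.0, 100.0]])
superflat = np.array([[1.02, 1.00], [0.98, 1.00]])  # normalized around 1

sfl_frame = frame / superflat   # IMARITH ... '/' $superflat: the pattern divides out
```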


The whole procedure to compute and subtract a ghost/pupil map is summarized in the steps above.

Once a superflat is created, together with its median-filtered image, it is possible to correct for BLANK=1 values using “correctsuperflat.sh”, whose operations may be summarized as follows:

#the correctsuperflat.sh algorithm works on the median superflat created and performs the following steps:

a) run SExtractor on superflat.med.fits with the following options to create a BACKGROUND image (superflat.med.bkg.fits): -BACK_SIZE 32 (or 64) -BACK_FILTERSIZE 3

b) transform the superflat.med_BPM into weight image (uses the “flag2weight” program)

FLAG2WEIGHT -v superflat.med.bpm.fits

c) run IMARITH with the superflat.med_BPM image to produce a flag image with only 1/0 values:

IMARITH -v -o superflat.med.bpm superflat.med.bpm '&' superflat.med.bpm

d) multiply superflat.med.bpm with superflat.med.bkg.fits obtaining sf1.fits

e) multiply superflat.med.fits with its weight obtaining sf2.fits

f) add d) + e) SF.fits=sf1.fits+sf2.fits

g) link superflat.med_BPM to SF.bpm.fits
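Steps c)-f) amount to blending the smooth background model into the blanked pixels while keeping the real superflat values elsewhere; a toy sketch with illustrative 2x2 arrays standing in for the FITS files above:

```python
import numpy as np

superflat_med = np.array([[1.02, 1.00], [1.00, 0.98]])   # median superflat
weight = np.array([[1.0, 0.0], [1.0, 1.0]])              # 0 where the superflat is BLANK
background = np.array([[1.01, 1.01], [1.00, 0.99]])      # SExtractor BACKGROUND image

bpm_flag = 1.0 - weight       # 1/0 flag image on the blanked pixels (step c)
sf1 = bpm_flag * background   # step d): background model where data is missing
sf2 = superflat_med * weight  # step e): real superflat where data is valid
SF = sf1 + sf2                # step f): corrected superflat
```

The blanked pixel (0,1) ends up carrying the background value 1.01 instead of the unusable median-superflat value.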

N.B.: for LBC it is not necessary to correct the superflat, because it is a second-order correction after the main masterflat, and we can simply use the FLAG images to exclude bad areas.

For HAWK-I the superflat correction is crucial and should be performed before applying the superflat with the “applysuperflat.sh” script.

----------


CH17 – DETAILED PROCEDURES on FRINGING & GHOSTs

In the case of observations made through a narrow filter, or where the incident light contains a strong component at a single wavelength, multiple reflections within the CCD chip, or the filters in front of it, can cause wave-like patterns across the image. These patterns are called fringes. The precise pattern depends strongly on the exact wavelength of the illuminating light. Consequently, correcting for fringing requires a flat field whose wavelength corresponds closely to that of the image.

The emission from the night sky usually includes narrow emission lines originating in the terrestrial atmosphere. These lines will often fall within the bandwidths of broad-band filters. However, they are not present in the featureless spectra of dome flats. Consequently, dome flats may not be appropriate when fringing due to night-sky lines is present (see Illustration n. 8).

The fringe pattern is an additive effect and must be subtracted. To remove fringes it is necessary to obtain several exposures of either a region of night sky containing no objects or, alternatively, to remove all the contaminating objects from data frames which otherwise contain large areas of night sky. These frames should then be combined to give complete spatial coverage and to reduce the noise contribution. The resulting fringe-frame should be scaled to the fringes present in the data frame (after normalisation) and subtracted.

Some fringes are due to the atmospheric OH component (visible especially in the wide IR filters); the fringe pattern created by the emission lines of that component varies on spatial scales of about ~1 arcmin and time scales of about ~10 minutes. The correct procedure to reduce or exclude the OH component is the following:

1. create the sky image for each frame using the SExtractor CHECKIMAGE_TYPE -OBJECTS option, which excludes the object contributions

2. for each image of the list, stack the previous and following images within a 10-minute time interval (the Half Window parameter is the number of images, according to the single-exposure time, that must be added to reach the selected time interval).


Illustration 11: example of a fringe pattern on a sky flat.


3. apply filter2D, a median filter which removes all high-frequency components (such as holes or bad pixels) from each sky image

4. subtract from each image its own sky stack

5. normalize the images to one another

6. a low-frequency component remains, which can be subtracted using the SExtractor background subtraction.

The input image list can be either a sky-aperture corrected list or a super-flatted list of files.

Electronically, every image retains a signature of all the pixels of the previous image that saturated above a threshold. These patterns are called ghosts. There are both electronic and optical kinds of ghosts.

Electronic ghosts occur mainly during the simultaneous readout of multiple amplifiers: when multiple amplifiers are read out (as in mosaic CCD cameras), the controller electronics couple the signals from pairs of extensions in the multiextension format. Typically, when a pair of CCDs is read out simultaneously, one chip (called the “killer”) affects the counts in the other chip (called the “victim”), adding or subtracting ADUs from the real value. This is reduced by performing the CrossTalk procedure (for a detailed description of the algorithm see chapter n.12) before any other operation, since this systematic effect must be corrected separately from the others (the coefficient is called the XTalk coefficient); the correction for the LBC-Blue arm uses its own dedicated XTalk coefficient.

All other electronic ghosts, caused by misbehaving amplifiers, are always produced by bright stars leaving false star images in the FOV. Such patterns must be flagged in the image in order to mask their spurious counts. At least 1% of the objects are ghosts in crowded fields, and to select and flag them correctly the FWHM must be known precisely. So the best procedure is the following:

1. create an object mask (by dilating the SExtractor segmentation image by an arbitrary amount, e.g. 5 pixels)

2. divide the exposures into time groups of about 5-10 minutes

3. add to the original BPM all the sources of the previous images within the selected time interval

$I_{j-1} = B_{j-1} + S_{j-1}(x,y,t)$, $I_j = B_j + S_j(x,y,t)$, $I_{j+1} = B_{j+1} + S_{j+1}(x,y,t)$  (17.1)

Supposing that $S_{j-1} = S_j = S_{j+1}$ and that it is independent of $B_j$ for $j$ varying within the Half Window time, it is possible to evaluate the median background (including the fringe component) to be subtracted, so obtaining $S_j$.
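A loose numeric sketch of the idea: under our reading of eq. (17.1), the object-masked neighbouring frames share the same sky+fringe pattern, whose median is estimated over the half window and subtracted from frame j (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0 * np.pi, 32)
sky = 10.0 + 0.5 * np.ones((32, 1)) * np.sin(x)[None, :]  # sky level + fringe pattern

# object-masked neighbouring frames within the half window:
neighbours = [sky + rng.normal(0.0, 0.05, sky.shape) for _ in range(5)]

source = np.zeros_like(sky)
source[16, 16] = 50.0            # one source present in frame j
frame_j = sky + source

sky_est = np.median(neighbours, axis=0)  # median over the half window
clean_j = frame_j - sky_est              # sky/fringe-subtracted frame
```

The source survives the subtraction while the fringe pattern, common to the neighbours, is removed.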


To automatically determine the HalfWindow parameter, i.e. the number of images ($HW) to be grouped to reach 5-10 minutes of partial integration time over a file-list sequence, it is possible to include the following code in your bash script:

firstfile=`cat filelist.txt | sed -n 1,1p`
NUM=`wc filelist.txt | awk '{print $2}'`
SET=`dfits $firstfile | fitsort EXPTIME | sed -n 2,2p | awk '{print $2}'`
TET=`echo "(${NUM} * ${SET})/1" | bc`
NW=`echo "(600.0/${SET})/1" | bc`
echo "$NW"

$NW will be the half-window number of exposures to put into the skysub script.
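The same arithmetic can be written in Python; a minimal sketch (the 600 s window and the meaning of the variables mirror the bash snippet above, nothing else is assumed):

```python
def half_window(single_exp_time: float, window_s: float = 600.0) -> int:
    """How many single exposures ($NW) fit in the sky window,
    given the EXPTIME of one frame (seconds)."""
    return int(window_s // single_exp_time)

print(half_window(60.0))   # 10 frames cover 600 s
print(half_window(150.0))  # 4
```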

According to simulations performed with Zemax, the optical ghost due to a bright star falls near the star itself (along the radial FOV direction, always shifted away from the field center with respect to the star) and is about 12 magnitudes fainter (~0.6+/-0.3% of the total flux of the star) than the source responsible for the reflected light. Such optical ghosts depend on the filter used (Uspecial is affected, while the G and R Sloan filters in LBC do not show measurable ghosts), and their contribution is sometimes irrelevant (see figure).

The procedure to subtract the FRINGE/sky component is collected in two scripts:
1) “mkoblist_sf.sh” → produces a sorted list of images
2) “skysub_loc.sh” → performs the sky/fringe subtraction

#the skysub algorithm performs the following steps:
it starts a for cycle on all images in the file list:
a) populate a background vector for each extension using the “imstat” program, passing the OBM (the updated version of the ObjectMask derived from the previous steps):

IMSTAT -f MEDIAN -F -m $obm $infile[i]

b) insert the BACK keyword in $infile[i] for each extension $ext (using FITSEDIT program):

FITSEDIT -e $ext -t f -x $back -v -u BACK $filename

c) for each image in the list ($infile[i]) start a loop over the HalfWindow images before and after:

for j in `seq $(($i - $hw)) $(($i - 1))`

I) subtract its background from each image in the HalfWindow (using IMARITH program):

IMARITH -o $bksframe -v $frame '-' -- $background

II) combine all the bksframes in the HalfWindow and create the


Illustration 12: Representative figure of an optical ghost centered on a bright star (from LBC COSMOS Uspec image).


median sky image $medskyname (using IMCOMBINE program):

IMCOMBINE --blank 0 -o $medskyname --outbpm $medskybpmname -r sigclip -n 3.0 $bpms -v {$bksframes}

III) create the sky image $skyname by filtering $medskyname (using filter2D program):

FILTER2D --blank 0 -f $filtersize -m $medskybpmname -o $skyname --outbpm $skybpmname -r sigclip -n 3.0 -v $medskyname

IV) create the sky-subtracted image $skysubname and update the BPM, WEIGHT, MASK and OBJMASK (using IMARITH and FLAG2WEIGHT programs):

IMARITH -o $skysubname -v $infile[i] '-' $skyname
IMARITH -o $skysubbpm -v $bpm '|' $skybpmname
FLAG2WEIGHT -o $skysubweight -v $skysubbpm
$LN -s $mask $skysubmask
IMARITH -o $skysubobm -v $skysubmask '|' $skysubbpm

----------


CH18 – CHIPs EQUALIZATION & EXTRACTION

Because the GAIN differs from chip to chip, at this reduction step we may have well-reduced science images (without strange patterns or gradients) but with different ADU levels from chip to chip. This mis-equalization is usually a residual of a non-optimal master/super-flat procedure. So, assuming that all differences are due to a multiplicative factor, we can apply the “equalize.sh” script to calculate and apply the flux scaling factors, bringing the 2nd, 3rd and 4th chips' ADU levels to the 1st chip level. The scaling factor is computed by masking all objects and performing statistics on the sky pixels:

$I'_1 = I_1$, $I'_2 = I_2 \times \frac{B_1}{B_2}$, $I'_3 = I_3 \times \frac{B_1}{B_3}$, $I'_4 = I_4 \times \frac{B_1}{B_4}$  (18.1)
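A numeric sketch of eq. (18.1) with illustrative sky levels (in the pipeline, objects are masked before computing the sky statistics, as stated above):

```python
import numpy as np

rng = np.random.default_rng(1)
levels = (100.0, 104.0, 97.0, 101.0)                    # illustrative per-chip sky levels
chips = [rng.normal(m, 0.5, (64, 64)) for m in levels]  # stand-ins for the 4 chips

B = [float(np.median(c)) for c in chips]                # sky statistic B_i per chip
equalized = [chips[0]] + [c * (B[0] / b) for c, b in zip(chips[1:], B[1:])]
```

After the rescaling, all four chips share the sky level of chip 1.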

Starting from this point it is necessary to extract all the Multi-Extension FITS files produced and to work, especially for LBC and HAWK-I, with single-extension FITS files. The LBC pipeline standard library provides a script named “extract.sh” which performs the chip extraction and saves the chips in files named [OBJNAME][FILTER]_[Nfile]_[ext].fits:

file.pr.MEF.cor.sfl.sks.bks.cor.fits ⇒ { [OBJNAME][FILTER]_[Nfile]_1.fits, [OBJNAME][FILTER]_[Nfile]_2.fits, [OBJNAME][FILTER]_[Nfile]_3.fits, [OBJNAME][FILTER]_[Nfile]_4.fits }  (18.2)

The same holds for all the WEIGHT and BPM images, where [ext] ranges from 1 to 4 and indexes the chips' extensions. All the important keywords to be traced in the next reduction steps, written in the primary extension of the MEF file (ext 0), are copied into each single-chip file; here is a list of the main keywords copied:

AIRMASS, ZD, HA, PA_PNT, ROTANGLE, PARANGLE, TELAZ, TELALT, TELRA, TELDEC, PIXSCALE, FWHM, BACK, ELLIPT, ELONG

---------


CH19 – ASTROMETRY FIX & COADD

The astrometry found in the raw LBC images is not accurate and does not take into account the large LBC FOV distortion (up to 15 arcsec at the edge of the FOV). It is necessary to perform an accurate astrometric calibration in order to coadd frames without tilts. The HAWK-I raw images already come as a mosaic, so their astrometry is more accurate than that of the LBC raw images, but it is recommended to run the astrometry on HAWK-I frames as well. The output of this procedure is not a FITS image but an image header with particular keywords, like the following:

AIRMASS = 1.2079
IMAGEID = 1
INLIST = list.10.txt
KTAB = Y
INTREG = COSMOS2g-SLOAN_1#
OFFSET = -1.043,95.026
LINXRMS = 1.759
LINYRMS = 1.888
LINCAT = USNO
LINTOL = 5
ASTXRMS = 0.000
ASTYRMS = 0.000
NAST = 3
PRESOL = ps1
PSORD = 3
PV1_0 = -0.0111723784
PV1_1 = 0.9240647515
PV1_2 = -0.0423310048
PV1_4 = -0.1890918239
PV1_5 = -0.1599659037
PV1_6 = -0.0621460562
PV1_7 = -0.2469022109
PV1_8 = 0.0168996523
PV1_9 = -0.2470895420
PV1_10 = 0.0038031498
PV2_0 = -0.0163148423
PV2_1 = 0.8905535602
PV2_2 = -0.0426612093
PV2_4 = -0.2663110140
PV2_5 = -0.1267557713
PV2_6 = -0.0814180494
PV2_7 = -0.2584775592
PV2_8 = 0.0146282883
PV2_9 = -0.2566816350
PV2_10 = 0.0068325682
ASTBAD = 0
ZP = 6.391
FLXSCALE= 2.777778e-03
ZP0 = 0.000

If the PV?_? keywords are not present in the header, some procedure of the astrometry pipeline did not work; otherwise, if the ASTBAD keyword is set to 1, the astrometry programs all ran but the procedure did not converge to a good solution (such frames will be rejected from the coadds; you may try to run the astrometry again, modifying the threshold parameter “thresh” in the astrom.conf configuration file).

FLXSCALE is a fundamental computed keyword and represents the flux scaling factor computed by cross-matching the flux of sources with respect to a reference frame.

The procedure to fix the astrometry (AstromC; Radovich et al. 2006) consists of several steps to be performed on the single extracted chips:

1. create catalogs with the nominal RAW astrometry of only high-SNR sources (programs mkhead and mkcats of the astromc-package, or alternatively use the runastrom_mkcats.sh script):

MKHEAD list.*.txt -f ./
MKCATS list.*.txt ./ --sweight "wht" --wtype "map_weight"

2. cross-match the detected sources with an online catalog (such as USNO-B1.0 or UCAC2, where available); to minimize the system of 10 variables, at least 10 matched sources are needed (programs getref and intoffset of the astromc-package):

GETREF list.*.txt
for LBC → INTOFFSET list.*.txt --redo --kname=psol.tab --presol="Presol/ps"
for HAWK-I → INTOFFSET list.*.txt --redo --kname=psol.tab

The input catalog is configurable in the astrometry configuration file “astrom.conf”, which contains many parameters (we report some fields):

list = list.txt      #default list file names
delim = '#'          #delimiter, i.e. OBJNAME_Nframe#Nchip.fits
catalog = cat.reg    #input catalog (maybe USNO)
radius = 40          #search radius for cdsclient query
maxnum = 100000      #maximum number of sources retrieved
kname = psol.dat     #if an astrometric solution is already present
rejangle = 1.5       #rejection angle to exclude bad frames
presol = ''          #the name of an astrometric pre-solution
wtype = WEIGHT       #which kind of weight to use (RMS/WEIGHT/BACK)
phap = 20            #photometric aperture of sextractor
thresh = 10          #detection threshold to use
sigma = 2            #sigma of iterations
iterations = 3       #number of iterations
airmass = AIRMASS    #name of the AIRMASS keyword

3. apply a pre-solution (not needed for HAWK-I images) and minimize the system to find a mean offset from all catalogs (program lastrom of the astromc-package):


for LBC → LASTROM ${lista} --moffset=yes --presol="Presol/ps"
for HAWK-I → LASTROM ${listgroup[*]} --moffset=yes

4. before performing the coadd, all exposures are scaled to the same exposure time (1 sec) and airmass (0), so that the first local astrometric solution (chip by chip) is found. You can find the header files inside the ./astrom/ directory. It is at this point that the FLXSCALE keyword is computed, rescaling fluxes to those of a reference list ($reflist), and inserted in the output header (program setph of the astromc-package):

SETPH -d astrom $reflist --phot

5. then all the local astrometric solutions are tiled together to find a more accurate global solution. Now you can find the header files inside the ./astglob/ directory (program globcal of the astromc-package):

GLOBCAL list.*.txt --rphot --zp0=0

6. the program hcheck of the astromc-package scans the final solutions and checks or rejects all frames where the astrometry did not converge (ASTBAD==1, see illustration):

HCHECK list.*.txt --hdir . --xflag --mklist > astromlist.txt

To perform such operations it is possible to run the “runastrom_mkcats.sh” script and “runastrom_global.sh” script or alternatively the “deeprunastrom.sh” script (optimized for LBC deep fields).

N.B.: in the case of standard-star fields, or where a star catalog is already available to minimize the astrometric solution, it is not necessary to find the global astrometric solution (astglob program).


Illustration 13: The four images are the plots of the astrometric solutions: the top left shows the source matches with the USNO catalog; the top right shows the DX-DY residuals of the matches; the bottom left shows the residuals after the astrometric solution; and the bottom right shows the output of the hcheck program, with the sky projection of the good astrometric solutions.


The final good “astromlist.txt” list can be used to resample and coadd the frames, normalizing to an overall 1-sec exposure time; the algorithm works well (mean RMS ~0.03 arcsec) with a slight dithering (~20-30 arcsec) between single exposures. The LBC pipeline provides the “runswarp.sh” script to perform this task, and it can also be used for HAWK-I images.

The runswarp.sh algorithm works as follows:

1. it scans the input list of files for bad astrometry (and eventually rejects the file):

check=`grep ASTBAD $header | $SED 's/\(ASTBAD\).*/\1/'`
if [ -z $check ]; then
  filenames="$filenames $filename"
  resampnames="$resampnames $resampname"
else
  echo "WARNING [bad astrometry]: rejecting '$filename'"
fi

then it performs several swarp steps:

2. create a global header ($OBJNAME$FILTER.head):

SWARP $filenames -IMAGEOUT_NAME $extheader -HEADER_ONLY Y

3. resample all the chips according to the created header:

SWARP $filenames -c swarp.conf -IMAGEOUT_NAME $outimage -WEIGHTOUT_NAME $outweight -SUBTRACT_BACK Y -BACK_SIZE 128 -BACK_FILTERSIZE 3 -COMBINE N -FSCALASTRO_TYPE NONE

4. coadd all the resampled chips to create the whole coadd mosaic ($OBJNAME$FILTER.fits) and its weight image ($OBJNAME$FILTER.wht.fits) using COMBINE_TYPE MEDIAN and/or (recommended) WEIGHTED:

SWARP $resampnames -c swarp.conf -IMAGEOUT_NAME $outimage -WEIGHTOUT_NAME $outweight -RESAMPLE N -SUBTRACT_BACK N -WEIGHT_SUFFIX .weight.fits

N.B.: the runswarp.sh of the HAWK-I pipeline performs the coadd twice, using -COMBINE_TYPE WEIGHTED (output image mosaic.fits) and also -COMBINE_TYPE MEDIAN (output image mosaic.med.fits).

The final step to produce a fully characterized observation is to produce an EXPOSURE map and an RMS map; to produce the exposure map it is possible to run the “expomap.sh” script. The expomap algorithm performs the following operations, starting a for cycle over all good astromlist.txt images:

1. it converts all the resampled weight images into images having the exposure time of the frame and the same NAXIS1/2 grid as the whole mosaic:

ln -s OBJNAME.head $HEADER
SWARP $mapimages -c swarp.conf -IMAGEOUT_NAME $singleimage -RESAMPLE N -SUBTRACT_BACK N -FSCALASTRO_TYPE NONE -COMBINE_TYPE WEIGHTED -INTERPOLATE Y
IMARITH -o $singleimage $singleimage '|' $singleimage
IMARITH -v -o $smapimage $smapimage

2. sum them into the whole map:


IMARITH -o $expomap $expomap '+' $smapimage
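A toy sketch of the accumulation: each frame contributes its exposure time over its footprint, and the per-frame maps are summed into the mosaic exposure map (illustrative 2x2 footprints and times):

```python
import numpy as np

# 1/0 footprints of two resampled frames on the mosaic grid:
footprints = [np.array([[1, 1], [1, 0]]), np.array([[0, 1], [1, 1]])]
exptimes = [100.0, 200.0]  # seconds

expomap = np.zeros((2, 2))
for fp, t in zip(footprints, exptimes):
    expomap += fp * t      # IMARITH ... '+' $smapimage, frame by frame
```

Pixels covered by both frames accumulate 300 s; pixels covered by one frame keep that frame's exposure time.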

To convert the WEIGHT image to an RMS image it is possible to use the “wht2rms” script in the LBC pipeline, which performs the following operation:

$\mathrm{WHT}_{ij} = \dfrac{1}{\mathrm{RMS}_{ij}^2}$, i.e. $\mathrm{RMS}_{ij} = \dfrac{1}{\sqrt{\mathrm{WHT}_{ij}}}$  (19.1)
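Eq. (19.1) can be verified numerically; a minimal sketch showing that the two maps are inverses of each other:

```python
import numpy as np

wht = np.array([[4.0, 16.0], [0.25, 1.0]])  # illustrative weight map
rms = 1.0 / np.sqrt(wht)                    # RMS_ij = 1 / sqrt(WHT_ij)
wht_back = 1.0 / rms**2                     # WHT_ij = 1 / RMS_ij^2
```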

N.B.: a weight image produced by the LBC/HAWK-I pipeline is a relative weight, rescaled to the FLUX of the reference frame in the astrometry, so an RMS map created from that weight image is not an absolute variance map but is also relative to the rms of the reference frame, and it does not take into account the pixel-to-pixel variance. To avoid this problem and to obtain an ABSOLUTE RMS map, a dedicated (somewhat time-consuming) algorithm named “buildrms.sh” was developed, which also generates an exposure map and an absolute weight map. The main steps of the procedure are summarized here.

The buildrms algorithm works as follows.

1) for each chip of the initial raw image (you must first extract them), compute the following sigma[i] variance map of the RAW image:

$\sigma_i = \left(\dfrac{\mathrm{RAW}_i}{\mathrm{gain}_i} + \mathrm{RON}_i^2\right) \times \dfrac{1.0}{\mathrm{FLAT}_i} \times \dfrac{1.0}{N_{\mathrm{dith}} \times \mathrm{EQUAL}_i}$  (19.2)

where:
RAW[i] is the i-th chip of the input RAW image, DEBIASED by the debias algorithm interpolating across the bias sections;
gain[i] is the gain of the i-th chip;
FLAT[i] is the i-th chip combination masterflat[i]*superflat[i], together with all the other multiplicative contributions applied in the reduction (i.e. the pixel-area correction, if applied only once);
Ndith is a keyword representing the number of RAW images coadded to produce the input RAW image (Ndith=1 for LBC);
EQUAL[i] is the chip-to-chip equalization factor computed by the equalize.sh script.

2) run WEIGHT2RMS to produce WHTRMS[i] from EQUALIZEWHT[i]

3) multiply:

$\mathrm{SIGMA}_i \times \mathrm{WHTRMS}_i = \mathrm{sigmaRMS}_i$

4) now use these RMS maps to resample the images with the swarp option -WEIGHT_TYPE MAP_RMS (-WEIGHT_SUFFIX .rms.fits) and produce RMS[i]

5) now sum all of them, $\mathrm{RMS}_{\mathrm{tot}} = \sum_i \mathrm{RMS}_i$, with a for cycle (using the imarith program)

6) the final coadd will be computed in this way:

$I = \sum_i \dfrac{\mathrm{Imm}_i \times \mathrm{RMS}_i}{\mathrm{RMS}_{\mathrm{tot}}}$  (19.3)

7) compute the final RMS as:

$\mathrm{RMS}_{\mathrm{final}} = \dfrac{1.0}{\sum_i \mathrm{RMS}_i}$  (19.4)
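A sketch of eq. (19.2) for a single chip, with purely illustrative values for gain, RON, FLAT, Ndith and EQUAL (they are assumptions, not real LBC/HAWK-I numbers):

```python
import numpy as np

raw = np.full((8, 8), 2000.0)   # debiased raw counts (ADU), uniform for simplicity
gain, ron = 2.0, 5.0            # gain[i] and RON[i] (illustrative)
flat = np.full((8, 8), 1.02)    # masterflat[i] * superflat[i]
ndith, equal = 1, 1.0           # Ndith and EQUAL[i]

sigma = (raw / gain + ron**2) * (1.0 / flat) * (1.0 / (ndith * equal))
```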


For LBC it is necessary to perform this step at the end of the usual reduction pipeline, while for IR detectors such as HAWK-I it is not necessary to perform it at this point but at the end of the complete reduction pipeline, where a new OBJECT mask will be used to better estimate the sky background (see the next reduction steps).

N.B.: for the LBC instruments the PIPELINE reduction ends here.

---------


CH20 – SKY-BACKGROUND ESTIMATION & INVERSE ASTROMETRY

To make a good sky-background estimation it is necessary to have the deepest possible object mask. To achieve this goal it is possible to obtain a good mask starting from the coadded frame, so that the mask is deeper, because the low-SNR objects in the coadd fall above the detection threshold. There are several reduction steps to perform:

1. on the mosaics produced by the previous reduction steps (i.e. mosaic.fits and mosaic.med.fits), run the “mksexcat_coadd.sh” script, which creates a good deep catalog of objects.
#the script produces catalogs, a background-subtracted image and a segmentation image, passing the following extra parameters:

-CHECKIMAGE_TYPE SEGMENTATION,-BACKGROUND -CHECKIMAGE_NAME MOS.SGM.fits,MOS.BKS.fits -BACK_SIZE 128 -BACK_FILTERSIZE 3

2. on this catalog run the “estimatefwhm.sh” script to estimate a good value of the FWHM and put it into the HEADER (using the “fitsedit” program):

FITSEDIT -x newFWHMvalue -t f -v -u FWHM MOS.fits FITSEDIT -x newFWHMvalue -t f -v -u FWHM MOS.MED.fits

3. run the “mksexcat_coadd.sh” script again to produce a new catalog on the mosaic image with the right value of the SEEING/FWHM in the image

4. create a weighted mask containing all the objects in the median coadded frame (using the “mkmaskedwht.sh” script, which performs the following operations):
#reduction steps of the mkmaskedwht.sh algorithm:
# a) subtract the MEDIAN_MOSAIC from the WEIGHTED_MOSAIC (using the “imarith” program):
IMARITH -o MOS.wht-med.fits -v MOS.fits '-' MOS.MED.fits
# b) transform the segmentation map of the MEDIAN MOSAIC into a flag image (1/0) (using the “sgm2flag” program):
SGM2FLAG -o MOS.MED.MSK.fits -v MOS.MED.SGM.fits
# c) transform the MASK image into a WEIGHT MAP (using the “flag2weight” program):
FLAG2WEIGHT -o MOS.MED.INVMSK.fits -v MOS.MED.MSK.fits
# d) multiply the difference image MOS.WHT-MED.fits by the INVMSK (using the “imarith” program):
IMARITH -o MOS.WHT-MED.MSK.fits MOS.WHT-MED.fits '*' MOS.MED.INVMSK.fits
# e) perform count statistics on the produced image (using the “imstat” program):
IMSTAT -S '[500px X/Y sector]' MOS.WHT-MED.MSK.fits
# f) run the “mk.wht-med.wht.py” script to produce an image where pixels above a threshold (1.0) are set to “1” and to “0” elsewhere, and produce the respective weight image MOS.WHT-MED.WHT.fits:
PYTHON2.4 mk.wht-med.wht.py MOS.WHT-MED.MSK.fits 1.0
# g) multiply the original WHT image by the new WHT-MED.WHT image (using the “imarith” program):


IMARITH -o MOS.MSK.WHT.fits -v MOS.WHT.fits '*' MOS.WHT-MED.WHT.fits
# h) now take the MOS.SGM.fits file and transform it into a MSK image (using the “sgm2flag” program):
SGM2FLAG -o MOS.MSK.fits -v MOS.SGM.fits
# i) dilate the newly produced mask (using the “dilate” program):
DILATE -o tmp.MOS.MSK.DIL.fits -v MOS.MSK.fits
#the dilate algorithm works as follows:
# - define a binary data structure which expands a value by a defined dilation factor
# - use this structure to convolve the input data with mode="constant"
# - redirect the output into a matrix, setting to "1" the output data greater than "0" and to "0" elsewhere
# j) transform the int16 MASK image into a REAL float32 image (using the “imarith” program):
IMARITH -o MOS.MSK.DIL.fits -v tmp.MOS.MSK.DIL.fits '*' 1.0
# k) extract the 6th and 7th columns of the input mosaic catalog (MOS.cat) and redirect them to a new XY catalog (using the “awk” program):
AWK '!/#/ {print $6,$7}' MOS.cat > MOS.XY.cat

5. run the “mkinvcat.sh” script, which performs the inverse transformation on the barycenters of the catalog. This procedure makes use of the “invtrans” program included in the AstromC package.
#the invcat algorithm performs the following operations:
# a) scan the astromlist.txt file containing the good astrometrized frames used to create the mosaic
# b) run the “invtrans” program to convert the RA/DEC of the sources into X/Y coordinates according to the provided pre-astrometry single-chip header
# c) concatenate and paste the X-Y coordinates of the objects into a new catalog (MOS.XY.INV.cat)

6. run the “mkinvtrans.sh” script, which performs the inverse transformation to obtain updated masks on the pre-astrometrized image grid scale.
#the mkinvtrans.sh algorithm starts a loop on all the images in the list;
# for each image it writes the mkinvtrans.cl IRAF executable file and then executes it with the following command:
`cl < mkinvtrans.cl`
#the mkinvtrans.cl IRAF script executes the following operations:
# a) run the IRAF geomap tool with a command line like the following:
geomap(input='OBJECT_Nframe_Nchip.inv.txt', database='OBJECT_Nframe_Nchip.inv.db', result='geomap.log', xmin=1.0, xmax=2048, ymin=1.0, ymax=2048, xxorder=5, yyorder=5, xyorder=5, yxorder=5, xxterms='full', yxterm='full', function='chebyshev', interactive-)
# b) run the IRAF geotran tool with a command line like the following:
geotran(input='OBJECT_Nframe_Nchip.msk.dil.fits', output='tmp.OBJECT_Nframe_Nchip.inv.msk.fits', database='OBJECT_Nframe_Nchip.inv.db', transform='OBJECT_Nframe_Nchip.inv.txt', nxblock=2048, nyblock=2048, interpolant='drizzle', boundary='constant', constant=0.0, fluxconserve+)


# c) execute the following expression with the IRAF imexpr tool:
imexpr 'a < 0.990 ? 0 : 1' OBJECT_Nframe_Nchip.inv.msk.fits a='tmp.OBJECT_Nframe_Nchip.inv.msk.fits' outtype='short'

7. now it is possible to execute the “mkmef.sh” script to recreate MEF files starting from the single-extension FITS files (e.g. OBJECT_Nframe_Nchip.inv.msk.fits)
# the mkmef.sh algorithm executes the following operations:
# a) get a fake header of the same width/height ($primaryHDU)
# b) run the “fxcopy” program to create a MEF file starting from the single chips:

FXCOPY -v -o file.pr.msk2.fits $primaryHDU OBJECT_Nframe_Nchip.inv.msk.fits

8. FROM THIS STEP ON WE WILL USE THE PREREDUCTION FILES (i.e. file[i].pr.fits, file[i].pr.wht.fits, file[i].pr.bpm.fits, file[i].pr.msk.fits, file.pr.obm.fits) and the new MASKS (file[i].pr.msk2.fits); these new masks should overwrite the older ones (file[i].pr.msk.fits).

9. RE-START the PIPELINE from chapter n.15, before the superflat creation and before the ghost detection: start from the “mkoblist_gh.sh” script.

10. All other pipeline steps should be run as usual; you can use the “patchskysub.sh” script to subtract the sky and replace bad pixels with neighbouring background values.

11. The final coadd should be performed with IMARITH, after having computed the global RMS variance map and all the RMS[i] maps as done in chapter n.18.
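The weighted sum of step 11 can be sketched with numpy. The assumption here, inverse-variance weights w[i] = 1/RMS[i]², is the standard choice for such a coadd; frames and RMS maps are tiny invented arrays:

```python
import numpy as np

# Three resampled frames of the same sky patch, with per-pixel RMS maps.
frames = np.stack([np.full((2, 2), 10.0),
                   np.full((2, 2), 12.0),
                   np.full((2, 2), 11.0)])
rms = np.stack([np.full((2, 2), 1.0),
                np.full((2, 2), 2.0),
                np.full((2, 2), 1.0)])

w = 1.0 / rms**2                           # inverse-variance weights
coadd = (w * frames).sum(axis=0) / w.sum(axis=0)
coadd_rms = 1.0 / np.sqrt(w.sum(axis=0))   # RMS map of the coadd
print(round(float(coadd[0, 0]), 3))        # 10.667
```

The noisier second frame (RMS = 2) contributes only a quarter of the weight of the others, so the result sits closer to 10 and 11 than a plain mean would.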

----------



CH21 – MORE on BACKGROUND

Background counts in a scientific image depend on different contributors:

● Scattered Light (SL), unfocussed light, also called straylight. In LBC this component is present; during the Blue channel commissioning it was relevant and was caused by the electronic connections made of optical fibers, which flare depending on the filter used.
● Fringing (F), caused by interference of light within the chip array (it is the high-frequency pattern visible in the flat-fielded images); it is quite constant during the night.
● Mirror Pupils (P), caused by internal reflections in the optics.
● Sky patterns (S), depending on the OH radicals in the sky emitting light.
● Ghosts (G), electronic and optical ghosts.

So, calling 's' the superflat, the total background can be written in terms of its contributions as:

    B_tot = s × B_(F+S+G+P+SL)    (21.1)

For HAWK-I, scattered light, pupils and ghosts are negligible and (21.1) becomes:

    B_tot = s × B_(F+S)    (21.2)
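The split in (21.2) between the multiplicative superflat and the additive fringe/sky terms fixes the order in which the corrections must be applied. A one-pixel numeric sketch (all values invented) of what happens when the division is done before the additive subtraction:

```python
# One illustrative pixel: true source+sky flux, superflat response 's',
# and an additive fringe term (all numbers are invented).
flux, s, fringe = 100.0, 1.05, 20.0
counts = s * flux + fringe                # what the detector records

# Correct order: remove the additive term first, then divide by the flat.
right = (counts - fringe) / s             # ~ 100.0

# Wrong order: the division also scales the fringe, so subtracting the
# nominal fringe afterwards over-corrects and biases the flux (low here,
# since s > 1; the sign of the bias follows the local flat value).
wrong = counts / s - fringe
print(round(right, 3), round(wrong, 3))
```

On real frames the bias varies pixel by pixel with the flat, which is exactly why dividing first distorts the background estimate.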

To best estimate the background it is necessary to subtract the additive contributions first and only then remove the multiplicative ones; otherwise the background counts will be overestimated, causing an underestimate of the source flux of up to 5%. Sometimes it is not possible to subtract the additive contributions first, so one divides directly by the superflat, and then:

    ⟨B_tot_i⟩ = B_i − ⟨s_i B_i⟩ = s_i F_i + s_i S_i

and, since F_i and S_i do not depend on B_i:

    ⟨B_i⟩ = ⟨s_i F_i⟩ + s_i ⟨S⟩    (21.3)

where ⟨S⟩ is the low-frequency background component (time scale ~minutes) and ⟨F⟩ is quite constant and depends on the detector reticulum (spatial scale ~arcsec); the highest-frequency background component is the electronic noise, which can be filtered out and rejected using a Gaussian or wavelet convolution filter.

----------
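The Gaussian-convolution rejection of the high-frequency electronic component can be sketched in one dimension with numpy (synthetic data; sigma and kernel size are illustrative choices):

```python
import numpy as np

# Low-frequency sky gradient plus high-frequency "electronic" noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 512)
sky = 100.0 + 5.0 * x                         # slowly varying background
signal = sky + rng.normal(0.0, 2.0, x.size)   # + high-frequency noise

# Normalized Gaussian convolution kernel (sigma in pixels).
sigma = 8.0
t = np.arange(-32, 33)
kernel = np.exp(-0.5 * (t / sigma) ** 2)
kernel /= kernel.sum()
smoothed = np.convolve(signal, kernel, mode='same')

# Away from the edges the filter suppresses the noise while leaving the
# low-frequency sky gradient essentially untouched.
resid = smoothed[64:-64] - sky[64:-64]
print(round(float(np.abs(resid).mean()), 2))
```

A symmetric unit-sum kernel reproduces a linear gradient exactly, so only the noise is attenuated; a wavelet filter achieves a similar separation by thresholding the small-scale coefficients instead.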



CH22 – BASIC REDUCTION STEPs & PIPELINEs

The software available in the CCD package takes care of the relative calibrations of the pixel intensities, of averaging, and of cleaning frames. Cleaning in this context means removal of the instrumental signature and other defects from the frames.


Illustration 14: General LBC pipeline workflow (brown labels and arrows refer to the IR reduction steps).


A full reduction of CCD data involves several steps, depending on the reduction strategy, which in turn is related to the scientific use of the final images and to the particular characteristics of the target set. It is possible to identify some standard procedures and others that, even if optional, are sometimes recommended (see the illustration for the global pipeline workflow).

Basic Deep Fields Reduction Pipeline:

● apply the crosstalk correction (for optical CCDs) to equalize images chip by chip;
● trim the frames of the overscan strip and other irrelevant columns and/or rows;
● fit and subtract the readout bias given by the prescan/overscan regions;
● create the masterbias, masterdark (for IR detectors) and masterflat:
   ● stack the bias frames and subtract the average bias frame from the science images;
   ● (for IR detectors) stack the dark frames, remove the defects from the average dark frame, scale it and subtract it from the science images; N.B. for IR detectors the dark-current contribution may be subtracted from the individual frames, without combining them into a masterdark file;
   ● stack the flat frames, remove the defects from the average flat frame, prepare the final flat (subtract the dark/bias and normalize; possibly apply the illumination correction) and finally divide the science images by the final flat;
● run the prereduce program to apply the calibration frames to the raw images;
● fix the BPM in all output frames;
● update all BPMs with the relative CRMASK;
● update all BPMs with the relative TRAILMASK, if necessary (take a look for trails);
● convert the final BPM into a relative WEIGHT image;
● apply the sky-concentration factor to normalize the pixel area if the optical distortions are important (LBC), otherwise skip this step;
● correct for the GHOST patterns;
● create and apply the SUPERFLAT;
● correct for the FRINGE pattern (and consequently subtract the MEAN SKY);
● equalize the chip-to-chip levels;
● subtract the background;

N.B.: for IR detectors an alternative, though less usual, approach is to subtract the sky background before applying the flat-field and dark corrections, if the observation is performed in “jittered mode”:



    FINALFrame = (RAWFrame − SKYFrame) / (FLATFrame − DARKFrame)    (22.1)
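Equation (22.1) can be sketched with numpy. Estimating SKYFrame as the per-pixel median of the jittered sequence is one common choice; all frames and constants here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# Five jittered raw frames of mostly empty sky: real sources fall on
# different pixels in each frame, so a per-pixel median over the
# sequence estimates SKYFrame (synthetic data).
raws = rng.normal(1000.0, 5.0, (5, 16, 16))
skyframe = np.median(raws, axis=0)

darkframe = np.full((16, 16), 50.0)
flatframe = np.full((16, 16), 51.2)   # raw flat, still containing dark counts

# eq. (22.1): FINAL = (RAW - SKY) / (FLAT - DARK)
final = (raws[0] - skyframe) / (flatframe - darkframe)
print(final.shape)   # (16, 16)
```

After the sky subtraction the frame is centred on zero, so the subsequent division only rescales the residual source signal by the pixel response.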

● correct the photometry (inverse sky-concentration correction, if applied before);
● extract the chips from the MEF files and perform the astrometric header computation;
● resample the images to the same pixelscale and to a global header;
● coadd all science frames (WEIGHTED & MEDIAN weight_type);
● only FOR IR detectors:
   ○ make a deep catalog on the mosaic with the corresponding masks;
   ○ perform the inverse transformation to obtain the X/Y barycenters from the deep RA/DEC;
   ○ dilate and compute the single FITS frames as deep masks;
   ○ make MEFs from those deep masks;
   ○ start the pipeline again from the ghost creation with the newer, fainter OBJECT MASKS;
   ○ execute all intermediate reduction steps;
   ○ create a GlobalRMS variance map;
   ○ compute the last mosaic as the SUM of the resampled images, each weighted by its absolute RMS map[i].

Some of these steps are optional and depend on the kind of data you have taken; i.e. for specific targets some reduction steps may not be appropriate, as in the case of “pretty picture” objects.

Timing and pipeline step statistics for an input of 5 Uspec RAW lbcb images ((*) calibration master image creation, not included in the total time):

● make masterflat ~565 sec / make masterbias ~120 sec (*)
● crosstalk correction + standard prereduction ~75 sec
● area correction ~50 sec
● object masking ~115 sec
● make and apply superflat ~475 sec
● sky evaluation and subtraction ~405 sec
● chip equalization ~70 sec
● cosmic-ray mask ~1465 sec (~850 sec for n_iter==1)
● trail mask ~460 sec
● maskall (if (!crmask) && (!trailmask)) ~1000 sec
● background subtraction (sextractor ~135 sec / constant ~70 sec)
● photometry correction ~15 sec
● chip extraction ~20 sec
● astrometry minimization ~110 sec
● resampling & coadd ~155 sec
● physical build of RMS/Weight and ExpoMap ~1010 sec



total elapsed time ~4500 sec ~1.25 hours => ~25 sec to process each input Mpixel => ~90 sec to produce each output Mpixel
input file sizes: ~400MB (LBC RAW science frames) + ~600MB (MASTER CALIBS)
output files: ~1.2GB final (and ~42GB of intermediate files to be removed)

----------



Bibliographic References

• “Pixel Response Effects on CCD Camera Gain Calibration”, Mirametrics, link: http://www.mirametrics.com/tech_note_ccdgain.htm
• “LBC Blue Channel Report on the Commissioning”, V. Testa.
• “Frames Combination Techniques for Ultra High-Contrast Imaging”, J.C. Carson.
• “Pipeline Documentation v. 1.1”, E.R. Deul.
• “LBC@LBT camera”, link: http://lbc.oa-roma.inaf.it.
• “IRAF.NOAO.edu tutorials”, link: http://iraf.noao.edu/tutorials/
• Astromatic/Terapix software documentation: http://www.astromatic.net
• “Mining Pixels: The Extraction and Classification of Astronomical Sources”, E. Bertin.
• “Toward a More Perfect Flat-Field”, Noao.edu, link: http://www.noao.edu/noao/noaodeep/ReductionOpt/Skyflat.html
