by LDO - Updated January 5th, 2005
Manually processing thousands of images is impractical. For this reason we have written a set of automated spica scripts to process many images quickly and to provide homogeneous results.
Once an image has been downloaded from the CADC, a script verifies that it has not been corrupted during transfer and that it contains all the necessary image keywords. If so, it is copied to one of our cluster nodes. On each node, we try to group images by runid and filter in order to minimise the (slow) data transfers between nodes during image stacking.
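As a rough illustration, the header verification can be sketched as follows. This is our own sketch, not the actual script: the keyword names and the dict-based header are assumptions, while the real tool operates on MegaCAM FITS files.

```python
# Illustrative check for required header keywords after transfer.
# REQUIRED_KEYWORDS and the dict representation are assumptions of this sketch.
REQUIRED_KEYWORDS = ["RUNID", "FILTER", "EXPTIME", "CRVAL1", "CRVAL2"]

def missing_keywords(header):
    """Return the required keywords absent from a parsed header mapping."""
    return [key for key in REQUIRED_KEYWORDS if key not in header]

def is_acceptable(header):
    """An image is copied to a cluster node only if nothing is missing."""
    return not missing_keywords(header)
```

An image failing this check would be rejected before any node ever sees it.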
Another tool sorts images into directories based on their filter and runid. Finally, the image headers are loaded into our database.
Next, image quality is evaluated using the qualityFITS tool. This tool provides its output as a web page, and the results are stored in our database to permit image selection later on.
Next, the core processing steps take place, as outlined below. This generally involves computing an astrometric and photometric solution for all input images and then producing a single combined stack.
Several processing levels exist, from a one-step image-by-image mode to a more expert level using configuration files and image grouping. All of this processing is tracked using a database (dbterapix).
Logfiles can be checked during the reduction process and detailed reports are also available.
Reduction
Spica_auto is a Perl script which writes shell command lines into a file and writes/updates the database for storage/history/status information. A Perl daemon scans the produced shell files and then runs them.
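One pass of such a daemon loop can be sketched roughly as follows. The real implementation is a Perl daemon; this Python sketch, and the .sh/.done naming convention it uses to mark finished jobs, are our own assumptions.

```python
import os
import subprocess

def run_pending_jobs(job_dir):
    """One pass of the daemon loop: run each queued shell file, then mark it
    done so it is not picked up again. Naming scheme is an assumption."""
    ran = []
    for name in sorted(os.listdir(job_dir)):
        if not name.endswith(".sh"):
            continue
        path = os.path.join(job_dir, name)
        subprocess.run(["sh", path], check=True)   # execute the queued commands
        os.rename(path, path[:-3] + ".done")       # mark the job as finished
        ran.append(name)
    return ran
```

A real daemon would wrap this in a sleep loop and record the job status in the database as well.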
Single Image Mode
This mode is used to produce an astrometric and photometric calibration for each image. In this case we use as many images as possible to get an accurate astrometric solution. The output files are written to the outgoing fibre-channel array for transfer to the CADC at a later date.
Survey Mode
We extract the object name from the FITS header and group all images relevant to this object together in order to coadd them into a large frame.
Survey Selected Area Mode
We can also define an angular radius within which all images of a selected survey will be processed together.
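A minimal sketch of such a radius-based selection is given below. The haversine separation formula is standard, but the image dictionaries and field names are our own illustration, not the spica code.

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    given in degrees (standard haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def select_in_radius(images, centre_ra, centre_dec, radius_deg):
    """Keep the images whose pointing falls within the selected radius."""
    return [img for img in images
            if angular_separation(img["ra"], img["dec"],
                                  centre_ra, centre_dec) <= radius_deg]
```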
Input Data
Data must first be processed by the QualityFITS quality-assessment tool; bad data are rejected. Additionally, QualityFITS produces a weight-map file and an LDAC catalog for each MegaCAM image, both of which are essential for the following processing steps.
According to the criteria defined for processing of the survey, the Perl script selects which images to process for a specific mode.
Processing
The main processing step consists of the generation of an astrometric and photometric solution for an ensemble of input images, followed by the application of these solutions to the images to produce a combined stack.
Output Data
The final coadded image is reprocessed by qualityFITS to provide information on the final stack. This coadded image, together with its weight map and catalog, is sent to our output RAID disk and then transferred by Snooppix to the CADC for user delivery. An XML file describing how the image was built, as well as metadata (PNG, PS) files, are also provided.
Example data processing steps
New 2004A data is available at CADC. Snooppix is run and it transfers these images to our incoming fibre channel array. Then a set of scripts checks each incoming image and rejects any which have been corrupted during transfer or have incorrect header keywords (missing world-coordinate information for example).
Incoming data is distributed across the TERAPIX nodes
We can select from a web interface where these images will be transferred to on our cluster. Once the data have been distributed to the different nodes, a Perl script uncompresses the images and sorts them by runid and filter. Additionally, the master flats or darks are transferred if needed. Finally, the image headers are loaded into our database and the qualityFITS tool is run.
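The runid/filter sorting step could look roughly like this. The directory layout and header keys are assumptions of this sketch; the real tool is a Perl script.

```python
import os
import shutil

def sort_image(path, header, root):
    """Move an uncompressed image into <root>/<runid>/<filter>/ so that frames
    sharing a run and filter end up together. Layout is an assumption."""
    dest_dir = os.path.join(root, header["RUNID"], header["FILTER"])
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(path))
    shutil.move(path, dest)
    return dest
```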
Quality control with qualityfits
A web interface is available to manually check and grade each image, and notes can be added based on image properties or instrument problems.
Stacking with spica
Spica is used to select data from the database. Rules to select images depend on seeing, skyprobe value, observing date, runid, filter, exposure time and similar quantities.
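An illustrative version of such a selection rule is shown below; the thresholds, field names and default values are invented for this sketch, and the real rules live in the database queries issued by spica.

```python
def select_images(images, max_seeing=1.0, min_exptime=60.0,
                  filters=("r.MP9601",)):
    """Keep images passing illustrative seeing, exposure-time and filter
    cuts. All thresholds here are assumptions, not the production values."""
    return [img for img in images
            if img["seeing"] <= max_seeing
            and img["exptime"] >= min_exptime
            and img["filter"] in filters]
```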
The first step is to make an astrometric and photometric calibration using either astrometrix or scamp. As these packages work differently, the image selection differs (for example, astrometrix requires exploded MEF files). Diagnostic plots, in PNG or PS format, are available on a web page to check each astrometric solution. Headers are then moved to the swarp directory, where the images are combined with each image weighted using the weight maps generated by qualityFITS. Once the stacking is completed, thumbnail images are generated for the web pages, and sextractor is used to extract a catalog from the stacked image. This catalog is loaded into the database, and qualityFITS is run again on the stacked image.
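The heart of the weighted combination is a per-pixel weighted mean, which can be sketched as follows. Plain Python lists stand in for the actual FITS pixel arrays handled by swarp, and this is a conceptual illustration rather than swarp's implementation.

```python
def weighted_coadd(images, weights):
    """Per-pixel weighted mean of a list of 2-D images, with a parallel list
    of 2-D weight maps. A zero total weight leaves the output pixel at 0."""
    ny, nx = len(images[0]), len(images[0][0])
    stack = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            wsum = sum(w[y][x] for w in weights)
            if wsum > 0:
                stack[y][x] = sum(img[y][x] * w[y][x]
                                  for img, w in zip(images, weights)) / wsum
    return stack
```

A pixel masked (weight zero) in one image simply does not contribute to the stack at that position, which is why the qualityFITS weight maps are essential inputs.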
All output is copied to the outgoing fibre-channel array for transfer to the CADC at a later stage.