How to reduce SHARC-II data

Like most (if not all!) sub-mm cameras, SHARC-II produces data that are not trivial to analyze. Each 10-minute file is roughly 30 MB in size, and many targets (particularly faint point sources) require 3 hours of integration time (that's half a GIGAbyte of data). In addition, estimating the sky background is difficult and requires care on the part of the user. The purpose of this web page is to provide a ROUGH outline of how to go from raw data to a calibrated map. The rest of the documentation off the main navigation bar to the left provides more detailed notes.

Data reduction takes place in three steps:

  1. Preliminary
  2. Mapmaking
  3. Calibration

1. Preliminary steps

At this stage, you want to use your logs to identify what scan numbers correspond to which sources and calibrators. If you don't have them handy, you can generate logs using one of the programs available on our software page. This program takes as input a range of scan numbers, then reads the headers to produce a useful log.
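The log program itself lives on the software page, but the idea behind it can be sketched in a few lines. In this sketch the header values are passed in as plain dictionaries, and the keyword names (OBJECT, TAU225, FAZO, FZAO) are assumptions about what the raw scan headers contain, not the program's actual code:

```python
# A minimal log-builder sketch. The real program reads the headers from
# the raw scan files; here we assume each header is a plain dictionary
# with OBJECT, TAU225, FAZO and FZAO entries (hypothetical keyword names).
def format_log_line(scan, header):
    """Format one log line for a single scan number."""
    return (f"{scan:6d}  {header.get('OBJECT', '?'):12s}"
            f"  tau225={header.get('TAU225', '?')}"
            f"  FAZO={header.get('FAZO', '?')}  FZAO={header.get('FZAO', '?')}")

def make_log(headers_by_scan):
    """headers_by_scan: {scan_number: header_dict}; returns sorted log lines."""
    return [format_log_line(s, h) for s, h in sorted(headers_by_scan.items())]
```

Scanning such a log by eye quickly shows you which scan ranges belong to which source, and whether the pointing offsets changed during the night.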

The logs are useful because they keep track of the opacity (tau) and pointing offsets (FAZO, FZAO). If significant pointing changes were made throughout the night, you will have to account for that in the data reduction (by supplying shifts). The tau is useful, but we generally use polynomial fits to the tau data from a whole night.

Why are the polynomial fits useful? Keep in mind that the scaling factor between tau(225 GHz) and tau(350 micron) is about 25, so any uncertainty in the tau is amplified by that large factor. Measurement error on the 225 GHz tipper is not insignificant, and hence a polynomial fit to the tau readings over the course of a night gives a better estimate of the atmospheric opacity. This is also standard procedure with SCUBA at the JCMT.
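As a sketch of the fitting step: collect the 225 GHz tipper readings for the night, fit a low-order polynomial in time, and evaluate the fit at the time of each scan before scaling by the ~25 factor quoted above. The tau values and times below are purely illustrative:

```python
import numpy as np

# Illustrative tipper readings over a night (times in UT hours).
t = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
tau225 = np.array([0.060, 0.058, 0.055, 0.057, 0.061, 0.065])

# A quadratic fit smooths out tipper measurement noise.
coeffs = np.polyfit(t, tau225, 2)

# Evaluate the smoothed tau at the time of a given scan, then scale
# by ~25 (the approximate tau(350um)/tau(225GHz) ratio from the text).
tau225_fit = np.polyval(coeffs, 4.5)
tau350 = 25.0 * tau225_fit
```

Using the fitted value rather than the nearest single tipper reading avoids propagating a noisy individual measurement through the large scaling factor.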

Finally, you want to run "sharcgap" on your files to see if any suffer from timing gaps. These occurred when buffers were overwritten with data, thus losing a small chunk (on the order of a few seconds) of data. Some files with gaps showed no ill effects, though others suffered from a de-synchronization of the antenna (pointing) data and the science data. This resulted in the signals being associated with the wrong place on the sky, rendering the data difficult (if not impossible) to analyze. These gaps used to show up in data taken before 2004, but haven't since. Still, it is worth checking. CRUSH also checks for gaps in the files.
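The basic idea of a gap check is simple, even though this is not sharcgap's actual code: compare consecutive sample timestamps against the nominal sampling interval and flag any jump well beyond it. A minimal sketch:

```python
# A minimal timing-gap check (in the spirit of sharcgap, not its code):
# flag any jump between consecutive timestamps that is much larger than
# the nominal sampling interval.
def find_gaps(timestamps, nominal_dt, tolerance=1.5):
    """Return a list of (index, gap_seconds) for each suspicious jump."""
    gaps = []
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]
        if dt > tolerance * nominal_dt:
            gaps.append((i, dt))
    return gaps
```

Any file that turns up gaps this way deserves a closer look before it goes into your final map.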

2. Mapmaking

For almost all applications, you will be using CRUSH, the primary SHARC-II data reduction package. There is also a package called SHARCSOLVE, but its use is reserved almost exclusively for observations done in CHOPPING mode (CRUSH also supports such data). CRUSH is public, but you should contact one of us if you want to use SHARCSOLVE for chopping observations.

In both cases, the software takes a list of scan numbers (as well as configuration options) and produces a FITS file containing four images: SIGNAL, NOISE, WEIGHT, and SIGNAL/NOISE.
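The four planes are not independent. Under the common convention that WEIGHT = 1/NOISE^2 and that the fourth plane is simply the pixel-by-pixel ratio SIGNAL/NOISE (check the CRUSH documentation for the exact convention in your version), their relationship looks like this:

```python
import numpy as np

# Toy 2x2 "map" illustrating how the four planes relate, assuming the
# common convention WEIGHT = 1/NOISE**2 and S/N = SIGNAL/NOISE.
signal = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
noise = np.array([[0.5, 0.5],
                  [1.0, 1.0]])

weight = 1.0 / noise**2   # inverse-variance weight plane
snr = signal / noise      # signal-to-noise plane
```

In practice you would read the planes out of the FITS file (e.g. with astropy.io.fits) rather than construct them, but the arithmetic between them is the same.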

3. Calibration

This is probably the most important step, and it is the source of many of the questions that the SHARC-II group receives. In general, you want to reduce your calibration scans and then apply an appropriate scaling factor to correct the science maps.

How you do this depends on whether you will use PSF photometry or aperture photometry. Whatever method you use on your science frames must also be used on your calibration frames. For simplicity, I will assume you use a fixed aperture for the rest of this discussion. Take your calibration frame and measure the flux within your chosen aperture. Call this the "INSTRUMENTAL FLUX". Now, look up the true 350 micron flux of the calibrator. Many calibrators have a flux that is constant in time (such as Arp220), but objects in the solar system do not. We have provided a recipe for calculating the true 350 micron flux of such objects on the calibration part of the web page.

You can use the instrumental flux and true flux to derive a scaling factor. This can then be applied to the science frame using software from our utilities page (a simple program that reads a map and multiplies it by the scaling factor).
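The scaling step itself is just a ratio and a multiplication. A sketch, with illustrative numbers (the utility mentioned above does the same thing to the FITS map directly):

```python
import numpy as np

# Calibration sketch: the scale factor is the calibrator's true flux
# divided by its instrumental (aperture) flux; applying it converts the
# science map from instrumental units to Jy. All values are illustrative.
def scale_factor(true_flux_jy, instrumental_flux):
    return true_flux_jy / instrumental_flux

science_map = np.array([[0.8, 1.2],
                        [2.0, 0.4]])          # instrumental units

factor = scale_factor(true_flux_jy=10.2,      # true 350um flux of calibrator
                      instrumental_flux=5.1)  # measured in the same aperture
calibrated_map = factor * science_map         # now in Jy
```

The same aperture must then be used on the calibrated science map, so that the aperture correction cancels between calibrator and target.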

Then you can apply the aperture to the science frame to derive calibrated fluxes of your targets.
