I'd really like to understand how exposure time relates to counts and to the bit depth of the cameras, since this is all relevant to getting good images for processing. So here are a few questions:
- The cameras seem to capture images with 8-bit depth (256 levels) per color channel. Is this correct?
- The output at the end seems to be 14-bit (16384 levels). How is this conversion done? And why is the final bit depth neither 256 levels (the maximum depth of each pixel) nor 256x256x256 (the number of combinations you can make from the three color channels)?
- I've been told there is a bias of roughly 2000 counts. When I look at a saturated star in the images, the maximum count I see is 13583. Is this because the 2000-count bias has already been subtracted? Or, to be exact, is the bias 16384 - 13583 = 2801?
- If this is true, then if we set the exposure time to something short (to minimize dark current) and kept the lids on, I should get signals that go down to near 0. That would mean the lowest level is 0 and the highest level is 13583, with the ~2000 counts already removed from that value. Correct?
- Or is it that the camera for some reason caps you at 13583 counts (since that's the peak count for many stars in an image), but the 2000-count bias is inside this value and has not been removed (i.e. with no signal and a short exposure the minimum would be around 2000, not 0)?
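To make the arithmetic behind the two scenarios concrete, here is a small sketch. The numbers are just what I read off my images and are assumptions, not confirmed camera specs:

```python
# Two bias hypotheses I'm trying to distinguish.
# All values below are assumed from my own images, not documented specs.
FULL_SCALE_14BIT = 2**14   # 16384 levels for a 14-bit output
SATURATED_PEAK = 13583     # max count I see on a saturated star

# If the bias is exactly the missing headroom at the top:
bias = FULL_SCALE_14BIT - SATURATED_PEAK
print(bias)  # 2801 -- close to the "2000 ish" figure I was told

# Hypothesis A: bias already removed by the camera/pipeline.
#   -> short-exposure dark frames should read near 0
#   -> usable range is 0 .. 13583
# Hypothesis B: bias still included in the counts.
#   -> short-exposure dark frames should read near ~2000
#   -> the 13583 ceiling comes from something else
```

A quick way to check would be hypothesis A vs. B directly: take a short dark exposure and look at the minimum counts.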
I'm trying to understand the minimum, maximum, and range of the counts, and whether the bias is included in them or has been removed. Thanks!