Webcams and Digital Imagers

Interesting exploration into imaging – Ian Anderson

For those of you who have not read my last article, my policy on twit mail stands.

Before you ask, I will answer your first question: “Why is there an astronomy
article on an overclocking website?”

The advancement of astronomy requires
science to find ever-fainter scraps of light and probe deeper into what was
once considered the purest of darkness. This means the advancement of
technology is the limiting factor of new research. This is where overclockers
come in.

Most astronomers would rather watch the stars than fiddle with a
camera, which is exactly where tinkerers come in; if that argument does not
convince you, consider this an off-topic general interest article. Most of
you have little need for a faster computer (aside from frame rates and
all-important bragging rights), so I want to bring to your attention a
technology undergoing its formative period:

Webcams and Digital Imagers.

These little
consumer devices hold real potential in science if they are modified for the
task and given (or written) proper software. I am looking to start a discussion
amongst a group of people who do not work with the astronomical community
and would not normally think to apply their skills to it.

I would like to know your thoughts on this topic.

It should be noted that webcams cannot, even in
theory, match the performance of a purpose-built, low-light CCD detector, but
there are many things you can do with a webcam that you could not hope to do
with a full-size sensor. I should also note that consumer digital still and
video cameras will, in most cases, work just as well as (if not better than)
webcams if frame rate is not a concern.

Webcams are very good at collecting a small amount of information (300,000
pixels × 3 colours) dozens of times a second, then compressing it into a
manageable size. Almost all webcam sensors have 5.7 micron pixels and a
3.65mm by 2.74mm imaging area. This limits you to one object at a time, for
all practical purposes.
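
As a rough illustration of why, here is how that sensor size translates into sky coverage (the 900mm focal length is a hypothetical example, not a figure from this article):

```python
import math

def field_of_view_arcmin(sensor_mm: float, focal_length_mm: float) -> float:
    """Angular field covered by one sensor dimension at a given focal length."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_length_mm))) * 60

# A 3.65mm x 2.74mm webcam sensor behind a 900mm focal length telescope:
print(field_of_view_arcmin(3.65, 900))  # ~13.9 arcmin wide
print(field_of_view_arcmin(2.74, 900))  # ~10.5 arcmin tall
```

The full moon is about 31 arcminutes across, so even it would not fit in a single frame.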

At this point in astronomy, there are a number of high-resolution, all-sky
surveys in progress or completed (2MASS, SDSS, 2DF, CFHTLS, etc.). These are
designed to give astronomers a starting point on which to base future
research with the next generation of telescopes. The resolution of these
surveys is limited by the atmosphere, meaning there is very little
ground-based survey work left to do.

This leaves you with two options:

  1. Watch for changes and moving objects, or
  2. Find a way around the limitations of the atmosphere.

Either way you need three devices:

  • A telescope
  • A camera capable of high frame rates
  • A whole lot of extra CPU power lying around

Most
of you have two out of three and, for most of these applications, you can get
away with spending less on a telescope than you did on your last motherboard
and CPU upgrade (double if you want an out of the box solution).

On to what you can do with a webcam:

For most of these uses, there is little in
the way of modification needed. In fact, most of these tasks are a
matter of processing the information and can be done with a little
programming. For many, there are already programs that do the same thing but
could use improvements in automation and/or user-friendliness (not to
mention cost). If you want to do serious modifications, a good place to start
would be HERE, where you will find out how to gain full control of exposure time and upgrade the sensor.

What you must remember about astronomy (and photography) is that it is not
about light – there is plenty of light in the world. Imaging is a question of
cutting out the light you don’t want, then compressing the rest into the
smallest possible area.

The first upgrade you can make to any imaging device is the lens. The quality
of the lens limits the performance of the whole system, so make sure it is
properly focused.

Second is light baffling. The purpose of baffling is to keep stray light from
reaching the sensor: paint the inside of the camera flat black, and place a
diaphragm in front of the sensor which only allows light from the field of
view to reach it.

Dark frame subtraction is based on the fact that noise builds on an electronic
sensor at a predictable rate at a given temperature. Take a
photograph of nothing (i.e. with the lens cap on) for the same length of time
at the same temperature, then subtract it from the original image. Doing this
will leave you with a clean, smooth-looking image.
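
A minimal sketch of the subtraction itself, assuming 8-bit frames already loaded as numpy arrays:

```python
import numpy as np

def dark_subtract(light: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Subtract a dark frame (same exposure time and temperature) from a light frame.

    Work in a signed type so hot pixels brighter than the sky do not wrap
    around when subtracted, then clip back to the valid 8-bit range.
    """
    result = light.astype(np.int32) - dark.astype(np.int32)
    return np.clip(result, 0, 255).astype(np.uint8)

# Averaging several dark frames gives a better estimate of the thermal pattern:
# master_dark = np.mean(np.stack(dark_frames), axis=0).astype(np.uint8)
```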

CCD sensors are highly sensitive to heat – not because they will burn out, but
because the sensor converts heat into electrons and confuses them with light.
Cooling the sensor will significantly reduce noise.

Uses not requiring additional hardware (a little programming knowledge will
help)

Satellite Tracking

Presumably you remember the recent shuttle disaster. You likely also know
one of the safety measures now being called for is a system to inspect the
space shuttle for damage while it is in orbit.

All you need to do this in
your back yard is a fast enough mount and a decent means to deal with the
atmosphere. Could you have seen the damage which is suspected of causing the
failure? Maybe. You would certainly need a large telescope and you
definitely would need to know what you were looking at.

Programs exist
(Heavenscape.com) which already know how and where to look for
artificial satellites. There is nothing particularly special about imaging
satellites; the most important factor is frame rate, because your subject will
appear to rotate due to the change in perspective.

Near Earth Object Tracking

It may surprise you to learn this, but there are fewer than five telescopes
dedicated to looking for, tracking, and cataloguing near-Earth objects with
the potential to cause a mass extinction.

Furthermore, all the dedicated
telescopes are in the 1m class (32″ – 48″) and in the northern hemisphere.
Much like SETI, this program can use all the help you can offer (unlike SETI,
you are allowed to name whatever you find).

Finding near-Earth objects
(NEOs) is a straightforward operation. Take an image of one part of the sky.
Take an image of the exact same part of the sky about an hour later, then
subtract one from the other. Anything left is either moving or changing.
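
A minimal sketch of that subtraction, assuming the two frames are already registered (aligned on the same stars) and loaded as numpy arrays; the threshold value is an arbitrary example:

```python
import numpy as np

def moving_candidates(first: np.ndarray, second: np.ndarray, threshold: float = 25.0):
    """Difference two images of the same star field taken about an hour apart.

    Static stars cancel out; any pixel above the threshold belongs to
    something that moved or changed brightness and is worth following up.
    """
    diff = np.abs(second.astype(np.float64) - first.astype(np.float64))
    ys, xs = np.nonzero(diff > threshold)
    return list(zip(xs.tolist(), ys.tolist()))  # (x, y) pixel positions of candidates
```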

You
will then need to follow up the discovery over several nights to determine
what the object is, where it is, and what its trajectory around the
sun is. True, this is easier said than done, but it is all Newtonian physics,
so it can all be automated.

The most helpful thing you can do for this
program is to refine measurements so predictions can be made further into
the future.
For more information on this (although it is not amateur-oriented), I would
recommend looking into the LINEAR and NEAT programs.

Variable Star Observing

If you’re interested in this, look into AAVSO.org. This is a
field much overlooked by astronomers, because it takes years (decades, even
centuries) to gather enough information for a proper study. This work needs
to be done to better understand the sun and the evolution of the universe.

Binary System Observing

This is roughly the same process as near-Earth object tracking.

The process is to
compare the motion of nearby stars against the background. If you take the
images six months apart, you end up with a stereo pair whose baseline is the
diameter of the orbit of the earth. It is then basic trigonometry to calculate
the distance of the object. If you take several images (about every week or
two) over the course of a number of years, any wobble (back and forth motion)
will indicate a companion (although this is not as accurate as spectroscopy
for finding planets). Once again, this can all be automated.
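
The trigonometry for the distance measurement reduces to one line. A minimal sketch (the 0.1″ shift is just an example figure):

```python
def distance_parsecs(total_shift_arcsec: float) -> float:
    """Distance from a star's apparent shift between two images taken six
    months apart (baseline = the diameter of Earth's orbit, i.e. 2 AU).

    The parallax angle is conventionally defined for a 1 AU baseline, so
    it is half the measured shift; distance in parsecs is its reciprocal.
    """
    parallax_arcsec = total_shift_arcsec / 2.0
    return 1.0 / parallax_arcsec

print(distance_parsecs(0.1))  # a 0.1" total shift puts the star at 20 parsecs
```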

If I have not yet bored you to tears, I highly recommend reading one of Ed’s
fine editorials on copyright law. Next article, I will continue with adaptive
optics, spectroscopy, and cooling the sensor (i.e. interesting and practical
applications).


Ian Anderson – Anderson Fine Machinery

Spectroscopy

Every element absorbs certain colours of light (actually each absorbs many
different colours, creating a sort of “fingerprint”). The colour of a star’s
light is a combination of its temperature and the spectral fingerprints of
the elements its outer layers absorb.

When the
incoming light of a star is broken up by a diffraction grating (or a prism),
it is possible to measure the spectrum. The spectrum will show up like a
rainbow missing the colours absorbed by the elements in the star’s outer
layers.
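
A minimal sketch of turning such an image into a spectrum, assuming the grating disperses the light along the x axis of a numpy array:

```python
import numpy as np

def extract_spectrum(image: np.ndarray) -> np.ndarray:
    """Collapse a 2-D image of a dispersed spectrum into a 1-D profile.

    Summing each column gives intensity versus (uncalibrated) wavelength;
    absorption lines show up as dips in the profile.  Calibrating the
    wavelength axis means matching known lines (e.g. H-alpha) to pixels.
    """
    return image.astype(np.float64).sum(axis=0)
```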

There are two things you can learn from a star’s spectrum:

  • its composition (the elements it is made from), and
  • its velocity (directly toward or away from earth).

It is possible to decipher the composition of a star by looking
for the absorption lines of various elements (hydrogen, helium, lithium,
etc.). Based on the amount of light each absorbs, you can measure the
relative abundance of each element.

To find the speed of an object, you need to
look at the relative position of the absorption lines in a spectrum –
hydrogen alpha, for instance. H-alpha is the primary absorption-emission line
for hydrogen: cold hydrogen will absorb this colour of light, whereas hot
(energetic) hydrogen emits it. The line falls at about
656nm when the subject is not moving relative to the observer. This is
typically the most prominent spectral line in the visible spectrum, because
three quarters of the visible universe is hydrogen.

When the subject is
moving relative to the observer, this line (along with all the others) will
appear to move. If the wavelength shortens (moves toward the blue side of the
spectrum), the subject is moving toward the observer and is said to be blue
shifted. If the opposite happens, it is said to be red shifted.
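
Turning that measured shift into a speed is then just a mathematical function. A minimal sketch using the non-relativistic Doppler formula (the 656.5nm reading is an example figure):

```python
def radial_velocity_km_s(observed_nm: float, rest_nm: float = 656.28) -> float:
    """v = c * (observed - rest) / rest, valid for speeds well below c.

    Positive means red shifted (receding); negative means blue shifted
    (approaching).  The default rest wavelength is the H-alpha line.
    """
    c_km_s = 299_792.458  # speed of light in km/s
    return c_km_s * (observed_nm - rest_nm) / rest_nm

print(radial_velocity_km_s(656.5))  # ~100 km/s away from the observer
```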

Spectroscopy requires a
black and white CCD sensor. If you only have a three-colour camera, it is
still possible to measure the colour of the star, and a three-colour image is
enough to tell the temperature. With the temperature, it is possible to
approximate the absolute luminosity (assuming the star falls on the main
sequence) and hence the distance.

Adaptive Optics

This is the field where computer enthusiasts can have the greatest impact.

For the purposes of this article, I will include functional equivalents of
adaptive optics, the easiest of which is frame stacking (in which I also
include frame selection). The principle behind frame stacking is increasing
the signal-to-noise ratio.

Atmospheric
distortion is random, while the objects you are looking at are constant, so
the more frames you stack, the more the signal builds up while the blurring
averages out. With a webcam, it is not unreasonable to take 10,000 images of a
bright object (a planet, the moon, or an artificial satellite); rather
than stack them all, you can select the best 100 or the best 50, so the
average amount of blurring is substantially lower.
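
A minimal sketch of frame selection and stacking, assuming the frames are a list of numpy arrays (keep=100 mirrors the “best 100” above); real stacking software also registers the frames so the object lands on the same pixels before averaging:

```python
import numpy as np

def stack_best_frames(frames, keep=100):
    """Keep the sharpest frames from a capture run and average them.

    Sharpness is scored here by pixel variance: atmospheric blurring
    smears detail and lowers contrast, which lowers the variance.
    """
    scores = [float(np.var(f.astype(np.float64))) for f in frames]
    best = sorted(range(len(frames)), key=scores.__getitem__, reverse=True)[:keep]
    return np.mean([frames[i].astype(np.float64) for i in best], axis=0)
```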

To progress further on this
subject, a basic understanding of the atmosphere is necessary.
“Seeing” (blurring by the atmosphere) moves in “cells” (bubbles of air at
the same temperature). The refractive index (the ability of the air to “bend”
light) is determined by the pressure. Pressure drops exponentially with
altitude, and the higher the temperature of the air, the lower its pressure
will be at a given altitude.

These cells come in random
sizes and move at random speeds (by random, I mean following a normal
distribution). The ability of any adaptive optics system is measured by the
size and speed of the cells it can deal with. A third purely digital option
goes hand in hand with frame selection: increase the frame rate. The shorter
the exposure, the more likely you are to catch a moment of near-perfect
seeing (the atmosphere has less time to blur the image).

True adaptive optics is based on the fact that stars appear as infinitely
distant points of light. When properly focused, the image of a star will be
perfectly round (with a number of rings around it). The image of a distorted
star can then be processed to work out how best to bring it back into focus.
Doing this requires a flexible mirror; fortunately, a 2mm thick Pyrex mirror
will suffice. Once the image has been processed, the computer determines how
to move the actuators behind the mirror.

There is no set number of actuators – simply, the more you have, the
better the image can be corrected. One actuator will allow the focus to be
corrected. Three will allow focus and will correct for any cell larger than
the telescope. Four (three with one in the center) will also correct for the
lensing effects of cells larger than the telescope.

Adding more will allow
correction for cells as small as the aperture of the telescope divided by
the number of actuators across it (the area an actuator controls on the
flexible mirror maps to the corresponding area on the primary, so if an
actuator controls 5mm of a 50mm flexible mirror, it will affect a 20mm patch
of a 200mm primary).

What the webcam is needed for in an adaptive optics system
is the wave-front sensor (the camera used to image the blurry star). Frame
rate is by far the most important factor in a wave-front sensor – a 100 by 100
pixel imaging area is more than enough. Wave-front sensing must be done on
the telescope used for imaging, with the sensor placed after the
flexible mirror, using a beam-splitter or partially silvered diagonal to
split the light between the imaging camera and the wave-front sensor.

Auto Guiding

All mounts have some error, so for long exposures it is necessary to use a
device to measure and correct for it. This is a camera that takes a series
of photographs simultaneously with the main camera; if it notices the image
moving, it tells the mount how to correct for it. Simple, but necessary
for long exposures. Go
HERE or
HERE for a program to do this automatically.
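
A minimal sketch of the measurement side, assuming a background-subtracted guide frame containing one bright star; translating the pixel offsets into mount commands depends on your hardware:

```python
import numpy as np

def guide_drift(frame: np.ndarray, ref_x: float, ref_y: float):
    """Intensity-weighted centroid of the guide star, minus its reference
    position.  The returned (dx, dy) in pixels is what the mount must
    correct; scale by the image scale to get arcseconds.
    """
    img = frame.astype(np.float64)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cx = (xs * img).sum() / total
    cy = (ys * img).sum() / total
    return cx - ref_x, cy - ref_y
```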


Planetary Imaging

Webcams are the de facto standard for planetary imaging. Their high frame rate
and one-shot colour ability make them the only feasible way to take
a decent photograph of a planet before it rotates too far to stack the
images.

Planets are extremely bright compared to other astronomical objects, so
you can increase the focal length to the point where the planet’s disk fills
most of the frame. After that, it is a question of avoiding the atmosphere.
The three best (most technically challenging) images I have seen are:

  • Mountains and canyons on Mars
  • The rings of Uranus, and
  • Surface features on Ganymede.

I suspect this is not the limit of amateur equipment (frankly, I
am waiting to see a volcano erupting on Io).

For long exposures, it is necessary to cool the sensor. Dark framing only
works to a certain extent; to take high contrast images, it is necessary to
reduce the noise caused by heat. As a rule of thumb, dark current roughly
doubles for every six degrees of sensor temperature, so cooling the sensor by
twenty degrees equates to approximately an order of magnitude less noise.
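
A minimal sketch of that rule of thumb (the six-degree doubling interval is a typical figure; the exact value varies by sensor):

```python
def noise_reduction_factor(cooling_c: float, doubling_interval_c: float = 6.0) -> float:
    """Dark current roughly doubles every ~6 C, so cooling by some amount
    reduces it by 2 raised to (cooling / doubling interval)."""
    return 2.0 ** (cooling_c / doubling_interval_c)

print(noise_reduction_factor(20))  # cooling by 20 C cuts thermal noise ~10x
```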

  • >300K: Passive cooling
  • ~300K: Active air cooling
  • >270K: Water cooling

Passive cooling, active air cooling, and water cooling can only be used to
reduce the temperature to ambient. Water is more efficient but is limited by
its freezing point. TEC and phase change cooling must be backed up by air or
water cooling; I recommend evaporative cooling for that, because it can drop
below ambient temperatures.

  • >200K: Thermoelectric (TEC) cooling
  • ~200K: Multistage TEC
  • >150K: Phase change cooling (R134a)
  • ~77K: Liquid nitrogen cooling. Far and away the best price/performance
    ratio (~$50 per week, depending on where you live)
  • <77K: Vacuum-pumped LN2 cooling. Lowering the pressure causes a liquid to
    vaporize at a lower temperature.
  • ~10K: Limit of mechanical cooling. Mechanical cooling is limited by the
    amount of heat it must dissipate.
  • ~4K: Liquid helium. Difficult to find and extremely expensive.
  • ~1K: Vacuum-pumped liquid helium
  • >1K: (Pulsed) Adiabatic demagnetization refrigeration (ADR). This can only
    cool for short periods of time. (Introduction to ADR)
  • >1K: (Constant) Multi-ADR. While one ADR is in its warm phase, it is
    turned off and another is started.

If your chosen cooling system reduces the chip temperature below ambient, it
is necessary to enclose it in a cryostat.

A cryostat is a housing which
prevents heat and moisture from interfering with the operation of the chip.
The simplest cryostat would be to enclose the device in foam rubber. A far
more effective enclosure is a Dewar chamber.

The basic principle of a Dewar is
to remove the air from the enclosure so it cannot act as a passage for heat.
If nothing else, this is what you should remember when you
are buying a phase change cooler or using liquid nitrogen to cool your
processor. In the case of
Tom’s Hardware’s 5 GHz project, very little
consideration was paid to this: although they did have the forethought to
insulate the reservoir, there was a quarter inch of snow around the processor
area. This means the LN2 needed to absorb almost as much energy from the air
as from the processor, and much of this heat was being transferred through
the processor on its way to the LN2.

For more info on CCD features and terminology, E2V Technologies has a superb
PDF explaining it.

Ian Anderson – Anderson Fine Machinery
