Requirements (specifications) for the images needed by the project

Hello,

I found the PANOPTES project while looking for information on how to get started with astro-photometry.

I was considering the HOYS-CAPS project (see http://astro.kent.ac.uk/~df/hoyscaps/index.html),

but that project requires an image scale of about 1.3 arcsec per pixel; at the moment I do not have the equipment to deliver images at that resolution.

The PANOPTES project seems more feasible for me:

  • you aim (as I computed it) at about 10 arcsec per pixel

  • the iOptron mount can deliver this accuracy without autoguiding
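For reference, here is how I arrived at that figure (a back-of-the-envelope sketch; the 4.3 µm pixel pitch is my assumption based on the Canon 100D's published sensor specifications):

```python
# Pixel scale in arcsec/pixel: 206265 * pixel_size / focal_length,
# where 206265 is the number of arcseconds in one radian.

PIXEL_SIZE_UM = 4.3      # Canon 100D pixel pitch (assumed from published specs)
FOCAL_LENGTH_MM = 85.0   # Canon 85 mm f/1.8 lens

pixel_scale = 206265 * (PIXEL_SIZE_UM / 1000.0) / FOCAL_LENGTH_MM
print(f"{pixel_scale:.1f} arcsec/pixel")  # -> 10.4 arcsec/pixel
```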

Does it make sense for me to step into the programme, starting with a non-robotic approach but delivering images according to your project's specifications?

My feeling is that many amateurs already have the equipment to deliver the kind of images you are looking for.

So it could be interesting to make a clear split in the PANOPTES project:

1/ uploading and processing of the needed images (regardless of the details of the equipment used to 'manage the observatory')

2/ the low-cost robotic observatory that can produce images in continuous, unattended mode

I have available:

  • a Canon 100D DSLR

  • a Canon 85 mm f/1.8 lens

I plan to acquire the iOptron CEM25P mount.

Some basic automation could be done with the new ZWO ASIAIR module (https://astronomy-imaging-camera.com/product/zwo-asiair), a Raspberry Pi-style device.

Looking forward to your comments,

Kind regards,

Ivo Demeulenaere

(coordinator of solar observers in Belgium, delivering solar data to the AAVSO)

(volunteer and teacher at Ghent University Public Observatory)


Ivo,

You are correct - we are using 10 arcsec per pixel sampling.
As you point out, this allows us to use a low-cost mount without autoguiding (we guide in software from the DSLR images the unit takes). For exoplanet searches, field of view is very important, and we also need to keep the system cost low so that schools can participate. That's why we designed the system around an 85 mm lens + DSLR.

In principle, it should be possible to upload data to be processed and merged with the other PANOPTES units. Our current data processing pipeline expects a sequence of images on the same field so that the differential photometry can be computed. Images need to carry some basic information (timestamp, etc.). I don't think we've documented all of this very well, but it's in the source code, which is public: https://github.com/panoptes/POCS
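To illustrate why the pipeline needs a sequence on the same field (a rough sketch only, not the actual POCS code; the star names and flux values are made up), differential photometry divides the target star's flux in each frame by the combined flux of comparison stars in the same field, so that variations shared by all stars cancel out:

```python
# Minimal differential-photometry sketch (illustrative, not POCS code).
# Given raw fluxes of a target and comparison stars measured on each frame
# of a sequence, the relative light curve divides out shared variations
# (thin clouds, airmass) that affect all stars in the field equally.

def differential_light_curve(target_flux, comparison_fluxes):
    """target_flux: per-frame fluxes for the target star.
    comparison_fluxes: one per-frame flux list per comparison star."""
    curve = []
    for frame, t in enumerate(target_flux):
        comp_total = sum(c[frame] for c in comparison_fluxes)
        curve.append(t / comp_total)
    return curve

# Example: a transparency dip in frame 2 dims every star by the same
# factor, so the ratio stays flat - only intrinsic variability remains.
target = [100.0, 101.0, 80.0, 99.0]
comps = [[200.0, 202.0, 160.0, 198.0],
         [150.0, 151.5, 120.0, 148.5]]
print(differential_light_curve(target, comps))
```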

The ZWO ASIAIR module looks very powerful. Thanks for bringing it up - I was not aware it existed. We'll discuss it with the development team. I think the POCS software already implements these features (plus weather handling, data management, etc.), but a deployable system using this module could well produce good data for the network.

Our development team focuses on supporting a single baseline design, but we like to see others play with the design and improve/change it. Some of these changes make it to the baseline design. You mention the CEM25P mount - we’ve been considering adopting it to replace the iEQ30, so please share your experience on the forum.

Olivier

Hi @Ivo, thanks for the post!

As @oguyon mentioned, the basic requirement would be to capture a sequence of images with good tracking so that the stars are not drifting across the pixels. It should be possible for us to “split” the project as you mention, namely to just accept any images, and attempt to process them on our pipeline. The infrastructure to do this isn’t quite set up but wouldn’t be too difficult. It would mostly be some logistics of how to handle those uploads (do we have a public dropbox area or do we create logins for interested users, etc).

Do you have an existing data set that we could do an initial attempt with?

I haven’t had a chance to play with the ZWO-ASIAIR yet but have been keeping my eye on them. I would be curious about your opinion on them.

I'll follow this thread carefully, as I have been working for quite some time now (only in spare moments) on a modified version of POCS that supports INDI devices as well as basic autoguiding via the PHD2 API.

I mention this because, from what I know, the ASIAIR is a Raspberry Pi equipped with a software platform heavily based on INDI.

There are still changes that I would like to make to the software platform (messaging, database, and maybe web frontend), but in the end it would be very interesting to be able to merge data acquired at higher resolution with this project.

My intent is also to use the POCS platform for another project, an automated spectroscopic survey (Be stars, planetary nebulae, maybe even exoplanets through radial velocity, who knows :) ).

Don't hesitate to ask for any help :) I'd be happy to contribute with work or examples already done.

Olivier,
the ASIAIR module seems to be a ZWO-branded version of the StellarMate (https://www.stellarmate.com/products/get-stellarmate.html).
The hardware is the same box; the firmware is probably adapted.
For both the ASIAIR and the StellarMate there is an app (Android Play Store) to control the module wirelessly over Wi-Fi.
Ivo

Hi Wilfred,
I do not have a data set available yet. I just started investigating my options for getting involved in astro-photometry; that's how I found the PANOPTES project.
But if this would be helpful for the project, I can use my network to find amateurs who can deliver data. But then, of course, we need clear requirements and target areas for the pictures to be taken.
As an example of such requirements: the HOYS-CAPS project is looking for slow variations in the brightness of young stars, variations caused by dust clouds around the star (new planets have not yet cleared their orbits), so it does not make sense to take more than one picture per week. Or: they want pictures taken within the same week to be stacked, in order to see fainter stars. The target areas are defined on their website.

I think that with this kind of requirements made clear, there is a real opportunity to have amateurs delivering useful data. Most of these amateurs, however, are not programmers; they are not able to deduce requirements from reading software code. That's why I am suggesting to split the project into subprojects. This would also be an opportunity to speed up the development of the data processing pipeline (and get scientific results faster).

For me there is a kind of bootstrap issue: people will hesitate to jump into the hardware part (and expenses) of the project if they are not fully convinced of the scientific significance of the project.
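As a quick illustration of why stacking helps (a back-of-the-envelope sketch, not part of the HOYS-CAPS requirements): averaging N frames with independent noise keeps the star's signal fixed while the noise shrinks by the square root of N, so the signal-to-noise ratio grows by the same factor.

```python
import math

# Why stacking reveals fainter stars: averaging N frames with independent
# noise leaves the signal unchanged while the noise standard deviation
# shrinks by sqrt(N), so the signal-to-noise ratio grows by sqrt(N).

def stacked_snr(single_frame_snr, n_frames):
    return single_frame_snr * math.sqrt(n_frames)

# A star barely at SNR 2 in one frame reaches SNR ~5.3 (past a common
# detection threshold of 5) after stacking 7 frames from the same week.
print(stacked_snr(2.0, 7))  # -> 5.29...
```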

Ivo