General questions about analysis and hardware relationship

I recently stumbled on PANOPTES and am thoroughly impressed; it's a very cool project! I had a few questions about how the hardware and analysis/software interact, and how much they can diverge from the "base model".

I know Huntsman has a module/plugin for POCS which allows them a certain level of customization, and given they use 400mm Canon lenses and a 620mm RASA, I suspect POCS is pretty flexible. But Huntsman doesn't contribute data to the PANOPTES network (I don't think, anyway?), so they probably have more freedom to alter things. I had a few specific questions:

  • Even though POCS can be tweaked for different hardware/resolution/focal length and operate as an automated observatory (à la Huntsman), can different hardware report back to the PANOPTES network? E.g. if I had a different focal length with an entirely different field of view, could all the plate-solving and light-curve analysis software handle it? Or would that require larger changes to POCS/panoptes-network?
  • How sensitive is light-curve analysis to optical aberrations, particularly chromatic? It seems like photometry would be pretty robust to aberrations as long as they are consistent, since you only need the relative changes. I come from an amateur astrophoto background where "looking pretty" is most important, but I suspect that's much less important here?
  • On a similar note, how sensitive is the analysis to tracking errors? I'm guessing it is less tolerant of poor tracking, since poor tracking produces less deterministic trails than optical aberrations?
  • I noticed the default setup (Canon SL1, 85mm lens) is very undersampled at ~10.5"/pixel. Was this on purpose (to help photometry, perhaps?), and/or is resolution recovered later with drizzling? Or is it just an artifact of using affordable "off the shelf" hardware?
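For context on where I got that ~10.5"/pixel figure, here's the small-angle plate-scale arithmetic I used (assuming the SL1's pixel pitch is roughly 4.3 µm, which is my reading of the spec sheet):

```python
def plate_scale(pixel_um: float, focal_mm: float) -> float:
    """Plate scale in arcsec/pixel via the small-angle approximation.

    206,265 arcsec per radian; pixel pitch in microns, focal length in mm,
    so the factor works out to 206.265.
    """
    return 206.265 * pixel_um / focal_mm

# Canon SL1 (~4.3 um pixels) behind an 85 mm lens
print(round(plate_scale(4.3, 85.0), 1))  # -> 10.4
```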

The gist of these questions is that I’ve been tinkering with a low-cost robotic setup on my own, and was curious if it could be adapted to contribute to PANOPTES. It makes different assumptions and has different (in most cases worse) optical characteristics though.

Some background

A long time ago I read about Harvard's "MicroObservatory" project, which built a few weather-proof, entirely automated refractor telescopes as an educational outreach tool. I've been enamored with the idea of a cheap/simple/weatherproof design ever since, particularly one with a wider FOV for surveys, NEO observations, etc.

More recently I've been reading about the Dragonfly telescope array and its innovative use of multiple apertures/cameras to achieve a very low effective focal ratio. That led me to Huntsman, which is how I found PANOPTES :slight_smile:

I've been fiddling with designs for an aggressively cheap, multi-aperture robotic imaging setup, which seems to align with PANOPTES very well. Some of the design goals are:

  • Reduce cost as much as possible with 3D-printed parts (FDM for larger structural components, SLA for tight-tolerance parts). Use mass-produced structural components (aluminum extrusion, carbon-fiber rods) where 3D printing doesn't cut it. Fall back to machined parts as a last resort
  • DIY optical train using simple achromat lenses. These can be obtained reasonably cheaply out of China as long as the aperture stays smallish (~60mm). Common focal ratios are around f/3–f/5, so CA is prevalent but not disastrous. Opinions vary on that :slight_smile:
  • Effective focal ratio can be improved by ganging multiple apertures à la Dragonfly/Huntsman, which keeps cost down by using smaller lenses
  • An individual RPi and CMOS camera per aperture. CMOS sensors allow aggressively reducing exposure time, are cheap, and interface with the RPi easily. Sensors like the RPi's IMX219 have very small pixels and high gain, which means 1"/pixel is achievable with fast optics (keeping everything cheap and small). One concern is the limited maximum exposure time (~10s), but with high gain and many stacked exposures that should be sufficient, at least in areas where people are limited by sky glow anyway.
  • Smallish focal length, a fast effective focal ratio, and short exposures mean tracking can be sloppier, which is a boon for 3D-printed components
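To make the multi-aperture and pixel-scale claims above concrete, here's a rough sketch of the arithmetic. The numbers (four 60mm f/4 achromats, IMX219's ~1.12 µm pixels) are hypothetical values picked to match the design goals, not a finished spec:

```python
import math

def effective_f_ratio(f_ratio: float, n_apertures: int) -> float:
    """N identical apertures collect N x the light of one, which for
    surface-brightness purposes behaves like a single lens sqrt(N) x
    faster (the Dragonfly trick)."""
    return f_ratio / math.sqrt(n_apertures)

def plate_scale(pixel_um: float, focal_mm: float) -> float:
    """Plate scale in arcsec/pixel (small-angle approximation)."""
    return 206.265 * pixel_um / focal_mm

# Four 60 mm f/4 achromats ganged together:
print(round(effective_f_ratio(4.0, 4), 1))  # -> 2.0

# IMX219 (~1.12 um pixels) behind one 60 mm f/4 lens (240 mm focal length):
print(round(plate_scale(1.12, 240.0), 2))   # -> 0.96
```

So four cheap f/4 lenses behave like a single f/2 for extended sources, and the tiny IMX219 pixels hit roughly 1"/pixel at only 240mm of focal length, which is why the whole thing can stay small.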

That's the goal, anyhow, and it likely won't succeed in all aspects. But it's a fun project to tinker with, and stumbling onto PANOPTES has only increased my desire to get it working. I work on distributed software in my day job, so a cluster of SBCs running cameras is a fun idea for me :slight_smile: