Marshals and Scanning
Attendance: Leo, Ofer, Rahman, Jan, Uli, Mansi, Joel, Suvi
Initial conversation
- Who takes over marshals?
- Who develops and maintains them?
- How much gets reused from iPTF?
- Desirable for someone with a long-term view to take over the marshal.
- Possible to give students and postdocs such a task, but the return for them is small, and they tend to move on after a few years.
- Advantage of students: they are engaged in day-to-day follow-up and know which tasks need to be automated.
- Marshal as a platform maintained by staff, but designed so that robots written by grad students and postdocs can be plugged in easily? (See the sketch after this list.)
- Automated classification
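A minimal sketch, assuming Python, of the staff-maintained platform with pluggable robots raised above. The Robot interface, Candidate fields, and registry below are hypothetical illustrations, not part of the existing iPTF marshal code.

<verbatim>
# Hypothetical plug-in interface for marshal "robots"; names are illustrative.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Candidate:
    """Minimal stand-in for a marshal candidate record."""
    name: str
    ra: float
    dec: float
    mag: float
    comments: list = field(default_factory=list)


class Robot(Protocol):
    """Any student- or postdoc-written robot only has to provide run()."""
    def run(self, candidate: Candidate) -> None: ...


class BrightnessFlagger:
    """Example robot: flag candidates that may be saturated bright stars."""
    def run(self, candidate: Candidate) -> None:
        if candidate.mag < 14.0:  # threshold is an arbitrary example
            candidate.comments.append("bright: check for saturation/ghost")


ROBOTS = [BrightnessFlagger()]  # staff-maintained registry of plug-ins


def process(candidate: Candidate) -> Candidate:
    """Platform loop: run every registered robot on each new candidate."""
    for robot in ROBOTS:
        robot.run(candidate)
    return candidate
</verbatim>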
Main issues for ZTF Marshal
- Scalability
- Needed components
- Separate marshals for different science cases (galactic, extragalactic, target-of-opportunity, asteroid), or all together?
- Who develops and maintains them?
- Infrastructure: databases, computing
- Maximizing uptime: disperse maintenance expertise across time zones
- Code accessible to whole collaboration. Open source?
- Single objects vs. science feeds
- Access control, visibility of proprietary data
Scanning
- iPTF numbers: average 200 candidates per night, average 4 human-hours per day
- Slow down the refresh rate: scanners could check every hour instead of every 15 minutes if the robots are reliable
- Things that still leak through: bright stars, diffraction spikes (see the sketch after this list)
- Active learning
- Go back to citizen science? Mechanical Turk?
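A rough sketch of the kind of automated cut that could keep bright-star and diffraction-spike contaminants away from scanners: cross-match candidates against a bright-star catalog and reject anything inside a brightness-dependent exclusion radius. The radius scaling and magnitude numbers are illustrative assumptions, not values used by iPTF or ZTF.

<verbatim>
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord


def near_bright_star(cand_ra, cand_dec, star_ra, star_dec, star_mag):
    """Boolean mask: True where a candidate lies within an exclusion
    radius of its nearest bright star (radius grows with brightness)."""
    star_mag = np.asarray(star_mag)
    cands = SkyCoord(cand_ra * u.deg, cand_dec * u.deg)
    stars = SkyCoord(star_ra * u.deg, star_dec * u.deg)
    idx, sep, _ = cands.match_to_catalog_sky(stars)
    # Hypothetical scaling: 20 arcsec for a mag-10 star, doubling for
    # every 2.5 mag brighter, to cover halos and diffraction spikes.
    radius = 20.0 * 2.0 ** ((10.0 - star_mag[idx]) / 2.5) * u.arcsec
    return sep < radius
</verbatim>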
Summary
- Want human monitoring.
- List of candidates requiring human intervention should remain around 200 per night.
- NO GENERAL-PURPOSE SCANNING TEAM. Scanning should be organized separately by each science group.
Action items
- Identify developers
- Set up an e-mail list for ZTF scanning and marshals
- Regular reports on scanning issues on weekly extragalactic calls
- Prototype with an API (see the sketch below)
- Understand (human, application) interfaces with IPAC database, alerts, public data, community time
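One possible shape for the API prototype in the action items, sketched here as a small Flask service. The routes, field names, and in-memory store are placeholders, not an agreed ZTF interface; a real prototype would sit in front of the marshal database and its access-control layer.

<verbatim>
# Placeholder prototype of a marshal API; routes and fields are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a real database of candidates awaiting human scanning.
CANDIDATES = {}


@app.route("/candidates", methods=["GET"])
def list_candidates():
    """Return candidates, optionally filtered by science program."""
    program = request.args.get("program")
    rows = [c for c in CANDIDATES.values()
            if program is None or c.get("program") == program]
    return jsonify(rows)


@app.route("/candidates/<name>/save", methods=["POST"])
def save_candidate(name):
    """Mark a candidate as saved by a human scanner."""
    CANDIDATES.setdefault(name, {"name": name})["saved"] = True
    return jsonify(CANDIDATES[name])


if __name__ == "__main__":
    app.run(debug=True)
</verbatim>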
-- LeoSinger - 22 May 2016