Marshals and Scanning
Attendance: Leo, Ofer, Rahman, Jan, Ulrich, Mansi, Joel, Suvi

High-level summary
The main theme was scalability: how to scale scanning efforts to the increased survey speed, how to scale the marshal to diverse science interests, and how to scale the maintenance of, and interaction among, the many pipelines written by students and postdocs. We concluded that human scanning will continue to be important in ZTF. To reach a factor of >10 improvement in the false-positive fraction (assuming a 10-15x growth in survey speed), a chain of improvements must take place: in the subtraction pipelines, in the real-bogus (RB) mechanism, and in the handling of cuts and filters on the scanning page (whether for unified scanning or per science group). On top of all of these, we must also put in place efficient automatic saving, alerting, and follow-up triggering for candidates that are clearly good, without passing them through human monitoring; the human scanners should instead focus on the less obvious (but still low-false-positive) candidates. We discussed the possibility of transitioning from a single catch-all scanning team to a model where each science group is responsible for its own scanning, though this idea needs further consideration.

For marshals, we discussed borrowing the modular LIGO model ("GraceDB"): a simplified central database/portal, with most science-case-specific processing done in robots that communicate with the marshal remotely via broadcast alerts (e.g., a publisher-subscriber model) and upload annotations through a well-documented web API (with Python bindings). The idea is that the central database/portal could be maintained by a permanent staff person, while the robots could be developed by students and postdocs in individual science groups. It will be important to keep the code for both under version control that the whole collaboration can access.
Immediate action items are to create a mailing list for discussing marshals and scanning; to ask Patrick Brady for a post-mortem of the LIGO marshal; to identify interfaces with IPAC, the consortium, and public data; and to identify a small team to work on a simple prototype. Goals for November are to hand off development of the core marshal to a staff person and to make significant progress in porting services and data from the iPTF marshal to the new one.
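The marshal/robot split described above can be sketched in a few lines of Python. This is a toy, in-process illustration only: the class names (MarshalHub), the candidate fields (rb_score, mag), and the thresholds are all invented for the sketch, and in a real deployment the broadcast and upload calls would go over the network through the documented web API rather than direct method calls.

```python
# Minimal sketch of the proposed GraceDB-like marshal/robot model.
# All names, fields, and thresholds here are hypothetical illustrations.
import json
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class MarshalHub:
    """Stand-in for the central database/portal.

    Broadcasts candidate alerts to subscribed robots (publisher-subscriber)
    and accepts annotations back through an upload call that, in the real
    system, would be an authenticated web-API request.
    """
    subscribers: List[Callable[[dict], None]] = field(default_factory=list)
    annotations: Dict[str, List[dict]] = field(default_factory=dict)

    def subscribe(self, robot: Callable[[dict], None]) -> None:
        self.subscribers.append(robot)

    def broadcast(self, candidate: dict) -> None:
        # Every science-group robot sees each alert and decides independently.
        for robot in self.subscribers:
            robot(candidate)

    def upload_annotation(self, candidate_id: str, annotation: dict) -> None:
        self.annotations.setdefault(candidate_id, []).append(annotation)


def make_sn_robot(hub: MarshalHub) -> Callable[[dict], None]:
    """A toy science-group robot: flags bright, high-RB candidates."""
    def robot(candidate: dict) -> None:
        if candidate["rb_score"] > 0.8 and candidate["mag"] < 19.0:
            hub.upload_annotation(candidate["id"],
                                  {"robot": "sn_demo", "verdict": "save"})
    return robot


if __name__ == "__main__":
    hub = MarshalHub()
    hub.subscribe(make_sn_robot(hub))
    hub.broadcast({"id": "ZTF-demo-1", "rb_score": 0.95, "mag": 18.2})
    hub.broadcast({"id": "ZTF-demo-2", "rb_score": 0.40, "mag": 18.2})
    print(json.dumps(hub.annotations))
```

The point of the design is the separation of responsibilities: the hub stays simple enough for one staff person to maintain, while each group's filtering logic lives entirely in its own robot and can change without touching the central code.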
Minutes
Initial conversation
- Who takes over marshals?
- Who develops and maintains them?
- Marshal as platform maintained by staff, but designed for easily plugging in robots written by grad students and postdocs?
- Automated classification
Main issues for ZTF Marshal
- Scalability
- Needed components
- Single objects vs. science feeds
- Access control, visibility of proprietary data
|
|
< < | Scanning |
> > | Scanning |
|
- iPTF numbers: on average, 200 candidates per night and 4 human-hours of scanning per day
- Slow down the refresh cadence: scanners can check every hour instead of every 15 minutes if the robots are reliable
- The list of candidates requiring human intervention should remain around 200 per night.
- NO GENERAL-PURPOSE SCANNING TEAM. Scanning should be separated by science group.
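The numbers above imply the required automation factor directly. As a rough consistency check (using the 200/night and 4 human-hours figures from iPTF and the assumed 10-15x ZTF speed range from the summary; the round values are illustrative):

```python
# Back-of-the-envelope check of the scanning budget discussed above.
IPTF_CANDIDATES_PER_NIGHT = 200
IPTF_HUMAN_HOURS_PER_DAY = 4

# Human attention available per candidate today, in seconds.
seconds_per_candidate = IPTF_HUMAN_HOURS_PER_DAY * 3600 / IPTF_CANDIDATES_PER_NIGHT
print(f"~{seconds_per_candidate:.0f} s of human attention per candidate")  # ~72 s

# If ZTF produces 10-15x more raw candidates but the human list must stay
# at ~200/night, the pipeline must automatically reject the excess.
for speedup in (10, 15):
    raw = IPTF_CANDIDATES_PER_NIGHT * speedup
    required_rejection = 1 - IPTF_CANDIDATES_PER_NIGHT / raw
    print(f"{speedup}x speed: {raw} raw candidates/night, "
          f"auto-reject {required_rejection:.0%} of them")
```

This is the quantitative content of the ">10 improvement in the false-positive fraction" goal: at 10-15x survey speed, roughly 90-93% of the raw candidate stream must be handled without any human in the loop.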
Action items
- Identify developers
- Set up an e-mail list for ZTF scanning and marshal discussions