Subject: Re: NGAO Observing Models Trade Study
Date: Sunday, February 25, 2007 1:35 AM
From: Bob Goodrich
To: David LeMignant
Conversation: NGAO Observing Models Trade Study

Hi David,

I read through your study and have some general comments. There are some good ideas, but I would suggest some changes in the presentation and approach. I also have some concerns about the specific models discussed. A teleconference with Joe and Jason would be good if it can be arranged. Tuesday, Wednesday, and Thursday mornings are good for me this week, and there are other times I can be available if you can schedule a time with Jason and Joe.

General comments:

1. You include discussion of data reduction tools and data archives. I think these are separable from the issue of how the data are taken. In principle you could view the Observatory as responsible up to the point that the appropriate data (science and calibration) are taken, and anything after that (including publications) is outside the Observatory's control. The one aspect of data reduction that would help with the data-taking is quick-look reduction tools that can be run right after taking data. Even in this case, however, any observing model could benefit (I would say equally) from having quick-look tools.

2. You also mention data archives. Again, this seems to me to be independent of what model is used to take the data. Presumably classical observers take good enough calibrations for their observations, and so do queue and service observers. If anything, it is probably more likely that classical observers take more and better calibrations.

3. The write-up is lacking in resource estimates. In fact it should be quite easy to compare classical observing costs to service observing costs, and I think this would be an ideal opportunity to do so. For instance, we use an estimate of 6 hours per night for first-observer nights and 2 hours for on-call nights in the classical observing mode, and I plan for 4750 hours total, including night support and pre-/post-observing (two telescopes). For service observing we have estimated 14 hours/night for the observer, and probably another 8 hours for constructing the queue, doing daytime calibrations, etc. The latter could be a little high. Presumably service observers need to respond to pre- and post-observing questions as well. (A rough back-of-envelope comparison along these lines is sketched after these comments.) Another way of calculating this would be to use the number of SAs minus their research time. You'd have to use their actual research time, not what is promised; at Keck this would be 10%. I'm not sure what the average is for Gemini, but from what I hear it might not be much higher. Without a real comparison between classical and queue observing, it is impossible to choose between models that involve one or the other. Queue has significant advantages, and if it is cheaper as well, you might always choose that mode. (You might not; see the next point on scientific quality.) But it is very likely significantly more expensive, so even though it would improve the quality and quantity of science (again, see below), you still might not be able to afford it.

4. Science impact of the different modes: when we compare science impact numbers between Keck and Gemini, we consistently get higher numbers from Keck. Gemini draws from a much larger community; if anything, one would expect the quality of the observers that get past the TAC to be higher because of that. So does this indicate that classical-mode observing leads to higher science impact?
5. Your key metric includes quality of raw data, quality of reduced data, science quality of the data products, and science impact. I would argue that science impact is the important parameter. If the raw data are of lesser quality but in the end have higher impact, that seems to me more important than if the raw or calibrated data are exquisite but lead to science with low impact.

6. In section 4.2 you argue that science projects are more likely to be completed in queue scheduling. This is true for band 1, but band 3 is actually less likely to be completed; the number of hours available for observing between classical and queue (on the same mountain) is still the same. The analysis by Crabtree, showing that lower-ranked proposals have the same or even higher impact than the highest-ranked proposals, would actually indicate that band 1 completions will lead to less science impact than completed band 3 programs, hence queue observing is actually decreasing scientific impact!

7. In the classical-backup model, it would be interesting to ask whether entire runs are typically wiped out, or only individual nights. Weather patterns tend to have some persistence, so a potential problem with this model is that out of a six-night run you might lose three, and then the next run you might not lose any. If there are no projects that can carry over from one run to the next (perhaps because the targets aren't up, planetary features are not in the right place, etc.), you would end up with a night with nothing planned.

8. With TAC-flex, I was confused as to how this would actually operate. I'm assuming that there are two queues: high-priority class A and lower-priority class B. Class A is classically scheduled, and the PIs are at the telescope (perhaps remotely). If conditions are inadequate for the scheduled class A program, the class B queue kicks in. Is the class B PI also expected to be present? If so, this model doubles the number of PIs that have to be present for the observing.

9. Keck-flex sounds like queue observing with the addition that PIs are encouraged to participate. This is the most expensive model discussed; it has the expense of queue observing, plus it takes up the maximum amount of PI time in sitting in on pieces of the night. Also, dispersal of Observatory support staff is likely to very severely decrease their effectiveness in producing good observing tools, translating successful improvements for one instrument to other instruments, etc. Would you require enough support staff at each site to be able to support all instruments?

10. In the observing tools section, you describe an expensive set of tools to ease simulation, observing, etc. An FTE estimate would be useful here. It's obvious that having these tools would be great; what is less obvious is what the cost is, whether that money is available, and what the best use for the money is if it is available. Also in that section you use the OSIRIS GUI as an example of a great planning tool. I would argue that the ability to easily script NIRC and NIRC2 is far cheaper, more flexible, and more powerful than the OSIRIS GUIs. A lot of people think that GUIs are absolutely necessary for quality; I find that they often get in the way, although they make people feel more comfortable. More important than whether there are GUIs, I would argue, is consistency in the procedure from instrument to instrument. Otherwise, observers must learn different techniques (commands, GUIs, whatever) for different instruments.
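To make the comparison in point 3 concrete, here is a minimal back-of-envelope sketch (in Python, just for convenience). The per-night figures are the estimates quoted above; reading the 4750 hours as an annual, two-telescope figure, assuming roughly 365 scheduled nights per telescope per year, and pretending every night were service observed are all my own simplifications for illustration, not real cost-model inputs.

    # Back-of-envelope comparison of support hours: classical vs. service observing.
    # Per-night figures are the estimates quoted above; nights/year, the two-telescope
    # count, and treating 4750 hours as an annual total are assumptions for illustration.

    NIGHTS_PER_YEAR = 365   # assumed scheduled nights per telescope per year
    TELESCOPES = 2          # two telescopes, as in the classical planning figure

    # Classical mode: planned annual total, including night support and
    # pre-/post-observing for both telescopes (as quoted above).
    classical_total_hours = 4750

    # Service mode: per-night estimates (observer, plus queue construction,
    # daytime calibrations, etc. -- the latter "could be a little high").
    service_observer_hours_per_night = 14
    service_queue_prep_hours_per_night = 8

    service_total_hours = ((service_observer_hours_per_night
                            + service_queue_prep_hours_per_night)
                           * NIGHTS_PER_YEAR * TELESCOPES)

    print(f"Classical support: {classical_total_hours:>6} hours/year")
    print(f"Service support:   {service_total_hours:>6} hours/year")
    print(f"Ratio (service/classical): {service_total_hours / classical_total_hours:.1f}x")

Even if the per-night service numbers are cut in half, the total still comes out well above the classical figure, which is the affordability concern I am raising in point 3.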
In the end you recommend that the community support the goals and minimize the implementation costs. More than implementation, they need to be concerned about long-term operations costs. But in the end they need to either maximize science return (presumably science impact) within a fixed budget, maximize science return and commit to fully funding whatever it takes to get it, or maximize science return per dollar. The second of these is nearly the Gemini model, until the Senior Review reined them in. The first is close to the current Keck planning model. I think a clearer picture needs to be painted in the document.

I look forward to further discussion, since there are some good ideas in here. In particular, I like decoupling the backup queue from the primary observations (i.e. not doing whatever the primary PI specified as a backup program). Another possibly affordable service observing option that we have tried to pursue, but have not had enough resources for, is to train OAs to run the instruments. The OAs, who are already on shift, would then be able to perform backup queue observing at no extra operational cost. (We have been stymied in getting the startup cost of training the OAs.)

Well, enough discussion for tonight!

Bob

> From: David Le Mignant
> Date: Fri, 23 Feb 2007 16:24:45 -0800
> To: Joseph Miller, "Jason X. Prochaska", Bob Goodrich
> Cc: David Le Mignant
> Conversation: NGAO Observing Models Trade Study
> Subject: NGAO Observing Models Trade Study
>
> Hello,
>
> As part of the Next Generation AO Systems Design Study, I have been tasked
> to lead the trade study on Observing Models. In a few words, it consists of
> looking into the current operations models at Keck and elsewhere, trying to
> understand and compare them, and recommending which model (classical, queue,
> service?) we should consider for further study for NGAO.
>
> I started to work on this draft report two weeks ago, and I just finished
> writing the version attached to this message. This is a work in progress.
>
> I value your experience as astronomers and observatory scientists very much,
> and I would greatly appreciate your expert opinion and comments on the draft
> report. If you read the report, annotate it, and send it back to me, that
> would already be great.
> In addition, we could schedule a time to discuss the question and the report;
> this work in progress would certainly benefit a lot from such a meeting!
> Please let me know whether you have an hour available within the next few
> weeks to discuss the draft and share your ideas, concerns, and suggestions.
>
> Of course, I would understand if your schedule does not allow you to look
> into this report; please let me know though!
>
> Thank you very much for your time,
>
> Regards,
>
> David
> Available Mon. 26 / Wed. 28, then Thu. 8 / Fri. 9
>
> You can find more material on NGAO at this URL:
> http://www.oir.caltech.edu/twiki_oir/bin/view.cgi/Keck/NGAO/WebHome