Authors

Dean Eppler, NASA-Johnson Space Center
Byron Adams, Arizona State University
Doug Archer, NASA-Johnson Space Center
Greg Baiden, Penguin Consulting, Inc.
Adrian Brown, NASA-Ames Research Center
William Carey, European Space Agency
Barbara Cohen, NASA-Marshall Space Flight Center
Chris Condit, University of Massachusetts-Amherst
Cindy Evans, NASA-Johnson Space Center
Corey Fortezzo, U.S. Geological Survey-Flagstaff
Brent Garry, Planetary Science Institute
Trevor Graff, NASA-Johnson Space Center
John Gruener, NASA-Johnson Space Center
Jennifer Heldmann, NASA-Ames Research Center
Kip Hodges, Arizona State University
Friedrich Horz, NASA-Johnson Space Center
Jose Hurtado, University of Texas-El Paso
Brian Hynek, University of Colorado
Peter Isaacson, Brown University
Catherine Juranek, Northern Arizona University
Kurt Klaus, The Boeing Company
David Kring, Lunar and Planetary Institute
Nina Lanza, University of New Mexico
Susan Lederer, NASA-Johnson Space Center
Gary Lofgren, NASA-Johnson Space Center
Margarita Marinova, NASA-Ames Research Center
Lisa May, NASA-Headquarters
Jonathan Meyer, University of Texas-El Paso
Doug Ming, NASA-Johnson Space Center
Brian Monteleone, Arizona State University
Caroline Morisset, NASA-Ames Research Center
Sarah Noble, NASA-Goddard Space Flight Center
Elizabeth Rampe, Arizona State University
James Rice, NASA-Goddard Space Flight Center
John Schutt, Antarctic Search for Meteorites Project
James Skinner, U.S. Geological Survey-Flagstaff
Carolyn M. Tewksbury-Christle, United States Air Force
Barbara J. Tewksbury, Hamilton College
Alicia Vaughan, U.S. Geological Survey-Flagstaff
Aileen Yingst, Planetary Science Institute
Kelsey Young, Arizona State University

Date of this Version

2013

Citation

Acta Astronautica 90 (2013)

Comments

U.S. Government work

Abstract

Desert Research and Technology Studies (Desert RATS) is a multi-year series of hardware and operations tests carried out annually in the high desert of Arizona on the San Francisco Volcanic Field. These activities are designed to exercise planetary surface hardware and operations in conditions where long-distance, multi-day roving is achievable, and they allow NASA to evaluate different mission concepts and approaches in an environment less costly and more forgiving than space. The results from the RATS tests allow selection of potential operational approaches to planetary surface exploration prior to making commitments to specific flight and mission hardware development. In previous RATS operations, the Science Support Room operated largely in an advisory role, an approach driven by the need to provide a loose science mission framework that would underpin the engineering tests. However, the extensive nature of the 2010 traverse operations expanded the role of science operations and tested specific operational approaches. Science mission operations approaches from the Apollo and Mars-Phoenix missions were merged to form the baseline for this test. Six days of traverse operations were conducted during each week of the two-week test: three traverse days each week with voice and data communications continuously available, and three traverse days with only two 1-hour communications periods per day. Within this framework, the team evaluated integrated science operations management, using real-time tactical science operations to oversee daily crew activities and strategic-level evaluations of science data and daily traverse results during a post-traverse planning shift. During continuous communications, both tactical and strategic teams were employed; on days when communications were reduced to two periods per day, only a strategic team was employed. The Science Operations Team found that, provided communications are good and down-linking of science data is ensured, high-quality science return is possible regardless of the communications regime. What is lost under reduced communications is the scientific interaction between the crew on the planet and the scientists on the ground. These interactions were a critical part of the science process and significantly improved mission science return relative to reduced-communications conditions. The test also showed that the quality of science return is not measurable by simple numerical quantities but is, in fact, based on strongly non-quantifiable factors, such as the interactions between the crew and the Science Operations Teams. Although the metric evaluation data suggested some trends, there was not sufficient granularity in the data or specificity in the metrics to allow those trends to be understood from the numerical data alone.
