Keynotes
Global-Scale Earth Science Data Analysis in the Cloud
Matt Hancher, Engineering Lead, Google Earth Engine
Wednesday, November 4
Abstract: The volume of satellite imagery and other Earth data is growing rapidly, as is the urgent demand for information that can be derived from such data to inform decisions in a range of areas including global food and water security, disaster risk management, public health, biodiversity, and climate change adaptation. Key trends in computing must influence the design of infrastructure to meet those global data analysis challenges in the decade to come. This talk will describe the trends and technologies that have informed Google's development of the Earth Engine cloud computing platform over the past six years, as well as our experiences applying that platform to computational problems related to a number of global challenges as we work towards a vision of a living, breathing dashboard of the planet.
Biography: Matt Hancher leads Google's Earth Engine engineering team, which he co-founded in 2009 to bring Google's datacenter computing expertise to bear on global challenges such as deforestation, food security, and global public health. He studied EE and CS at MIT, where he did research in robotics and embedded systems at the Media Lab until 2003. Before joining Google, he was a Research Scientist at the NASA Ames Research Center, where he worked on robotics and computer vision, including 3D reconstruction of the Moon and Mars from satellite imagery for robotic mission planning.
Visualization and Interactive Data Analysis
Jeffrey Heer, University of Washington
Thursday, November 5
Abstract: Data analysis is a complex process with frequent shifts among data formats and models, and among textual and graphical media. We are investigating how to better support the lifecycle of analysis by identifying critical bottlenecks and developing new methods at the intersection of data visualization, machine learning, and computer systems. Can we empower users to transform and clean data without programming? How can we support more expressive and effective visualization tools? How might we enable domain experts to guide machine learning methods to produce effective models? This talk will present selected projects that attempt to address these challenges and introduce new tools for interactive visual analysis.
Biography: Jeffrey Heer is an Associate Professor of Computer Science & Engineering at the University of Washington, where he directs the Interactive Data Lab and conducts research on data visualization, human-computer interaction, and social computing. The visualization tools developed by his lab (D3.js, Vega, Protovis, Prefuse) are used by researchers, companies, and thousands of data enthusiasts around the world. His group's research papers have received awards at the premier venues in Human-Computer Interaction and Information Visualization (ACM CHI, ACM UIST, IEEE InfoVis, IEEE VAST, EuroVis). Other awards include MIT Technology Review's TR35 (2009), a Sloan Foundation Research Fellowship (2012), and a Moore Foundation Data-Driven Discovery Investigator award (2014). Jeff holds BS, MS, and PhD degrees in Computer Science from UC Berkeley and taught at Stanford University from 2009 to 2013. He is also a co-founder of Trifacta, a provider of interactive tools for scalable data transformation.