Jupyter Community Workshop:

Jupyter for Science User Facilities and High Performance Computing

11-13 June 2019, Berkeley, California

Scientists love Jupyter because it combines documentation, visualization, data analytics, and code into a document they can share, modify, and even publish. What about using Jupyter to control experiments in real-time, or steer complex simulations on a supercomputer, or even connect experiments to high-performance computing for real-time feedback? How does one reach outside the notebook to corral external data and computational resources in a seamless manner?

This workshop, held at the National Energy Research Scientific Computing Center (NERSC) and the Berkeley Institute for Data Science (BIDS), will bring together Jupyter developers, scientists from experimental and observational science facilities, and supercomputing center staff to elevate Jupyter as the preeminent interface for distributed data-intensive workflows. We will identify best practices, share lessons learned, clarify gaps and challenges in supporting deployments, and work on new tools to make Jupyter easier to use for big science.

This workshop is part of a series of Jupyter Community Workshops funded by Bloomberg to “bring together small groups of Jupyter community members and core contributors for high-impact strategic work and community engagement on focused topics.” Read this blog post for more context, or see this page for further details.