Dr. Sudip Dosanjh is Director of the National Energy Research Scientific Computing (NERSC) Center at Lawrence Berkeley National Laboratory. NERSC’s mission is to accelerate scientific discovery at the U.S. Department of Energy’s Office of Science through high performance computing and extreme data analysis. NERSC deploys leading-edge computational and data resources for over 4,500 users from a broad range of disciplines. NERSC will be partnering with computer companies to develop and deploy pre-exascale and exascale systems during the next decade.
Previously, Dr. Dosanjh headed extreme-scale computing at Sandia National Laboratories. He was co-director of the Los Alamos/Sandia Alliance for Computing at the Extreme Scale from 2008 to 2012. He also served on the U.S. Department of Energy’s Exascale Initiative Steering Committee for several years.
Dr. Dosanjh had a key role in establishing co-design as a methodology for reaching exascale computing. He has numerous publications on exascale computing, co-design, computer architectures, massively parallel computing and computational science.
“Towards a Superfacility for Science”
Data-intensive computing has been of growing importance at the National Energy Research Scientific Computing Center (NERSC). Experimental facilities are being inundated with data due to advances in detectors, sensors and sequencers — in many cases these instruments are improving at a rate even faster than Moore’s law for semiconductors. Scientists are finding it increasingly difficult to analyze these large scientific data sets and, as a consequence, they are often transferring data to supercomputing centers like NERSC. Examples range from cosmology to particle physics to biology. Berkeley Lab is partnering with other institutions to create a Superfacility for Science through advanced networking, the development of new supercomputing technologies and advances in software and algorithms. The U.S. Department of Energy’s Energy Sciences Network (ESnet) is tying together experimental instruments, supercomputing facilities and research institutions at 100 Gbps. ESnet, which is designed to support scientific workflows, carries 100 Petabytes of data per month and has recently been extended to connect U.S. research institutions to the Large Hadron Collider located near Geneva, Switzerland. Supercomputers at NERSC are increasingly being designed to support data workflows, including the following developments:
Finally, there is considerable work on moving deep learning algorithms and software to supercomputers, with a special focus on scalability.
Dr John Gustafson is an applied physicist and mathematician. He is a former Director at Intel Labs and former Chief Product Architect at AMD. A pioneer in high-performance computing, he introduced cluster computing in 1985 and in 1988 first demonstrated scalable massively parallel performance on real applications, work for which he won the inaugural ACM Gordon Bell Prize; the scaled-speedup model underlying that demonstration became known as Gustafson’s Law. He is also a recipient of the IEEE Computer Society’s Golden Core Award.
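For readers unfamiliar with it, Gustafson’s Law is usually stated as a scaled-speedup model. The formulation below uses the conventional notation (it is not drawn from the text above):

```latex
% Scaled speedup on N processors, where s is the serial fraction
% of the workload as measured on the parallel system (0 <= s <= 1):
S(N) = N - s\,(N - 1)
% Unlike Amdahl's Law, S(N) grows linearly in N for fixed s,
% because the problem size is assumed to scale with the machine.
```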
What if we could double computer speed and storage capability without shrinking transistors? A new data type called a “posit” may make this possible. Posits are designed to be a direct drop-in replacement for IEEE Standard 754 floats. They provide compelling advantages over floats, including higher accuracy, simpler hardware implementation, larger dynamic range, better closure under arithmetic operations, and simpler exception handling. Most importantly, they provide bit-identical reproducibility and portability, something IEEE floats do not. A series of comprehensive benchmarks compares how many decimals of accuracy various number formats can produce for a set number of bits per value. The higher accuracy means we can often use half as many bits to store numerical values, from the low-precision floats used for Deep Learning to the high-precision floats used for computational physics. Posits should also take up less silicon area to implement than an IEEE float. With fewer gate delays per operation as well as a smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS achievable with similar hardware resources. Storage requirements for Big Data can potentially be halved. Posit arithmetic effectively doubles computer speed and storage capability, as a generation of Moore’s Law would, but without changing chip technology.
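The decoding rule for posits (sign, a variable-length “regime” run, optional exponent bits, then fraction bits) is simple enough to sketch in a few lines. The following is a minimal illustrative decoder for an 8-bit posit with es = 0, the toy configuration used in early posit expositions; the function name is my own, and this is a sketch for exposition rather than a production implementation:

```python
def decode_posit(bits, nbits=8, es=0):
    """Decode an nbits-wide posit bit pattern (given as an int) to a float.

    Illustrative sketch for posit<8,0>; not a full or hardened implementation.
    """
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0                      # the unique representation of zero
    if bits == 1 << (nbits - 1):
        return float('nan')             # Not-a-Real (NaR)
    sign = bits >> (nbits - 1)
    if sign:
        bits = (-bits) & mask           # two's complement gives the magnitude
    body = format(bits & ((1 << (nbits - 1)) - 1), '0%db' % (nbits - 1))
    # Regime: a run of identical bits terminated by the opposite bit (or the
    # end of the word); a run of m ones means k = m - 1, m zeros means k = -m.
    run = len(body) - len(body.lstrip(body[0]))
    k = run - 1 if body[0] == '1' else -run
    rest = body[run + 1:]               # bits after the regime terminator
    exp_bits, frac_bits = rest[:es], rest[es:]
    e = int(exp_bits, 2) if exp_bits else 0
    f = int(frac_bits, 2) / 2.0 ** len(frac_bits) if frac_bits else 0.0
    value = 2.0 ** ((1 << es) * k + e) * (1.0 + f)
    return -value if sign else value

# Spot checks for posit<8,0>:
# 0b01000000 -> 1.0, 0b01100000 -> 2.0, 0b01010000 -> 1.5, 0b11000000 -> -1.0
```

Note how the variable-length regime is what gives posits their tapered accuracy: values near 1 get many fraction bits, while very large or small magnitudes trade fraction bits for dynamic range.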
Dr Di Li is the Chief Scientist of the Radio Division, National Astronomical Observatory, Chinese Academy of Sciences.
Dr. Li has led numerous research programs, including spectroscopic and mapping projects with the JVLA, GBT, Arecibo, Herschel, SOFIA, ALMA, etc. He has pioneered observing and data-analysis techniques, including the HI narrow self-absorption (HINSA) technique and a new inversion solution for the dust temperature distribution. These techniques facilitate important measurements of star-forming regions, such as their formation time scale. His work has been featured in Nature as a research highlight. He was awarded a National Research Council (US) “Resident Research Fellow” award and shared a NASA outstanding team award (2009).
He is now leading the science preparation of the Five-hundred-meter Aperture Spherical radio Telescope (FAST).
Dr Li has served on the Steering Committee of the Australia Telescope National Facility (ATNF), and is co-chair of the “Cradle of Life” science working group (SWG) of the SKA, a member of the Chinese Academy of Sciences Major-facilities Guidance Group, and an adviser to the Breakthrough Listen initiative.
As capabilities of data transmission, storage, and processing grow, so do the demands from new astronomical instruments. For example, the pioneering instrument in time-domain astronomy, the Large Synoptic Survey Telescope (LSST), is expected to collect ~5PB per year, a major challenge at its conception. The Five-hundred-meter Aperture Spherical radio Telescope (FAST) saw first light on September 25, 2016 and is now in its commissioning phase. When operational, FAST will require a data rate of about 150TB per day in its full survey mode, amounting to about 15-30PB per year, already far exceeding that of LSST. The Square Kilometre Array (SKA) will require even more, by orders of magnitude.
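As a sanity check on those figures, the annual volume implied by a daily rate depends on how often the telescope runs in full survey mode. A quick back-of-the-envelope calculation (the duty-cycle interpretation is my own illustrative assumption, not from the abstract):

```python
DAILY_RATE_TB = 150          # FAST full-survey data rate, TB/day (from the text)
TB_PER_PB = 1000

# Continuous full-survey operation would give roughly 55 PB/year:
continuous_pb = DAILY_RATE_TB * 365 / TB_PER_PB    # = 54.75 PB

# The quoted 15-30 PB/year would then correspond to the telescope spending
# roughly a quarter to a half of its time in full survey mode:
for annual_pb in (15, 30):
    duty_cycle = annual_pb / continuous_pb
    print("%d PB/year -> duty cycle ~%.0f%%" % (annual_pb, duty_cycle * 100))
```

Even at the lower bound, the archive grows by several LSSTs’ worth of data per year, which is what drives the transmission and archiving considerations discussed below.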
This presentation will report on the status of FAST and the current practical considerations in optimising transmission, archiving, and processing. The final solution will likely be ‘organic’: a constantly evolving mix of dedicated fibres, the internet, and private and public clouds.
Guy Griffiths is Director of R&D at Animal Logic, an animation studio based in Sydney that created Happy Feet, The Lego Movie and, recently, The Lego Batman Movie. Since joining the company in 2000, when there were 35 artists, he has been instrumental in engineering the technology stack that supports a studio of 700+ artists from 21 craft groups working on multiple films across two continents. He leads the R&D group at Animal Logic that develops core tool sets ranging from coordination and collaboration tools, compute farm scheduling, and scene complexity management tools to simulation and rendering systems. Prior to Animal Logic he spent 6 years at VFX and services companies in Los Angeles in senior and executive technology roles. While at Kodak in the early 1990s he was a founding member of the team that developed Cineon, the first “flowgraph”-based digital film compositing system, for which he and others received a Scientific and Engineering Academy Award.
C3DIS 2017 will bring together researchers and computational and data science specialists from CSIRO, publicly funded research organisations and other invited institutions and organisations. This will enable attendees to share their science outcomes and learnings, and build a community of practice around Computational and Data Intensive Science.
Free for CSIRO staff and invited participants