UK-led Science Data Processor consortium concludes work

News
by Mathieu Isidro on 09 May 2019
The SKA’s Science Data Processor (SDP) consortium has concluded its engineering design work, marking the end of five years’ work to design one of two supercomputers that will process the enormous amounts of data produced by the SKA’s telescopes.

The international consortium, led by the University of Cambridge in the UK, has designed the elements that will together form the “brain of the SKA”. In total, close to 40 institutions in 11 countries took part¹. SDP is the second stage of processing for the masses of digitised astronomical signals collected by the telescope’s receivers, following the correlation and beamforming that takes place in the Central Signal Processor (CSP).

“It’s been a real pleasure to work with such an international team of experts, from radio astronomy but also the High-Performance Computing industry,” said Maurizio Miccolis, SDP’s Project Manager for the SKA Organisation. “We’ve worked with almost every SKA country to make this happen, which goes to show how hard what we’re trying to do is.”

The role of the consortium was to design the computing hardware platforms, software, and algorithms needed to process science data from CSP into science data products.

“SDP is where data becomes information,” said Rosie Bolton, Data Centre Scientist for the SKA Organisation. “This is where we start making sense of the data and produce detailed astronomical images of the sky.”

To do this, SDP will need to ingest the data and move it through data reduction pipelines at staggering speeds, to then form data packages that will be copied and distributed to a global network of regional centres where it will be accessed by scientists around the world.
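
To make that data flow concrete, here is a minimal, purely illustrative sketch in Python of the stages just described (ingest, reduction, packaging, distribution to regional centres). The function names and the toy “reduction” step are hypothetical and are not the SDP’s actual software.

def ingest(visibility_stream):
    # Accept raw, correlated visibility chunks arriving from the CSP.
    for chunk in visibility_stream:
        yield chunk

def reduce_data(chunks):
    # Stand-in for the data reduction pipelines (flagging, calibration, imaging).
    for chunk in chunks:
        yield {"image": sum(chunk) / len(chunk)}  # toy "reduction"

def package(products, batch_size=2):
    # Group science data products into packages ready for distribution.
    batch = []
    for product in products:
        batch.append(product)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def distribute(packages, regional_centres):
    # Copy each package to every regional centre (here, just a dictionary).
    archives = {centre: [] for centre in regional_centres}
    for pkg in packages:
        for centre in regional_centres:
            archives[centre].append(pkg)
    return archives

# Toy run: three "observations", two hypothetical regional centres.
stream = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
print(distribute(package(reduce_data(ingest(stream))), ["centre_A", "centre_B"]))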

SDP itself will be composed of two supercomputers, one located in Cape Town, South Africa, to process data from SKA-mid, and one in Perth, Western Australia, to process data from SKA-low.

“We estimate SDP’s total compute power to be around 250 PFlops – that’s 25% faster than IBM’s Summit, the current fastest supercomputer in the world,” said Maurizio. “In total, up to 600 PB of data will be distributed around the world every year from SDP – that’s enough to fill more than a million average laptops.”
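
As a back-of-the-envelope check of those figures (assuming, as round numbers not given in the article, a ~500 GB average laptop drive and ~200 PFLOPS peak performance for IBM’s Summit):

# Back-of-the-envelope check of the quoted figures. The 500 GB laptop drive
# and Summit's ~200 PFLOPS peak are assumed round numbers, not from the article.
SDP_COMPUTE_PFLOPS = 250
SUMMIT_PEAK_PFLOPS = 200      # assumed peak performance of IBM Summit
ANNUAL_OUTPUT_PB = 600
AVG_LAPTOP_GB = 500           # assumed average laptop storage

speedup = SDP_COMPUTE_PFLOPS / SUMMIT_PEAK_PFLOPS - 1
laptops = ANNUAL_OUTPUT_PB * 1_000_000 / AVG_LAPTOP_GB   # 1 PB = 1,000,000 GB

print(f"SDP vs Summit: {speedup:.0%} faster")       # -> 25% faster
print(f"Laptops filled per year: {laptops:,.0f}")    # -> 1,200,000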

Additionally, because of the sheer quantity of data flowing into SDP – some 5 Tb/s, or 100,000 times faster than the projected global average broadband speed in 2022² – it will need to make decisions on its own in almost real-time about what is noise and what is worthwhile data to keep.

The team also designed SDP so that it can detect and remove man-made radio frequency interference (RFI) – for example from satellites and other sources – from the data.
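
As a loose illustration of that kind of automated excision (not the SDP’s actual algorithm), a common starting point is robust sigma-clipping: samples whose amplitude deviates strongly from the local statistics are flagged as interference and excluded. The Python/NumPy sketch below is a toy version of that idea; production flaggers such as AOFlagger work on full time-frequency data and are far more sophisticated.

import numpy as np

def flag_rfi(amplitudes, threshold=5.0):
    # Toy sigma-clipping flagger: mark samples far from the median as RFI.
    amplitudes = np.asarray(amplitudes, dtype=float)
    median = np.median(amplitudes)
    # Median absolute deviation is robust to the very outliers we want to find.
    mad = np.median(np.abs(amplitudes - median)) or 1e-12
    robust_sigma = 1.4826 * mad
    return np.abs(amplitudes - median) > threshold * robust_sigma

# Toy spectrum: noise around 1.0 plus two strong interference spikes.
spectrum = np.array([1.0, 1.1, 0.9, 1.05, 50.0, 0.95, 1.0, 30.0, 1.02])
flags = flag_rfi(spectrum)
print(flags)             # True where a sample is flagged as interference
print(spectrum[~flags])  # data kept after the flagged samples are removed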

“By pushing what’s technologically feasible and developing new software and architecture for our HPC needs, we also create opportunities to develop applications in other fields,” added Maurizio.

High-Performance Computing plays an increasingly vital role in enabling research in fields such as weather forecasting, climate research, drug development and many others where cutting-edge modelling and simulations are essential.

Prof. Paul Alexander, Consortium Lead at the University of Cambridge, concluded: “I’d like to thank everyone involved in the consortium for their hard work over the years. Designing this supercomputer wouldn’t have been possible without such an international collaboration behind it.”

¹The SDP consortium was led by the University of Cambridge and involved the following institutes: Barcelona Supercomputing Center (BSC) – Centre Nacional de Supercomputació, Spain, Beijing Normal University (BNU), China, Centre for High Performance Computing (CHPC), South Africa, Centro Supercomputación Castilla y León (FCSCL), Spain, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia, Forschungszentrum Jülich GmbH (Jülich), Germany, Fudan University (FDU), China, Guangzhou University (GZHU), China, International Centre for Radio Astronomy Research (ICRAR), Australia, Inspur Incorporation (INSPUR), China, Institute of Computing Technology (ICT), Chinese Academy of Sciences, China, Instituto de Astrofísica de Andalucía-CSIC (IAA), Spain, Institute of Space Science and Astronomy, University of Malta (ISSA), Malta, Kunming University of Science and Technology (KMUST), China, Max Planck Institute for Radio Astronomy (MPIfR), Germany, National Astronomical Observatories of China (NAOC), China, National Research Council Canada (NRC), Canada, Netherlands Institute for Radio Astronomy (ASTRON), the Netherlands, New Zealand Alliance (NZA), New Zealand, Pawsey Supercomputing Centre, Australia, Portuguese ENGAGE SKA Consortium, Portugal, Science and Technology Facilities Council (STFC), UK, Shanghai Astronomical Observatory (SHAO), China, Shanghai Advanced Research Institute (SARI), Chinese Academy of Sciences, China, Shanghai Jiao Tong University (SJTU), China, South African Radio Astronomy Observatory (SARAO), South Africa, University of Calgary, Canada, University of Cambridge, UK, University of Cape Town, South Africa, University College London (UCL), UK, University of Manchester, UK, University of Oxford (OeRC), UK, Yunnan Astronomical Observatory (YAO), China, Victoria University of Wellington (VUW), New Zealand

²Globally, the average broadband speed per household in 2022 should reach ~75 Mbps. Source: Cisco.

Local versions

ASTRON release

SARAO release 

STFC release

University of Cambridge release

More information

Find out more about SDP’s work, including photos and videos.

Hear from some of the people who delivered the design for SDP.


Watch the flow of data through the SKA in this animation [soundless video, closed captions available].

