Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering: Critical Heat Flux (CHF) Exercises
[Figure: Scatter plot matrix of the NRC CHF database]

Background

Recent performance breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML) have led to unprecedented interest in AI/ML among nuclear engineers. However, the lack of dedicated benchmark exercises for the application of AI/ML techniques in nuclear engineering analyses limits their applicability and broader usage. For this reason, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) to design benchmark exercises targeting important AI/ML activities and spanning computational domains of interest from single physics up to multi-scale and multi-physics.

The first set of exercises proposed by the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering is the Critical Heat Flux (CHF) Benchmark Exercises. These exercises focus on the prediction of CHF, a phenomenon that must be prevented to ensure the integrity of the first barrier in widely adopted nuclear power plant designs and that constitutes an important design limit for the safe operation of reactors.

The US Nuclear Regulatory Commission (NRC) provided the critical heat flux database that forms the cornerstone of this benchmark exercise.

Scope and objectives

The Critical Heat Flux (CHF) benchmark exercise leverages the CHF database recently published by the US Nuclear Regulatory Commission (NRC) to compare the performance of various AI/ML algorithms in predicting the CHF. This database was used to develop the 2006 Groeneveld CHF lookup tables (LUT) and includes nearly 25 000 data points, making it the largest known CHF dataset publicly available worldwide. The data consists of measurements in uniformly heated vertical water-cooled tubes, collected over a span of 60 years from 59 different sources.

The designed benchmark exercise includes four main AI/ML tasks:

  • Task 1 is optional and consists of a dimensionality analysis, in which participants perform feature selection and extraction for their ML models.
  • In Task 2, ML regression algorithms are developed and assessed, including model optimisation, training/validation and testing.
  • In Task 3, participants evaluate their trained models by computing specific metrics and by verifying that overfitting does not occur.
  • Finally, in Task 4, participants submit CHF predictions on a blind dataset not seen during training, validation or testing.
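The workflow behind Tasks 2 and 3 can be sketched in a few lines. The example below is a minimal illustration, not part of the benchmark specification: it uses synthetic data in place of the NRC CHF database (the feature names in the comments are hypothetical), fits a plain least-squares linear model, and compares training, validation and test errors as a simple overfitting check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for CHF data. The four columns might represent inputs
# such as pressure, mass flux, inlet subcooling and tube diameter; these are
# hypothetical choices for illustration, not the actual database columns.
n = 1000
X = rng.uniform(0.0, 1.0, size=(n, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.05, n)

# Task 2-style split: 70% training, 15% validation, 15% held-out test.
idx = rng.permutation(n)
tr, va, te = idx[:700], idx[700:850], idx[850:]

def design(X):
    # Append a constant column so the linear model has a bias term.
    return np.hstack([X, np.ones((len(X), 1))])

# Fit by ordinary least squares on the training set only.
w, *_ = np.linalg.lstsq(design(X[tr]), y[tr], rcond=None)

def rmse(X, y):
    return float(np.sqrt(np.mean((design(X) @ w - y) ** 2)))

# Task 3-style check: training and validation errors of similar magnitude
# suggest the model is not overfitting the training set.
train_rmse = rmse(X[tr], y[tr])
valid_rmse = rmse(X[va], y[va])
test_rmse = rmse(X[te], y[te])
print(f"train RMSE = {train_rmse:.4f}")
print(f"valid RMSE = {valid_rmse:.4f}")
print(f"test  RMSE = {test_rmse:.4f}")
```

In the actual exercise, participants would of course replace the synthetic data with the NRC database and the linear model with their ML algorithm of choice, but the split-train-evaluate structure remains the same.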

Organisation

The benchmark exercises are supervised by the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering. Results are reported to the Task Force and will be presented during the annual WPRS Benchmarks Workshops.

Co-ordinators: Jean-Marie LE CORRE (Westinghouse Electric Sweden AB, Sweden), Gregory DELIPEI (North Carolina State University, USA), Xu WU (North Carolina State University, USA), Xingang ZHAO (Oak Ridge National Laboratory, USA)

NEA Secretariat: Oliver BUSS

NEA GitLab Working Area

Participation

Participation is open to all NEA member countries. Please send a signed version of the conditions form to the NEA Secretariat to join this benchmark activity.

Schedule

  • CHF benchmark introduction at Task Force meeting: December 2022
  • Phase 1 draft specification and distribution: May 2023
  • Presentation at the 2023 NEA WPRS Annual Workshops: May 2023
  • Phase 1 online kick-off meeting: 30 October 2023, 13-15 CET
  • Phase 1 online Q&A meeting (optional): December 2023
  • Presentation at the 2024 NEA WPRS Annual Workshops: May 2024
  • Phase 1 submission: August 2024
  • Phase 1 results draft report and online meeting: December 2024
