Nuclear Energy and Core AI Capabilities Ft. Matthew Dearing, Software Engineer and Technical Lead of AI at Argonne National Laboratory

Discover the latest innovations, tech practices, marketing trends, and more in our in-depth interview series, ExtraMile, by HiTechNectar. In today's session, we are delighted to host Matthew Dearing, Software Engineer and Technical Lead of AI at Argonne National Laboratory, a U.S. Department of Energy national laboratory for science and engineering research.

Argonne National Laboratory has made remarkable advances across fields of technology, including nuclear energy, core AI capabilities, and large language models. Matthew leads AI research and development at the organization, ensuring that its innovations are efficient and impactful. He makes sure that his team applies the most advanced AI and ML models to deliver tailored solutions to real-world problems, accelerating scientific research.

Join us as we explore key moments from Matthew's professional journey, Argonne's major innovations, core AI capabilities, the importance of nuclear energy, deep learning surrogate models, and more.

Welcome, Matthew! We’re glad you could join us today!

Q1. Over the years, you have built expertise across AI, ML, LLMs, NLP, and related fields. Share the key highlights from your professional journey and experiences so far.

Matthew. A key appreciation when developing AI-related technologies – one that can be hard to wrap your head around because it is so different from traditional software engineering – is the requirement to be experimental. When approaching a new business or scientific challenge to which you think you want to apply AI, it's still impossible to know up front whether any machine learning algorithm or LLM prompting strategy will do the job. You just have to try it out and see how it goes. If there is a spark of a potential solution, then you keep digging in: iterate, refine, tune, and experimentally work toward a positive outcome, if one exists. This can be challenging to communicate to stakeholders at first because it is such a new way of thinking, but it is one that I think is catching on nicely with our many partners across Argonne National Laboratory.

Q2. You have contributed to Argonne National Laboratory's growth for a decade. Which major innovations has the organization led over these years?

Matthew. From within the IT organization at Argonne, we've navigated through several critical evolutions in the types of software solutions we deliver to the lab. When I started, the notion of software-as-a-service was all the rage and transforming how we architected enterprise-scale solutions. Integrating with major cloud software vendors also became a new paradigm for our teams to understand and deliver, as we began the gradual shift away from customized applications fully developed in house. Then, of course, AI hit the scene. While many scientists at Argonne were already incorporating machine learning (ML), natural language processing (NLP), and deep learning, we started exploring how to bring in this tech to support lab operations back in 2019. We made some interesting progress with several new AI-enhanced tools, until the large language model (LLM) explosion that took off after the release of ChatGPT in November 2022. We haven't looked back since and continue to explore every day how we can innovate more with generative AI and LLMs.

Q3. What is your opinion on AI-as-a-service? How can it benefit businesses across industries?

Matthew. I suggest that this is the key approach for thinking about how to bring core AI capabilities at scale to an enterprise. We already see an increasing commonality in techniques and approaches for how AI (in its many different forms) can be incorporated into a broad range of applications, spanning scientific needs and traditional enterprise operations. If every business or science group stands up its own local, custom AI components, then we will lose out on a lot of the efficiency and standardization that can bring significant value to an organization. Delivering those core AI features (the ones that exist today and the many more that haven't been invented yet) as a centralized service that everyone can tap into and expand upon with advanced, local custom solutions will help drive innovation faster and more competitively.
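To make the centralized-service pattern concrete, here is a minimal sketch of what a consuming team's client might look like, assuming a hypothetical internal inference endpoint; the URL, payload schema, and task name are illustrative and not an actual Argonne API.

```python
# Minimal sketch of consuming a centralized AI service, assuming a
# hypothetical internal endpoint; URL and payload schema are illustrative.
import requests

AI_SERVICE_URL = "https://ai.example.internal/v1/generate"  # hypothetical

def summarize(text: str, timeout: int = 30) -> str:
    """Call the shared AI service instead of hosting a local model."""
    response = requests.post(
        AI_SERVICE_URL,
        json={"task": "summarize", "input": text},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["output"]

if __name__ == "__main__":
    print(summarize("Argonne explores AI for science and operations."))
```

The design choice is that each group writes only a thin client like this, while model hosting, upgrades, and access controls live in one shared service that every team benefits from.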

Q4. You are closely involved in operational initiatives using AI. Which common challenges arise in this domain, and how do you efficiently address them?

Matthew. One challenge is the speed at which the next AI idea, technique, or model comes out. Everyone wants the latest and greatest, which is understandable (I want it, too!), but things are moving at an unsustainable pace, especially for smaller organizations. Keeping up is nearly impossible – but the risks and costs of not keeping up are significant. The entire world is running in this race, and as a premier research organization, once you fall behind too far, it will be tough to catch up. So, falling behind is not an option.

Q5. Do you think nuclear energy is positioned to dominate the energy sector in the near future? What opportunities does it offer?

Matthew. I grew up with the long-term understanding that nuclear energy was under a great pause – one that might never restart. So, the drive for more AI is an interesting motivating factor for lighting up more nuclear energy. Whatever energy source dominates and feeds the future of AI, it has to deliver innovative efficiencies that we don't really have in place today, as well as a sustainability vision and capability, so that once it hits its anticipated massive scale, not yet seen on this planet, the energy solution won't bring other critical infrastructure and natural resources to their knees.

Q6. How can the use of large language models advance capabilities in science organizations?

Matthew. There are so many ways. In fact, we have already published a research study with the University of Chicago on how early science adopters in this space harness LLMs, and we continue to engage with scientists at the lab to help them envision how they can leverage generative AI to augment their capacity and capability to do great science. So far, we have seen LLMs help with knowledge discovery in massive datasets, brainstorm new hypotheses, support code development for complex science simulations, and even drive automated workflows integrated with experimental equipment. As LLMs continue to develop, the possible avenues through which they support science will only expand in ways we have not yet imagined.

Q7. Based on one of your recent studies, we would like to know what a deep learning surrogate model is. How is it different from a traditional physics-based model?

Matthew. The specific application I've studied is how we can build a side model that can temporarily replicate the generally expected behavior of a high-fidelity simulation requiring significant computation and time to run. For example, simulating how multiple scientific applications execute simultaneously on a supercomputer is a critical problem to model because these programs can interfere with one another in the complex memory and communication network that makes up the supercomputer. However, these simulations, which may be calculating many different factors and metrics across the network, can take hours just to work through a few seconds of activity on the supercomputer. If we could design a predictive model that we can switch on and off during stable periods of these complex network communications, then we could extend our simulations to longer runs more efficiently – allowing us to get closer to replicating real-world supercomputers so that we can learn how to design better architectures for the next generation of massively parallel computing.
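As a rough illustration of the surrogate idea (not the actual model from the study), here is a minimal PyTorch sketch that learns to predict the next step of a simulation's network metrics from a short window of past metrics; the window size, feature count, and random training data are stand-in assumptions.

```python
# Minimal surrogate-model sketch in PyTorch; shapes and data are
# illustrative stand-ins for real high-fidelity simulation traces.
import torch
import torch.nn as nn

WINDOW, FEATURES = 16, 8  # assumed: 16 past steps of 8 network metrics

class SurrogateMLP(nn.Module):
    """Predicts the next step's metrics from a window of past metrics."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                      # (batch, WINDOW * FEATURES)
            nn.Linear(WINDOW * FEATURES, 128),
            nn.ReLU(),
            nn.Linear(128, FEATURES),          # next-step metric estimates
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Stand-in for traces logged by the expensive simulator.
x = torch.randn(1024, WINDOW, FEATURES)
y = torch.randn(1024, FEATURES)

model = SurrogateMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(100):  # full-batch training for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# During stable communication periods, this cheap forward pass stands in
# for the simulator; control returns to the full simulation otherwise.
with torch.no_grad():
    next_metrics = model(x[:1])
```

In the scenario Matthew describes, the surrogate would be trained on logged simulator traces and switched on only during stable phases, handing control back to the high-fidelity simulation whenever the network dynamics change.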

Q8. What fascinates you the most about technology? Which innovations do you think will gain center stage in the upcoming decade?

Matthew. Technology is a key differentiator between our species and most others. From chiseling a rock with another rock to writing code for a deep learning AI model, we've come a long way and have a long way to go. So, how future innovations impact human performance and capability is fascinating to track because these drive our evolution. While it might sound pretty scary right now, I'm certain we'll all be plugged into tech, quite literally, with human-computer interfaces sooner rather than later. And, after we all have a personal, fine-tunable LLM running on our mobile phones, the next step of jacking our brains into this powerhouse of portable digital knowledge will quickly take us to the next level.

Explore Our Other Insightful Interviews:

Integrating AI to Streamline IT and Cybersecurity Practices Ft. Timothy J. McGrath, President and Chief Executive Officer, Connection

Optimizing Dental Facilities with AI Receptionist: A Conversation with Burhan Syed, CEO and Co-Founder of Rondah AI
