ESL11 Workshop Program

February 9-11, 2026, Zaragoza, Spain



Monday, February 9

Time Event
12:00 - 13:00 Registration
13:00 - 14:15 Lunch

Session 1

Time Speaker Title
14:30 - 14:55 Emilio Artacho Introduction to the ESL philosophy and aims
14:55 - 15:20 Susi Lehtola Progress on reusable libraries for quantum chemistry
15:20 - 15:45 Giovanni Pizzi Bringing powerful automation to everyone
15:45 - 16:10 Thomas Purcell Automating Computational Materials Design with Explainable AI
16:10 - 16:40 Break

Session 2

16:40 - 17:05 Felipe da Jornada Scaling up MBPT calculations: from massively parallel Fortran/MPI implementations to vibe coding
17:05 - 17:30 Jessica Nash Formulating Best Practices for Scientific Software Development in the Age of AI
17:30 - 18:30 Round Table 1: Education, AI-based coding, etc.
20:30 Workshop Dinner

Tuesday, February 10

Session 3

9:00 - 9:25 Bálint Aradi Unit Testing of Scientific Codes in the Upcoming Age of “Vibe” Coding
9:25 - 9:50 Benjamin Hourahine Possible role of automatic differentiation in ESL
9:50 - 10:15 Thomas Keal ChemShell and its libraries (Title TBC)
10:15 - 10:40 Andrew Logsdail The application of a modular data interface for hierarchical electronic structure calculations
10:40 - 11:15 Break

Session 4

11:15 - 11:40 Timo Reents Score-based diffusion models for accurate crystal structure inpainting and reconstruction of hydrogen positions
11:40 - 12:05 María Camarasa AI-Assisted Parameter Optimization in Density Functional Theory Calculations
12:05 - 12:30 Simone Fioccola Efficient g-Tensor Calculations from a single-point Formula: Applications to Large Supercells and Defect Modeling
12:30 - 12:55 Luca Frediani VAMPyR: combining code prototyping with frontier research in electronic structure theory
13:00 - 14:15 Lunch

Session 5

14:30 - 14:55 Mark van Schilfgaarde Unique features of Questaal as a testbed for standardized libraries
14:55 - 15:20 Dimitar Pashov TBD
15:20 - 15:45 Emanuel Gull Don’t be afraid of temperature!
15:45 - 16:10 James Green Use and Development of Open Source Libraries for Electronic Structure Theory Within the FHI-aims Software Package
16:10 - 16:40 Break

Session 6

16:40 - 17:05 Pietro Delugas Efficient GPU Utilization for Small Jobs and Multi-k-Point Workloads in Quantum ESPRESSO
17:05 - 17:20 Fabrizio Ferrari Quantum ESPRESSO’s FFTXlib: Fast Fourier Transforms for GPU driven material simulations
17:20 - 17:35 Fabio Affinito Where is the technological evolution taking us?
17:35 - 18:30 Round Table 2: Strategies to deal with novel architectures and technology evolution

Wednesday, February 11

Session 7: Quantum Computing

Chair: Yann Pouillon

9:00 - 9:25 Bruno Senjean Quantum algorithms for quasi-diabatic excited-state quantum chemistry
9:25 - 9:50 Akilan Rajamani Quantum Density Functional Theory within BigDFT
9:50 - 10:15 Max Rossmannek Quantum Computing for Quantum Chemists
10:15 - 10:40 Jakob Kottmann TBD
10:40 - 11:15 Break

Session 8

11:15 - 11:40 Ameeya Bhusan-Sahoo DFT-Optimized Orthogonal and Local Basis Sets for Quantum Simulation
11:40 - 12:05 Ask Larsen ASE: Recent developments in code and community
12:05 - 12:30 Volker Blum ELSI status and roadmap
12:30 - 12:55 Closing Remarks / Plans
13:00 - 14:15 Lunch

Abstracts

Fabio Affinito

Where is the technological evolution taking us?

This brief talk observes the latest technological trends and compares them with users’ habits when it comes to using scientific software and libraries. The increasing complexity of technology and science could become a nightmare, unless we rely on solid software engineering practices…



Bálint Aradi

Unit Testing of Scientific Codes in the Upcoming Age of “Vibe” Coding

The increasing use of artificial intelligence, particularly large language models (LLMs), in supporting the development and coding of scientific software presents new challenges. While verifying the syntactic correctness of AI-generated code is relatively straightforward by relying on compiler error messages, ensuring its semantic and algorithmic correctness is considerably more complex. In this context, unit tests play a crucial role in validating the integrity of the code, making the adoption of granular unit testing for scientific applications increasingly important.

Fortuno is a flexible and extensible unit testing framework designed specifically for modern Fortran programming. It provides a simple, user-friendly interface, minimizing boilerplate code when writing tests. Emphasizing modularity and extensibility, Fortuno offers a robust foundation for creating customized testing environments. Written in Fortran 2018, it supports a range of testing scenarios, including the testing of MPI- and coarray-parallel projects.

In this presentation, I will provide an overview of the Fortuno framework’s architecture and demonstrate how its features enable efficient testing of complex scientific codes. I will also share practical insights gained from using Fortuno within the DFTB+ project. Finally, I will briefly discuss the potential of using frameworks like Fortuno to support LLM-based development in scientific coding.
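As a language-agnostic illustration of the granular unit testing discussed above (Fortuno itself targets modern Fortran; this hypothetical sketch instead uses Python’s built-in unittest module, with a toy numerical kernel standing in for scientific code):

```python
import unittest


def trapezoid(f, a, b, n=100):
    """Composite trapezoidal rule -- a stand-in for a small numerical kernel."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h


class TestTrapezoid(unittest.TestCase):
    def test_constant_is_exact(self):
        # A constant integrand must be integrated exactly for any grid size.
        self.assertAlmostEqual(trapezoid(lambda x: 3.0, 0.0, 2.0, n=4), 6.0)

    def test_linear_is_exact(self):
        # The trapezoidal rule is exact for linear functions.
        self.assertAlmostEqual(trapezoid(lambda x: 2.0 * x, 0.0, 1.0), 1.0)

    def test_quadratic_converges(self):
        # For x^2 on [0, 1] the error must shrink as the grid is refined.
        exact = 1.0 / 3.0
        coarse = abs(trapezoid(lambda x: x * x, 0.0, 1.0, n=10) - exact)
        fine = abs(trapezoid(lambda x: x * x, 0.0, 1.0, n=100) - exact)
        self.assertLess(fine, coarse)
```

Each case pins down one analytic property of the kernel, which is the granularity that makes a semantic regression easy to localize; a real suite would be run with, e.g., `python -m unittest`.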



Emilio Artacho

Introduction to the ESL philosophy and aims



Ameeya Bhusan-Sahoo

DFT-Optimized Orthogonal and Local Basis Sets for Quantum Simulation

This contribution focuses on the construction of many-body Hamiltonians derived from BigDFT’s wavelet-based support-function formalism and the quantitative characterization of their sparsity and locality properties. By extending the atom-centered support-function representation beyond the one-body level through explicit inclusion of Coulomb interaction tensors, we investigate how the inherent spatial localization of the basis impacts the structure, conditioning, and effective sparsity of the resulting Hamiltonian. Metrics such as norm-based sparsity measures, orbital variance, and overlap conditioning are used to characterize the Hamiltonian in view of its suitability for quantum simulation workflows. The presentation will highlight the practical methodology, numerical behavior, and implications for reducing quantum resource overhead in near-term algorithms.
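As a hypothetical, self-contained illustration of what a norm-based sparsity measure can look like (the specific metrics used in this work are not reproduced here), Hoyer’s l1/l2 ratio maps a matrix to a score between 0 (all entries equal in magnitude) and 1 (a single nonzero entry):

```python
import math


def hoyer_sparsity(matrix):
    """Hoyer's norm-ratio sparsity: 0 for uniform magnitudes, 1 for one nonzero."""
    flat = [abs(x) for row in matrix for x in row]
    n = len(flat)
    l1 = sum(flat)
    l2 = math.sqrt(sum(x * x for x in flat))
    if l2 == 0.0:
        return 1.0  # all-zero matrix: maximally sparse by convention
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1.0)
```

A dense, uniform matrix scores near 0, while a strongly localized one scores near 1, which is the kind of scalar summary that lets different basis constructions be compared on equal footing.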



Volker Blum

ELSI status and roadmap

(with help by AG and BH)



María Camarasa

AI-Assisted Parameter Optimization in Density Functional Theory Calculations

The accurate determination of the optoelectronic properties of molecules and solids has been a central goal since the initial development of first-principles methods. This need has become even more pressing as the field of quantum materials rapidly advances, enabling breakthroughs in many different fields, such as energy-storage technologies, quantum devices, and biomedical sensing. A key challenge is the reliable prediction and characterization of electronic structure, where density functional theory (DFT) has long served as the standard computational tool. However, achieving the accuracy of many-body perturbation theory remains computationally demanding.

In this work, we explore a new strategy that couples modern artificial-intelligence-based optimization techniques with traditional DFT workflows. By integrating these optimization algorithms directly into ab initio packages, we enable the automated tuning of key calculation parameters with the goal of improving both accuracy and efficiency. We present initial results obtained using various state-of-the-art optimization methods implemented in open-source libraries, demonstrating their potential to accelerate and enhance first-principles electronic-structure predictions.



Pietro Delugas

Efficient GPU Utilization for Small Jobs and Multi-k-Point Workloads in Quantum ESPRESSO

Modern GPUs offer massive parallelism, but current QE implementations optimized for large plane-wave and linear algebra operations underperform for systems with small unit cells and many k-points. We present a hybrid approach combining host multithreading with OpenACC offloading to execute multiple k-point blocks in parallel. Different strategies for diagonalization are explored: independent GPU execution, batching small eigenproblems, and CPU fallback. We also address load imbalance arising from variable task times with a lightweight library for runtime measurement and redistribution, demonstrated in band parallelization for the phonon code. Performance comparisons highlight the benefits of this approach over MPI-only and MPS-based solutions.



Fabrizio Ferrari

Quantum ESPRESSO’s FFTXlib: Fast Fourier Transforms for GPU driven material simulations

Three-dimensional FFTs are critical to plane-wave DFT simulations, and their performance is deeply tied to how data is distributed and parallelized. In this talk, I will present our work on analyzing and improving FFTXlib, the distributed 3D FFT engine used in Quantum ESPRESSO, and compare it with NVIDIA’s cuFFTMp on modern GPU-accelerated systems. I will highlight how PW-DFT features, such as energy cutoffs and multi-orbital batching, shape FFT behavior, and introduce an adaptive batching strategy that makes better use of high-bandwidth interconnects and topology-aware communication. This approach delivers robust strong scaling even at large node counts. I will show results from the Leonardo and MareNostrum 5 supercomputers and validate results through production simulations.



Simone Fioccola

Efficient g-Tensor Calculations from a single-point Formula: Applications to Large Supercells and Defect Modeling

Electron paramagnetic resonance (EPR) is a powerful technique for characterizing point defects in semiconductors, with the g-tensor acting as a sensitive probe of the defect’s electronic structure and local symmetry. However, standard linear response methods for computing g-tensors from first principles become increasingly demanding and numerically unstable when applied to the large supercells required to model isolated defects.

To overcome these limitations, we implement the single-point formula for orbital magnetization within the QE-CONVERSE module of Quantum ESPRESSO. Unlike previous converse formulations based on the covariant derivative, this approach enables efficient and stable g-tensor calculations using only Γ-point sampling, avoiding both explicit k-point summation and the solution of linear response equations.

We evaluate the accuracy and performance of the method on prototypical point defects in silicon, diamond, and α-quartz, comparing the results with those obtained from linear response implementations in CASTEP and QE-GIPAW. We show that the single-point converse method exhibits faster convergence with respect to supercell size, superior numerical stability, and substantially lower computational cost. These results establish the approach as a robust and scalable alternative for first-principles EPR studies of defects in complex materials.



Luca Frediani

VAMPyR: combining code prototyping with frontier research in electronic structure theory

Electronic structure calculations have historically been dominated by atom-centred basis sets for isolated systems and plane waves for extended systems. Recently, real-space methods have gained prominence: they provide a computationally efficient route, achieve very high precision, and do not suffer from the drawbacks of atomic orbitals on the one hand and plane waves on the other. Their most significant disadvantage, a high memory footprint, is becoming less severe with modern HPC resources.

A significant advantage of real-space methods, and of Multiwavelet methods in particular, is the narrow semantic gap between the theoretical formulation and the practical implementation. We have leveraged this quality by implementing a Python interface (VAMPyR) to our Multiwavelet library (mrcpp), which has lately allowed us to generate pilot code rapidly, accelerating development significantly. In this contribution I will illustrate the design of the VAMPyR code, present a couple of recent applications, and finally discuss how this initiative fits into the broader picture of the FAIR principles of open science.



James Green

Use and Development of Open Source Libraries for Electronic Structure Theory Within the FHI-aims Software Package

Open-source libraries are key to keeping research costs low and workflows efficient, provided that continuous maintenance and development are ensured by their communities. However, committing effort and support is difficult to organize and is insufficiently acknowledged by the current academic funding system.

This contribution illustrates how the FHI-aims (Fritz Haber Institute ab initio materials simulations) community contributes to developing and maintaining open-source libraries for electronic structure theory. FHI-aims is a community-based all-electron electronic-structure code used for computational molecular and materials research by a global base of developers and users in academia and industry. FHI-aims uses a number of open-source libraries, such as LibXC and s-dftd3, amongst others.

An example of a library with active contributions from the FHI-aims community is the ELSI (ELectronic Structure Infrastructure) interface, which connects electronic structure codes to different eigensolvers and density matrix solvers for the generalized Kohn-Sham eigenvalue problem. The developers of FHI-aims have a strong interest in contributing to these libraries, so that electronic structure theory ultimately remains open for new ideas and for all of its community. In outlining the support infrastructure, we aim to better connect the FHI-aims developers with the wider electronic structure development community.



Emanuel Gull

Don’t be afraid of temperature!

Conventional electronic structure techniques typically operate within a zero-temperature framework and a fixed particle number, enabling direct calculation of well-defined excitations without requiring the complete frequency profile of a quantum system. In this presentation, I will demonstrate that extending these methodologies to non-zero electronic temperature—using approaches from quantum statistical mechanics and quantum field theory—yields valuable additional insights while maintaining the precision of spectral data.

I will discuss the mathematical and computational advances developed over the past decade that facilitate these types of simulations, and introduce the MIT-licensed GREEN software package (green-phys.org) which provides public access to these tools.



Benjamin Hourahine

Possible role of automatic differentiation in ESL

In this contribution I will review some of the many forward-mode (bottom-up) and reverse-mode (top-down, or adjoint-mode) automatic differentiation packages available for the Fortran and C language families. I will also describe my work on repackaging the efficient high-order DNAOAD project, which offers arbitrary forward order at polynomial cost, and which I have refitted to allow real and general precision types and extended with Wengert lists for adjoint differentiation.
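As a minimal sketch of the forward-mode idea mentioned above (written in Python for brevity; DNAOAD itself is a Fortran package and its actual interfaces are not shown here), dual numbers propagate a value together with its derivative through each arithmetic operation:

```python
import math


class Dual:
    """Minimal forward-mode AD: carry (value, derivative) through arithmetic."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def sin(x):
    # Chain rule for an elementary function: (sin u)' = cos(u) * u'
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)
    return math.sin(x)


def derivative(f, x0):
    """Seed the input with derivative 1 and read off df/dx at x0."""
    return f(Dual(x0, 1.0)).der
```

For example, `derivative(lambda x: x * sin(x), x0)` reproduces sin(x0) + x0*cos(x0) to machine precision; higher-order and adjoint (reverse-mode) variants, as in DNAOAD, generalize this bookkeeping.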



Felipe da Jornada

Scaling up MBPT calculations: from massively parallel Fortran/MPI implementations to vibe coding

In the last two decades, first-principles methods based on many-body perturbation theory (MBPT) and interacting Green’s functions have enabled an accurate and unbiased description of the excited-state properties of materials. Well-known examples of these approaches include the GW approximation to the electronic self-energy for quasiparticle excitation properties and the Bethe-Salpeter equation (BSE) for the optical response of materials. With recent developments in algorithms and massively parallel codes, first-principles MBPT calculations can now scale to systems containing several thousand atoms by utilizing leadership-class supercomputing environments.

Yet, approaching the theoretical peak performance requires carefully crafted kernels that use architecture- and vendor-specific programming models, or generic and abstract program models – often with partial vendor support – that represent a barrier for developers. We will discuss some of our successes and challenges in developing massively parallel MBPT codes, particularly within the BerkeleyGW software package. We will comment on algorithmic and metaprogramming strategies to modernize our large Fortran codebases and ease their integration with higher-level packages.

Finally, we will also share our lessons learned when creating a Python-based MBPT software package, which can outperform traditional monolithic codes thanks to rapid prototyping and object-oriented programming. We will contextualize our efforts in light of industry-supported tools (just-in-time compilation, accelerated linear algebra primitives, hardware support, documentation, etc.), highlighting some of the challenges and opportunities for lowering the barrier for developing performant electronic-structure codes.



Thomas Keal

ChemShell and its libraries

(Title TBC)



Ask Larsen

ASE: Recent developments in code and community

The Atomic Simulation Environment (ASE) community is making efforts to decentralize the project and improve project cohesion. I will talk about major developments in the project such as the establishment of a steering committee, goals for ASE 4, and the challenges of everyday maintenance.



Susi Lehtola

Progress on reusable libraries for quantum chemistry



Andrew Logsdail

The application of a modular data interface for hierarchical electronic structure calculations

New paradigms in cross-model integration are facilitating communication of large, non-native data structures, such as electron densities, between software packages. The Atomic Simulation Interface (ASI) was previously introduced as a potential transferable solution to data communication, and demonstrated for cross-model interfacing between workflows and electronic structure software packages. The on-going work to extend ASI and apply it to multiscale quantum mechanical (QM) calculations will be presented, including the realisation of a QM/QM software package, embASI, that builds on the modularity enabled by ASI.



Jessica Nash

Formulating Best Practices for Scientific Software Development in the Age of AI

For the past decade, training efforts such as those led by the Molecular Sciences Software Institute (MolSSI) have emphasized software development best practices, including version control, testing, modular design, and documentation, to support sustainable scientific software. Over the past few years, however, the software development landscape has changed rapidly. AI-assisted tools are becoming part of everyday workflows for professional software developers and are likely to play an increasingly important role for scientists and engineers.

These tools can significantly accelerate software development, but they also change how decisions are made. AI-assisted development has the potential to either mitigate or exacerbate technical debt, particularly in large and legacy scientific codebases, depending on how the technology is introduced and used. This creates new challenges for both novice and experienced scientific software developers.

In this talk, I will introduce considerations for teaching and developing scientific software in the age of AI. I will discuss how AI-assisted development can both amplify and mitigate technical debt, how traditional software engineering best practices remain essential, and how they might be taught and applied differently when AI is part of the development process. The talk will highlight tools and emerging lessons for training students and researchers to use AI as a tool for understanding, maintaining, and improving scientific software, rather than simply generating new code.



Giovanni Pizzi

Bringing powerful automation to everyone: Usability-driven developments in AiiDA and AiiDAlab

Over the past decades, workflows and automation have become central across computational science, and particularly in materials science. Their importance has become even more relevant with the rise of machine learning and artificial intelligence, where large-scale automated simulations are essential. Platforms providing an ecosystem of robust workflows are therefore critical, and AiiDA (https://www.aiida.net) addresses all these needs, placing a strong emphasis both on scalability to high-throughput workflows of 100,000 or more simulations, and on reproducibility.

After more than ten years of development, AiiDA has matured into a powerful and scalable platform capable of supporting both high-throughput and HPC-level workloads. However, its flexibility and feature richness have sometimes been perceived as complex by first-time users. Over the past three years, we have undertaken focused efforts to significantly lower the entry barrier while preserving the full power, speed, and extensibility required by advanced users.

These efforts include: streamlined installation procedures; making external services such as PostgreSQL and RabbitMQ optional; the introduction of the new, lightweight WorkGraph engine for simplifying workflow writing; support for executing shell commands directly via aiida-shell without the need of custom plugins; direct loading of AiiDA databases from exported .aiida archive files; new functionality to simplify access to stored raw data without the need of knowing the AiiDA Python API, such as process-dumping tools into a folder; and the development of common workflows to execute simulations with over ten different codes, but with the same input/output interface.

In parallel, to bring robust computational workflows to a broader scientific audience, including experimental researchers, we have been developing and expanding the AiiDAlab ecosystem. In particular, the AiiDAlab Quantum ESPRESSO flagship app provides intuitive, GUI-driven access to a wide range of first-principles workflows, covering structure relaxation, band structures and (P)DOS, phonons, IR/Raman spectra, muon spectroscopy, XPS/XAS, Wannier functions, and more.



Thomas Purcell

Automating Computational Materials Design with Explainable AI

High-throughput density functional theory (DFT) calculations have the capability to quickly screen thousands of materials to identify the top candidates for a myriad of energy and sustainability applications. However, despite its impressive efficiency relative to experiments, computational screening is still incapable of exploring the entirety of materials space, which is needed to find the optimal candidate structures.

Incorporating artificial intelligence (AI) models into these frameworks would further accelerate these searches by focusing on the most promising candidates as early as possible, but such models are often limited by a scarcity of available data. Here I will present the Purcell Lab’s recent efforts to develop new high-throughput workflows to describe various materials properties for solid-state and polymer materials. In particular, I will focus on how incorporating explainable AI into these frameworks allows them not only to focus the searches on the optimal regions of materials space, but also to extract design rules to guide future experimental studies.



Akilan Rajamani

Quantum Density Functional Theory within BigDFT

Quantum computers have shown great promise in addressing quantum chemistry problems based on wave function theory. However, density functional theory (DFT) has not been explored as extensively in this context. In a recent study, Senjean et al. demonstrated that DFT can also benefit from quantum computing, particularly in the noisy intermediate-scale quantum era.

In the present work, we integrate a Quantum-DFT approach into the BigDFT code—an established open-source electronic structure software that employs wavelet basis sets. The BigDFT framework consists of two inner loops: the first optimizes the support functions, while the second diagonalizes the Hamiltonian. These two loops are coupled within an outer self-consistent field cycle. In our implementation, the second loop is interfaced with a quantum computer to perform the diagonalization step using the ensemble Variational Quantum Eigensolver algorithm.



Timo Reents

Score-based diffusion models for accurate crystal structure inpainting and reconstruction of hydrogen positions

Timo Reents, Arianna Cantarella, Marnik Bercx, Pietro Bonfà, and Giovanni Pizzi

Generative AI methods are rapidly evolving to speed up and improve materials discovery and crystal structure prediction. Score-based diffusion models, a particular class of generative models, can not only be adopted to generate new materials with desired properties but also to reconstruct crystal structures for which structural information is only partially available.

In this work, we build on top of Microsoft’s MatterGen, a diffusion-based model originally designed to generate new stable crystal structures, and extend and apply it to reconstruct missing hydrogen sites in crystal structures reported in experimental databases. This is particularly useful as the experimental measurement of hydrogen sites with standard XRD is typically challenging due to the weak scattering of hydrogen.

We show how to leverage approaches known from image inpainting in the field of computer vision, combined with universal machine learning interatomic potentials, to improve the success rate of correctly identifying the missing sites while significantly lowering the computational cost with respect to a direct DFT approach. Moreover, the adopted approaches exhibit superior performance compared to existing inpainting approaches in the context of conditional crystal structure generation.

Our benchmarking shows that the model can not only be used to reconstruct DFT optimized structures but also performs well on reconstructing the initial experimental starting structure. Furthermore, we show that our approach to conditionally reconstruct crystal structures also works for structures which are clearly beyond the training regime. Thanks to the generality of the method, further future applications range from reconstructing experimental structures as in the case of missing hydrogen positions to analyzing intercalation by predicting the most probable ion positions in DFT optimized crystal structures of cathode materials.



Max Rossmannek

Quantum Computing for Quantum Chemists

In this talk I will give an introduction to quantum computing for quantum chemists. Starting with an overview of fundamental concepts, I will showcase how the quantum computing paradigm enables tackling computational chemistry problems that lie beyond the reach of purely classical techniques.

I am going to recap the recent trend in research on integrating quantum and high-performance computing systems and discuss the software challenges that come with this. Finally, I am going to introduce a new quantum computational library for quantum chemistry called “Qiskit Fermions”, which bridges the gap between classical computational chemistry codes and the Qiskit SDK, the most widely used quantum computation software. In particular, I will emphasize how the C APIs of these two libraries can be used to implement quantum-enabled algorithms directly into existing chemistry software.



Bruno Senjean

Quantum algorithms for quasi-diabatic excited-state quantum chemistry

Quantum physics and chemistry have provided well-recognized theoretical tools to predict the behavior of molecules and materials described by the Schrödinger equation. However, many problems with high industrial and societal impact remain intractable for classical computers, urging us to reconsider our preconceptions and shift gears. The world of the infinitely small obeys the laws of quantum mechanics, suggesting the need for a machine governed by the same physics: this marks the birth of quantum computers, a new technological revolution which promises a quantum advantage (speed-up) over classical computers.

In this talk, I will present the state-averaged orbital-optimized variational quantum eigensolver (SA-OO-VQE), designed to address the electronic structure problem for excited states, essential to unravel ubiquitous ultrafast (subpicosecond) photochemical and photophysical ’energy/charge/matter/information’-transfer processes induced upon the absorption of light by molecules within the UV-visible domain. I will show that SA-OO-VQE exhibits a propensity to produce an ab initio quasidiabatic representation “for free” if considered as a least-transformed block-diagonalization procedure. These recent findings underscore the practical utility and potential of SA-OO-VQE for addressing systems with complex nonadiabatic phenomena.



Mark van Schilfgaarde

Unique features of Questaal as a testbed for standardized libraries

Mark van Schilfgaarde and Dimitar Pashov

We present some capabilities of the Questaal community code. Some of Questaal’s unique representations of internal quantities, such as the charge density and basis set, and also the wide diversity of properties it generates, e.g. representations of excitons, offer a platform that could serve as a test case for the design of a sufficiently flexible standard electronic structure library.

Questaal is an all-electron code: it has an augmented-wave basis with a unique construction at the DFT level, with features akin to both pseudopotential and traditional augmented-wave implementations. It has an implementation of many-body perturbation theory in a quasiparticle self-consistent form, and can generate charge and magnetic response functions at differing levels of theory. A field-theoretic implementation of the electron-phonon interaction is near completion. Its interface to dynamical mean field theory entails other kinds of objects, and involves embedding algorithms that require projectors and the ability to generate parameters for the Anderson impurity Hamiltonian.

Much of the data in Questaal has been ported to HDF5 formats. Questaal has a custom interface to the HDF5 libraries, which is highly portable and efficient to use, and may adapt well to supporting ESL libraries.
