International Policy Analysis
European Central Bank
Frankfurt am Main, Germany
Hi there! 👋 I am a research analyst at the European Central Bank, where I focus on AI/ML techniques for economic research within the International Policy Analysis Division. Prior to this, I was a doctoral researcher at Inria and Université Côte d’Azur. My PhD thesis focused on the foundations of machine learning interpretability, under the supervision of Damien Garreau and Frédéric Precioso. Previously, I earned an MSc in mathematical engineering and a BSc in applied mathematics, both from Politecnico di Torino.
Drop me a line if you’re ever in Frankfurt, Rome, or Brienza!
news
| Date | News |
|---|---|
| Nov 20, 2025 | 3rd place at the ESCB/SSM Hackathon "From News to Forecast: Experimenting with AI Time Series Models": we opened the black box of Chronos 2 to provide attention-based explanations for economic forecasting |
| Nov 10, 2025 | I am honored to have been awarded the 2025 Young Researcher Prize (Victoires de la Recherche - Prix Jeune Chercheur) by the Métropole Nice Côte d’Azur for my "high quality" PhD thesis |
| Sep 28, 2025 | New VoxEU column "What Corporate Earnings Calls Reveal About the AI Stock Rally" is out! |
| Aug 13, 2025 | Interactive charts are now live! 📊 Track GenAI exposure & sentiment across US firms, sectors, and industries over time 👉 GenAI Talks |
| Aug 12, 2025 | My first ECB paper is out: Verba Volant, Transcripta Manent: What Corporate Earnings Calls Reveal About the AI Stock Rally has been published in the European Central Bank Working Paper Series! |
| Jul 15, 2025 | I got promoted to Research Analyst at the ECB! 🎉 I’ll continue working on the intersection of AI and economics within the International Policy Analysis Division |
| May 27, 2025 | I presented our working paper "Verba Volant, Transcripta Manent: What Corporate Earnings Calls Reveal About the AI Stock Rally" at the ECB IPA Economic Seminar |
| Mar 31, 2025 | I presented our work on the financial impact of artificial intelligence at the 2nd ECB AI in Economics workshop |
| Feb 13, 2025 | Check out Hack the Act!: a RAG-based chatbot designed to demystify the EU AI Act |
| Jan 15, 2025 | My PhD thesis, Foundations of Machine Learning Interpretability, is publicly available! |
selected publications
- What Corporate Earnings Calls Reveal About the AI Stock Rally. Michele Ca’ Zorzi, Gianluigi Lopardo, and Ana-Simona Manu. VoxEU column, 2025.
The launch of ChatGPT in late 2022 marked a turning point in how firms and investors view generative artificial intelligence. This column measures the extent and tone of firms’ discussions of GenAI in earnings calls, and finds that early engagement with AI topics boosted stock market performance beyond the immediate impact on expected earnings.
```bibtex
@article{voxeu2025genai,
  title = {{What Corporate Earnings Calls Reveal About the AI Stock Rally}},
  author = {Ca’ Zorzi, Michele and Lopardo, Gianluigi and Manu, Ana-Simona},
  year = {2025},
  institution = {Centre for Economic Policy Research},
  type = {VoxEU column},
  keywords = {artificial intelligence; ChatGPT; earnings call; equity returns; generative AI}
}
```
- Verba Volant, Transcripta Manent: What Corporate Earnings Calls Reveal About the AI Stock Rally. Michele Ca’ Zorzi, Gianluigi Lopardo, and Ana-Simona Manu. European Central Bank Working Paper Series No. 3093, 2025.

This paper investigates the economic impact of technological innovation, focusing on generative AI (GenAI) following ChatGPT’s release in November 2022. We propose a novel framework leveraging large language models to analyze earnings call transcripts. Our method quantifies firms’ GenAI exposure and classifies sentiment as opportunity, adoption, or risk. Using panel econometric techniques, we assess GenAI exposure’s impact on S&P 500 firms’ financial performance over 2014-2023. We find two main results. First, GenAI exposure rose sharply after ChatGPT’s release, particularly in the IT, Consumer Services, and Consumer Discretionary sectors, coinciding with sentiment shifts toward adoption. Second, GenAI exposure significantly influenced stock market performance. Firms with early and high GenAI exposure saw stronger returns, though earnings expectations improved modestly. Panel regressions show that a 1 percentage point increase in GenAI exposure led to a 0.26% rise in quarterly excess returns. Difference-in-differences estimates indicate a 2.4% average quarterly stock price increase following ChatGPT’s release.

(A toy sketch of this exposure-and-regression pipeline follows the entry below.)
```bibtex
@article{ecb2025genai,
  title = {{Verba Volant, Transcripta Manent: What Corporate Earnings Calls Reveal About the AI Stock Rally}},
  author = {Ca’ Zorzi, Michele and Lopardo, Gianluigi and Manu, Ana-Simona},
  year = {2025},
  institution = {European Central Bank},
  number = {3093},
  type = {Working Paper Series},
  keywords = {artificial intelligence; ChatGPT; earnings call; equity returns; generative AI}
}
```
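As a concrete illustration of the two-step pipeline described above, here is a minimal Python sketch in which simple keyword matching stands in for the paper's LLM-based exposure and sentiment classification. The function names, term list, and panel data are illustrative assumptions, not the authors' code or results.

```python
# Step 1 (toy): score each transcript for GenAI exposure.
# Step 2 (toy): regress excess returns on exposure with firm and
# quarter fixed effects. All numbers below are made up.
import pandas as pd
import statsmodels.formula.api as smf

GENAI_TERMS = {"generative ai", "chatgpt", "large language model"}

def genai_exposure(transcript: str) -> float:
    """Share of sentences mentioning a GenAI term (keyword proxy)."""
    sentences = [s.strip().lower() for s in transcript.split(".") if s.strip()]
    hits = sum(any(t in s for t in GENAI_TERMS) for s in sentences)
    return hits / max(len(sentences), 1)

print(genai_exposure("We launched a ChatGPT assistant. Margins improved."))  # 0.5

# Toy firm-quarter panel: exposure (percentage points) and excess returns (%).
panel = pd.DataFrame({
    "firm":     ["A", "A", "B", "B", "C", "C"],
    "quarter":  ["2023Q1", "2023Q2"] * 3,
    "exposure": [0.0, 2.0, 1.0, 3.0, 0.0, 0.5],
    "exc_ret":  [0.1, 0.8, 0.4, 1.1, 0.0, 0.2],
})

# Two-way fixed effects via dummy variables.
fe_ols = smf.ols("exc_ret ~ exposure + C(firm) + C(quarter)", data=panel).fit()
print(fe_ols.params["exposure"])  # return response per pp of exposure
```

On real data one would cluster standard errors by firm and add controls; the sketch only shows the shape of the pipeline: a text-derived score feeding a fixed-effects regression.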
- Foundations of Machine Learning Interpretability. Gianluigi Lopardo. PhD thesis, Université Côte d’Azur, 2024.

The rising use of complex Machine Learning (ML) models, especially in critical applications, has highlighted the urgent need for interpretability methods. Despite the variety of solutions proposed to explain automated algorithmic decisions, understanding their decision-making process remains a challenge. This manuscript investigates the interpretability of ML models, using mathematical analysis and empirical evaluation to compare existing methods and propose novel solutions. Our main focus is on post-hoc interpretability methods, which provide insights into the decision-making process of ML models post-training, independent of specific model architectures. We delve into Natural Language Processing (NLP), exploring techniques for explaining text models. We address a key challenge: interpretability methods can yield varied explanations even for simple models. This highlights a critical issue: the absence of a robust theoretical foundation for these methods. To address this issue, we use a rigorous theoretical framework to formally analyze existing interpretability techniques, assessing their behavior and limitations. Building on this, we propose a novel explainer to provide a more faithful and robust approach to interpreting text data models. We also engage with the debate on the effectiveness of attention weights as explanatory tools within powerful transformer architectures. Through this analysis, we expose the strengths and limitations of existing interpretability methods and pave the way for more reliable, theoretically grounded approaches. This will lead to a deeper understanding of how complex models make decisions, fostering trust and responsible deployment in critical ML applications.
```bibtex
@phdthesis{lopardo2024foundation,
  title = {{Foundations of Machine Learning Interpretability}},
  author = {Lopardo, Gianluigi},
  year = {2024},
  school = {Université Côte d'Azur}
}
```
- Attention Meets Post-hoc Interpretability: A Mathematical Perspective. Gianluigi Lopardo, Frederic Precioso, and Damien Garreau. In International Conference on Machine Learning (ICML), 2024.

Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights on the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.

(A toy illustration of this contrast follows the entry below.)
```bibtex
@inproceedings{lopardo2024attention,
  title = {{Attention Meets Post-hoc Interpretability: A Mathematical Perspective}},
  author = {Lopardo, Gianluigi and Precioso, Frederic and Garreau, Damien},
  year = {2024},
  booktitle = {{International Conference on Machine Learning (ICML)}},
  organization = {PMLR}
}
```
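To see on a small example why attention weights and post-hoc explanations can diverge, here is a toy NumPy sketch of my own construction (not the paper's setup): a single-head attention pooling followed by a linear read-out, explained once via its attention weights and once via leave-one-word-out score changes.

```python
# Toy attention classifier: softmax attention over token embeddings,
# then a linear head on the attention-weighted average.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                    # embedding dimension
E = {w: rng.normal(size=d) for w in ["the", "movie", "was", "great"]}
q = rng.normal(size=d)                   # attention query vector
w_out = rng.normal(size=d)               # linear classification head

def score(tokens):
    """Return the model score and the attention weights."""
    X = np.stack([E[t] for t in tokens])          # (n_tokens, d)
    logits = X @ q
    attn = np.exp(logits) / np.exp(logits).sum()  # softmax attention
    return float(w_out @ (attn @ X)), attn

tokens = ["the", "movie", "was", "great"]
full_score, attn = score(tokens)

# Post-hoc importance: change in score when each word is left out.
loo = [full_score - score(tokens[:i] + tokens[i + 1:])[0]
       for i in range(len(tokens))]

for t, a, l in zip(tokens, attn, loo):
    print(f"{t:>6}  attention={a:.2f}  leave-one-out={l:+.2f}")
```

The two rankings typically disagree, which is the point of the comparison: attention weights and perturbation-based importances answer different questions about the same model.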
- A Sea of Words: An In-Depth Analysis of Anchors for Text Data. Gianluigi Lopardo, Frederic Precioso, and Damien Garreau. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.

Anchors (Ribeiro et al., 2018) is a post-hoc, rule-based interpretability method. For text data, it proposes to explain a decision by highlighting a small set of words (an anchor) such that the model to explain has similar outputs when they are present in a document. In this paper, we present the first theoretical analysis of Anchors, considering that the search for the best anchor is exhaustive. After formalizing the algorithm for text classification, we present explicit results on different classes of models when the vectorization step is TF-IDF, and words are replaced by a fixed out-of-dictionary token when removed. Our inquiry covers models such as elementary if-then rules and linear classifiers. We then leverage this analysis to gain insights on the behavior of Anchors for any differentiable classifiers. For neural networks, we empirically show that the words corresponding to the highest partial derivatives of the model with respect to the input, reweighted by the inverse document frequencies, are selected by Anchors.

(A toy version of this exhaustive search follows the entry below.)
```bibtex
@inproceedings{lopardo2022anchors,
  title = {{A Sea of Words: An In-Depth Analysis of Anchors for Text Data}},
  author = {Lopardo, Gianluigi and Precioso, Frederic and Garreau, Damien},
  year = {2023},
  booktitle = {{International Conference on Artificial Intelligence and Statistics (AISTATS)}}
}
```
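Here is a minimal sketch of the exhaustive-Anchors setting the paper formalizes: non-anchor words are replaced at random by a fixed out-of-dictionary token, and the smallest word subset whose precision clears a threshold is returned. The toy classifier, masking rate, and threshold are illustrative assumptions, not the paper's choices.

```python
# Exhaustive anchor search on a toy if-then text classifier.
import itertools
import random

random.seed(0)
UNK = "<UNK>"  # fixed out-of-dictionary replacement token

def model(words):
    """Toy rule: predict positive iff 'good' appears."""
    return int("good" in words)

def precision(anchor, doc, n_samples=200):
    """How often the prediction survives random masking of non-anchor words."""
    base = model(doc)
    agree = 0
    for _ in range(n_samples):
        perturbed = [w if w in anchor or random.random() < 0.5 else UNK
                     for w in doc]
        agree += model(perturbed) == base
    return agree / n_samples

doc = ["a", "really", "good", "film"]
# Smallest subset with estimated precision >= 0.95 (exhaustive search).
for size in range(1, len(doc) + 1):
    found = [set(c) for c in itertools.combinations(doc, size)
             if precision(set(c), doc) >= 0.95]
    if found:
        print("anchor:", found[0])  # {'good'}
        break
```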
- Faithful and Robust Local Interpretability for Textual Predictions. Gianluigi Lopardo, Frederic Precioso, and Damien Garreau. arXiv preprint arXiv:2311.01605, 2023.

Interpretability is essential for machine learning models to be trusted and deployed in critical domains. However, existing methods for interpreting text models are often complex, lack mathematical foundations, and their performance is not guaranteed. In this paper, we propose FRED (Faithful and Robust Explainer for textual Documents), a novel method for interpreting predictions over text. FRED offers three key insights to explain a model prediction: (1) it identifies the minimal set of words in a document whose removal has the strongest influence on the prediction, (2) it assigns an importance score to each token, reflecting its influence on the model’s output, and (3) it provides counterfactual explanations by generating examples similar to the original document, but leading to a different prediction. We establish the reliability of FRED through formal definitions and theoretical analyses on interpretable classifiers. Additionally, our empirical evaluation against state-of-the-art methods demonstrates the effectiveness of FRED in providing insights into text models.

(A toy sketch of the first two ingredients follows the entry below.)
```bibtex
@article{lopardo2023fred,
  title = {{Faithful and Robust Local Interpretability for Textual Predictions}},
  author = {Lopardo, Gianluigi and Precioso, Frederic and Garreau, Damien},
  year = {2023},
  journal = {arXiv preprint arXiv:2311.01605}
}
```
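FRED itself is not reproduced here; the following toy sketch only illustrates the flavor of ingredients (1) and (2) above: per-token importance from single-word deletions, and a greedy search for a small deletion set that changes the prediction. The scoring function and the greedy heuristic are assumptions for illustration, not the paper's algorithm.

```python
# Toy word-deletion explanations for a hand-built sentiment scorer.
def proba(words):
    """Toy sentiment score in [0, 1]; predict positive iff > 0.5."""
    weights = {"good": 0.2, "great": 0.25, "boring": -0.4}
    return min(max(0.5 + sum(weights.get(w, 0.0) for w in words), 0.0), 1.0)

doc = ["a", "great", "and", "good", "film"]
base = proba(doc)                      # 0.95 -> positive

# Per-token importance: drop in score when a single word is removed.
importance = {w: base - proba([x for x in doc if x != w]) for w in doc}
print(importance)                      # 'great' and 'good' dominate

# Greedy search for a small deletion set that flips the prediction.
removed, current = [], list(doc)
while proba(current) > 0.5 and current:
    w = max(current, key=lambda x: importance[x])
    current.remove(w)
    removed.append(w)
print(removed)                         # ['great', 'good']
```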
- SMACE: A New Method for the Interpretability of Composite Decision Systems. Gianluigi Lopardo, Damien Garreau, Frederic Precioso, and Greger Ottosson. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 2022.

Interpretability is a pressing issue for decision systems. Many post hoc methods have been proposed to explain the predictions of a single machine learning model. However, business processes and decision systems are rarely centered around a unique model. These systems combine multiple models that produce key predictions, and then apply rules to generate the final decision. To explain such decisions, we propose the Semi-Model-Agnostic Contextual Explainer (SMACE), a new interpretability method that combines a geometric approach for decision rules with existing interpretability methods for machine learning models to generate an intuitive feature ranking tailored to the end user. We show that established model-agnostic approaches produce poor results on tabular data in this setting, in particular giving the same importance to several features, whereas SMACE can rank them in a meaningful way.
```bibtex
@inproceedings{lopardo2022smace,
  title = {{SMACE: A New Method for the Interpretability of Composite Decision Systems}},
  author = {Lopardo, Gianluigi and Garreau, Damien and Precioso, Frederic and Ottosson, Greger},
  year = {2022},
  booktitle = {Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD)},
  pages = {325--339},
  organization = {Springer}
}
```