I'm an Assistant Professor at NYUAD (New York University Abu Dhabi, UAE), a Global Network Assistant Professor at NYU (New York University, USA), and a Research Affiliate at MIT (Massachusetts Institute of Technology, USA). Before joining NYUAD, I was a postdoctoral researcher at MIT. I received my PhD from INRIA/Sorbonne UPMC (Paris 6, France) and my master's degree from ESI (Algeria).
My research interests include:
The intersection of applied machine learning and compilers.
Building compilers for deep learning (including deep learning hardware accelerators).
Using machine learning in compilers (e.g., to enable automatic code optimization and to design compiler heuristics).
Compilers and programming models for high-performance computing and compute-intensive domains.
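One way to picture the second and third themes above (machine learning driving code optimization): a cost model trained on profiled programs scores candidate loop transformations, and the compiler keeps the cheapest. The sketch below is a toy illustration only; the feature names, weights, and candidate schedules are invented for this example and are not taken from any of the systems listed on this page.

```python
# Toy sketch of ML-guided code optimization: a hypothetical linear cost model
# scores candidate loop transformations from simple static features.
# All feature names and weights below are invented for illustration.

def features(schedule):
    """Map a candidate schedule to a small static feature vector."""
    return [
        schedule["innermost_stride"],   # memory stride of the innermost loop
        schedule["tile_size"],          # tile edge length (0 = untiled)
        schedule["parallel_loops"],     # number of parallelized loops
    ]

def predicted_cost(schedule, weights=(5.0, -0.01, -1.0)):
    """Lower is better; a real system would learn the weights from profiling data."""
    return sum(w * f for w, f in zip(weights, features(schedule)))

def pick_best(candidates):
    """Keep the transformation the model predicts to be cheapest."""
    return min(candidates, key=predicted_cost)

candidates = [
    {"name": "naive",        "innermost_stride": 8, "tile_size": 0,  "parallel_loops": 0},
    {"name": "interchanged", "innermost_stride": 1, "tile_size": 0,  "parallel_loops": 0},
    {"name": "tiled+par",    "innermost_stride": 1, "tile_size": 32, "parallel_loops": 1},
]
print(pick_best(candidates)["name"])  # prints "tiled+par"
```

In practice the features would be far richer (dependence information, loop extents, access patterns) and the model nonlinear, but the search structure is the same.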
Tiramisu: a polyhedral compiler for expressing fast and portable data-parallel algorithms, including deep learning, tensor operations, image processing, and more.
A Deep Learning Model for Loop Interchange.
Lina Mezdour, Khadidja Kadem, Massinissa Merouani, Amina Selma Haichour, Saman Amarasinghe, Riyadh Baghdadi.
ACM SIGPLAN 2023 International Conference on Compiler Construction (CC), February 2023.
[PDF]
[Bibtex]
[Code]
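As background for the paper above: loop interchange swaps two nested loops, changing the memory access order without changing the result when no dependence forbids it; the hard part, which the paper addresses with deep learning, is predicting when the swap is profitable. The snippet below is a minimal illustration of the transformation itself, not code from the paper.

```python
# Illustrative only: loop interchange on a plain 2D array sum.

N, M = 4, 6
a = [[i * M + j for j in range(M)] for i in range(N)]

def sum_ij(a):
    # Original order: i outer, j inner (row-major traversal, good locality
    # for a row-major layout).
    total = 0
    for i in range(len(a)):
        for j in range(len(a[0])):
            total += a[i][j]
    return total

def sum_ji(a):
    # After interchange: j outer, i inner (column-major traversal). The result
    # is identical because the loop body has no cross-iteration dependence.
    total = 0
    for j in range(len(a[0])):
        for i in range(len(a)):
            total += a[i][j]
    return total

assert sum_ij(a) == sum_ji(a)
```

On real hardware the two orders can differ greatly in cache behavior even though they compute the same value.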
Q-gym: An Equality Saturation Framework for DNN Inference Exploiting Weight Repetition.
Cheng Fu, Hanxian Huang, Bram Wasti, Chris Cummins, Riyadh Baghdadi, Kim Hazelwood, Yuandong Tian, Jishen Zhao, Hugh Leather.
International Conference on Parallel Architectures and Compilation Techniques (PACT), October 2022.
[PDF]
Caviar: An E-graph Based TRS for Automatic Code Optimization.
Smail Kourta, Adel Abderahmane Namani, Fatima Benbouzid-Si Tayeb, Kim Hazelwood, Chris Cummins, Hugh Leather, Riyadh Baghdadi.
Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction (CC), 2022.
[PDF]
[ArXiv]
[Code]
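As background for the Caviar entry: a term rewriting system (TRS) repeatedly replaces sub-expressions that match algebraic rules. The toy sketch below applies two rules greedily, bottom-up; Caviar itself uses e-graphs and equality saturation to explore many rewrite orderings at once, which this sketch does not model.

```python
# Illustrative only: a tiny term rewriting system over expression trees.
# Expressions are nested tuples, e.g. ("+", ("*", "x", 1), 0).

def rewrite(expr):
    """Apply two simplification rules bottom-up until no rule matches."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = rewrite(a), rewrite(b)
    # Rule: e * 1 -> e
    if op == "*" and b == 1:
        return a
    # Rule: e + 0 -> e
    if op == "+" and b == 0:
        return a
    return (op, a, b)

assert rewrite(("+", ("*", "x", 1), 0)) == "x"
```

Greedy rewriting can miss simplifications that only pay off after an intermediate "worse" step; equality saturation avoids that by keeping all equivalent forms in one e-graph and extracting the best at the end.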
Progress Report: A Deep Learning Guided Exploration of Affine Unimodular Loop Transformations.
Massinissa Merouani, Khaled Afif Boudaoud, Iheb Nassim Aouadj, Nassim Tchoulak, Fatima Benbouzid-Sitayeb, Karima Benatchba, Hugh Leather, Riyadh Baghdadi.
12th International Workshop on Polyhedral Compilation Techniques (IMPACT), 2022.
[PDF]
[ArXiv]
A Deep Learning Based Cost Model For Automatic Code Optimization.
R. Baghdadi, M. Merouani, M. H. Leghettas, K. Abdous, T. Arbaoui, K. Benatchba, S. Amarasinghe.
Proceedings of the Fourth Conference on Machine Learning and Systems, San Jose, CA, USA, 2021.
[PDF][Bibtex]
[ArXiv]
[Code]
(Outstanding Paper Award)
A variational study of two-nucleon systems with lattice QCD.
Saman Amarasinghe, Riyadh Baghdadi, Zohreh Davoudi, William Detmold, Marc Illa, Assumpta Parreno, Andrew V Pochinsky, Phiala E Shanahan, Michael L Wagman.
Proceedings of the 38th International Symposium on Lattice Field Theory (LATTICE 2021), July 26–30, 2021.
[ArXiv]
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights.
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li.
Proceedings of the IEEE, 2021.
[ArXiv]
Tiramisu: A Polyhedral Compiler for Dense and Sparse Deep Learning.
R. Baghdadi, A. N. Debbagh, K. Abdous, F. Z. Benhamida, A. Renda, J. E. Frankle, M. Carbin, S. Amarasinghe.
Workshop on Systems for ML at NeurIPS 2019, December 13, 2019.
[PDF]
SALSA: A Domain Specific Architecture for Sequence Alignment.
L. Di Tucci, R. Baghdadi, S. Amarasinghe, M. D. Santambrogio.
27th Reconfigurable Architectures Workshop (RAW) at IPDPS 2020, May 2020, New Orleans, Louisiana, USA.
[PDF]
Learning to Optimize Halide with Tree Search and Random Programs.
A. Adams, K. Ma, L. Anderson, T.-M. Li, M. Gharbi, R. Baghdadi, S. Johnson, B. Steiner, J. Ragan-Kelley, F. Durand.
SIGGRAPH 2019.
[PDF][WebSite]
Seq: A high-performance language for computational biology.
A. Shajii, I. Numanagic, R. Baghdadi, B. Berger, S. Amarasinghe.
OOPSLA 2019.
Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code.
Riyadh Baghdadi, Jessica Ray, Malek Ben Romdhane, Emanuele Del Sozzo, Abdurrahman Akkas,
Yunming Zhang, Patricia Suriana, Shoaib Kamil, Saman Amarasinghe.
In Proceedings of the 2019 International Symposium on Code Generation and Optimization (CGO 2019).
Washington DC, USA, February 2019.
[PDF][Bibtex][WebSite]
GraphIt: A High-Performance DSL for Graph Analytics.
Yunming Zhang, Mengjiao Yang, Riyadh Baghdadi, Shoaib Kamil, Julian Shun, Saman Amarasinghe.
Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), 2018.
[PDF][WebSite]
A Unified Backend for Targeting FPGAs from DSLs.
E. Del Sozzo, Riyadh Baghdadi, S. Amarasinghe, and M. D. Santambrogio.
The 29th Annual IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), Milan, Italy, July 2018.
[PDF]
A Common Backend for Hardware Acceleration on FPGA.
E. Del Sozzo, Riyadh Baghdadi, S. Amarasinghe, and M. D. Santambrogio.
In the 35th IEEE International Conference on Computer Design (ICCD'17), Boston, MA, USA, November 2017.
[PDF]
PENCIL: a Platform-Neutral Compute Intermediate Language for Accelerator Programming.
Riyadh Baghdadi, U. Beaugnon, A. Cohen, T. Grosser, M. Kruse, C. Reddy, S. Verdoolaege, J. Absar, S. van Haastregt, A. Kravets, A. Lokhmotov, A. Betts, J. Ketema, A. F. Donaldson, R. David, E. Hajiyev.
The 24th International Conference on Parallel Architectures and Compilation Techniques (PACT), San Francisco, CA, USA, October 2015.
[PDF]
PENCIL Language Specification.
Riyadh Baghdadi, A. Cohen, S. Verdoolaege, T. Grosser, J. Absar, S. van Haastregt, A. Kravets, A. Lokhmotov, A. F. Donaldson.
Research Report RT-8706, INRIA, Paris-Rocquencourt, May 2015.
[PDF]
VOBLA: A Vehicle for Optimized Basic Linear Algebra.
U. Beaugnon, A. Kravets, S. van Haastregt, Riyadh Baghdadi, D. Tweed, J. Absar, A. Lokhmotov.
LCTES'14, Edinburgh, UK, 2014. [PDF]
Improved Loop Tiling Based on the Removal of Spurious False Dependences.
Riyadh Baghdadi, A. Cohen, S. Verdoolaege, K. Trifunovic.
ACM Transactions on Architecture and Code Optimization
(TACO), 2013. [PDF]
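As background for the tiling work above: loop tiling partitions an iteration space into fixed-size blocks so the data a block touches stays in cache, and it is legal only when the blocked order preserves the original dependences (the paper's contribution is removing spurious false dependences that block it). The sketch below is illustrative only; it merely checks that a tiled nest enumerates the same iterations as the original.

```python
# Illustrative only: loop tiling splits an N x N iteration space into
# TILE x TILE blocks. This sketch checks that the tiled nest visits exactly
# the same (i, j) pairs as the original nest.

N, TILE = 10, 4

def original(n):
    return [(i, j) for i in range(n) for j in range(n)]

def tiled(n, t):
    visits = []
    for ii in range(0, n, t):            # loop over tile rows
        for jj in range(0, n, t):        # loop over tile columns
            for i in range(ii, min(ii + t, n)):      # inside one tile
                for j in range(jj, min(jj + t, n)):
                    visits.append((i, j))
    return visits

assert sorted(tiled(N, TILE)) == sorted(original(N))
```

The `min(..., n)` bounds handle partial tiles at the edges when the tile size does not divide the loop extent.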
Pencil: Towards a Platform-Neutral Compute Intermediate Language for DSLs.
Riyadh Baghdadi, A. Cohen, S. Guelton, S. Verdoolaege, J. Inoue, T. Grosser, G. Kouveli, A. Kravets, A. Lokhmotov, C. Nugteren, F. Waters, A. F. Donaldson.
WOLFHPC'12, Salt Lake City, 2012. [PDF]
Riyadh Baghdadi, “A Deep Learning Based Cost Model for Automatic Code Optimization”.
ML for Systems Workshop (NeurIPS'22), Dec. 2022.
Riyadh Baghdadi, “A Deep Learning Based Cost Model for Automatic Code Optimization”.
Huawei, Feb. 2022.
Riyadh Baghdadi, “A Deep Learning Based Cost Model for Automatic Code Optimization”.
Google Brain, Feb. 2021.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
Microsoft, Feb. 2020.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
Nvidia, Feb. 2019.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
MIT, FastCode Seminar, Oct. 2019.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
SRC TECHCON 2019, September 2019, Austin, TX, USA.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
Bigstream, Aug. 2019.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
Cerebras, Aug. 2019.
Riyadh Baghdadi, Michael Wagman, Andrew Pochinsky, Saman Amarasinghe, William Detmold. “Accelerating LQCD Calculations Using the Tiramisu Compiler”.
2019 Scientific Discovery through Advanced Computing Principal Investigator (PI) Meeting, July 2019, Rockville, MD.
Riyadh Baghdadi, “A Compiler for Deep Learning”,
2019 MIT Alliances annual meeting, June 2019, Cambridge, MA, USA.
Riyadh Baghdadi, “Tiramisu: A High-Performance Compiler for
Domain-Specific Architectures”,
2019 ADA Annual Symposium (Center for Application Driven Architectures), April
2019. Ann Arbor, MI, USA.
Riyadh Baghdadi, “A Platform for Exploring Machine Learning Based AutoScheduling”,
Workshop on Optimization, Modeling, Analysis and Space Exploration, Feb. 2019,
Washington DC, USA.
Riyadh Baghdadi, “The Tiramisu Compiler for Deep Learning”,
2018 MIT Alliances annual meeting, June 2018, Cambridge, MA, USA.
Riyadh Baghdadi, “An Expressive Polyhedral Compiler for Deep Learning”.
Apple, Sep. 2018.
Riyadh Baghdadi, “A Relaxed Permutability Criterion”, 10th Meeting of the French Compiler Community (Dixièmes Rencontres de la Communauté Française de Compilation), Sep 2015, Banyuls-sur-Mer, France.
Riyadh Baghdadi, “PENCIL: a Subset of C99 for Accelerator Programming”, LAMIH Seminar, University of Valenciennes, Sep 2015, Valenciennes, France.
Riyadh Baghdadi, “PENCIL: a Platform-Neutral Compute Intermediate
Language for DSL Compilers and for Accelerator Programming”, MIT
Seminar - Massachusetts Institute of Technology, May 2015, Cambridge,
Massachusetts. [Link]
Riyadh Baghdadi, “Language Support For Better Polyhedral Compilation Targeting Accelerators”, National Meeting of the French Research Group on Programming and Software Engineering (Journées Nationales du GDR Génie de la Programmation et du Logiciel), June 2015, Bordeaux, France.
Riyadh Baghdadi, “Better code generation for DSLs using PENCIL IL
and faster scheduling using statement clustering”. 2015 ACM Student
Research Competition, International Symposium on Code Generation and
Optimization, San Francisco, February, 2015.
Riyadh Baghdadi, “Generating Highly Optimized CUDA and OpenCL from Domain Specific Languages”.
Google PhD Student Summit, December 2014, Munich, Germany.
Riyadh Baghdadi, Javed Absar, Adam Betts, Ulysse Beaugnon, Albert Cohen, Robert David, Alastair Donaldson, Tobias Grosser, Sven Van Haastregt, Elnar Hajiyev, Alexey Kravets, Jeroen Ketema, Michael Kruse, Anton Lokhmotov, Chandan Reddy, and Sven Verdoolaege. “PENCIL: A Platform-Neutral Compute Intermediate Language for DSL Compilers”. In Workshop on High Performance, Predictable Embedded Systems for Cognitive Applications (HiPPES4CogApp, associated with HiPEAC), Amsterdam, The Netherlands, January 2015.
U. Beaugnon, Riyadh Baghdadi, J. Absar, A. Betts,
A. Cohen, A. Donaldson, T. Grosser, S. V. Haastregt,
Y. Hu, J. Ketema, A. Kravets, A. Lokhmotov, S. Verdoolaege.
“PENCIL: A platform-neutral intermediate language for
the parallelizing compilation of DSLs”.
Second Workshop on Domain Specific Languages Design and
Implementation (DSLDI),
Portland, USA, Oct 2014.
Riyadh Baghdadi, S. Verdoolaege, U. Beaugnon, A. Cohen,
R. David and E. Hajiyev, “Language support for polyhedral
compilation: evaluation on image processing benchmark”.
8th Meeting of the French Compiler Community, Nice, France, July 2014.
Riyadh Baghdadi, “Putting Polyhedral Optimization Techniques to Work in Production Compilers: Progress in Scalability and Memory Management”, ACM Student Research Competition 2012 (SRC'12), California, 2012.
Riyadh Baghdadi, A. Cohen, C. Bastoul, L-N. Pouchet and L.
Rauchwerger, “The Potential of Synergistic Static, Dynamic and
Speculative Loop Nest Optimizations for Automatic Parallelization”,
PESPMA 2010, France. [PDF]
Riyadh Baghdadi, S. Niar, “Enhancing Image Processing Capability by Memory Compression”, 2nd IEEE International Conference on Signals, Circuits & Systems (SCS'08), Hammamet, Tunisia, Nov 7–9, 2008.
Teaching assistant - Introduction to imperative programming in C (LI115)
Teaching assistant - GPGPU programming: CUDA
Student mentoring: 18 students.
Reviews and Program Committee Membership
MLSys'23 (Sixth Conference on Machine Learning and Systems).
IPDPS 2023 (37th IEEE International Parallel and Distributed Processing Symposium).
ECOOP 2023 (European Conference on Object-Oriented Programming).
PACT'21 (The 2021 International Conference on Parallel Architectures and Compilation Techniques).
ACM Transactions on Architecture and Code Optimization (TACO).
ACM Transactions on Parallel Computing (TOPC).
Parallel Computing (PARCO), Elsevier.
Journal of Parallel and Distributed Computing, (JPDC), Elsevier.
International Journal of Parallel Programming (IJPP), Springer.
IEEE Access Journal.
IMPACT'21 (11th International Workshop on Polyhedral Compilation Techniques).
RWDSL'18 Workshop (3rd International Workshop on Real World Domain Specific Languages 2018).
GPGPU-10 (10th Workshop on General Purpose Processing Using GPUs, co-located with PPoPP'17).
Workshop Organization
12th International Workshop on Polyhedral Compilation Techniques (IMPACT 2022), June 20th, 2022, Budapest, Hungary, in conjunction with HiPEAC 2022.
The 2nd International Workshop on Machine Learning for Software Hardware Co-Design (MLSH'21), September 26th, 2021, virtual, in conjunction with PACT 2021.
The 1st International Workshop on Machine Learning for Software Hardware Co-Design (MLSH'20), October 2nd, 2020, virtual, in conjunction with PACT 2020.