spidr

Accelerated machine learning with dependent types

spidr is a research project; expect breaking changes.

See the online reference for API documentation, and the tutorials for extended discussion of spidr's architecture. In particular, make sure to read Nuisances in the Tensor API.

Install

Installation requires curl and pack. To use spidr, install a PJRT plugin: the plugin executes your spidr program and determines what hardware it runs on. You can install the CPU plugin with

pack install pjrt-plugin-xla-cpu

or read the plugin documentation for the CUDA-enabled GPU plugin and custom plugin builds.

We have tested spidr on Ubuntu Linux 24.04. It may work on other Linux distributions. If spidr doesn't work on your platform, get in touch.

Motivation

We made spidr to try out modern programming language capabilities in machine learning systems. To this end, we chose Idris for the API; Idris is a general-purpose, purely functional programming language with a particularly expressive type system. We also wanted to build something performant enough for working machine learning practitioners. Implementing efficient low-level linear algebra is not one of the project's goals, so we opted to use OpenXLA's PJRT, an abstract interface to machine learning compilers that target hardware accelerators.

What can spidr do?

Catch errors before runtime

If your spidr program compiles, and your hardware can run it, then it will run. (Exceptions to this are documented or marked partial.) This is primarily because Idris checks tensor shapes during compilation. For example, this will compile

x : Tensor [3] S32
x = tensor [1, 2, 3]
y : Tensor [3] S32
y = x + tensor [0, 1, 2]

but this won't

failing "elaboration"
  z : Tensor [3] S32
  z = x + tensor [0, 1]

because you can't add a vector of length two to a vector of length three. Shape checking extends beyond comparing literal dimension sizes to arbitrary symbolic manipulation:

append : Tensor [m, p] F64 -> Tensor [n, p] F64 -> Tensor [m + n, p] F64
append x y = concat 0 x y
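As an illustration of how such a function might be used, here is a sketch (the concrete shapes and values are hypothetical, not taken from the spidr test suite) in which the result shape `[m + n, p]` reduces at compile time:

```idris
xs : Tensor [2, 3] F64
xs = tensor [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]

ys : Tensor [1, 3] F64
ys = tensor [[6.0, 7.0, 8.0]]

-- [2 + 1, 3] normalises to [3, 3], so this annotation type-checks;
-- annotating zs with any other shape would be a compile-time error
zs : Tensor [3, 3] F64
zs = append xs ys
```

Because the output shape is computed from the input shapes in the type, a mismatched annotation is rejected during elaboration rather than at runtime.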

As a bonus, spidr programs are reproducible. Any one graph will always produce the same result when run on the same hardware.

Execute on hardware accelerators

You can run spidr programs on any hardware for which there's a PJRT plugin. CPU and CUDA plugins are provided out of the box. You can also create your own plugin with minimal code; see the guide for instructions. The libraries required to build a plugin exist for ROCm-enabled GPUs, specialised machine learning accelerators, and more.

Optimize graph compilation

Each PJRT plugin contains a graph compiler, and there are several compilers available. The plugins we provide out of the box use XLA, which offers substantial performance benefits via, for example, common subexpression elimination and operator fusion.

Acknowledgements

I'd like to thank the Idris community for their frequent guidance and for Idris itself, Stefan Höck for the Idris package manager pack, the Numerical Elixir team for their early binaries, Secondmind colleagues for discussions around machine learning design, friends and family for their support, Google and OpenXLA for the compiler stack, and Codeberg, and formerly GitHub, for code hosting.

Contact

To ask for new features or to report bugs, make a new Codeberg issue. For any other questions or comments, message @joelb on the Idris community discord.