libtorch-ffi: Haskell bindings for PyTorch
This package provides Haskell bindings to libtorch, the C++ library underlying PyTorch, specifically designed for the Hasktorch ecosystem.
Modules
- Torch
  - Internal
    - Torch.Internal.Cast
    - Torch.Internal.Class
    - Torch.Internal.Const
    - Torch.Internal.GC
    - Managed
      - Torch.Internal.Managed.Autograd
      - Torch.Internal.Managed.Cast
      - Torch.Internal.Managed.Native
      - Torch.Internal.Managed.Native.Extra
      - Torch.Internal.Managed.Native.Native0
      - Torch.Internal.Managed.Native.Native1
      - Torch.Internal.Managed.Native.Native10
      - Torch.Internal.Managed.Native.Native11
      - Torch.Internal.Managed.Native.Native12
      - Torch.Internal.Managed.Native.Native13
      - Torch.Internal.Managed.Native.Native14
      - Torch.Internal.Managed.Native.Native15
      - Torch.Internal.Managed.Native.Native2
      - Torch.Internal.Managed.Native.Native3
      - Torch.Internal.Managed.Native.Native4
      - Torch.Internal.Managed.Native.Native5
      - Torch.Internal.Managed.Native.Native6
      - Torch.Internal.Managed.Native.Native7
      - Torch.Internal.Managed.Native.Native8
      - Torch.Internal.Managed.Native.Native9
      - Torch.Internal.Managed.Optim
      - Torch.Internal.Managed.Serialize
      - Torch.Internal.Managed.TensorFactories
      - Type
        - Torch.Internal.Managed.Type.C10Dict
        - Torch.Internal.Managed.Type.C10List
        - Torch.Internal.Managed.Type.C10Tuple
        - Torch.Internal.Managed.Type.Context
        - Torch.Internal.Managed.Type.Dimname
        - Torch.Internal.Managed.Type.DimnameList
        - Torch.Internal.Managed.Type.Extra
        - Torch.Internal.Managed.Type.Generator
        - Torch.Internal.Managed.Type.IValue
        - Torch.Internal.Managed.Type.IValueList
        - Torch.Internal.Managed.Type.IntArray
        - Torch.Internal.Managed.Type.Module
        - Torch.Internal.Managed.Type.Scalar
        - Torch.Internal.Managed.Type.StdArray
        - Torch.Internal.Managed.Type.StdOptional
        - Torch.Internal.Managed.Type.StdString
        - Torch.Internal.Managed.Type.StdVector
        - Torch.Internal.Managed.Type.Storage
        - Torch.Internal.Managed.Type.Symbol
        - Torch.Internal.Managed.Type.Tensor
        - Torch.Internal.Managed.Type.TensorIndex
        - Torch.Internal.Managed.Type.TensorList
        - Torch.Internal.Managed.Type.TensorOptions
        - Torch.Internal.Managed.Type.Tuple
    - Torch.Internal.Objects
    - Torch.Internal.Type
    - Unmanaged
      - Torch.Internal.Unmanaged.Autograd
      - Torch.Internal.Unmanaged.Native
      - Torch.Internal.Unmanaged.Native.Extra
      - Torch.Internal.Unmanaged.Native.Native0
      - Torch.Internal.Unmanaged.Native.Native1
      - Torch.Internal.Unmanaged.Native.Native10
      - Torch.Internal.Unmanaged.Native.Native11
      - Torch.Internal.Unmanaged.Native.Native12
      - Torch.Internal.Unmanaged.Native.Native13
      - Torch.Internal.Unmanaged.Native.Native14
      - Torch.Internal.Unmanaged.Native.Native15
      - Torch.Internal.Unmanaged.Native.Native2
      - Torch.Internal.Unmanaged.Native.Native3
      - Torch.Internal.Unmanaged.Native.Native4
      - Torch.Internal.Unmanaged.Native.Native5
      - Torch.Internal.Unmanaged.Native.Native6
      - Torch.Internal.Unmanaged.Native.Native7
      - Torch.Internal.Unmanaged.Native.Native8
      - Torch.Internal.Unmanaged.Native.Native9
      - Torch.Internal.Unmanaged.Optim
      - Torch.Internal.Unmanaged.Serialize
      - Torch.Internal.Unmanaged.TensorFactories
      - Type
        - Torch.Internal.Unmanaged.Type.C10Dict
        - Torch.Internal.Unmanaged.Type.C10List
        - Torch.Internal.Unmanaged.Type.C10Tuple
        - Torch.Internal.Unmanaged.Type.Context
        - Torch.Internal.Unmanaged.Type.Dimname
        - Torch.Internal.Unmanaged.Type.DimnameList
        - Torch.Internal.Unmanaged.Type.Extra
        - Torch.Internal.Unmanaged.Type.Generator
        - Torch.Internal.Unmanaged.Type.IValue
        - Torch.Internal.Unmanaged.Type.IValueList
        - Torch.Internal.Unmanaged.Type.IntArray
        - Torch.Internal.Unmanaged.Type.Module
        - Torch.Internal.Unmanaged.Type.Scalar
        - Torch.Internal.Unmanaged.Type.StdArray
        - Torch.Internal.Unmanaged.Type.StdOptional
        - Torch.Internal.Unmanaged.Type.StdString
        - Torch.Internal.Unmanaged.Type.StdVector
        - Torch.Internal.Unmanaged.Type.Storage
        - Torch.Internal.Unmanaged.Type.Symbol
        - Torch.Internal.Unmanaged.Type.Tensor
        - Torch.Internal.Unmanaged.Type.TensorIndex
        - Torch.Internal.Unmanaged.Type.TensorList
        - Torch.Internal.Unmanaged.Type.TensorOptions
        - Torch.Internal.Unmanaged.Type.Tuple
Flags
Manual Flags
| Name | Description | Default |
|---|---|---|
| cuda | A flag to link libtorch_cuda. | Disabled |
| rocm | A flag to link libtorch_hip. | Disabled |
| gcc | A flag to use gcc on macOS. | Disabled |

Use -f <flag> to enable a flag, or -f -<flag> to disable it.
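For example, a flag can be toggled on the command line when building with Cabal (a sketch; the `cuda` flag additionally requires a CUDA-enabled libtorch, see `LIBTORCH_CUDA_VERSION` in the README below):

```sh
# Enable the cuda flag so libtorch_cuda is linked
cabal build libtorch-ffi -f cuda

# Explicitly disable it again
cabal build libtorch-ffi -f -cuda
```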
Downloads
- libtorch-ffi-2.0.1.10.tar.gz (Cabal source package)
- Package description (as included in the package)
| Versions | 2.0.0.0, 2.0.0.1, 2.0.1.0, 2.0.1.1, 2.0.1.2, 2.0.1.3, 2.0.1.5, 2.0.1.6, 2.0.1.7, 2.0.1.8, 2.0.1.9, 2.0.1.10 |
|---|---|
| Dependencies | async (>=2.2.5 && <2.3), base (>=4.7 && <5), bytestring (>=0.11.5 && <0.13), containers (>=0.6.7 && <0.8), inline-c (>=0.9.1.10 && <0.10), inline-c-cpp (>=0.5.0.2 && <0.6.0.0), libtorch-ffi-helper (>=2.0.0 && <2.1), safe-exceptions (>=0.1.7 && <0.2), sysinfo (>=0.1.1 && <0.2), template-haskell (>=2.20.0 && <2.24), text (>=2.0.2 && <2.2) |
| License | BSD-3-Clause |
| Copyright | 2018 Austin Huang |
| Author | Austin Huang |
| Maintainer | hasktorch@gmail.com |
| Category | Codegen |
| Home page | https://github.com/hasktorch/hasktorch#readme |
| Uploaded | by junjihashimoto at 2025-12-03T15:19:30Z |
| Distributions | NixOS:2.0.1.9 |
| Reverse Dependencies | 1 direct, 3 indirect |
| Downloads | 271 total (29 in the last 30 days) |
| Rating | (no votes yet) |
| Status | Docs available; last success reported on 2025-12-03 |
Readme for libtorch-ffi-2.0.1.10
libtorch-ffi
This package provides FFI bindings to PyTorch's libtorch C++ library.
Setup
The package automatically downloads and configures libtorch during the build process. You can customize the setup using environment variables.
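For instance, the whole setup can be driven from the shell before invoking the build; the values below simply restate the defaults described in the next section and are illustrative:

```sh
# Pick the libtorch version, cache location, and CUDA flavor, then build;
# the package downloads libtorch into the cache on the first build.
export LIBTORCH_VERSION=2.5.0
export LIBTORCH_HOME="$HOME/.cache/libtorch"
export LIBTORCH_CUDA_VERSION=cpu
cabal build libtorch-ffi
```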
Environment Variables
LIBTORCH_VERSION
- Default: `2.5.0`
- Description: Specifies the version of libtorch to download and use
- Example: `export LIBTORCH_VERSION=2.5.0`
LIBTORCH_HOME
- Default: XDG cache directory (`~/.cache/libtorch` on Linux/macOS)
- Description: Base directory where libtorch will be downloaded and stored
- Example: `export LIBTORCH_HOME=/opt/libtorch`
LIBTORCH_CUDA_VERSION
- Default: `cpu`
- Description: CUDA version for GPU support
- Options:
  - `cpu` - CPU-only version (default)
  - `cu117` - CUDA 11.7
  - `cu118` - CUDA 11.8
  - `cu121` - CUDA 12.1
  - Any other CUDA version string supported by PyTorch
- Example: `export LIBTORCH_CUDA_VERSION=cu118`
LIBTORCH_SKIP_DOWNLOAD
- Default: Not set
- Description: When set (to any value), skips the automatic download of libtorch
- Use case: When you have libtorch already installed system-wide
- Example: `export LIBTORCH_SKIP_DOWNLOAD=1`
Directory Structure
The downloaded libtorch is stored in a platform-specific directory structure:
$LIBTORCH_HOME/
└── <version>/
    └── <platform>/
        └── <cuda-flavor>/
            ├── lib/
            ├── include/
            └── .ok
Where:
- `<version>` is the libtorch version (e.g., `2.5.0`)
- `<platform>` is one of:
  - `macos-arm64` - macOS on Apple Silicon
  - `macos-x86_64` - macOS on Intel
  - `linux-x86_64` - Linux on x86_64
- `<cuda-flavor>` is the CUDA version (e.g., `cpu`, `cu118`)
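To check what is currently cached, you can list the relevant directory; the version, platform, and CUDA flavor shown here are only examples:

```sh
# Default cache location is ~/.cache/libtorch unless LIBTORCH_HOME is set.
# The hidden .ok file marks a completed download.
ls -a "${LIBTORCH_HOME:-$HOME/.cache/libtorch}"/2.5.0/linux-x86_64/cpu
# Expected entries include: include/  lib/  .ok
```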
Build Process
- Pre-configuration: The package checks whether it is running in a Nix sandbox. If not, it proceeds with the download process.
- Download: If libtorch is not found in the cache directory, it is automatically downloaded from PyTorch's official servers.
- Configuration: The build system automatically:
  - Adds the libtorch library directory to the library search path
  - Adds the include directories for C++ headers
  - Sets up proper runtime library paths (rpath) for dynamic linking (a verification sketch follows this list)
  - On macOS, adds the `-ld_classic` flag for compatibility
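To confirm that the rpath was actually embedded, you can inspect whatever binary or shared library the build produced; the path below is a placeholder for your own build output:

```sh
# Linux: RPATH/RUNPATH entries should point at the libtorch lib/ directory
readelf -d path/to/built/binary | grep -iE 'rpath|runpath'

# macOS: look for LC_RPATH load commands
otool -l path/to/built/binary | grep -A2 LC_RPATH
```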
Platform-Specific Notes
macOS
- Uses rpath for dynamic library loading
- Automatically adds the `-ld_classic` flag for linker compatibility
- Supports both Apple Silicon (arm64) and Intel (x86_64) architectures
- Since libtorch-ffi's rpath is propagated, it does not matter whether hasktorch is linked statically or dynamically
Linux
- Uses rpath for dynamic library loading
- Supports x86_64 architecture
- Multiple CUDA versions available for GPU support
- Since libtorch-ffi's rpath is not propagated, hasktorch must be linked dynamically (shared)
Linking Configuration
Due to rpath propagation differences between platforms, Linux requires shared linking. Add the following configuration:
For Cabal (cabal.project)
shared: True
executable-dynamic: True
For Stack (stack.yaml)
configure-options:
  $targets:
    - --enable-executable-dynamic
    - --enable-shared
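The same settings can also be passed directly on the command line for a one-off Cabal build, using the standard configure flags (a sketch equivalent to the cabal.project entries above):

```sh
cabal build libtorch-ffi --enable-shared --enable-executable-dynamic
```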
Nix Support
The package detects when it's being built in a Nix sandbox and skips the automatic download. In this case, libtorch should be provided through Nix derivation inputs.
Troubleshooting
- Download failures: Check your internet connection and ensure the PyTorch download servers are accessible.
- Missing libraries: The `.ok` marker file indicates a successful download. If this file is missing but the directory exists, delete the directory and let the setup download again (see the sketch after this list).
- CUDA version mismatch: Ensure your system CUDA version matches the `LIBTORCH_CUDA_VERSION` you've specified.
- Custom libtorch installation: Set `LIBTORCH_SKIP_DOWNLOAD=1` and ensure your system's libtorch is properly configured in your build environment.
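A minimal recovery sketch for the missing-libraries case; the version directory shown is illustrative:

```sh
# Remove the stale, partially-downloaded libtorch and rebuild;
# the setup will download it again.
rm -rf "${LIBTORCH_HOME:-$HOME/.cache/libtorch}"/2.5.0
cabal build libtorch-ffi
```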
Example Usage
# Use CPU-only version
cabal build libtorch-ffi
# Use CUDA 11.8 version
export LIBTORCH_CUDA_VERSION=cu118
cabal build libtorch-ffi
# Use a specific version
export LIBTORCH_VERSION=2.4.0
cabal build libtorch-ffi
# Use existing system libtorch
export LIBTORCH_SKIP_DOWNLOAD=1
cabal build libtorch-ffi