Dask Array is used across a wide variety of applications, anywhere you work with large array datasets.
The Analyzing the National Water Model with Xarray, Dask, and Coiled example processes 6 TB of geospatial data on a cluster using Xarray and Dask Array. The cluster in this example is deployed with Coiled, but there are many options for managing and deploying Dask. See our Deploy Dask Clusters documentation for more information on deployment options.
You can also visit https://examples.dask.org/array.html for a collection of additional examples.
Dask arrays coordinate many NumPy arrays (or "duck arrays" that are sufficiently NumPy-like in API such as CuPy or Sparse arrays) arranged into a grid. These arrays may live on disk or on other machines.
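For example, a Dask array built with a chunks= argument is backed by a grid of NumPy blocks:

```python
import dask.array as da

# A 10,000 x 10,000 array cut into a 10 x 10 grid of 1,000 x 1,000 NumPy chunks
x = da.ones((10_000, 10_000), chunks=(1_000, 1_000))

print(x.numblocks)  # (10, 10) -- the grid of underlying chunks
print(x.chunks)     # ((1000, ..., 1000), (1000, ..., 1000)) -- chunk sizes per axis
```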
New duck array chunk types (types below Dask on NEP-13's type-casting hierarchy) can be registered via register_chunk_type(). Any other duck array types that are not registered will be deferred to in binary operations and NumPy ufuncs/functions (that is, Dask will return NotImplemented). Note, however, that any ndarray-like type can be inserted into a Dask Array using from_array().
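As a minimal sketch of that last point, from_array() wraps any ndarray-like object. A plain NumPy array is used here; the same call accepts, e.g., CuPy or sparse arrays, assuming those libraries are installed:

```python
import numpy as np
import dask.array as da

# Wrap an ndarray-like object as the chunks of a Dask Array.
# A plain NumPy array is used here; the same call accepts CuPy or
# sparse.COO arrays (assuming those libraries are installed).
x = da.from_array(np.arange(100_000).reshape(1_000, 100), chunks=(250, 100))

print(type(x.blocks[0, 0].compute()))  # <class 'numpy.ndarray'> -- the chunk type
```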
Dask Array is used in fields like atmospheric and oceanographic science, large-scale imaging, genomics, numerical algorithms for optimization or statistics, and more.
Dask arrays support most of the NumPy interface, such as the following (several of these are demonstrated in the sketch after this list):
Arithmetic and scalar mathematics: +, *, exp, log, ...
Reductions along axes: sum(), mean(), std(), sum(axis=0), ...
Tensor contractions / dot products / matrix multiply: tensordot
Axis reordering / transpose: transpose
Slicing: x[:100, 500:100:-2]
Fancy indexing along single axes with lists or NumPy arrays: x[:, [10, 1, 5]]
Array protocols like __array__ and __array_ufunc__
Some linear algebra: svd, qr, solve, solve_triangular, lstsq
...
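A short sketch of several of these operations, all of which are lazy until compute() is called:

```python
import dask.array as da

x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

y = (x + x.T) / 2               # arithmetic and scalar mathematics
z = da.exp(y)                   # ufuncs dispatch through __array_ufunc__
m = y.mean(axis=0)              # reductions along axes
t = da.tensordot(x, x, axes=1)  # tensor contraction / matrix multiply
s = y[:100, 500:100:-2]         # slicing
f = y[:, [10, 1, 5]]            # fancy indexing along a single axis

print(m.compute()[:5])          # nothing executes until compute()
```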
However, Dask Array does not implement the entire NumPy interface. Users expecting this will be disappointed. Notably, Dask Array lacks the following features:
Much of np.linalg has not been implemented. This has been done by a number of excellent BLAS/LAPACK implementations, and is the focus of numerous ongoing academic research projects.
Arrays with unknown shapes do not support all operations
Operations like sort, which are notoriously difficult to do in parallel and are of somewhat diminished value on very large data (you rarely actually need a full sort). Often we include parallel-friendly alternatives like topk (see the sketch after this list).
Dask Array doesn't implement operations like tolist that would be very inefficient for larger datasets. Likewise, it is very inefficient to iterate over a Dask array with for loops.
Dask development is driven by immediate need, so many lesser-used functions have not been implemented. Community contributions are encouraged.
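As a brief illustration of one such parallel-friendly alternative, topk finds the k largest elements without a full distributed sort:

```python
import dask.array as da

x = da.random.random(1_000_000, chunks=100_000)

# topk reduces each chunk to its k largest values and then combines
# those partial results, which parallelizes much better than a full sort
top10 = da.topk(x, 10)
print(top10.compute())
```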
See the dask.array API for a more extensive list of functionality.
By default, Dask Array uses the threaded scheduler in order to avoid data transfer costs, and because NumPy releases the GIL well. It is also quite effective on a cluster using the dask.distributed scheduler.
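A small sketch of selecting schedulers (the cluster address below is a placeholder, not a real endpoint):

```python
import dask.array as da

x = da.random.random((5_000, 5_000), chunks=(1_000, 1_000))

# The threaded scheduler is used by default
result = x.sum().compute()

# Other schedulers can be selected per call
result = x.sum().compute(scheduler="processes")

# Or run on a cluster with the dask.distributed scheduler
# (the address below is a placeholder for a real running scheduler)
# from dask.distributed import Client
# client = Client("tcp://scheduler-address:8786")
# result = x.sum().compute()
```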