voidException/cxxnet (forked from dmlc/cxxnet)


cxxnet

CXXNET is a fast, concise, distributed deep learning framework.

Contributors: https://github.com/antinucleon/cxxnet/graphs/contributors

Feature Highlights

  • Lightweight: small but sharp knife
    • cxxnet contains concise implementations of state-of-the-art deep learning models
    • The project maintains minimal dependencies, which makes it portable and easy to build
  • Scales beyond a single GPU and a single machine
    • The library works on multiple GPUs, with nearly linear speedup
    • The library also runs in a distributed setting, backed by a distributed parameter server
  • Easy extensibility with no GPU programming required
    • cxxnet is built on mshadow
    • Developers write numpy-style template expressions once to extend the library
    • mshadow generates high-performance CUDA and CPU code from those expressions
    • This yields concise, readable code with performance matching hand-crafted kernels
  • Convenient interfaces for other languages
    • Python interface for training from numpy arrays, and prediction/feature extraction to numpy arrays
    • Matlab interface (TODO)
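The expression-template technique behind those numpy-style expressions can be sketched in a few lines of self-contained C++. The types below are illustrative only, not the actual mshadow API: an expression such as `a + b * 0.5f` is captured as a tree of node types at compile time, and assignment evaluates it element-wise in one fused loop, with no temporary tensors allocated.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative expression-template sketch (hypothetical types, not the
// actual mshadow API). The expression a + b * 0.5f builds a small tree
// of node types at compile time; assignment then evaluates it in a
// single fused loop, so no intermediate vectors are created.

template <typename L, typename R>
struct Add {               // node representing lhs + rhs
  const L &lhs; const R &rhs;
  float Eval(std::size_t i) const { return lhs.Eval(i) + rhs.Eval(i); }
};

template <typename E>
struct Scale {             // node representing expr * scalar
  const E &expr; float s;
  float Eval(std::size_t i) const { return expr.Eval(i) * s; }
};

struct Vec {
  float data[4];
  float Eval(std::size_t i) const { return data[i]; }
  template <typename E>
  Vec &operator=(const E &e) {   // one pass over all elements
    for (std::size_t i = 0; i < 4; ++i) data[i] = e.Eval(i);
    return *this;
  }
};

template <typename L, typename R>
Add<L, R> operator+(const L &l, const R &r) { return {l, r}; }

template <typename E>
Scale<E> operator*(const E &e, float s) { return {e, s}; }
```

With these definitions, `out = a + b * 0.5f` compiles down to one loop computing `out[i] = a[i] + b[i] * 0.5f`. mshadow applies the same pattern, but additionally generates both CPU loops and CUDA kernels from a single expression.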

Backbone Library

CXXNET is built on MShadow: Lightweight CPU/GPU Tensor Template Library

  • MShadow is an efficient, device-invariant, and simple tensor library
    • MShadow lets users write machine learning expressions at a high level while still delivering high performance
    • Developers therefore do not need knowledge of CUDA kernels to extend cxxnet
  • MShadow also provides a parameter interface for multi-GPU and distributed deep learning
    • Improvements to cxxnet naturally run on multiple GPUs and in distributed settings
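The "device invariant" idea can be illustrated with a small self-contained sketch (hypothetical names, not the real mshadow headers): operations carry a device tag as a template parameter, and template specialization selects the device-specific backend, so user-facing code is written once and never mentions the device internals.

```cpp
#include <cassert>

// Hypothetical sketch of device-invariant dispatch (illustrative names,
// not the real mshadow API). mshadow tags tensors with a device type;
// generic code is written once, and template specialization picks the
// device-specific backend (plain loops on cpu, CUDA kernels on gpu).

struct cpu {};  // device tags, in the spirit of mshadow::cpu / mshadow::gpu
struct gpu {};

// Backend selected at compile time by the device tag.
template <typename Device>
struct AddBackend;

template <>
struct AddBackend<cpu> {
  static void Run(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; ++i) dst[i] = a[i] + b[i];  // plain CPU loop
  }
};
// A real library would specialize AddBackend<gpu> to launch a CUDA
// kernel; the user-facing function below stays exactly the same.

template <typename Device>
void AddVectors(float *dst, const float *a, const float *b, int n) {
  AddBackend<Device>::Run(dst, a, b, n);
}
```

Calling `AddVectors<cpu>(...)` or `AddVectors<gpu>(...)` runs the same user code on different hardware; the dispatch happens entirely at compile time, with no runtime branching on the device.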

Build

  • Copy make/config.mk to the root folder of the project
  • Modify the config to match your environment settings
  • Run ./build.sh to build cxxnet
