# Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
## Compile Requirements for C++
- Google Protocol Buffer
- Python (2.7, 64bit) | Anaconda (2.7, 64bit)
- CUDA [Optional]
- CUDNN [Optional]
- OpenMPI [Optional]
## Runtime Requirements for Python
- Package: protobuf
- Package: lmdb
## Installation
1. Clone this repository

2. (Optional) Download and install CUDA

3. (Optional) Download and install CUDNN

4. (Optional) Download 3rdparty.zip and unzip it to Dragon/3rdparty (outside the source code dir)
    - Win64 (OpenBLAS / Protobuf for VS2013 / CUDNN v6 / Microsoft MPI)
    - Linux64 (OpenMPI)
5. Configure Dragon/CMakeLists.txt
    - Select optional libraries [CUDA / CUDNN / BLAS / SSE / MPI / MPI_CUDA_AWARE / CUDA_FP16]
    - Set the 3rdparty path (recommended to keep the default)
    - Set the python & numpy root paths
    - Set the CUDA compiling architectures if necessary
    - With GCC (4.8+, below 5.0), add `-std=c++11` to `CUDA_NVCC_FLAGS` if `nullptr` is not found
6. Set environment variables

    **Linux:**
    - Create dragon.conf

      ```Shell
      sudo vim /etc/ld.so.conf.d/dragon.conf
      ```

    - Append one line with the library dir of your 3rdparty, e.g.:

      /home/Dragon/3rdparty/lib

    - Rebuild the scanning cache

      ```Shell
      sudo ldconfig
      ```

    **Windows:**
    - Add the binary directory to the system environment variables, e.g.:

      PATH=........;C:\Dragon\3rdparty\bin;
7. Setup MPI [Optional]

    **Linux:**
    - We use OpenMPI, which supports CUDA-aware MPI
    - Run 3rdparty/setup_mpi.sh

      ```Shell
      ./setup_mpi.sh
      ```

    - Install

      ```Shell
      sudo cp openmpi/install/bin/mpirun /usr/bin
      ```

    **Windows:**
    - We use Microsoft MPI, which runs well on the latest Windows 10
    - Microsoft MPI is integrated into 3rdparty, so nothing else needs to be done
8. Compile

    **Linux:**
    - Install cmake

      ```Shell
      sudo apt-get install cmake
      ```

    - Make

      ```Shell
      cd Dragon
      mkdir build
      cd build
      cmake ..
      make install -j16
      ```

    **Windows:**
    - Install cmake-gui
    - Mkdir Dragon/build
    - Configure and generate the MSVC project in Dragon/build
    - Open Dragon/build/Dragon.sln
    - Compile and generate the "INSTALL" solution
9. Deploy

    - Install Dragon

      ```Shell
      cd Dragon
      python setup.py install
      ```

      Hint: If you do not have permission, try:

      ```Shell
      cd Dragon
      python setup.py install --user
      ```

    - Install protobuf

      ```Shell
      pip install protobuf
      ```

    - Install lmdb

      ```Shell
      pip install lmdb
      ```
## Usage

### Import

```python
import dragon
```
### Virtual DL Frameworks

```python
import dragon.vm.theano as theano
import dragon.vm.caffe as caffe
import dragon.vm.tensorflow as tf
```
### Tutorials

[IPython Notebook](https://github.com/PhyscalX/Tutorials)

We will revise several classical examples, covering CV, NLP, and RL.
### Device

```python
import dragon.config
dragon.config.EnableCPU()
dragon.config.EnableCUDA(device_id, use_cudnn=True)
```
### Automatic Memory Optimization (AMC)

```python
import dragon.config
dragon.config.SetDebugMode(False)
```

This option makes all gradients share one global tensor (which makes debugging intractable), cutting memory usage to roughly 50% at the cost of about 15% slower training.
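The trade-off can be pictured with a toy sketch: either each layer owns its own gradient buffer, or all layers reuse one shared buffer sized for the largest gradient, since each gradient is only needed transiently during the backward pass. This is a hypothetical illustration of the idea only, not Dragon's actual AMC implementation; the function names below are made up.

```python
import numpy as np

def backward_separate(layer_sizes):
    """Each layer allocates its own float32 gradient buffer."""
    grads = [np.zeros(n, dtype=np.float32) for n in layer_sizes]
    return sum(g.nbytes for g in grads)

def backward_shared(layer_sizes):
    """All layers reuse a single buffer sized for the largest gradient.

    Each layer would write into (a view of) the same storage, one at a time.
    """
    shared = np.zeros(max(layer_sizes), dtype=np.float32)
    return shared.nbytes

sizes = [1024, 4096, 2048]
print(backward_separate(sizes))  # 28672 bytes: 4 * (1024 + 4096 + 2048)
print(backward_shared(sizes))    # 16384 bytes: 4 * 4096
```

The saving grows with depth: with many layers of similar size, the shared buffer stays constant while per-layer buffers grow linearly.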
### Scope

- NameScope

  ```python
  import dragon
  from dragon.core.tensor import Tensor
  with dragon.name_scope(prefix='conv1'):
      w = Tensor('weight').Variable()    # named as conv1/weight
      b = Tensor('bias').Variable()      # named as conv1/bias
  ```
- DeviceScope

  ```python
  import dragon
  with dragon.device_scope(device='gpu', id=0, use_cudnn=True):
      x = ops.Add(a, b)    # use /gpu:0 and cuDNN
  ```
- PhaseScope

  ```python
  import dragon
  import dragon.vm.theano as theano
  with dragon.phase_scope(phase='train'):
      f = theano.function(outputs=y)    # force the training phase even without gradient computation
  ```
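All three scopes follow one pattern: a `with` block pushes state that later graph construction reads, then restores it on exit. The sketch below shows that mechanism in plain Python; it mirrors the spirit of `name_scope` / `device_scope` / `phase_scope`, but every name in it is illustrative and none of it is Dragon's actual internals.

```python
import contextlib

_name_stack = []                                   # nested name prefixes
_options = {'device': 'cpu', 'id': 0, 'phase': None}

@contextlib.contextmanager
def name_scope(prefix):
    """Push a name prefix for the duration of the with-block."""
    _name_stack.append(prefix)
    try:
        yield
    finally:
        _name_stack.pop()

@contextlib.contextmanager
def option_scope(**kwargs):
    """Temporarily override device/phase options, restoring them on exit."""
    saved = {k: _options[k] for k in kwargs}
    _options.update(kwargs)
    try:
        yield
    finally:
        _options.update(saved)

def make_tensor(name):
    """A tensor's full name joins all active scope prefixes."""
    return '/'.join(_name_stack + [name])

def make_op(op_type):
    """An operator captures the active options at construction time."""
    return dict(op_type=op_type, **_options)

with name_scope('conv1'):
    print(make_tensor('weight'))                   # conv1/weight
with option_scope(device='gpu', id=0):
    with option_scope(phase='train'):
        op = make_op('Add')
print(op['device'], op['phase'])                   # gpu train
```

Because the state is restored in `finally`, scopes nest safely and an exception inside a block cannot leak a stale device or phase into later graph construction.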
## License and Citation

Dragon is released under the BSD 2-Clause license.

Please cite Dragon in your publications if it helps your research:

    @article{pan2017dragon,
      Author = {Pan, Ting},
      Journal = {arXiv preprint arXiv:1707.08265},
      Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
      Year = {2017}
    }