Commit d8f612c8 by Ting PAN

Init sphinx documentation for C++ API

Summary:
This commit uses Sphinx to generate the C++ API documentation, keeping its style and theme consistent with the Python API.
1 parent 8dbb73a7
Showing with 2215 additions and 2693 deletions
 FROM ubuntu:16.04
 RUN \
-  apt-get update && apt-get install -y --no-install-recommends \
+  apt-get update && apt-get install -y \
+    --no-install-recommends \
+    --allow-change-held-packages \
   build-essential \
   cmake \
   git \
...
@@ -2,7 +2,9 @@ FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04
 RUN \
   rm /etc/apt/sources.list.d/cuda.list && \
-  apt-get update && apt-get install -y --no-install-recommends \
+  apt-get update && apt-get install -y \
+    --no-install-recommends \
+    --allow-change-held-packages \
   build-essential \
   cmake \
   git \
...
@@ -3,27 +3,43 @@ Building Dragon Documentation
 This page will help you to build the following documentations:

-Dragon C++ API: https://dragon.seetatech.com/api/cc
-Dragon Python API: https://dragon.seetatech.com/api/python
+Python API: https://dragon.seetatech.com/api/python
+C++ API: https://dragon.seetatech.com/api/cc

-Build Documentation of C++ API
-------------------------------
+Requirements
+------------
+
+- sphinx >= 3.0.2

 ```bash
-cd dragon/docs/api/cc
-doxygen Doxyfile
+pip install sphinx
 ```

-Then, open the ```docs/api/cc/html/index.html``` in your browser.
+- sphinx_seeta_theme
+
+```bash
+pip install sphinx_seeta_theme
+```
+
+- doxygen (C++ API only)
+
+  See: http://www.doxygen.org/download.html

 Build Documentation of Python API
 ---------------------------------

 ```bash
-pip install sphinx_seeta_theme
-cd dragon/docs/api/python
-make html
+cd dragon/docs/api/python && make html
 ```

-Then, open the ```docs/api/python/index.html``` in your browser.
+Then, open the ``docs/_build/api/python/index.html`` in your browser.
+
+Build Documentation of C++ API
+------------------------------
+
+```bash
+cd dragon/docs/api/cc && make doxygen && make html
+```
+
+Then, open the ``docs/_build/api/cc/index.html`` in your browser.
@@ -32,7 +32,7 @@ DOXYFILE_ENCODING = UTF-8
 # title of most generated pages and in a few other places.
 # The default value is: My Project.

-PROJECT_NAME = "Dragon - C++ API"
+PROJECT_NAME =

 # The PROJECT_NUMBER tag can be used to enter a project or revision number. This
 # could be handy for archiving the generated documentation or if some version
@@ -44,21 +44,21 @@ PROJECT_NUMBER =
 # for a project that appears at the top of each page and should give viewer a
 # quick idea about the purpose of the project. Keep the description short.

-PROJECT_BRIEF = "A Computation Graph Virtual Machine Based Deep Learning Framework"
+PROJECT_BRIEF =

 # With the PROJECT_LOGO tag one can specify a logo or an icon that is included
 # in the documentation. The maximum height of the logo should not exceed 55
 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
 # the logo to the output directory.

-PROJECT_LOGO = images/logo.png
+PROJECT_LOGO =

 # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
 # into which the generated documentation will be written. If a relative path is
 # entered, it will be relative to the location where doxygen was started. If
 # left blank the current directory will be used.

-OUTPUT_DIRECTORY = ""
+OUTPUT_DIRECTORY = "../../_build/api/cc_doxygen"

 # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
 # directories (in 2 levels) under the output directory of each output format and
@@ -143,7 +143,7 @@ ALWAYS_DETAILED_SEC = NO
 # operators of the base classes will not be shown.
 # The default value is: NO.

-INLINE_INHERITED_MEMB = NO
+INLINE_INHERITED_MEMB = YES

 # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
 # before files name in the file list and in the header files. If set to NO the
@@ -1044,7 +1044,7 @@ VERBATIM_HEADERS = YES
 # generated with the -Duse-libclang=ON option for CMake.
 # The default value is: NO.

-CLANG_ASSISTED_PARSING = NO
+# CLANG_ASSISTED_PARSING = NO

 # If clang assisted parsing is enabled you can provide the compiler with command
 # line options that you would normally use when invoking the compiler. Note that
@@ -1052,7 +1052,7 @@ CLANG_ASSISTED_PARSING = NO
 # specified with INPUT and INCLUDE_PATH.
 # This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.

-CLANG_OPTIONS =
+# CLANG_OPTIONS =

 # If clang assisted parsing is enabled you can provide the clang parser with the
 # path to the compilation database (see:
@@ -1063,7 +1063,7 @@ CLANG_OPTIONS =
 # generated with the -Duse-libclang=ON option for CMake.
 # The default value is: 0.

-CLANG_COMPILATION_DATABASE_PATH = 0
+# CLANG_COMPILATION_DATABASE_PATH = 0

 #---------------------------------------------------------------------------
 # Configuration options related to the alphabetical class index
@@ -1098,7 +1098,7 @@ IGNORE_PREFIX =
 # If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
 # The default value is: YES.

-GENERATE_HTML = YES
+GENERATE_HTML = NO

 # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
 # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
@@ -1930,7 +1930,7 @@ MAN_LINKS = NO
 # captures the structure of the code including all documentation.
 # The default value is: NO.

-GENERATE_XML = NO
+GENERATE_XML = YES

 # The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
 # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
@@ -2083,9 +2083,7 @@ INCLUDE_FILE_PATTERNS =
 # recursively expanded use the := operator instead of the = operator.
 # This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

-PREDEFINED = WITH_MPI \
-             WITH_CUDA \
-             WITH_CUDNN \
+PREDEFINED = DRAGON_API= USE_MPI USE_CUDA USE_CUDNN USE_NCCL

 # If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
 # tag can be used to specify a list of macro names that should be expanded. The
...
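With these settings the Doxygen pass emits only the XML that breathe consumes, writing it under the relocated output directory. A quick sanity check (a minimal sketch, assuming doxygen is on PATH and the commands are run from dragon/docs/api/cc, where this Doxyfile lives; it mirrors the `doxygen` target of the Makefile below):

```bash
cd dragon/docs/api/cc
mkdir -p ../../_build/api/cc_doxygen && doxygen   # GENERATE_HTML = NO, GENERATE_XML = YES
ls ../../_build/api/cc_doxygen/xml/index.xml      # the XML index that breathe reads
```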
# Makefile for Sphinx documentation
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = ../../_build/api/cc
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed.)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
NUMBER_OF_PROCESSORS:=$(shell getconf _NPROCESSORS_ONLN)
.PHONY: help clean html latex latexpdf
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " doxygen to make Doxygen XML files"
@echo " html to make standalone HTML files"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
clean:
rm -rf $(BUILDDIR)/*
doxygen:
mkdir -p $(BUILDDIR)_doxygen && doxygen
@echo
@echo "Build finished. The Doxygen XML files are in $(BUILDDIR)_doxygen/xml."
html:
$(SPHINXBUILD) -b html -j ${NUMBER_OF_PROCESSORS} $(ALLSPHINXOPTS) $(BUILDDIR)
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)-latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)-latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)-latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)-latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)-latex."
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
"""Sphinx configuration for C++ API."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from sphinx_seeta_theme import HTMLTranslator
from sphinx_seeta_theme import HTMLTranslatorV2
from sphinx_seeta_theme import setup as setup_v1
def path_to(href, index=False):
if index:
if len(href) == 0:
return 'index.html'
return href + '/index.html'
else:
return href + '.html'
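# For example, path_to('../../install', 1) returns '../../install/index.html',
# path_to('', 1) returns 'index.html', and path_to('dragon/core') would return
# 'dragon/core.html' ('dragon/core' is only a hypothetical argument here).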
# Basic
html_static_path = ['../_static']
exclude_patterns = ['../_build']
master_doc = 'index'
source_suffix = '.rst'
# Extension
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.napoleon',
'sphinxcontrib.katex',
'breathe',
]
napoleon_use_rtype = False
# Project
project = 'dragon'
copyright = 'Copyright (c) 2017-present, SeetaTech, Co.,Ltd'
author = 'SeetaTech'
with open('../../../dragon/version.txt', 'r') as f:
version = f.read().strip()
# Sphinx
c_id_attributes = ['DRAGON_API']
cpp_id_attributes = ['DRAGON_API']
# Breathe
breathe_projects = {'dragon': '../../_build/api/cc_doxygen/xml/'}
breathe_default_project = 'dragon'
# HTML
html_theme = 'seeta'
html_title = ''
html_short_title = ''
html_logo = '../_static/images/dragon.png'
html_favicon = '../_static/favicon.ico'
html_copy_source = False
html_show_sourcelink = False
html_show_sphinx = False
html_show_copyright = False
html_scaled_image_link = False
html_theme_options = {
'navbar_links': {
'Install': path_to('../../install', 1),
'API': [
('master', path_to('../../api/python', 1)),
('versions...', path_to('../../versions', 1)),
],
'Github': 'https://github.com/seetaresearch/dragon',
},
'navbar_logo_link': path_to('../..', 1),
'sidebar_title': 'C++ v{}'.format(version),
'sidebar_title_link': path_to('../../versions', 1),
'breadcrumb_links': [
('Dragon', path_to('../..', 1)),
('API', path_to('../../versions', 1)),
('Dragon v{}'.format(version.replace('a0', '-a0')), path_to('../../api', 1)),
('C++', path_to('', 1)),
],
}
html_sidebars = {
'index': ['localtoc.html'],
'dragon': ['localtoc.html'],
'dragon/**': ['localtoc.html'],
'_modules/**': ['localtoc.html'],
'search': ['localtoc.html'],
}
# LaTex
latex_documents = [(
master_doc,
'dragon.tex',
'Dragon - C++ API',
author,
'manual',
)]
latex_elements = {
'utf8extra': '',
'inputenc': '',
'babel': r'''\usepackage[english]{babel}''',
'preamble': r'''
\usepackage{enumitem}
\usepackage{tocloft}
\renewcommand{\cfttoctitlefont}{\huge\bfseries}
\usepackage{fontspec}
\setmainfont{Source Serif Pro}
\setsansfont{Source Serif Pro}
\setmonofont{Source Serif Pro}
\setcounter{tocdepth}{2}
\usepackage[draft]{minted}
\fvset{breaklines=true, breakanywhere=true}
\setlength{\headheight}{13.6pt}
\setlength{\itemindent}{-1pt}
\makeatletter
\renewcommand*\l@subsection{\@dottedtocline{2}{3.8em}{3.8em}}
\fancypagestyle{normal}{
\fancyhf{}
\fancyfoot[LE,RO]{{\py@HeaderFamily\thepage}}
\fancyfoot[LO]{{\py@HeaderFamily\nouppercase{\rightmark}}}
\fancyfoot[RE]{{\py@HeaderFamily\nouppercase{\leftmark}}}
\fancyhead[LE,RO]{{\py@HeaderFamily}}
}
\makeatother
''',
'maketitle': r'''
\pagenumbering{Roman} %% % to avoid page 1 conflict with actual page 1
\makeatletter
\begin{titlepage}
\noindent\rule[0.25\baselineskip]{\textwidth}{1pt}
\vspace*{5mm}
\begin{figure}[!h]
\raggedleft
\includegraphics[scale=0.3]{logo.png}
\end{figure}
\raggedleft
\vspace*{5mm}
\textbf{\Huge \@title}
\vspace*{40mm}
\LARGE \@author
\end{titlepage}
\makeatother
\pagenumbering{arabic}
''',
'pointsize': '10pt',
'figure_align': 'H',
'printindex': '',
'sphinxsetup': ' \
hmargin={0.75in,0.75in}, \
vmargin={0.5in,1in}, \
verbatimhintsturnover=false, \
verbatimsep=0.75em, \
verbatimhintsturnover=false, \
verbatimwithframe=false, \
VerbatimColor={rgb}{0.949,0.949,0.949}, \
HeaderFamily=\\rmfamily\\bfseries',
}
latex_domain_indices = False
latex_engine = 'xelatex'
latex_logo = '../_static/images/logo.png'
# Application API
class HTMLTranslatorV3(HTMLTranslatorV2):
"""Custom html translator."""
def depart_desc_content(self, node):
"""Remove the sub classees."""
HTMLTranslatorV2.depart_desc_content(self, node)
para_start, para_end = -1, -1
for i, text in enumerate(self.body):
if para_start > 0 and text.startswith('</p>'):
para_end = i
break
if text.startswith('<p>') and \
self.body[i + 1].startswith('Subclassed by'):
para_start = i
if para_start > 0 and para_end > 0:
self.body = self.body[:para_start] + self.body[para_end + 1:]
def depart_desc_parameterlist(self, node):
"""Remove the trailing newline to match the google c++ style."""
HTMLTranslator.depart_desc_parameterlist(self, node)
def setup(app):
"""Custom application setup."""
return setup_v1(app, HTMLTranslatorV3)
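The LaTeX settings above pair with the Makefile's `latexpdf` target; a minimal sketch of the PDF build (assuming xelatex and the Source Serif Pro fonts named in the preamble are installed):

```bash
cd dragon/docs/api/cc
make latexpdf
ls ../../_build/api/cc-latex/dragon.pdf   # file name follows latex_documents ('dragon.tex')
```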
@font-face{font-family:'Lato';src:url("/static/fonts/LatoLatin-Italic.woff2") format("woff2"),url("/static/fonts/LatoLatin-Italic.woff") format("woff");font-weight:normal;font-style:italic}@font-face{font-family:'Lato';src:url("/static/fonts/LatoLatin-Black.woff2") format("woff2"),url("/static/fonts/LatoLatin-Black.woff") format("woff");font-weight:900;font-style:normal}@font-face{font-family:'Lato';src:url("/static/fonts/LatoLatin-BlackItalic.woff2") format("woff2"),url("/static/fonts/LatoLatin-BlackItalic.woff") format("woff");font-weight:900;font-style:italic}@font-face{font-family:'Lato';src:url("/static/fonts/LatoLatin-Light.woff2") format("woff2"),url("/static/fonts/LatoLatin-Light.woff") format("woff");font-weight:300;font-style:normal}@font-face{font-family:'Lato';src:url("/static/fonts/LatoLatin-Regular.woff2") format("woff2"),url("/static/fonts/LatoLatin-Regular.woff") format("woff");font-weight:normal;font-style:normal}html,body,div,span,applet,object,iframe,h1,h2,h3,h4,h5,h6,p,blockquote,pre,a,abbr,acronym,address,big,cite,code,del,dfn,em,img,ins,kbd,q,s,samp,small,strike,strong,sub,sup,tt,var,b,u,i,center,dl,dt,dd,ol,ul,li,fieldset,form,label,legend,table,caption,tbody,tfoot,thead,tr,th,td,article,aside,canvas,details,embed,figure,figcaption,footer,header,hgroup,menu,nav,output,ruby,section,summary,time,mark,audio,video{margin:0;padding:0;border:0;font-size:100%;font:inherit;vertical-align:baseline}article,aside,details,figcaption,figure,footer,header,hgroup,menu,nav,section{display:block}body{line-height:1}ol,ul{list-style:none}blockquote,q{quotes:none}blockquote:before,blockquote:after,q:before,q:after{content:'';content:none}table{border-collapse:collapse;border-spacing:0}body{background:#FFF;color:#303846;font:normal 18px/1.4em "Lato",Calibri,Arial,sans-serif;height:100vh;text-align:left;text-rendering:optimizeLegibility}img{max-width:100%}article p img{max-width:100%;display:block;margin-left:auto;margin-right:auto}a{border-bottom:1px dotted #a67b5b;color:#303846;text-decoration:none;-webkit-transition:all 0.3s;transition:all 0.3s}blockquote{padding:15px 30px 15px 15px;margin:20px 0 15px 10px;background-color:rgba(204,122,111,0.1);border-left:10px solid rgba(191,87,73,0.2)}#fb_oss a{border:0}h1,h2,h3,h4{font-family:"Lato","Helvetica Neue",Arial,sans-serif;font-weight:900}.navPusher{border-top:58px solid #FFF;height:100%;left:0;position:relative;z-index:99}.homeContainer{background:#FFF;color:#4d4d4d;text-align:center}.homeContainer a{color:#a67b5b}.homeContainer .homeSplashFade{color:white}.homeContainer .homeWrapper{padding:3em 10px;text-align:center}.homeContainer .homeWrapper .wrapper{margin:0px auto;max-width:900px;padding:0 20px}.homeContainer .homeWrapper .projectLogo img{height:100px;margin-bottom:0px}.homeContainer .homeWrapper h1#project_title{font-family:"Lato","Helvetica Neue",Arial,sans-serif;font-size:300%;letter-spacing:-0.08em;line-height:1em;margin-bottom:80px}.homeContainer .homeWrapper h2#project_tagline{font-family:"Lato","Helvetica Neue",Arial,sans-serif;font-size:200%;letter-spacing:-0.04em;line-height:1em;color:#99424f}.wrapper{margin:0px auto;max-width:900px;padding:0 10px}.projectLogo{display:none}.projectLogo img{height:100px;margin-bottom:0px}section#intro{margin:10px 0;color:#999}section#intro p{line-height:1.5;padding-bottom:20px}section#intro ul{list-style:disc}section#intro ol,section#intro ul{padding-left:24px}section#intro ol li,section#intro ul li{padding-bottom:8px;padding-left:6px}section#intro strong,section#intro 
b{font-weight:bold}.fbossFontLight{font-family:"Lato",Calibri,Arial,sans-serif;font-weight:300;font-style:normal}.fb-like{display:block;margin-bottom:20px;width:100%}.center{display:block;text-align:center}.mainContainer{background:#FFF;overflow:auto}.mainContainer .mainWrapper{padding:4vh 10px;text-align:left}.mainContainer .mainWrapper .allShareBlock{padding:10px 0}.mainContainer .mainWrapper .allShareBlock .pluginBlock{margin:12px 0;padding:0}.mainContainer .mainWrapper :not(.gist-meta)>a:hover,.mainContainer .mainWrapper :not(.gist-meta)>a:focus{background:#FFF;color:#4d4d4d}.mainContainer .mainWrapper em,.mainContainer .mainWrapper i{font-style:italic}.mainContainer .mainWrapper strong,.mainContainer .mainWrapper b{font-weight:bold}.mainContainer .mainWrapper h1{font-size:300%;line-height:1em;padding:1.4em 0 1em;text-align:left}.mainContainer .mainWrapper h2{font-size:250%;line-height:1em;margin-bottom:20px;padding:1.4em 0 20px;text-align:left}.mainContainer .mainWrapper h2{border-bottom:1px solid #e6e6e6;font-size:22px;padding:10px 0}.mainContainer .mainWrapper h2.blockHeader{border-bottom:1px solid white;color:white;font-size:22px;margin-bottom:20px;padding:10px 0}.mainContainer .mainWrapper h3{font-size:150%;line-height:1.2em;padding:1em 0 0.8em}.mainContainer .mainWrapper h4{font-size:130%;line-height:1.2em;padding:1em 0 0.8em}.mainContainer .mainWrapper code{color:#999;display:inline-block}.mainContainer .mainWrapper p{padding:0.8em 0}.mainContainer .mainWrapper ul{list-style:disc}.mainContainer .mainWrapper ol,.mainContainer .mainWrapper ul{padding-left:24px}.mainContainer .mainWrapper ol li,.mainContainer .mainWrapper ul li{padding-bottom:4px;padding-left:6px}.mainContainer .mainWrapper strong{font-weight:bold}.mainContainer .mainWrapper .post{position:relative}.mainContainer .mainWrapper .post .katex{font-weight:700}.mainContainer .mainWrapper .post.basicPost{margin-top:30px}.mainContainer .mainWrapper .post :not(.gist-meta)>a{color:#a67b5b}.mainContainer .mainWrapper .post :not(.gist-meta)>a:hover,.mainContainer .mainWrapper .post :not(.gist-meta)>a:focus{color:#4d4d4d}.mainContainer .mainWrapper .post h2{border-bottom:4px solid #FFF;font-size:130%}.mainContainer .mainWrapper .post h3{border-bottom:1px solid #FFF;font-size:110%}.mainContainer .mainWrapper .post h4{border-bottom:1px solid #FFF;font-size:90%}.mainContainer .mainWrapper .post ol{list-style:decimal outside none}.mainContainer .mainWrapper .post .post-header{padding:1em 0}.mainContainer .mainWrapper .post .post-header h1{font-size:150%;line-height:1em;padding:0.4em 0 0}.mainContainer .mainWrapper .post .post-header h1 a{border:none}.mainContainer .mainWrapper .post .post-header .post-meta{color:#a67b5b;font-family:"Lato","Helvetica Neue",Arial,sans-serif;text-align:center}.mainContainer .mainWrapper .post .postSocialPlugins{padding-top:1em}.mainContainer .mainWrapper .post .docPagination{background:#FFF;bottom:0px;left:0px;position:absolute;right:0px}.mainContainer .mainWrapper .post .docPagination .pager{display:inline-block;width:50%}.mainContainer .mainWrapper .post .docPagination .pagingNext{float:right;text-align:right}.mainContainer .mainWrapper .post .docPagination :not(.gist-meta)>a{border:none;color:#a67b5b;display:block;padding:4px 12px}.mainContainer .mainWrapper .post .docPagination :not(.gist-meta)>a:hover{background-color:#4d4d4d;color:#303846}.mainContainer .mainWrapper .post .docPagination :not(.gist-meta)>a .pagerLabel{display:inline}.mainContainer .mainWrapper .post .docPagination 
:not(.gist-meta)>a .pagerTitle{display:none}.mainContainer .mainWrapper .posts .post{margin-bottom:6vh}#integrations_title{font-size:250%;margin:80px 0}.ytVideo{height:0;overflow:hidden;padding-bottom:53.4%;padding-top:25px;position:relative}.ytVideo iframe,.ytVideo object,.ytVideo embed{height:100%;left:0;position:absolute;top:0;width:100%}@media only screen and (min-width: 480px){h1#project_title{font-size:500%}h2#project_tagline{font-size:250%;color:#999}.projectLogo img{margin-bottom:10px;height:200px}.homeContainer .homeWrapper{padding-left:10px;padding-right:10px}.mainContainer .mainWrapper .post h2{font-size:180%}.mainContainer .mainWrapper .post h3{font-size:120%}.mainContainer .mainWrapper .post h4{font-size:100%}.mainContainer .mainWrapper .post .docPagination a .pagerLabel{display:none}.mainContainer .mainWrapper .post .docPagination a .pagerTitle{display:inline}}@media only screen and (min-width: 900px){.homeContainer .homeWrapper{position:relative}.homeContainer .homeWrapper .projectLogo{align-items:center;bottom:0;display:flex;justify-content:flex-end;left:0;padding:2em 20px 4em;position:absolute;right:20px;top:0}.homeContainer .homeWrapper .projectLogo img{height:100%;max-height:250px}}@media only screen and (min-width: 1024px){.mainContainer .mainWrapper .post{box-sizing:border-box;display:block}.mainContainer .mainWrapper .post ul#markdown-toc{font-size:14px;list-style-type:none;display:block}.mainContainer .mainWrapper .post ul#markdown-toc li{text-align:right;width:30%;float:left;margin-bottom:-1px}.mainContainer .mainWrapper .post .post-header h1{font-size:250%}.mainContainer .mainWrapper .posts .post{margin-bottom:4vh;width:100%}}@media only screen and (min-width: 1200px){.wrapper{max-width:1100px}}@media only screen and (min-width: 1500px){.wrapper{max-width:1400px}}.fixedHeaderContainer{background:#a67b5b;color:#4d4d4d;height:40px;padding:10px 0 8px;position:fixed;width:100%;z-index:9999}.fixedHeaderContainer a{align-items:center;border:0;color:#4d4d4d;display:flex;flex-flow:row nowrap;height:40px}.fixedHeaderContainer header{display:flex;flex-flow:row nowrap;position:relative;text-align:left}.fixedHeaderContainer header img{height:50px;padding-right:4px}.fixedHeaderContainer header h2{display:block;font-family:"Lato","Helvetica Neue",Arial,sans-serif;font-weight:900;line-height:18px;position:relative;font-size:22px;color:#191919;letter-spacing:1px}.navigationFull{height:34px;margin-left:auto}.navigationFull nav{position:relative}.navigationFull nav ul{display:flex;flex-flow:row nowrap;margin:0 -10px}.navigationFull nav ul li{padding:0 10px;display:block}.navigationFull nav ul li a{border-bottom:2px solid transparent;color:#fff;font-size:16px;font-weight:400;line-height:1.2em}.navigationFull nav ul li a:hover{border-bottom:2px solid #4d4d4d;color:#4d4d4d}.navigationFull nav ul li.navItemActive a{color:#4d4d4d}input[type="search"]{-moz-appearance:none;-webkit-appearance:none}.navSearchWrapper{align-self:center;position:relative}.navSearchWrapper::before{border:3px solid #ccc;border-radius:50%;content:" ";display:block;height:6px;left:15px;width:6px;position:absolute;top:4px;z-index:1}.navSearchWrapper::after{background:#ccc;content:" ";height:7px;left:24px;position:absolute;transform:rotate(-45deg);top:12px;width:3px;z-index:1}.navSearchWrapper .aa-dropdown-menu{background:#FFF;border:3px solid rgba(48,56,70,0.25);color:#303846;font-size:14px;left:auto !important;line-height:1.2em;right:0 !important}.navSearchWrapper .aa-dropdown-menu 
.algolia-docsearch-suggestion--category-header{background:#a67b5b;color:#FFF}.navSearchWrapper .aa-dropdown-menu .algolia-docsearch-suggestion--category-header .algolia-docsearch-suggestion--highlight{background-color:#FFF;color:#a67b5b}.navSearchWrapper .aa-dropdown-menu .algolia-docsearch-suggestion--title .algolia-docsearch-suggestion--highlight,.navSearchWrapper .aa-dropdown-menu .algolia-docsearch-suggestion--subcategory-column .algolia-docsearch-suggestion--highlight{color:#a67b5b}.navSearchWrapper .aa-dropdown-menu .algolia-docsearch-suggestion__secondary,.navSearchWrapper .aa-dropdown-menu .algolia-docsearch-suggestion--subcategory-column{border-color:rgba(48,56,70,0.3)}input#search_input{padding-left:25px;font-size:14px;line-height:20px;border-radius:20px;background-color:rgba(153,153,153,0.25);border:none;color:rgba(153,153,153,0);outline:none;position:relative;transition:background-color 0.2s cubic-bezier(0.68, -0.55, 0.265, 1.55),width 0.2s cubic-bezier(0.68, -0.55, 0.265, 1.55),color 0.2s ease;width:200px}input#search_input:focus,input#search_input:active{background-color:#FFF;color:#303846;width:240px}.navigationSlider .navSearchWrapper::before{left:6px;top:6px}.navigationSlider .navSearchWrapper::after{left:15px;top:14px}.navigationSlider input#search_input_react{box-sizing:border-box;padding-left:25px;font-size:14px;line-height:20px;border-radius:20px;background-color:rgba(153,153,153,0.25);border:none;color:#303846;outline:none;position:relative;transition:background-color 0.2s cubic-bezier(0.68, -0.55, 0.265, 1.55),width 0.2s cubic-bezier(0.68, -0.55, 0.265, 1.55),color 0.2s ease;width:100%}.navigationSlider input#search_input_react:focus,.navigationSlider input#search_input_react:active{background-color:#FFF;color:#4d4d4d}.navigationSlider .algolia-docsearch-suggestion--subcategory-inline{display:none}.navigationSlider>span{width:100%}.navigationSlider .aa-dropdown-menu{background:#FFF;border:0px solid #FFF;color:#303846;font-size:12px;line-height:2em;max-height:140px;min-width:auto;overflow-y:scroll;-webkit-overflow-scrolling:touch;padding:0;border-radius:0;position:relative !important;width:100%}.rougeHighlight{background-color:#e9e9e9;color:#a67b5b}.rougeHighlight .c{color:#586e75}.rougeHighlight .err{color:#a67b5b}.rougeHighlight .g{color:#a67b5b}.rougeHighlight .k{color:#859900}.rougeHighlight .l{color:#a67b5b}.rougeHighlight .n{color:#a67b5b}.rougeHighlight .o{color:#859900}.rougeHighlight .x{color:#cb4b16}.rougeHighlight .p{color:#a67b5b}.rougeHighlight .cm{color:#586e75}.rougeHighlight .cp{color:#859900}.rougeHighlight .c1{color:#72c02c}.rougeHighlight .cs{color:#859900}.rougeHighlight .gd{color:#2aa198}.rougeHighlight .ge{color:#a67b5b;font-style:italic}.rougeHighlight .gr{color:#dc322f}.rougeHighlight .gh{color:#cb4b16}.rougeHighlight .gi{color:#859900}.rougeHighlight .go{color:#a67b5b}.rougeHighlight .gp{color:#a67b5b}.rougeHighlight .gs{color:#a67b5b;font-weight:bold}.rougeHighlight .gu{color:#cb4b16}.rougeHighlight .gt{color:#a67b5b}.rougeHighlight .kc{color:#cb4b16}.rougeHighlight .kd{color:#268bd2}.rougeHighlight .kn{color:#859900}.rougeHighlight .kp{color:#859900}.rougeHighlight .kr{color:#268bd2}.rougeHighlight .kt{color:#dc322f}.rougeHighlight .ld{color:#a67b5b}.rougeHighlight .m{color:#2aa198}.rougeHighlight .s{color:#2aa198}.rougeHighlight .na{color:#a67b5b}.rougeHighlight .nb{color:#B58900}.rougeHighlight .nc{color:#268bd2}.rougeHighlight .no{color:#cb4b16}.rougeHighlight .nd{color:#268bd2}.rougeHighlight .ni{color:#cb4b16}.rougeHighlight 
.ne{color:#cb4b16}.rougeHighlight .nf{color:#268bd2}.rougeHighlight .nl{color:#a67b5b}.rougeHighlight .nn{color:#a67b5b}.rougeHighlight .nx{color:#a67b5b}.rougeHighlight .py{color:#a67b5b}.rougeHighlight .nt{color:#268bd2}.rougeHighlight .nv{color:#268bd2}.rougeHighlight .ow{color:#859900}.rougeHighlight .w{color:#a67b5b}.rougeHighlight .mf{color:#2aa198}.rougeHighlight .mh{color:#2aa198}.rougeHighlight .mi{color:#2aa198}.rougeHighlight .mo{color:#2aa198}.rougeHighlight .sb{color:#586e75}.rougeHighlight .sc{color:#2aa198}.rougeHighlight .sd{color:#a67b5b}.rougeHighlight .s2{color:#2aa198}.rougeHighlight .se{color:#cb4b16}.rougeHighlight .sh{color:#a67b5b}.rougeHighlight .si{color:#2aa198}.rougeHighlight .sx{color:#2aa198}.rougeHighlight .sr{color:#dc322f}.rougeHighlight .s1{color:#2aa198}.rougeHighlight .ss{color:#2aa198}.rougeHighlight .bp{color:#268bd2}.rougeHighlight .vc{color:#268bd2}.rougeHighlight .vg{color:#268bd2}.rougeHighlight .vi{color:#268bd2}.rougeHighlight .il{color:#2aa198}.highlighter-rouge{color:#5e9f24;font:800 12px/1.5em Hack, monospace;max-width:100%}.highlighter-rouge .rougeHighlight{border-radius:3px;margin:20px 0;padding:0px;overflow-x:scroll;-webkit-overflow-scrolling:touch}.highlighter-rouge .rougeHighlight table{background:none;border:none}.highlighter-rouge .rougeHighlight table tbody tr{background:none;display:flex;flex-flow:row nowrap}.highlighter-rouge .rougeHighlight table tbody tr td{display:block;flex:1 1}.highlighter-rouge .rougeHighlight table tbody tr td.gutter{border-right:1px solid #fff;color:#c1a38d;margin-right:10px;max-width:40px;padding-right:10px}.highlighter-rouge .rougeHighlight table tbody tr td.gutter pre{max-width:20px}p>.highlighter-rouge,li>.highlighter-rouge,a>.highlighter-rouge{font-size:16px;font-weight:400;line-height:inherit}a:hover .highlighter-rouge{color:white}.promoSection{display:flex;flex-flow:column wrap;font-size:125%;line-height:1.6em;margin:-10px 0;position:relative;z-index:99}.promoSection .promoRow{padding:10px 0}.promoSection .promoRow .pluginWrapper{display:block}.promoSection .promoRow .pluginWrapper.ghWatchWrapper,.promoSection .promoRow .pluginWrapper.ghStarWrapper{height:28px}.promoSection .promoRow .pluginRowBlock{display:flex;flex-flow:wrap;justify-content:center;margin:0 -2px}.promoSection .promoRow .pluginRowBlock .pluginWrapper{padding:0 2px}iframe.pluginIframe{height:500px;margin-top:20px;width:100%}.iframeContent{display:none}.iframePreview{display:inline-block;margin-top:20px}@media only screen and (min-width: 1024px){.iframeContent{display:block}.iframePreview{display:none}}.button{border:1px solid #FFF;border-radius:3px;color:#FFF;display:inline-block;font-size:14px;font-weight:900;line-height:1.2em;padding:10px;text-transform:uppercase;transition:background 0.3s, color 0.3s}.button:hover{background:#FFF;color:#4d4d4d}.homeContainer .button{border-color:#99424f;border-width:1px;color:#99424f}.homeContainer .button:hover{background:#99424f;color:#FFF}.blockButton{display:block}.edit-page-link{float:right;font-size:14px;font-weight:normal;line-height:20px;opacity:0.6;transition:opacity 0.5s}.edit-page-link:hover{opacity:1}.gridBlockWrapper{background:#f9f9f9}.gridBlockWrapper.alternateBackground{background:#e9e9e9}.gridBlock{margin:0px auto;padding:0 10px;padding-top:100px;padding-bottom:50px;max-width:1200px}.gridBlock h3{width:100%;text-align:left;color:#999;font-size:20px;margin-top:-40px}.gridBlock .blockElement{padding:5px 0;align-items:center}.gridBlock .blockElement img{max-width:100%}.gridBlock 
.blockElement h3{font-size:40px;margin:0;padding:10px 0}.gridBlock .gridClear{clear:both}.gridBlock .alignCenter{text-align:center}.gridBlock .alignRight{text-align:right}.gridBlock .imageAlignSide{justify-content:center;align-items:center;display:flex;flex-flow:row wrap}.blockImage{max-width:900px;width:50%}.imageAlignTop .blockImage{margin-bottom:20px}.imageAlignTop.alignCenter .blockImage{margin-left:auto;margin-right:auto}.imageAlignSide p{margin-bottom:40px;max-width:560px;margin:0}.imageAlignSide .blockImage{flex:0 1 400px;margin-right:100px}.imageAlignSide .blockContent{flex:1 1}.imageAlignSide .blockContent p{padding:0}@media only screen and (max-width: 1023px){.responsiveList .blockContent{position:relative}.responsiveList .blockContent>div{padding-left:20px}.responsiveList .blockContent::before{content:"\2022";position:absolute}}@media only screen and (min-width: 1024px){.gridBlock{display:flex;flex-direction:row;flex-wrap:wrap}.gridBlock .oneByGridBlock{box-sizing:border-box;flex:1 0 100%;padding:10px}.gridBlock .twoByGridBlock{box-sizing:border-box;flex:1 0 50%;padding:10px}.gridBlock .fourByGridBlock{box-sizing:border-box;flex:1 0 25%;padding:10px}h2+.gridBlock{padding-top:20px}}@media only screen and (min-width: 1400px){.gridBlock{display:flex;flex-direction:row;flex-wrap:wrap}.gridBlock .oneByGridBlock{box-sizing:border-box;flex:1 0 100%;padding:10px 20px}.gridBlock .twoByGridBlock{box-sizing:border-box;flex:1 0 50%;padding:10px 20px}.gridBlock .fourByGridBlock{box-sizing:border-box;flex:1 0 25%;padding:10px 20px}}.poweredByContainer{background:#FFF;color:#4d4d4d;margin-bottom:20px}.poweredByContainer a{color:#4d4d4d}.poweredByContainer .poweredByWrapper h2{border-color:#999;color:#999}.poweredByContainer .poweredByMessage{color:#999;font-size:14px;padding-top:20px}.poweredByItems{display:flex;flex-flow:row wrap;margin:0 -10px}.poweredByItem{box-sizing:border-box;flex:1 0 50%;line-height:1.1em;padding:5px 10px}.poweredByItem.itemLarge{flex-basis:100%;padding:10px;text-align:center}.poweredByItem.itemLarge:nth-child(4){padding-bottom:20px}.poweredByItem.itemLarge img{max-height:30px}@media only screen and (min-width: 480px){.itemLarge{flex-basis:50%;max-width:50%}}@media only screen and (min-width: 1024px){.poweredByItem{flex-basis:25%;max-width:25%}.poweredByItem.itemLarge{padding-bottom:20px;text-align:left}}.footerContainer{background:#FFF;color:#a67b5b;overflow:hidden;padding:0 10px;text-align:left}.footerContainer .footerWrapper{border-top:1px solid #a67b5b;padding:0}.footerContainer .footerWrapper .footerBlocks{align-items:center;align-content:center;display:flex;flex-flow:row wrap;margin:0 -20px;padding:10px 0}.footerContainer .footerWrapper .footerSection{box-sizing:border-box;flex:1 1 25%;font-size:14px;min-width:275px;padding:0px 20px}.footerContainer .footerWrapper .footerSection a{border:0;color:inherit;display:inline-block;line-height:1.2em}.footerContainer .footerWrapper .footerSection .footerLink{padding-right:20px}.footerContainer .footerWrapper .fbOpenSourceFooter{align-items:center;display:flex;flex-flow:row nowrap;max-width:25%}.footerContainer .footerWrapper .fbOpenSourceFooter .facebookOSSLogoSvg{flex:0 0 31px;height:30px;margin-right:10px;width:31px}.footerContainer .footerWrapper .fbOpenSourceFooter .facebookOSSLogoSvg path{fill:#a67b5b}.footerContainer .footerWrapper .fbOpenSourceFooter .facebookOSSLogoSvg .middleRing{opacity:0.7}.footerContainer .footerWrapper .fbOpenSourceFooter .facebookOSSLogoSvg .innerRing{opacity:0.45}.footerContainer 
.footerWrapper .fbOpenSourceFooter h2{display:block;font-weight:900;line-height:1em}@media only screen and (min-width: 900px){.footerSection.rightAlign{margin-left:auto;max-width:25%;text-align:right}}.navigationFull{display:none}.navigationSlider{position:absolute;right:0px}.navigationSlider .navSlideout{cursor:pointer;padding-top:4px;position:absolute;right:10px;top:0;transition:top 0.3s;z-index:101}.navigationSlider .slidingNav{background:#a67b5b;box-sizing:border-box;height:0px;overflow-x:hidden;padding:0;position:absolute;right:0px;top:0;transition:height 0.3s cubic-bezier(0.68, -0.55, 0.265, 1.55),width 0.3s cubic-bezier(0.68, -0.55, 0.265, 1.55);width:0}.navigationSlider .slidingNav ul{flex-flow:column nowrap;list-style:none;padding:10px}.navigationSlider .slidingNav ul li{margin:0;padding:2px 0}.navigationSlider .slidingNav ul li a{color:#FFF;display:inline;margin:3px 5px;padding:2px 0px;transition:background-color 0.3s}.navigationSlider .slidingNav ul li a:focus,.navigationSlider .slidingNav ul li a:hover{border-bottom:2px solid #FFF}.navigationSlider .navSlideoutActive .slidingNav{height:auto;padding-top:48px;width:300px}.navigationSlider .navSlideoutActive .navSlideout{top:-2px}.navigationSlider .navSlideoutActive .navSlideout .menuExpand span:nth-child(1){background-color:#303846;top:16px;transform:rotate(45deg)}.navigationSlider .navSlideoutActive .navSlideout .menuExpand span:nth-child(2){opacity:0}.navigationSlider .navSlideoutActive .navSlideout .menuExpand span:nth-child(3){background-color:#303846;transform:rotate(-45deg)}.menuExpand{display:flex;flex-flow:column nowrap;height:20px;justify-content:space-between}.menuExpand span{background:#4d4d4d;border-radius:3px;display:block;flex:0 0 4px;height:4px;position:relative;top:0;transition:background-color 0.3s, top 0.3s, opacity 0.3s, transform 0.3s;width:20px}.navPusher{border-top:58px solid #FFF;position:relative;left:0;z-index:99;height:100%}.navPusher::after{position:absolute;top:0;right:0;width:0;height:0;background:rgba(0,0,0,0.4);content:'';opacity:0;-webkit-transition:opacity 0.5s, width 0.1s 0.5s, height 0.1s 0.5s;transition:opacity 0.5s, width 0.1s 0.5s, height 0.1s 0.5s}.sliderActive .navPusher::after{width:100%;height:100%;opacity:1;-webkit-transition:opacity 0.5s;transition:opacity 0.5s;z-index:100}@media only screen and (min-width: 1024px){.navigationFull{display:block}.navigationSlider{display:none}}.docsNavContainer{background:#d9d9d9;height:35px;left:0;position:fixed;width:100%;z-index:100}.docMainWrapper .wrapper.mainWrapper{padding-left:0;padding-right:0;padding-top:10px}.docsSliderActive .docsNavContainer{box-sizing:border-box;height:100%;overflow-y:auto;-webkit-overflow-scrolling:touch;padding-bottom:50px}.docsSliderActive .mainContainer{display:none}.navBreadcrumb{box-sizing:border-box;display:flex;flex-flow:row nowrap;font-size:12px;height:35px;overflow:hidden;padding:5px 10px}.navBreadcrumb a,.navBreadcrumb span{border:0;color:#303846}.navBreadcrumb i{padding:0 3px}nav.toc{position:relative}nav.toc section{padding:0px;position:relative}nav.toc section .navGroups{display:none;padding:40px 10px 10px}nav.toc .toggleNav{background:#d9d9d9;color:#303846;position:relative;transition:background-color 0.3s, color 0.3s}nav.toc .toggleNav .navToggle{cursor:pointer;height:24px;margin-right:10px;position:relative;text-align:left;width:18px}nav.toc .toggleNav .navToggle::before,nav.toc .toggleNav .navToggle::after{content:"";position:absolute;top:50%;left:0;left:8px;width:3px;height:6px;border:5px solid 
#303846;border-width:5px 0;margin-top:-8px;transform:rotate(45deg);z-index:1}nav.toc .toggleNav .navToggle::after{transform:rotate(-45deg)}nav.toc .toggleNav .navToggle i::before,nav.toc .toggleNav .navToggle i::after{content:"";position:absolute;top:50%;left:2px;background:transparent;border-width:0 5px 5px;border-style:solid;border-color:transparent #303846;height:0;margin-top:-7px;opacity:1;width:5px;z-index:10}nav.toc .toggleNav .navToggle i::after{border-width:5px 5px 0;margin-top:2px}nav.toc .toggleNav .navGroup{background:#bfbfbf;margin:1px 0}nav.toc .toggleNav .navGroup ul{display:none}nav.toc .toggleNav .navGroup h3{background:#bfbfbf;color:#303846;font-size:20px;font-weight:600;line-height:1.2em;padding:10px;transition:color 0.2s}nav.toc .toggleNav .navGroup h3 i:not(:empty){width:16px;height:16px;display:inline-block;box-sizing:border-box;text-align:center;color:rgba(48,56,70,0.5);margin-right:10px;transition:color 0.2s}nav.toc .toggleNav .navGroup.navGroupActive{background:#f2f2f2;color:#303846}nav.toc .toggleNav .navGroup.navGroupActive ul{display:block;padding-bottom:10px;padding-top:10px}nav.toc .toggleNav .navGroup.navGroupActive h3{background:#f2f2f2;color:#4d4d4d}nav.toc .toggleNav .navGroup.navGroupActive h3 i{display:none}nav.toc .toggleNav ul{padding-left:0;padding-right:24px}nav.toc .toggleNav ul li{list-style-type:none;padding-bottom:0;padding-left:0}nav.toc .toggleNav ul li a{border:none;color:#303846;display:inline-block;font-size:14px;line-height:1.1em;margin:2px 10px 5px;padding:5px 0 2px;transition:color 0.3s}nav.toc .toggleNav ul li a:hover,nav.toc .toggleNav ul li a:focus{color:#FFF}nav.toc .toggleNav ul li a.navItemActive{color:#a67b5b}nav.toc .toggleNavActive .navBreadcrumb{background:#d9d9d9;margin-bottom:20px;position:fixed;width:100%}nav.toc .toggleNavActive section .navGroups{display:block}nav.toc .toggleNavActive .navToggle::before,nav.toc .toggleNavActive .navToggle::after{border-width:6px 0;height:0px;margin-top:-6px}nav.toc .toggleNavActive .navToggle i{opacity:0}.docsNavVisible .navPusher .mainContainer{padding-top:35px}@media only screen and (min-width: 900px){.navBreadcrumb{padding:5px 0}nav.toc section .navGroups{padding:40px 0 0}}@media only screen and (min-width: 1024px){.navToggle{display:none}.docsSliderActive .mainContainer{display:block}.docsNavVisible .navPusher .mainContainer{padding-top:0}.docsNavContainer{background:none;box-sizing:border-box;height:auto;margin:40px 40px 0 0;overflow-y:auto;position:relative;width:300px}nav.toc section .navGroups{display:block;padding-top:0px}nav.toc .toggleNavActive .navBreadcrumb{margin-bottom:0;position:relative}.docMainWrapper{display:flex;flex-flow:row nowrap;margin-bottom:40px}.docMainWrapper .wrapper{padding-left:0;padding-right:0}.docMainWrapper .wrapper.mainWrapper{padding-top:0}.navBreadcrumb{display:none}.navBreadcrumb h2{padding:0 10px}}.blogContainer .posts{margin-top:60px}.blogContainer .posts .post{border:1px solid #FFF;border-radius:3px;padding:10px 20px 20px}.blogContainer .lonePost{margin-top:60px}.blogContainer .lonePost .post{padding:10px 0px 0px}.blogContainer .post-header h1{text-align:center}.blogContainer .post-header .post-authorName{color:rgba(48,56,70,0.7);font-size:14px;font-weight:900;margin-top:0;padding:0;text-align:center}.blogContainer .post-header .authorPhoto{border-radius:50%;height:50px;left:50%;margin-left:-25px;overflow:hidden;position:absolute;top:-25px;width:50px}table{background:#F8F8F8;border:1px solid #B0B0B0;position:relative;margin:10px 
auto;padding:0;width:100%;height:auto;border-collapse:collapse;text-align:center;table-layout:fixed}table thead{border-bottom:1px solid #B0B0B0;display:table-header-group}table tbody{display:table-row-group}table tr{display:table-row}table tr:nth-of-type(odd){background:#E8E8E8}table tr th,table tr td{border-right:1px dotted #B0B0B0;display:table-cell;font-size:14px;line-height:1.3em;padding:10px;text-align:left;vertical-align:top}table tr th:last-of-type,table tr td:last-of-type{border-right:0}table tr th code,table tr td code{color:#97dccf;display:inline-block;font-size:12px}table tr th{color:#000000;font-weight:bold;font-family:"Lato","Helvetica Neue",Arial,sans-serif;text-transform:uppercase}.mainContainer .mainWrapper .post .toggler :not(.gist-meta)>a{color:#99424f}.mainContainer .mainWrapper .toggler :not(.gist-meta)>a:hover,.mainContainer .mainWrapper .toggler :not(.gist-meta)>a:focus{background:#a67b5b;color:#99424f}.toggler a{display:inline-block;padding:10px 5px;margin:2px;border:1px solid #05A5D1;border-radius:3px;text-decoration:none !important}.toggler table{border-collapse:collapse;margin-top:50px}.toggler table,td,th{border:0}.toggler strong{font-size:24px;color:#a67b5b}.display-platform-mac .toggler .button-mac,.display-platform-ubuntu .toggler .button-ubuntu,.display-platform-centos .toggler .button-centos,.display-platform-windows .toggler .button-windows,.display-platform-ios .toggler .button-ios,.display-platform-android .toggler .button-android,.display-configuration-compile .toggler .button-compile,.display-configuration-prebuilt .toggler .button-prebuilt,.display-configuration-docker .toggler .button-docker,.display-configuration-cloud .toggler .button-cloud{background-color:#a67b5b}block{display:none}.display-platform-mac.display-configuration-prebuilt .mac.prebuilt,.display-platform-ubuntu.display-configuration-prebuilt .ubuntu.prebuilt,.display-platform-centos.display-configuration-prebuilt .centos.prebuilt,.display-platform-windows.display-configuration-prebuilt .windows.prebuilt,.display-platform-ios.display-configuration-prebuilt .ios.prebuilt,.display-platform-android.display-configuration-prebuilt .android.prebuilt,.display-platform-mac.display-configuration-compile .mac.compile,.display-platform-ubuntu.display-configuration-compile .ubuntu.compile,.display-platform-centos.display-configuration-compile .centos.compile,.display-platform-windows.display-configuration-compile .windows.compile,.display-platform-ios.display-configuration-compile .ios.compile,.display-platform-android.display-configuration-compile .android.compile,.display-platform-mac.display-configuration-docker .mac.docker,.display-platform-ubuntu.display-configuration-docker .ubuntu.docker,.display-platform-centos.display-configuration-docker .centos.docker,.display-platform-windows.display-configuration-docker .windows.docker,.display-platform-ios.display-configuration-docker .ios.docker,.display-platform-android.display-configuration-docker .android.docker,.display-platform-mac.display-configuration-cloud .mac.cloud,.display-platform-ubuntu.display-configuration-cloud .ubuntu.cloud,.display-platform-centos.display-configuration-cloud .centos.cloud,.display-platform-windows.display-configuration-cloud .windows.cloud,.display-platform-ios.display-configuration-cloud .ios.cloud,.display-platform-android.display-configuration-cloud .android.cloud{display:block}a.anchor{position:absolute;margin-top:-58px}.header-link{position:absolute;margin-left:0.2em;opacity:0;-webkit-transition:opacity 0.2s 
ease-in-out 0.1s;-moz-transition:opacity 0.2s ease-in-out 0.1s;-ms-transition:opacity 0.2s ease-in-out 0.1s}h2:hover .header-link,h3:hover .header-link,h4:hover .header-link,h5:hover .header-link,h6:hover .header-link{opacity:1}.operator_search{width:90%;margin:2%;font-size:18px;font-family:"Lato",Calibri,Arial,sans-serif;border:1px #888 solid;border-radius:4px;outline:none}
\ No newline at end of file
/* The standard CSS for doxygen 1.8.14 */
body, table, div, p, dl {
font: 400 14px/22px Roboto,sans-serif;
}
p.reference, p.definition {
font: 400 14px/22px Roboto,sans-serif;
}
/* @group Heading Levels */
h1.groupheader {
font-size: 150%;
}
.title {
font: 400 14px/28px Roboto,sans-serif;
font-size: 150%;
font-weight: bold;
margin: 10px 2px;
}
h2.groupheader {
border-bottom: 1px solid #324770;
color: #223354;
font-size: 150%;
font-weight: normal;
margin-top: 1.75em;
padding-top: 8px;
padding-bottom: 4px;
width: 100%;
}
h3.groupheader {
font-size: 100%;
}
h1, h2, h3, h4, h5, h6 {
-webkit-transition: text-shadow 0.5s linear;
-moz-transition: text-shadow 0.5s linear;
-ms-transition: text-shadow 0.5s linear;
-o-transition: text-shadow 0.5s linear;
transition: text-shadow 0.5s linear;
margin-right: 15px;
}
h1.glow, h2.glow, h3.glow, h4.glow, h5.glow, h6.glow {
text-shadow: 0 0 15px cyan;
}
dt {
font-weight: bold;
}
div.multicol {
-moz-column-gap: 1em;
-webkit-column-gap: 1em;
-moz-column-count: 3;
-webkit-column-count: 3;
}
p.startli, p.startdd {
margin-top: 2px;
}
p.starttd {
margin-top: 0px;
}
p.endli {
margin-bottom: 0px;
}
p.enddd {
margin-bottom: 4px;
}
p.endtd {
margin-bottom: 2px;
}
/* @end */
caption {
font-weight: bold;
}
span.legend {
font-size: 70%;
text-align: center;
}
h3.version {
font-size: 90%;
text-align: center;
}
div.qindex, div.navtab{
background-color: #EBEFF6;
border: 1px solid #A3B4D7;
text-align: center;
}
div.qindex, div.navpath {
width: 100%;
line-height: 140%;
}
div.navtab {
margin-right: 15px;
}
/* @group Link Styling */
a {
color: #3D578C;
font-weight: normal;
text-decoration: none;
}
.contents a:visited {
color: #4665A2;
}
a:hover {
text-decoration: underline;
}
a.qindex {
font-weight: bold;
}
a.qindexHL {
font-weight: bold;
background-color: #9CAFD4;
color: #ffffff;
border: 1px double #869DCA;
}
.contents a.qindexHL:visited {
color: #ffffff;
}
a.el {
font-weight: bold;
}
a.elRef {
}
a.code, a.code:visited, a.line, a.line:visited {
color: #4665A2;
}
a.codeRef, a.codeRef:visited, a.lineRef, a.lineRef:visited {
color: #4665A2;
}
/* @end */
dl.el {
margin-left: -1cm;
}
pre.fragment {
border: 1px solid #C4CFE5;
background-color: #FBFCFD;
padding: 4px 6px;
margin: 4px 8px 4px 2px;
overflow: auto;
word-wrap: break-word;
font-size: 9pt;
line-height: 125%;
font-family: monospace, fixed;
font-size: 105%;
}
div.fragment {
padding: 0px;
margin: 4px 8px 4px 2px;
background-color: #FBFCFD;
border: 1px solid #C4CFE5;
}
div.line {
font-family: monospace, fixed;
font-size: 13px;
min-height: 13px;
line-height: 1.0;
text-wrap: unrestricted;
white-space: -moz-pre-wrap; /* Moz */
white-space: -pre-wrap; /* Opera 4-6 */
white-space: -o-pre-wrap; /* Opera 7 */
white-space: pre-wrap; /* CSS3 */
word-wrap: break-word; /* IE 5.5+ */
text-indent: -53px;
padding-left: 53px;
padding-bottom: 0px;
margin: 0px;
-webkit-transition-property: background-color, box-shadow;
-webkit-transition-duration: 0.5s;
-moz-transition-property: background-color, box-shadow;
-moz-transition-duration: 0.5s;
-ms-transition-property: background-color, box-shadow;
-ms-transition-duration: 0.5s;
-o-transition-property: background-color, box-shadow;
-o-transition-duration: 0.5s;
transition-property: background-color, box-shadow;
transition-duration: 0.5s;
}
div.line:after {
content:"\000A";
white-space: pre;
}
div.line.glow {
background-color: cyan;
box-shadow: 0 0 10px cyan;
}
span.lineno {
padding-right: 4px;
text-align: right;
border-right: 2px solid #0F0;
background-color: #E8E8E8;
white-space: pre;
}
span.lineno a {
background-color: #D8D8D8;
}
span.lineno a:hover {
background-color: #C8C8C8;
}
.lineno {
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
div.ah, span.ah {
background-color: black;
font-weight: bold;
color: #ffffff;
margin-bottom: 3px;
margin-top: 3px;
padding: 0.2em;
border: solid thin #333;
border-radius: 0.5em;
-webkit-border-radius: .5em;
-moz-border-radius: .5em;
box-shadow: 2px 2px 3px #999;
-webkit-box-shadow: 2px 2px 3px #999;
-moz-box-shadow: rgba(0, 0, 0, 0.15) 2px 2px 2px;
background-image: -webkit-gradient(linear, left top, left bottom, from(#eee), to(#000),color-stop(0.3, #444));
background-image: -moz-linear-gradient(center top, #eee 0%, #444 40%, #000 110%);
}
div.classindex ul {
list-style: none;
padding-left: 0;
}
div.classindex span.ai {
display: inline-block;
}
div.groupHeader {
margin-left: 16px;
margin-top: 12px;
font-weight: bold;
}
div.groupText {
margin-left: 16px;
font-style: italic;
}
body {
background-color: white;
color: black;
margin: 0;
}
div.contents {
margin-top: 10px;
margin-left: 12px;
margin-right: 8px;
}
td.indexkey {
background-color: #EBEFF6;
font-weight: bold;
border: 1px solid #C4CFE5;
margin: 2px 0px 2px 0;
padding: 2px 10px;
white-space: nowrap;
vertical-align: top;
}
td.indexvalue {
background-color: #EBEFF6;
border: 1px solid #C4CFE5;
padding: 2px 10px;
margin: 2px 0px;
}
tr.memlist {
background-color: #EEF1F7;
}
p.formulaDsp {
text-align: center;
}
img.formulaDsp {
}
img.formulaInl {
vertical-align: middle;
}
div.center {
text-align: center;
margin-top: 0px;
margin-bottom: 0px;
padding: 0px;
}
div.center img {
border: 0px;
}
address.footer {
text-align: right;
padding-right: 12px;
}
img.footer {
border: 0px;
vertical-align: middle;
}
/* @group Code Colorization */
span.keyword {
color: #008000
}
span.keywordtype {
color: #604020
}
span.keywordflow {
color: #e08000
}
span.comment {
color: #800000
}
span.preprocessor {
color: #806020
}
span.stringliteral {
color: #002080
}
span.charliteral {
color: #008080
}
span.vhdldigit {
color: #ff00ff
}
span.vhdlchar {
color: #000000
}
span.vhdlkeyword {
color: #700070
}
span.vhdllogic {
color: #ff0000
}
blockquote {
background-color: #F7F8FB;
border-left: 2px solid #9CAFD4;
margin: 0 24px 0 4px;
padding: 0 12px 0 16px;
}
/* @end */
/*
.search {
color: #003399;
font-weight: bold;
}
form.search {
margin-bottom: 0px;
margin-top: 0px;
}
input.search {
font-size: 75%;
color: #000080;
font-weight: normal;
background-color: #e8eef2;
}
*/
td.tiny {
font-size: 75%;
}
.dirtab {
padding: 4px;
border-collapse: collapse;
border: 1px solid #A3B4D7;
}
th.dirtab {
background: #EBEFF6;
font-weight: bold;
}
hr {
height: 0px;
border: none;
border-top: 1px solid #4A6AAA;
}
hr.footer {
height: 1px;
}
/* @group Member Descriptions */
table.memberdecls {
border-spacing: 0px;
padding: 0px;
}
.memberdecls td, .fieldtable tr {
-webkit-transition-property: background-color, box-shadow;
-webkit-transition-duration: 0.5s;
-moz-transition-property: background-color, box-shadow;
-moz-transition-duration: 0.5s;
-ms-transition-property: background-color, box-shadow;
-ms-transition-duration: 0.5s;
-o-transition-property: background-color, box-shadow;
-o-transition-duration: 0.5s;
transition-property: background-color, box-shadow;
transition-duration: 0.5s;
}
.memberdecls td.glow, .fieldtable tr.glow {
background-color: cyan;
box-shadow: 0 0 15px cyan;
}
.mdescLeft, .mdescRight,
.memItemLeft, .memItemRight,
.memTemplItemLeft, .memTemplItemRight, .memTemplParams {
background-color: #F9FAFC;
border: none;
margin: 4px;
padding: 1px 0 0 8px;
}
.mdescLeft, .mdescRight {
padding: 0px 8px 4px 8px;
color: #555;
}
.memSeparator {
border-bottom: 1px solid #DEE4F0;
line-height: 1px;
margin: 0px;
padding: 0px;
}
.memItemLeft, .memTemplItemLeft {
white-space: nowrap;
}
.memItemRight {
width: 100%;
}
.memTemplParams {
color: #4665A2;
white-space: nowrap;
font-size: 80%;
}
/* @end */
/* @group Member Details */
/* Styles for detailed member documentation */
.memtitle {
padding: 8px;
border-top: 1px solid #A8B8D9;
border-left: 1px solid #A8B8D9;
border-right: 1px solid #A8B8D9;
border-top-right-radius: 4px;
border-top-left-radius: 4px;
margin-bottom: -1px;
background-image: url('nav_f.png');
background-repeat: repeat-x;
background-color: #E2E8F2;
line-height: 1.25;
font-weight: 300;
float:left;
}
.permalink
{
font-size: 65%;
display: inline-block;
vertical-align: middle;
}
.memtemplate {
font-size: 80%;
color: #4665A2;
font-weight: normal;
margin-left: 9px;
}
.memnav {
background-color: #EBEFF6;
border: 1px solid #A3B4D7;
text-align: center;
margin: 2px;
margin-right: 15px;
padding: 2px;
}
.mempage {
width: 100%;
}
.memitem {
padding: 0;
margin-bottom: 10px;
margin-right: 5px;
-webkit-transition: box-shadow 0.5s linear;
-moz-transition: box-shadow 0.5s linear;
-ms-transition: box-shadow 0.5s linear;
-o-transition: box-shadow 0.5s linear;
transition: box-shadow 0.5s linear;
display: table !important;
width: 100%;
}
.memitem.glow {
box-shadow: 0 0 15px cyan;
}
.memname {
font-weight: 400;
margin-left: 6px;
}
.memname td {
vertical-align: bottom;
}
.memproto, dl.reflist dt {
border-top: 1px solid #A8B8D9;
border-left: 1px solid #A8B8D9;
border-right: 1px solid #A8B8D9;
padding: 6px 0px 6px 0px;
color: #253555;
font-weight: bold;
text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9);
background-color: #DFE5F1;
/* opera specific markup */
box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
border-top-right-radius: 4px;
/* firefox specific markup */
-moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px;
-moz-border-radius-topright: 4px;
/* webkit specific markup */
-webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
-webkit-border-top-right-radius: 4px;
}
.overload {
font-family: "courier new",courier,monospace;
font-size: 65%;
}
.memdoc, dl.reflist dd {
border-bottom: 1px solid #A8B8D9;
border-left: 1px solid #A8B8D9;
border-right: 1px solid #A8B8D9;
padding: 6px 10px 2px 10px;
background-color: #FBFCFD;
border-top-width: 0;
background-image:url('nav_g.png');
background-repeat:repeat-x;
background-color: #FFFFFF;
/* opera specific markup */
border-bottom-left-radius: 4px;
border-bottom-right-radius: 4px;
box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
/* firefox specific markup */
-moz-border-radius-bottomleft: 4px;
-moz-border-radius-bottomright: 4px;
-moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px;
/* webkit specific markup */
-webkit-border-bottom-left-radius: 4px;
-webkit-border-bottom-right-radius: 4px;
-webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
}
dl.reflist dt {
padding: 5px;
}
dl.reflist dd {
margin: 0px 0px 10px 0px;
padding: 5px;
}
.paramkey {
text-align: right;
}
.paramtype {
white-space: nowrap;
}
.paramname {
color: #602020;
white-space: nowrap;
}
.paramname em {
font-style: normal;
}
.paramname code {
line-height: 14px;
}
.params, .retval, .exception, .tparams {
margin-left: 0px;
padding-left: 0px;
}
.params .paramname, .retval .paramname {
font-weight: bold;
vertical-align: top;
}
.params .paramtype {
font-style: italic;
vertical-align: top;
}
.params .paramdir {
font-family: "courier new",courier,monospace;
vertical-align: top;
}
table.mlabels {
border-spacing: 0px;
}
td.mlabels-left {
width: 100%;
padding: 0px;
}
td.mlabels-right {
vertical-align: bottom;
padding: 0px;
white-space: nowrap;
}
span.mlabels {
margin-left: 8px;
}
span.mlabel {
background-color: #728DC1;
border-top:1px solid #5373B4;
border-left:1px solid #5373B4;
border-right:1px solid #C4CFE5;
border-bottom:1px solid #C4CFE5;
text-shadow: none;
color: white;
margin-right: 4px;
padding: 2px 3px;
border-radius: 3px;
font-size: 7pt;
white-space: nowrap;
vertical-align: middle;
}
/* @end */
/* these are for tree view inside a (index) page */
div.directory {
margin: 10px 0px;
border-top: 1px solid #9CAFD4;
border-bottom: 1px solid #9CAFD4;
width: 100%;
}
.directory table {
border-collapse:collapse;
}
.directory td {
margin: 0px;
padding: 0px;
vertical-align: top;
}
.directory td.entry {
white-space: nowrap;
padding-right: 6px;
padding-top: 3px;
}
.directory td.entry a {
outline:none;
}
.directory td.entry a img {
border: none;
}
.directory td.desc {
padding-left: 6px;
padding-right: 6px;
padding-top: 3px;
border-left: 1px solid rgba(0,0,0,0.05);
}
.directory tr.even {
padding-left: 6px;
background-color: #F7F8FB;
}
.directory img {
vertical-align: -30%;
}
.directory .levels {
white-space: nowrap;
width: 100%;
text-align: right;
font-size: 9pt;
}
.directory .levels span {
cursor: pointer;
padding-left: 2px;
padding-right: 2px;
color: #3D578C;
}
.arrow {
color: #9CAFD4;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
cursor: pointer;
font-size: 80%;
display: inline-block;
width: 16px;
height: 22px;
}
.icon {
font-family: Arial, Helvetica;
font-weight: bold;
font-size: 12px;
height: 14px;
width: 16px;
display: inline-block;
background-color: #728DC1;
color: white;
text-align: center;
border-radius: 4px;
margin-left: 2px;
margin-right: 2px;
}
.icona {
width: 24px;
height: 22px;
display: inline-block;
}
.iconfopen {
width: 24px;
height: 18px;
margin-bottom: 4px;
background-image:url('folderopen.png');
background-position: 0px -4px;
background-repeat: repeat-y;
vertical-align:top;
display: inline-block;
}
.iconfclosed {
width: 24px;
height: 18px;
margin-bottom: 4px;
background-image:url('folderclosed.png');
background-position: 0px -4px;
background-repeat: repeat-y;
vertical-align:top;
display: inline-block;
}
.icondoc {
width: 24px;
height: 18px;
margin-bottom: 4px;
background-image:url('doc.png');
background-position: 0px -4px;
background-repeat: repeat-y;
vertical-align:top;
display: inline-block;
}
table.directory {
font: 400 14px Roboto,sans-serif;
}
/* @end */
div.dynheader {
margin-top: 8px;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
address {
font-style: normal;
color: #2A3D61;
}
table.doxtable caption {
caption-side: top;
}
table.doxtable {
border-collapse:collapse;
margin-top: 4px;
margin-bottom: 4px;
}
table.doxtable td, table.doxtable th {
border: 1px solid #2D4068;
padding: 3px 7px 2px;
}
table.doxtable th {
background-color: #374F7F;
color: #FFFFFF;
font-size: 110%;
padding-bottom: 4px;
padding-top: 5px;
}
table.fieldtable {
/*width: 100%;*/
margin-bottom: 10px;
border: 1px solid #A8B8D9;
border-spacing: 0px;
-moz-border-radius: 4px;
-webkit-border-radius: 4px;
border-radius: 4px;
-moz-box-shadow: rgba(0, 0, 0, 0.15) 2px 2px 2px;
-webkit-box-shadow: 2px 2px 2px rgba(0, 0, 0, 0.15);
box-shadow: 2px 2px 2px rgba(0, 0, 0, 0.15);
}
.fieldtable td, .fieldtable th {
padding: 3px 7px 2px;
}
.fieldtable td.fieldtype, .fieldtable td.fieldname {
white-space: nowrap;
border-right: 1px solid #A8B8D9;
border-bottom: 1px solid #A8B8D9;
vertical-align: top;
}
.fieldtable td.fieldname {
padding-top: 3px;
}
.fieldtable td.fielddoc {
border-bottom: 1px solid #A8B8D9;
/*width: 100%;*/
}
.fieldtable td.fielddoc p:first-child {
margin-top: 0px;
}
.fieldtable td.fielddoc p:last-child {
margin-bottom: 2px;
}
.fieldtable tr:last-child td {
border-bottom: none;
}
.fieldtable th {
background-image:url('nav_f.png');
background-repeat:repeat-x;
background-color: #E2E8F2;
font-size: 90%;
color: #253555;
padding-bottom: 4px;
padding-top: 5px;
text-align:left;
font-weight: 400;
-moz-border-radius-topleft: 4px;
-moz-border-radius-topright: 4px;
-webkit-border-top-left-radius: 4px;
-webkit-border-top-right-radius: 4px;
border-top-left-radius: 4px;
border-top-right-radius: 4px;
border-bottom: 1px solid #A8B8D9;
}
.tabsearch {
top: 0px;
left: 10px;
height: 36px;
background-image: url('tab_b.png');
z-index: 101;
overflow: hidden;
font-size: 13px;
}
.navpath ul
{
font-size: 11px;
background-image:url('tab_b.png');
background-repeat:repeat-x;
background-position: 0 -5px;
height:30px;
line-height:30px;
color:#8AA0CC;
border:solid 1px #C2CDE4;
overflow:hidden;
margin:0px;
padding:0px;
}
.navpath li
{
list-style-type:none;
float:left;
padding-left:10px;
padding-right:15px;
background-image:url('bc_s.png');
background-repeat:no-repeat;
background-position:right;
color:#364D7C;
}
.navpath li.navelem a
{
height:32px;
display:block;
text-decoration: none;
outline: none;
color: #283A5D;
font-family: 'Lucida Grande',Geneva,Helvetica,Arial,sans-serif;
text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9);
text-decoration: none;
}
.navpath li.navelem a:hover
{
color:#6884BD;
}
.navpath li.footer
{
list-style-type:none;
float:right;
padding-left:10px;
padding-right:15px;
background-image:none;
background-repeat:no-repeat;
background-position:right;
color:#364D7C;
font-size: 8pt;
}
div.summary
{
float: right;
font-size: 8pt;
padding-right: 5px;
width: 50%;
text-align: right;
}
div.summary a
{
white-space: nowrap;
}
table.classindex
{
margin: 10px;
white-space: nowrap;
margin-left: 3%;
margin-right: 3%;
width: 94%;
border: 0;
border-spacing: 0;
padding: 0;
}
div.ingroups
{
font-size: 8pt;
width: 50%;
text-align: left;
}
div.ingroups a
{
white-space: nowrap;
}
div.header
{
background-image:url('nav_h.png');
background-repeat:repeat-x;
background-color: #F9FAFC;
margin: 0px;
border-bottom: 1px solid #C4CFE5;
}
div.headertitle
{
padding: 5px 5px 5px 10px;
}
dl
{
padding: 0 0 0 10px;
}
/* dl.note, dl.warning, dl.attention, dl.pre, dl.post, dl.invariant, dl.deprecated, dl.todo, dl.test, dl.bug */
dl.section
{
margin-left: 0px;
padding-left: 0px;
}
dl.note
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #D0C000;
}
dl.warning, dl.attention
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #FF0000;
}
dl.pre, dl.post, dl.invariant
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #00D000;
}
dl.deprecated
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #505050;
}
dl.todo
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #00C0E0;
}
dl.test
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #3030E0;
}
dl.bug
{
margin-left:-7px;
padding-left: 3px;
border-left:4px solid;
border-color: #C08050;
}
dl.section dd {
margin-bottom: 6px;
}
#projectlogo
{
vertical-align: middle;
border-collapse: separate;
}
#projectlogo img
{
border: 0px none;
}
#projectalign
{
vertical-align: middle;
color: black;
text-shadow: 1px 1px 2px #7e94f9;
}
#projectname
{
font: 300% Tahoma, Arial,sans-serif;
margin: 0px;
padding: 2px 0px;
}
#projectbrief
{
font: 120% Tahoma, Arial,sans-serif;
margin: 10px 1px 5px 35px;
padding-bottom: 7px;
}
#projectnumber
{
font: 50% Tahoma, Arial,sans-serif;
margin: 0px;
padding: 0px;
}
#titlearea
{
padding: 0px;
margin: 0px;
width: 100%;
border-bottom: 1px solid #5373B4;
}
.image
{
text-align: center;
}
.dotgraph
{
text-align: center;
}
.mscgraph
{
text-align: center;
}
.plantumlgraph
{
text-align: center;
}
.diagraph
{
text-align: center;
}
.caption
{
font-weight: bold;
}
div.zoom
{
border: 1px solid #90A5CE;
}
dl.citelist {
margin-bottom:50px;
}
dl.citelist dt {
color:#334975;
float:left;
font-weight:bold;
margin-right:10px;
padding:5px;
}
dl.citelist dd {
margin:2px 0;
padding:5px 0;
}
div.toc {
padding: 14px 25px;
background-color: #F4F6FA;
border: 1px solid #D8DFEE;
border-radius: 7px 7px 7px 7px;
float: right;
height: auto;
margin: 0 8px 10px 10px;
width: 200px;
}
div.toc li {
background: url("bdwn.png") no-repeat scroll 0 5px transparent;
font: 10px/1.2 Verdana,DejaVu Sans,Geneva,sans-serif;
margin-top: 5px;
padding-left: 10px;
padding-top: 2px;
}
div.toc h3 {
font: bold 12px/1.2 Arial,FreeSans,sans-serif;
color: #4665A2;
border-bottom: 0 none;
margin: 0;
}
div.toc ul {
list-style: none outside none;
border: medium none;
padding: 0px;
}
div.toc li.level1 {
margin-left: 0px;
}
div.toc li.level2 {
margin-left: 15px;
}
div.toc li.level3 {
margin-left: 30px;
}
div.toc li.level4 {
margin-left: 45px;
}
.inherit_header {
font-weight: bold;
color: gray;
cursor: pointer;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.inherit_header td {
padding: 6px 0px 2px 5px;
}
.inherit {
display: none;
}
tr.heading h2 {
margin-top: 12px;
margin-bottom: 4px;
}
/* tooltip related style info */
.ttc {
position: absolute;
display: none;
}
#powerTip {
cursor: default;
white-space: nowrap;
background-color: white;
border: 1px solid gray;
border-radius: 4px 4px 4px 4px;
box-shadow: 1px 1px 7px gray;
display: none;
font-size: smaller;
max-width: 80%;
opacity: 0.9;
padding: 1ex 1em 1em;
position: absolute;
z-index: 2147483647;
}
#powerTip div.ttdoc {
color: grey;
font-style: italic;
}
#powerTip div.ttname a {
font-weight: bold;
}
#powerTip div.ttname {
font-weight: bold;
}
#powerTip div.ttdeci {
color: #006318;
}
#powerTip div {
margin: 0px;
padding: 0px;
font: 12px/16px Roboto,sans-serif;
}
#powerTip:before, #powerTip:after {
content: "";
position: absolute;
margin: 0px;
}
#powerTip.n:after, #powerTip.n:before,
#powerTip.s:after, #powerTip.s:before,
#powerTip.w:after, #powerTip.w:before,
#powerTip.e:after, #powerTip.e:before,
#powerTip.ne:after, #powerTip.ne:before,
#powerTip.se:after, #powerTip.se:before,
#powerTip.nw:after, #powerTip.nw:before,
#powerTip.sw:after, #powerTip.sw:before {
border: solid transparent;
content: " ";
height: 0;
width: 0;
position: absolute;
}
#powerTip.n:after, #powerTip.s:after,
#powerTip.w:after, #powerTip.e:after,
#powerTip.nw:after, #powerTip.ne:after,
#powerTip.sw:after, #powerTip.se:after {
border-color: rgba(255, 255, 255, 0);
}
#powerTip.n:before, #powerTip.s:before,
#powerTip.w:before, #powerTip.e:before,
#powerTip.nw:before, #powerTip.ne:before,
#powerTip.sw:before, #powerTip.se:before {
border-color: rgba(128, 128, 128, 0);
}
#powerTip.n:after, #powerTip.n:before,
#powerTip.ne:after, #powerTip.ne:before,
#powerTip.nw:after, #powerTip.nw:before {
top: 100%;
}
#powerTip.n:after, #powerTip.ne:after, #powerTip.nw:after {
border-top-color: #ffffff;
border-width: 10px;
margin: 0px -10px;
}
#powerTip.n:before {
border-top-color: #808080;
border-width: 11px;
margin: 0px -11px;
}
#powerTip.n:after, #powerTip.n:before {
left: 50%;
}
#powerTip.nw:after, #powerTip.nw:before {
right: 14px;
}
#powerTip.ne:after, #powerTip.ne:before {
left: 14px;
}
#powerTip.s:after, #powerTip.s:before,
#powerTip.se:after, #powerTip.se:before,
#powerTip.sw:after, #powerTip.sw:before {
bottom: 100%;
}
#powerTip.s:after, #powerTip.se:after, #powerTip.sw:after {
border-bottom-color: #ffffff;
border-width: 10px;
margin: 0px -10px;
}
#powerTip.s:before, #powerTip.se:before, #powerTip.sw:before {
border-bottom-color: #808080;
border-width: 11px;
margin: 0px -11px;
}
#powerTip.s:after, #powerTip.s:before {
left: 50%;
}
#powerTip.sw:after, #powerTip.sw:before {
right: 14px;
}
#powerTip.se:after, #powerTip.se:before {
left: 14px;
}
#powerTip.e:after, #powerTip.e:before {
left: 100%;
}
#powerTip.e:after {
border-left-color: #ffffff;
border-width: 10px;
top: 50%;
margin-top: -10px;
}
#powerTip.e:before {
border-left-color: #808080;
border-width: 11px;
top: 50%;
margin-top: -11px;
}
#powerTip.w:after, #powerTip.w:before {
right: 100%;
}
#powerTip.w:after {
border-right-color: #ffffff;
border-width: 10px;
top: 50%;
margin-top: -10px;
}
#powerTip.w:before {
border-right-color: #808080;
border-width: 11px;
top: 50%;
margin-top: -11px;
}
@media print
{
#top { display: none; }
#side-nav { display: none; }
#nav-path { display: none; }
body { overflow:visible; }
h1, h2, h3, h4, h5, h6 { page-break-after: avoid; }
.summary { display: none; }
.memitem { page-break-inside: avoid; }
#doc-content
{
margin-left:0 !important;
height:auto !important;
width:auto !important;
overflow:inherit;
display:inline;
}
}
/* @group Markdown */
/*
table.markdownTable {
border-collapse:collapse;
margin-top: 4px;
margin-bottom: 4px;
}
table.markdownTable td, table.markdownTable th {
border: 1px solid #2D4068;
padding: 3px 7px 2px;
}
table.markdownTableHead tr {
}
table.markdownTableBodyLeft td, table.markdownTable th {
border: 1px solid #2D4068;
padding: 3px 7px 2px;
}
th.markdownTableHeadLeft th.markdownTableHeadRight th.markdownTableHeadCenter th.markdownTableHeadNone {
background-color: #374F7F;
color: #FFFFFF;
font-size: 110%;
padding-bottom: 4px;
padding-top: 5px;
}
th.markdownTableHeadLeft {
text-align: left
}
th.markdownTableHeadRight {
text-align: right
}
th.markdownTableHeadCenter {
text-align: center
}
*/
table.markdownTable {
border-collapse:collapse;
margin-top: 4px;
margin-bottom: 4px;
}
table.markdownTable td, table.markdownTable th {
border: 1px solid #2D4068;
padding: 3px 7px 2px;
}
table.markdownTable tr {
}
th.markdownTableHeadLeft, th.markdownTableHeadRight, th.markdownTableHeadCenter, th.markdownTableHeadNone {
background-color: #374F7F;
color: #FFFFFF;
font-size: 110%;
padding-bottom: 4px;
padding-top: 5px;
}
th.markdownTableHeadLeft, td.markdownTableBodyLeft {
text-align: left
}
th.markdownTableHeadRight, td.markdownTableBodyRight {
text-align: right
}
th.markdownTableHeadCenter, td.markdownTableBodyCenter {
text-align: center
}
/* @end */
\ No newline at end of file
dragon/core
===========
.. only:: html
Classes
-------
`class CPUContext <core/CPUContext.html>`_
: The cpu device context.
`class CUDAContext <core/CUDAContext.html>`_
: The cuda device context.
`class Graph <core/Graph.html>`_
: Graph to execute operators sequentially.
`class Operator <core/Operator.html>`_
: The base operator class with context.
`class Tensor <core/Tensor.html>`_
: The base tensor class, which may or may not manage its own memory.
`class TypeMeta <core/TypeMeta.html>`_
: Metaclass for all types.
`class UnifiedMemory <core/UnifiedMemory.html>`_
: Memory to manage both the host and device data.
`class Workspace <core/Workspace.html>`_
: Sandbox to isolate the resources and computations.
.. toctree::
:hidden:
core/CPUContext
core/CUDAContext
core/Graph
core/Operator
core/Tensor
core/TypeMeta
core/UnifiedMemory
core/Workspace
.. raw:: html
<style>
h1:before {
content: "Routine: ";
color: #103d3e;
}
</style>
CPUContext
==========
.. doxygenclass:: dragon::CPUContext
Constructors
------------
.. doxygenfunction:: dragon::CPUContext::CPUContext()
.. doxygenfunction:: dragon::CPUContext::CPUContext(unsigned int random_seed)
.. doxygenfunction:: dragon::CPUContext::CPUContext(const DeviceOption &option)
Public Functions
----------------
Copy
####
.. doxygenfunction:: dragon::CPUContext::Copy
Delete
######
.. doxygenfunction:: dragon::CPUContext::Delete
FinishDeviceComputation
#######################
.. doxygenfunction:: dragon::CPUContext::FinishDeviceComputation
Memset
######
.. doxygenfunction:: dragon::CPUContext::Memset
MemsetAsync
###########
.. doxygenfunction:: dragon::CPUContext::MemsetAsync
Memcpy
######
.. doxygenfunction:: dragon::CPUContext::Memcpy
MemcpyAsync
###########
.. doxygenfunction:: dragon::CPUContext::MemcpyAsync
New
###
.. doxygenfunction:: dragon::CPUContext::New
SwitchToDevice
##############
.. doxygenfunction:: dragon::CPUContext::SwitchToDevice()
SwitchToDevice
##############
.. doxygenfunction:: dragon::CPUContext::SwitchToDevice(int stream)
device
######
.. doxygenfunction:: dragon::CPUContext::device
rand_generator
##############
.. doxygenfunction:: dragon::CPUContext::rand_generator
set_stream
##########
.. doxygenfunction:: dragon::CPUContext::set_stream
stream
######
.. doxygenfunction:: dragon::CPUContext::stream
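Taken together, these members form a small malloc-style host allocator plus typed copy helpers. The sketch below is illustrative only and is not part of this commit: the include path and the ``main`` wrapper are assumptions, while the call signatures follow the ``dragon/core`` context header shown later in this diff. Because the async variants simply fall back to their synchronous counterparts on CPU, either form behaves the same here.

```cpp
#include "dragon/core/context.h"  // assumed header path

using dragon::CPUContext;

int main() {
  const size_t nbytes = 64 * sizeof(float);
  // Allocate two host buffers through the context allocator.
  auto* src = static_cast<float*>(CPUContext::New(nbytes));
  auto* dst = static_cast<float*>(CPUContext::New(nbytes));
  // Zero-fill the source, then copy it element-wise to the destination.
  CPUContext::Memset(nbytes, src, 0);
  CPUContext::Copy<float, CPUContext, CPUContext>(64, dst, src);
  // Return both buffers to the allocator.
  CPUContext::Delete(src);
  CPUContext::Delete(dst);
  return 0;
}
```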
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
CUDAContext
===========
.. doxygenclass:: dragon::CUDAContext
Constructors
------------
.. doxygenfunction:: dragon::CUDAContext::CUDAContext()
.. doxygenfunction:: dragon::CUDAContext::CUDAContext(int device)
.. doxygenfunction:: dragon::CUDAContext::CUDAContext(const DeviceOption &option)
Public Functions
----------------
Copy
####
.. doxygenfunction:: dragon::CUDAContext::Copy
Delete
######
.. doxygenfunction:: dragon::CUDAContext::Delete
FinishDeviceComputation
#######################
.. doxygenfunction:: dragon::CUDAContext::FinishDeviceComputation
Memset
######
.. doxygenfunction:: dragon::CUDAContext::Memset
MemsetAsync
###########
.. doxygenfunction:: dragon::CUDAContext::MemsetAsync
Memcpy
######
.. doxygenfunction:: dragon::CUDAContext::Memcpy(size_t n, void *dest, const void *src)
Memcpy
######
.. doxygenfunction:: dragon::CUDAContext::Memcpy(size_t n, void *dest, const void *src, int device)
MemcpyAsync
###########
.. doxygenfunction:: dragon::CUDAContext::MemcpyAsync
New
###
.. doxygenfunction:: dragon::CUDAContext::New
SwitchToDevice
##############
.. doxygenfunction:: dragon::CUDAContext::SwitchToDevice()
SwitchToDevice
##############
.. doxygenfunction:: dragon::CUDAContext::SwitchToDevice(int stream)
SynchronizeStream
#################
.. doxygenfunction:: dragon::CUDAContext::SynchronizeStream
cublas_handle
#############
.. doxygenfunction:: dragon::CUDAContext::cublas_handle
cuda_stream
###########
.. doxygenfunction:: dragon::CUDAContext::cuda_stream()
cuda_stream
###########
.. doxygenfunction:: dragon::CUDAContext::cuda_stream(int device, int stream)
cudnn_handle
############
.. doxygenfunction:: dragon::CUDAContext::cudnn_handle
curand_generator
################
.. doxygenfunction:: dragon::CUDAContext::curand_generator
device
######
.. doxygenfunction:: dragon::CUDAContext::device
rand_generator
##############
.. doxygenfunction:: dragon::CUDAContext::rand_generator
set_stream
##########
.. doxygenfunction:: dragon::CUDAContext::set_stream
stream
######
.. doxygenfunction:: dragon::CUDAContext::stream
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
Graph
=====
.. doxygenclass:: dragon::Graph
Constructors
------------
.. doxygenfunction:: dragon::Graph::Graph(const GraphDef& def, Workspace* ws)
Public Functions
----------------
Create
######
.. doxygenfunction:: dragon::Graph::Create
Run
###
.. doxygenfunction:: dragon::Graph::Run
arg
###
.. doxygenfunction:: dragon::Graph::arg
args
####
.. doxygenfunction:: dragon::Graph::args
def
###
.. doxygenfunction:: dragon::Graph::def
optimized_def
#############
.. doxygenfunction:: dragon::Graph::optimized_def
name
####
.. doxygenfunction:: dragon::Graph::name
phase
#####
.. doxygenfunction:: dragon::Graph::phase
ws
##
.. doxygenfunction:: dragon::Graph::ws
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
Operator
========
.. doxygenclass:: dragon::Operator
Constructors
------------
.. doxygenfunction:: dragon::Operator::Operator(const OperatorDef &def, Workspace *ws)
Public Functions
----------------
Arg
###
.. doxygenfunction:: dragon::Operator::Arg
Args
####
.. doxygenfunction:: dragon::Operator::Args
Buffer
######
.. doxygenfunction:: dragon::Operator::Buffer
Fuse
####
.. doxygenfunction:: dragon::Operator::Fuse
Input
#####
.. doxygenfunction:: dragon::Operator::Input
InputSize
#########
.. doxygenfunction:: dragon::Operator::InputSize
MessageForUnsupported
#####################
.. doxygenfunction:: dragon::Operator::MessageForUnsupported
Output
######
.. doxygenfunction:: dragon::Operator::Output(int i)
Output
######
.. doxygenfunction:: dragon::Operator::Output(int i, const vec32_t &inputs)
OutputSize
##########
.. doxygenfunction:: dragon::Operator::OutputSize
Run
###
.. doxygenfunction:: dragon::Operator::Run
UpdateFrom
##########
.. doxygenfunction:: dragon::Operator::UpdateFrom
data_format
###########
.. doxygenfunction:: dragon::Operator::data_format
arg
###
.. doxygenfunction:: dragon::Operator::arg
args
####
.. doxygenfunction:: dragon::Operator::args
def
###
.. doxygenfunction:: dragon::Operator::def
dtype
#####
.. doxygenfunction:: dragon::Operator::dtype
handle
######
.. doxygenfunction:: dragon::Operator::handle
name
####
.. doxygenfunction:: dragon::Operator::name
type
####
.. doxygenfunction:: dragon::Operator::type
phase
#####
.. doxygenfunction:: dragon::Operator::phase
ws
##
.. doxygenfunction:: dragon::Operator::ws
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
Tensor
======
.. doxygenclass:: dragon::Tensor
Constructors
------------
.. doxygenfunction:: dragon::Tensor::Tensor()
.. doxygenfunction:: dragon::Tensor::Tensor(const string &name)
.. doxygenfunction:: dragon::Tensor::Tensor(const vec64_t &dims)
.. doxygenfunction:: dragon::Tensor::Tensor(const vec32_t &dims)
.. doxygenfunction:: dragon::Tensor::Tensor(const TypeMeta &meta)
Public Functions
----------------
CopyFrom
########
.. doxygenfunction:: dragon::Tensor::CopyFrom(const Tensor &other, Context *ctx)
CopyFrom
########
.. doxygenfunction:: dragon::Tensor::CopyFrom(const vector<VectorType> &other)
CopyTo
######
.. doxygenfunction:: dragon::Tensor::CopyTo
DimString
#########
.. doxygenfunction:: dragon::Tensor::DimString() const
DimString
#########
.. doxygenfunction:: dragon::Tensor::DimString(const vector<int64_t> &dims)
IsType
######
.. doxygenfunction:: dragon::Tensor::IsType
Reset
#####
.. doxygenfunction:: dragon::Tensor::Reset
Reshape
#######
.. doxygenfunction:: dragon::Tensor::Reshape
ReshapeLike
###########
.. doxygenfunction:: dragon::Tensor::ReshapeLike
Share
#####
.. doxygenfunction:: dragon::Tensor::Share
SwitchToDevice
##############
.. doxygenfunction:: dragon::Tensor::SwitchToDevice
axis
####
.. doxygenfunction:: dragon::Tensor::axis
capacity
########
.. doxygenfunction:: dragon::Tensor::capacity
count
#####
.. doxygenfunction:: dragon::Tensor::count() const
count
#####
.. doxygenfunction:: dragon::Tensor::count(int64_t start) const
count
#####
.. doxygenfunction:: dragon::Tensor::count(int64_t start, int64_t end) const
data
####
.. doxygenfunction:: dragon::Tensor::data
dim
###
.. doxygenfunction:: dragon::Tensor::dim
dims
####
.. doxygenfunction:: dragon::Tensor::dims
empty
#####
.. doxygenfunction:: dragon::Tensor::empty
has_memory
##########
.. doxygenfunction:: dragon::Tensor::has_memory
has_name
########
.. doxygenfunction:: dragon::Tensor::has_name
meta
####
.. doxygenfunction:: dragon::Tensor::meta
memory
######
.. doxygenfunction:: dragon::Tensor::memory
memory_state
############
.. doxygenfunction:: dragon::Tensor::memory_state
mutable_data
############
.. doxygenfunction:: dragon::Tensor::mutable_data
name
####
.. doxygenfunction:: dragon::Tensor::name
nbytes
######
.. doxygenfunction:: dragon::Tensor::nbytes
ndim
####
.. doxygenfunction:: dragon::Tensor::ndim
raw_data
########
.. doxygenfunction:: dragon::Tensor::raw_data
raw_mutable_data
################
.. doxygenfunction:: dragon::Tensor::raw_mutable_data()
raw_mutable_data
################
.. doxygenfunction:: dragon::Tensor::raw_mutable_data(const TypeMeta &meta)
size
####
.. doxygenfunction:: dragon::Tensor::size
stride
######
.. doxygenfunction:: dragon::Tensor::stride
strides
#######
.. doxygenfunction:: dragon::Tensor::strides
version
#######
.. doxygenfunction:: dragon::Tensor::version
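As a rough usage sketch (not part of this commit), the members above combine roughly as follows. The header paths, the ``vec64_t`` overload of ``Reshape``, and the ``<T, Context>`` template form of ``mutable_data`` are assumptions inferred from the directives listed here, not confirmed signatures.

```cpp
#include "dragon/core/context.h"  // assumed header path
#include "dragon/core/tensor.h"   // assumed header path

using namespace dragon;

void FillOnes() {
  // Create a named tensor and give it a 2 x 3 shape.
  Tensor t("ones");
  t.Reshape(vec64_t({2, 3}));  // assumed vec64_t overload of Reshape
  // Assumption: mutable_data<T, Context>() lazily allocates and returns typed host memory.
  float* data = t.mutable_data<float, CPUContext>();
  for (int64_t i = 0; i < t.count(); ++i) {
    data[i] = 1.f;  // count() is the total number of elements (6 here)
  }
}
```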
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
TypeMeta
========
.. doxygenclass:: dragon::TypeMeta
Constructors
------------
.. doxygenfunction:: dragon::TypeMeta::TypeMeta()
.. doxygenfunction:: dragon::TypeMeta::TypeMeta(const TypeMeta &src)
Public Functions
----------------
Copy
####
.. doxygenfunction:: dragon::TypeMeta::Copy
Ctor
####
.. doxygenfunction:: dragon::TypeMeta::Ctor
Dtor
####
.. doxygenfunction:: dragon::TypeMeta::Dtor
Id
##
.. doxygenfunction:: dragon::TypeMeta::Id
Itemsize
########
.. doxygenfunction:: dragon::TypeMeta::Itemsize
Make
####
.. doxygenfunction:: dragon::TypeMeta::Make
Match
#####
.. doxygenfunction:: dragon::TypeMeta::Match
copy
####
.. doxygenfunction:: dragon::TypeMeta::copy
ctor
####
.. doxygenfunction:: dragon::TypeMeta::ctor
dtor
####
.. doxygenfunction:: dragon::TypeMeta::dtor
id
##
.. doxygenfunction:: dragon::TypeMeta::id
itemsize
########
.. doxygenfunction:: dragon::TypeMeta::itemsize
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
UnifiedMemory
=============
.. doxygenclass:: dragon::UnifiedMemory
Constructors
------------
.. doxygenfunction:: dragon::UnifiedMemory::UnifiedMemory()
.. doxygenfunction:: dragon::UnifiedMemory::UnifiedMemory(const TypeMeta &meta, size_t size)
Public Types
------------
State
#####
.. doxygenenum:: dragon::UnifiedMemory::State
Public Functions
----------------
SwitchToDevice
##############
.. doxygenfunction:: dragon::UnifiedMemory::SwitchToDevice
SwitchToCUDADevice
##################
.. doxygenfunction:: dragon::UnifiedMemory::SwitchToCUDADevice
ToCPU
#####
.. doxygenfunction:: dragon::UnifiedMemory::ToCPU
ToCUDA
######
.. doxygenfunction:: dragon::UnifiedMemory::ToCUDA
cpu_data
########
.. doxygenfunction:: dragon::UnifiedMemory::cpu_data
cuda_data
#########
.. doxygenfunction:: dragon::UnifiedMemory::cuda_data
device
######
.. doxygenfunction:: dragon::UnifiedMemory::device
info
####
.. doxygenfunction:: dragon::UnifiedMemory::info
mutable_cpu_data
################
.. doxygenfunction:: dragon::UnifiedMemory::mutable_cpu_data
mutable_cuda_data
#################
.. doxygenfunction:: dragon::UnifiedMemory::mutable_cuda_data
set_cpu_data
############
.. doxygenfunction:: dragon::UnifiedMemory::set_cpu_data
set_cuda_data
#############
.. doxygenfunction:: dragon::UnifiedMemory::set_cuda_data
size
####
.. doxygenfunction:: dragon::UnifiedMemory::size
state
#####
.. doxygenfunction:: dragon::UnifiedMemory::state
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
Workspace
=========
.. doxygenclass:: dragon::Workspace
Constructors
------------
.. doxygenfunction:: dragon::Workspace::Workspace(const string &name)
Public Functions
----------------
Clear
#####
.. doxygenfunction:: dragon::Workspace::Clear
CreateGraph
###########
.. doxygenfunction:: dragon::Workspace::CreateGraph
CreateTensor
############
.. doxygenfunction:: dragon::Workspace::CreateTensor
GetFillerInfo
#############
.. doxygenfunction:: dragon::Workspace::GetFillerInfo
GetTensor
#########
.. doxygenfunction:: dragon::Workspace::GetTensor
HasTensor
#########
.. doxygenfunction:: dragon::Workspace::HasTensor
MergeFrom
#########
.. doxygenfunction:: dragon::Workspace::MergeFrom
RegisterAlias
#############
.. doxygenfunction:: dragon::Workspace::RegisterAlias
ResetTensor
###########
.. doxygenfunction:: dragon::Workspace::ResetTensor
RunGraph
########
.. doxygenfunction:: dragon::Workspace::RunGraph
RunOperator
###########
.. doxygenfunction:: dragon::Workspace::RunOperator
TryGetTensor
############
.. doxygenfunction:: dragon::Workspace::TryGetTensor
UniqueName
##########
.. doxygenfunction:: dragon::Workspace::UniqueName
data
####
.. doxygenfunction:: dragon::Workspace::data(const vector<size_t> &segments)
data
####
.. doxygenfunction:: dragon::Workspace::data(const vector<int64_t> &segments)
graphs
######
.. doxygenfunction:: dragon::Workspace::graphs
name
####
.. doxygenfunction:: dragon::Workspace::name
tensors
#######
.. doxygenfunction:: dragon::Workspace::tensors
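A hedged sketch of how a workspace is typically driven (not part of this commit): the header path, the ``Tensor*`` return of ``CreateTensor``, and the ``vec64_t`` overload of ``Reshape`` are assumptions based only on the directives listed above.

```cpp
#include "dragon/core/workspace.h"  // assumed header path (assumed to expose Tensor)

using namespace dragon;

void Demo() {
  // A workspace is a sandbox: tensors created through it are owned by it.
  Workspace ws("demo");
  // Assumption: CreateTensor(name) returns a Tensor* registered under that name.
  Tensor* x = ws.CreateTensor("x");
  x->Reshape(vec64_t({4}));  // assumed vec64_t overload
  // The same tensor can later be looked up by name.
  if (ws.HasTensor("x")) {
    Tensor* same = ws.GetTensor("x");
    (void)same;  // points at the tensor created above
  }
}
```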
.. raw:: html
<style>
h1:before {
content: "dragon::";
color: #103d3e;
}
</style>
Dragon - C++ API
================
Routines
--------
.. only:: html
`Routine core <dragon/core.html>`_
: Public API for ``dragon/core`` routine.
.. toctree::
:hidden:
dragon/core
:: #########################################################
:: Command file to build on Windows for Sphinx documentation
:: #########################################################
@echo off
:: You can set these variables from the command line
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=..\..\_build\api\cc
set ALLSPHINXOPTS=-d %BUILDDIR%\doctrees %SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. doxygen to make Doxygen XML files
echo. html to make standalone HTML files
echo. debughtml to make debugging HTML files
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. latexpdf to make LaTeX files and run them through pdflatex
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
:: Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 2> nul
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "doxygen" (
(if exist %BUILDDIR%_doxygen rmdir /q /s %BUILDDIR%_doxygen) && mkdir %BUILDDIR%_doxygen && doxygen
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Doxygen XML files are in %BUILDDIR%_doxygen/xml.
goto end
)
if "%1" == "html" (
%SPHINXBUILD% -b html -j %NUMBER_OF_PROCESSORS% %ALLSPHINXOPTS% %BUILDDIR%
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%-latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%-latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%-latex
cd %BUILDDIR%-latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%-latex.
goto end
)
:end
# Makefile for Sphinx documentation # Makefile for Sphinx documentation
#
# You can set these variables from the command line. # You can set these variables from the command line
SPHINXOPTS = SPHINXOPTS =
SPHINXBUILD = sphinx-build SPHINXBUILD = sphinx-build
PAPER = PAPER =
BUILDDIR = ../../_build/api BUILDDIR = ../../_build/api/python
# User-friendly check for sphinx-build # User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed.)
endif endif
# Internal variables. # Internal variables
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others NUMBER_OF_PROCESSORS:=$(shell getconf _NPROCESSORS_ONLN)
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
NPROC:=$(shell getconf _NPROCESSORS_ONLN)
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext .PHONY: help clean html debughtml latex latexpdf
help: help:
@echo "Please use \`make <target>' where <target> is one of" @echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files" @echo " html to make standalone HTML files"
@echo " deployhtml to make HTML files copyied to website" @echo " debughtml to make debugging HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
clean: clean:
rm -rf $(BUILDDIR)/* rm -rf $(BUILDDIR)/*
html: html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/python $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)
@echo @echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/python." @echo "Build finished. The HTML pages are in $(BUILDDIR)."
debughtml: debughtml:
$(SPHINXBUILD) -b html -j ${NPROC} $(ALLSPHINXOPTS) $(BUILDDIR)/python $(SPHINXBUILD) -b html -j ${NUMBER_OF_PROCESSORS} $(ALLSPHINXOPTS) $(BUILDDIR)
@echo @echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/python." @echo "Build finished. The HTML pages are in $(BUILDDIR)."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Dragon.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Dragon.qhc"
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/Dragon"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Dragon"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex: latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)-latex
@echo @echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Build finished; the LaTeX files are in $(BUILDDIR)-latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \ @echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)." "(use \`make latexpdf' here to do that automatically)."
latexpdf: latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)-latex
@echo "Running LaTeX files through pdflatex..." @echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf $(MAKE) -C $(BUILDDIR)-latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." @echo "pdflatex finished; the PDF files are in $(BUILDDIR)-latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
...@@ -8,7 +8,6 @@ ...@@ -8,7 +8,6 @@
# <https://opensource.org/licenses/BSD-2-Clause> # <https://opensource.org/licenses/BSD-2-Clause>
# #
# ------------------------------------------------------------ # ------------------------------------------------------------
"""Sphinx configuration for Python API.""" """Sphinx configuration for Python API."""
from __future__ import absolute_import from __future__ import absolute_import
...@@ -43,7 +42,9 @@ napoleon_use_rtype = False ...@@ -43,7 +42,9 @@ napoleon_use_rtype = False
# Project # Project
project = 'dragon' project = 'dragon'
copyright = 'Copyright (c) 2017-present, SeetaTech, Co.,Ltd' copyright = 'Copyright (c) 2017-present, SeetaTech, Co.,Ltd'
author = 'Ting Pan\\\\tingpan@seetatech.com' author = 'SeetaTech'
with open('../../../dragon/version.txt', 'r') as f:
version = f.read().strip()
# HTML # HTML
html_theme = 'seeta' html_theme = 'seeta'
...@@ -60,17 +61,18 @@ html_theme_options = { ...@@ -60,17 +61,18 @@ html_theme_options = {
'navbar_links': { 'navbar_links': {
'Install': path_to('../../install', 1), 'Install': path_to('../../install', 1),
'API': [ 'API': [
('C++', path_to('../cc', 1)), ('master', path_to('../../api/python', 1)),
('Python', path_to('', 1)) ('versions...', path_to('../../versions', 1)),
], ],
'Github': 'https://github.com/seetaresearch/dragon', 'Github': 'https://github.com/seetaresearch/dragon',
}, },
'navbar_logo_link': path_to('../..', 1), 'navbar_logo_link': path_to('../..', 1),
'sidebar_title': 'Python v0.3.0', 'sidebar_title': 'Python v{}'.format(version),
'sidebar_title_link': path_to('../../versions', 1), 'sidebar_title_link': path_to('../../versions', 1),
'breadcrumb_links': [ 'breadcrumb_links': [
('Dragon', path_to('../..', 1)), ('Dragon', path_to('../..', 1)),
('API', path_to('../../versions', 1)), ('API', path_to('../../versions', 1)),
('Dragon v{}'.format(version.replace('a0', '-a0')), path_to('../../api', 1)),
('Python', path_to('', 1)), ('Python', path_to('', 1)),
], ],
} }
......
...@@ -24,7 +24,7 @@ name ...@@ -24,7 +24,7 @@ name
ndim ndim
#### ####
.. autoattribute:: dragon.EagerTensor.name .. autoattribute:: dragon.EagerTensor.ndim
shape shape
##### #####
......
@ECHO OFF :: #########################################################
:: Command file to build on Windows for Sphinx documentation
:: #########################################################
REM Command file for Sphinx documentation @echo off
:: You can set these variables from the command line
if "%SPHINXBUILD%" == "" ( if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build set SPHINXBUILD=sphinx-build
) )
set BUILDDIR=..\..\_build\api set BUILDDIR=..\..\_build\api\python
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set ALLSPHINXOPTS=-d %BUILDDIR%\doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" ( if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
) )
if "%1" == "" goto help if "%1" == "" goto help
...@@ -19,25 +20,9 @@ if "%1" == "help" ( ...@@ -19,25 +20,9 @@ if "%1" == "help" (
:help :help
echo.Please use `make ^<target^>` where ^<target^> is one of echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories echo. debughtml to make debugging HTML files
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files echo. latexpdf to make LaTeX files and run them through pdflatex
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end goto end
) )
...@@ -47,13 +32,7 @@ if "%1" == "clean" ( ...@@ -47,13 +32,7 @@ if "%1" == "clean" (
goto end goto end
) )
if "%2" == "f" ( :: Check if sphinx-build is available and fallback to Python version if any
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
)
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 2> nul %SPHINXBUILD% 2> nul
if errorlevel 9009 goto sphinx_python if errorlevel 9009 goto sphinx_python
goto sphinx_ok goto sphinx_ok
...@@ -76,192 +55,37 @@ if errorlevel 9009 ( ...@@ -76,192 +55,37 @@ if errorlevel 9009 (
:sphinx_ok :sphinx_ok
if "%1" == "html" ( if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/python %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/python.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1 if errorlevel 1 exit /b 1
echo. echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^ echo.Build finished. The HTML pages are in %BUILDDIR%.
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Dragon.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Dragon.ghc
goto end goto end
) )
if "%1" == "devhelp" ( if "%1" == "debughtml" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp %SPHINXBUILD% -b html -j %NUMBER_OF_PROCESSORS% %ALLSPHINXOPTS% %BUILDDIR%
if errorlevel 1 exit /b 1 if errorlevel 1 exit /b 1
echo. echo.
echo.Build finished. echo.Build finished. The HTML pages are in %BUILDDIR%.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end goto end
) )
if "%1" == "latex" ( if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%-latex
if errorlevel 1 exit /b 1 if errorlevel 1 exit /b 1
echo. echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. echo.Build finished; the LaTeX files are in %BUILDDIR%-latex.
goto end goto end
) )
if "%1" == "latexpdf" ( if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%-latex
cd %BUILDDIR%/latex cd %BUILDDIR%-latex
make all-pdf make all-pdf
cd %~dp0 cd %~dp0
echo. echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex. echo.Build finished; the PDF files are in %BUILDDIR%-latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end goto end
) )
......
...@@ -12,7 +12,7 @@ regularizers ...@@ -12,7 +12,7 @@ regularizers
`class L1L2 <regularizers/L1L2.html>`_ `class L1L2 <regularizers/L1L2.html>`_
: The L1L2 regularizer. : The L1L2 regularizer.
`class L2 <regularizers/L1.html>`_ `class L2 <regularizers/L2.html>`_
: The L1 regularizer. : The L2 regularizer.
`class Regularizer <regularizers/Regularizer.html>`_ `class Regularizer <regularizers/Regularizer.html>`_
......
...@@ -17,15 +17,18 @@ ...@@ -17,15 +17,18 @@
namespace dragon { namespace dragon {
/*!
* \brief The cpu device context.
*/
class DRAGON_API CPUContext { class DRAGON_API CPUContext {
public: public:
/*! \brief Default Constructor */ /*! \brief Default Constructor */
explicit CPUContext() : random_seed_(3) {} CPUContext() : random_seed_(3) {}
/*! \brief Constructor with the specified random seed */ /*! \brief Constructor with the random seed */
explicit CPUContext(unsigned int random_seed) : random_seed_(random_seed) {} explicit CPUContext(unsigned int random_seed) : random_seed_(random_seed) {}
/*! \brief Constructor with the specified device option */ /*! \brief Constructor with the device option */
explicit CPUContext(const DeviceOption& option) explicit CPUContext(const DeviceOption& option)
: random_seed_( : random_seed_(
option.has_random_seed() ? option.random_seed() option.has_random_seed() ? option.random_seed()
...@@ -34,74 +37,74 @@ class DRAGON_API CPUContext { ...@@ -34,74 +37,74 @@ class DRAGON_API CPUContext {
/*! \brief Destructor */ /*! \brief Destructor */
virtual ~CPUContext() {} virtual ~CPUContext() {}
/*! \brief Alloc the memory */ /*! \brief Allocate a block of memory */
static void* New(size_t nbytes) { static void* New(size_t size) {
void* data = malloc(nbytes); void* data = malloc(size);
CHECK(data) << "\nAllocate memory with " << nbytes << " bytes failed."; CHECK(data) << "\nAllocate memory with " << size << " bytes failed.";
return data; return data;
} }
/*! \brief Zero-Reset the memory */ /*! \brief Set a memory block to the given value */
static void Memset(size_t nbytes, void* ptr) { static void Memset(size_t n, void* ptr, int value = 0) {
memset(ptr, 0, nbytes); memset(ptr, value, n);
} }
/*! \brief Copy the memory */ /*! \brief Set a memory block to the given value asynchronously */
template <class DestContext, class SrcContext> void MemsetAsync(size_t n, void* ptr, int value) {
static void Memcpy(size_t nbytes, void* dest, const void* src) { memset(ptr, value, n);
memcpy(dest, src, nbytes);
} }
/*! \brief Free the memory */ /*! \brief Copy a memory block to the destination */
static void Delete(void* data) { template <class DestContext, class SrcContext>
free(data); static void Memcpy(size_t n, void* dest, const void* src) {
memcpy(dest, src, n);
} }
/*! \brief Zero-Reset the memory asynchronously */ /*! \brief Copy a memory block to the destination asynchronously */
void MemsetAsync(size_t nbytes, void* ptr) { template <class DestContext, class SrcContext>
memset(ptr, 0, nbytes); void MemcpyAsync(size_t n, void* dest, const void* src) {
memcpy(dest, src, n);
} }
/*! \brief Copy the memory asynchronously */ /*! \brief Deallocate a memory block */
template <class DestContext, class SrcContext> static void Delete(void* ptr) {
void MemcpyAsync(size_t nbytes, void* dest, const void* src) { free(ptr);
memcpy(dest, src, nbytes);
} }
/*! \brief Switch to the device of this context */ /*! \brief Switch to the device in current thread */
void SwitchToDevice() {} void SwitchToDevice() {}
/*! \brief Switch to the device with the given stream */ /*! \brief Switch to the device and select given stream in current thread */
void SwitchToDevice(const int stream_id) {} void SwitchToDevice(int stream) {}
/*! \brief Copy the memory with given type asynchronously */ /*! \brief Copy a typed memory block to the destination */
template <typename T, class DestContext, class SrcContext> template <typename T, class DestContext, class SrcContext>
void Copy(int n, T* dest, const T* src) { static void Copy(int n, T* dest, const T* src) {
if (dest == src) return; if (dest == src) return;
if (std::is_fundamental<T>::value) { if (std::is_fundamental<T>::value) {
Memcpy<DestContext, SrcContext>( Memcpy<DestContext, SrcContext>(
n * sizeof(T), (void*)dest, (const void*)src); n * sizeof(T), (void*)dest, (const void*)src);
} else { } else {
for (int i = 0; i < n; i++) { for (int i = 0; i < n; ++i) {
dest[i] = src[i]; dest[i] = src[i];
} }
} }
} }
/*! \brief Synchronize the dispatched operations */ /*! \brief Wait for the dispatched computation to complete */
void FinishDeviceComputation() {} void FinishDeviceComputation() {}
/*! \brief Return the device index */ /*! \brief Return the device index */
int device_id() const { int device() const {
return 0; return 0;
} }
/*! \brief Return the stream index */ /*! \brief Return the stream index */
int stream_id() const { int stream() const {
return 0; return 0;
} }
/*! \brief Return the internal random generator */ /*! \brief Return the random generator */
std::mt19937* rand_generator() { std::mt19937* rand_generator() {
if (!rand_generator_.get()) { if (!rand_generator_.get()) {
rand_generator_.reset(new std::mt19937(random_seed_)); rand_generator_.reset(new std::mt19937(random_seed_));
...@@ -110,13 +113,13 @@ class DRAGON_API CPUContext { ...@@ -110,13 +113,13 @@ class DRAGON_API CPUContext {
} }
/*! \brief Set the stream index */ /*! \brief Set the stream index */
void set_stream_id(int stream_id) {} void set_stream(int stream) {}
private: private:
/*! \brief Store the random seed */ /*! \brief The random seed */
unsigned int random_seed_; unsigned int random_seed_;
/*! \brief Store the internal random generator */ /*! \brief The random generator */
unique_ptr<std::mt19937> rand_generator_; unique_ptr<std::mt19937> rand_generator_;
}; };
......
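To orient readers around the renamed CPUContext members above, here is a minimal host-side sketch. It only touches methods shown in this hunk; the `dragon/core/context.h` include path and building against the dragon library are assumptions.

```cpp
#include <iostream>
#include "dragon/core/context.h"  // assumed header path

int main() {
  using dragon::CPUContext;
  const int count = 16;
  const size_t nbytes = count * sizeof(float);
  // Allocate two host blocks and zero-fill the source (value defaults to 0).
  auto* src = static_cast<float*>(CPUContext::New(nbytes));
  auto* dst = static_cast<float*>(CPUContext::New(nbytes));
  CPUContext::Memset(nbytes, src);
  src[0] = 3.14f;
  // The typed Copy falls back to a plain memcpy for fundamental types.
  CPUContext::Copy<float, CPUContext, CPUContext>(count, dst, src);
  std::cout << dst[0] << std::endl;  // 3.14
  // The generator is created lazily from the seed passed to the constructor.
  CPUContext ctx(1337);
  std::cout << (*ctx.rand_generator())() << std::endl;
  CPUContext::Delete(src);
  CPUContext::Delete(dst);
  return 0;
}
```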
...@@ -13,8 +13,6 @@ ...@@ -13,8 +13,6 @@
#ifndef DRAGON_CORE_CONTEXT_CNML_H_ #ifndef DRAGON_CORE_CONTEXT_CNML_H_
#define DRAGON_CORE_CONTEXT_CNML_H_ #define DRAGON_CORE_CONTEXT_CNML_H_
/* CAMBRICON CNRT && CNML Environment */
#include "dragon/core/common.h" #include "dragon/core/common.h"
struct cnrtStream; struct cnrtStream;
...@@ -28,11 +26,19 @@ typedef struct cnmlFusionOp* cnmlFusionOp_t; ...@@ -28,11 +26,19 @@ typedef struct cnmlFusionOp* cnmlFusionOp_t;
namespace dragon { namespace dragon {
class CNRTObject; /*!
* \brief The cnml device context.
*/
class CNMLContext { class CNMLContext {
public: public:
/*! \brief Default Constructor */ /*! \brief Default constructor */
CNMLContext() : device_id_(0), random_seed_(DEFAULT_RNG_SEED) {}
/*! \brief Constructor with the device index */
explicit CNMLContext(int device)
: device_id_(device), random_seed_(DEFAULT_RNG_SEED) {}
/*! \brief Constructor with the device option */
explicit CNMLContext(const DeviceOption& option) explicit CNMLContext(const DeviceOption& option)
: device_id_(option.device_id()), : device_id_(option.device_id()),
random_seed_( random_seed_(
...@@ -41,77 +47,63 @@ class CNMLContext { ...@@ -41,77 +47,63 @@ class CNMLContext {
CHECK_EQ(option.device_type(), PROTO_CNML); CHECK_EQ(option.device_type(), PROTO_CNML);
} }
/*! \brief Constructor with the specified device index */ /*! \brief Allocate a block of memory */
explicit CNMLContext(int device_id = 0) static void* New(size_t size) {
: device_id_(device_id), random_seed_(DEFAULT_RNG_SEED) {} return nullptr;
}
/*! \brief Alloc the memory */ /*! \brief Set a memory block to the given value */
static void* New(size_t nbytes); static void Memset(size_t n, void* ptr, int value) {}
/*! \brief Zero-Reset the memory */ /*! \brief Set a memory block to the given value asynchronously */
static void Memset(size_t nbytes, void* ptr); void MemsetAsync(size_t n, void* ptr, int value) {
Memset(n, ptr, value);
}
/*! \brief Copy the memory */ /*! \brief Copy a memory block to the destination */
template <class DestContext, class SrcContext> template <class DestContext, class SrcContext>
static void Memcpy(size_t nbytes, void* dest, const void* src); static void Memcpy(size_t n, void* dest, const void* src) {}
/*! \brief Free the memory */
static void Delete(void* data);
/*! \brief Zero-Reset the memory asynchronously */
void MemsetAsync(size_t nbytes, void* ptr) {
Memset(nbytes, ptr);
}
/*! \brief Copy the memory asynchronously */ /*! \brief Copy a memory block to the destination asynchronously */
template <class DestContext, class SrcContext> template <class DestContext, class SrcContext>
void MemcpyAsync(size_t nbytes, void* dest, const void* src) { void MemcpyAsync(size_t n, void* dest, const void* src) {
Memcpy<DestContext, SrcContext>(dest, src, nbytes); Memcpy<DestContext, SrcContext>(dest, src, n);
} }
/*! \brief Switch to the device with the given stream */ /*! \brief Deallocate a memory block */
void SwitchToDevice(int stream_id) {} static void Delete(void* ptr) {}
/*! \brief Switch to the device of this context */ /*! \brief Switch to the device in current thread */
void SwitchToDevice() { void SwitchToDevice() {
SwitchToDevice(0); SwitchToDevice(0);
} }
/*! \brief Synchronize the dispatched operations */ /*! \brief Switch to the device and select given stream in current thread */
void FinishDeviceComputation() {} void SwitchToDevice(int stream) {}
/*! \brief Return the specified cnrt stream */ /*! \brief Wait for the dispatched computation to complete */
static cnrtStream_t cnrt_stream(int device_id, int stream_id); void FinishDeviceComputation() {}
/*! \brief Return the internal cnrt stream */ /*! \brief Return the cnrt stream */
cnrtStream_t cnrt_stream() { cnrtStream_t cnrt_stream() {
return cnrt_stream(device_id_, stream_id_); return cnrt_stream(device_id_, stream_id_);
} }
/*! \brief Return the specified cnrt stream */
static cnrtStream_t cnrt_stream(int device_id, int stream_id) {
return (cnrtStream_t) nullptr;
}
/*! \brief Return the device index */ /*! \brief Return the device index */
int device_id() const { int device() const {
return device_id_; return device_id_;
} }
/*! \brief Return the stream index */ /*! \brief Return the stream index */
int stream_id() const { int stream() const {
return stream_id_; return stream_id_;
} }
/*! \brief Return the global context locker */
static std::mutex& mutex() {
static std::mutex m;
return m;
}
/*! \brief Return the thread local cnrt object */
static CNRTObject* obj();
/*! \brief Set the stream index */
void set_stream_id(int stream_id) {
stream_id_ = stream_id;
}
private: private:
int device_id_, stream_id_ = 1, random_seed_; int device_id_, stream_id_ = 1, random_seed_;
unique_ptr<std::mt19937> rand_generator_; unique_ptr<std::mt19937> rand_generator_;
......
...@@ -13,8 +13,6 @@ ...@@ -13,8 +13,6 @@
#ifndef DRAGON_CORE_CONTEXT_CUDA_H_ #ifndef DRAGON_CORE_CONTEXT_CUDA_H_
#define DRAGON_CORE_CONTEXT_CUDA_H_ #define DRAGON_CORE_CONTEXT_CUDA_H_
/* NVIDIA CUDA Environment */
#include "dragon/core/common.h" #include "dragon/core/common.h"
#include "dragon/utils/cuda_device.h" #include "dragon/utils/cuda_device.h"
#include "dragon/utils/cudnn_device.h" #include "dragon/utils/cudnn_device.h"
...@@ -164,11 +162,23 @@ class CUDAObject { ...@@ -164,11 +162,23 @@ class CUDAObject {
bool cudnn_enabled_ = true; bool cudnn_enabled_ = true;
bool cudnn_benchmark_ = false; bool cudnn_benchmark_ = false;
private:
DISABLE_COPY_AND_ASSIGN(CUDAObject);
}; };
/*!
* \brief The cuda device context.
*/
class DRAGON_API CUDAContext { class DRAGON_API CUDAContext {
public: public:
/*! \brief Default Constructor */ /*! \brief Default constructor */
CUDAContext() : device_id_(0), random_seed_(DEFAULT_RNG_SEED) {}
/*! \brief Constructor with the device index */
explicit CUDAContext(int device) : device_id_(device) {}
/*! \brief Constructor with the device option */
explicit CUDAContext(const DeviceOption& option) explicit CUDAContext(const DeviceOption& option)
: device_id_(option.device_id()), : device_id_(option.device_id()),
random_seed_( random_seed_(
...@@ -177,104 +187,97 @@ class DRAGON_API CUDAContext { ...@@ -177,104 +187,97 @@ class DRAGON_API CUDAContext {
CHECK_EQ(option.device_type(), PROTO_CUDA); CHECK_EQ(option.device_type(), PROTO_CUDA);
} }
/*! \brief Constructor with the specified device index */ /*! \brief Allocate a block of memory */
explicit CUDAContext(int device_id = 0) static void* New(size_t size) {
: device_id_(device_id), random_seed_(DEFAULT_RNG_SEED) {}
/*! \brief Alloc the memory */
static void* New(size_t nbytes) {
void* data; void* data;
cudaMalloc(&data, nbytes); cudaMalloc(&data, size);
CHECK(data) << "\nAllocate cuda memory with " << nbytes << " bytes failed."; CHECK(data) << "\nAllocate cuda memory with " << size << " bytes failed.";
return data; return data;
} }
/*! \brief Zero-Reset the memory */ /*! \brief Set a memory block to the given value */
static void Memset(size_t nbytes, void* ptr) { static void Memset(size_t n, void* ptr, int value = 0) {
auto stream = object()->default_stream(); auto stream = object()->default_stream();
CUDA_CHECK(cudaMemsetAsync(ptr, 0, nbytes, stream)); CUDA_CHECK(cudaMemsetAsync(ptr, value, n, stream));
SyncStream(stream); SynchronizeStream(stream);
} }
/*! \brief Copy the memory */ /*! \brief Set a memory block to the given value asynchronously */
template <class DestContext, class SrcContext> void MemsetAsync(size_t n, void* ptr, int value = 0) {
static void Memcpy(size_t nbytes, void* dest, const void* src) { CUDA_CHECK(cudaMemsetAsync(ptr, value, n, cuda_stream()));
Memcpy<DestContext, SrcContext>(nbytes, dest, src, current_device());
} }
/*! \brief Copy the memory using specific stream */ /*! \brief Copy a memory block to the destination */
template <class DestContext, class SrcContext> template <class DestContext, class SrcContext>
static void static void Memcpy(size_t n, void* dest, const void* src) {
Memcpy(size_t nbytes, void* dest, const void* src, int device_id) { Memcpy<DestContext, SrcContext>(n, dest, src, current_device());
auto stream = object()->default_stream(device_id);
CUDA_CHECK(cudaMemcpyAsync(dest, src, nbytes, cudaMemcpyDefault, stream));
SyncStream(stream);
} }
/*! \brief Synchronize the specified cuda stream */ /*! \brief Copy a memory block to the destination using given device */
static void SyncStream(cudaStream_t stream) { template <class DestContext, class SrcContext>
cudaStreamSynchronize(stream); static void Memcpy(size_t n, void* dest, const void* src, int device) {
auto error = cudaGetLastError(); auto stream = object()->default_stream(device);
CHECK_EQ(error, cudaSuccess) CUDA_CHECK(cudaMemcpyAsync(dest, src, n, cudaMemcpyDefault, stream));
<< "\nCUDA Error: " << cudaGetErrorString(error); SynchronizeStream(stream);
}
/*! \brief Free the memory */
static void Delete(void* data) {
cudaFree(data);
} }
/*! \brief Zero-Reset the memory asynchronously */ /*! \brief Copy a memory block to the destination asynchronously */
void MemsetAsync(size_t nbytes, void* ptr) { template <class DestContext, class SrcContext>
CUDA_CHECK(cudaMemsetAsync(ptr, 0, nbytes, cuda_stream())); void MemcpyAsync(size_t n, void* dest, const void* src) {
CUDA_CHECK(cudaMemcpyAsync(dest, src, n, cudaMemcpyDefault, cuda_stream()));
} }
/*! \brief Copy the memory asynchronously */ /*! \brief Synchronize the given stream */
template <class DestContext, class SrcContext> static void SynchronizeStream(cudaStream_t stream) {
void MemcpyAsync(size_t nbytes, void* dest, const void* src) { cudaStreamSynchronize(stream);
CUDA_CHECK( auto err = cudaGetLastError();
cudaMemcpyAsync(dest, src, nbytes, cudaMemcpyDefault, cuda_stream())); CHECK_EQ(err, cudaSuccess) << "\nCUDA Error: " << cudaGetErrorString(err);
} }
/*! \brief Switch to the device with the given stream */ /*! \brief Deallocate a memory block */
void SwitchToDevice(const int stream_id) { static void Delete(void* ptr) {
CUDA_CHECK(cudaSetDevice(device_id_)); cudaFree(ptr);
stream_id_ = stream_id;
} }
/*! \brief Switch to the device of this context */ /*! \brief Switch to the device in current thread */
void SwitchToDevice() { void SwitchToDevice() {
SwitchToDevice(0); SwitchToDevice(0);
} }
/*! \brief Copy the memory with given type asynchronously */ /*! \brief Switch to the device and select given stream in current thread */
void SwitchToDevice(int stream) {
CUDA_CHECK(cudaSetDevice(device_id_));
stream_id_ = stream;
}
/*! \brief Copy a typed memory block to the destination */
template <typename T, class DestContext, class SrcContext> template <typename T, class DestContext, class SrcContext>
void Copy(int n, T* dest, const T* src) { void Copy(int n, T* dest, const T* src) {
if (dest == src) return; if (dest == src) return;
MemcpyAsync<SrcContext, DestContext>(n * sizeof(T), dest, src); MemcpyAsync<SrcContext, DestContext>(n * sizeof(T), dest, src);
} }
/*! \brief Synchronize the dispatched operations */ /*! \brief Wait for the dispatched computation to complete */
void FinishDeviceComputation() { void FinishDeviceComputation() {
SyncStream(cuda_stream()); SynchronizeStream(cuda_stream());
} }
/*! \brief Return the internal cuda stream */ /*! \brief Return the cuda stream */
cudaStream_t cuda_stream() { cudaStream_t cuda_stream() {
return cuda_stream(device_id_, stream_id_); return cuda_stream(device_id_, stream_id_);
} }
/*! \brief Return the specified cuda stream */ /*! \brief Return the specified cuda stream */
cudaStream_t cuda_stream(int device_id, int stream_id) { cudaStream_t cuda_stream(int device, int stream) {
return object()->stream(device_id, stream_id); return object()->stream(device, stream);
} }
/*! \brief Return the internal cublas handle */ /*! \brief Return the cublas handle */
cublasHandle_t cublas_handle() { cublasHandle_t cublas_handle() {
return object()->cublas_handle(device_id_, stream_id_); return object()->cublas_handle(device_id_, stream_id_);
} }
/*! \brief Return the internal cuda random generator */ /*! \brief Return the curand generator */
curandGenerator_t& curand_generator() { curandGenerator_t& curand_generator() {
if (!curand_generator_) { if (!curand_generator_) {
CUDADeviceGuard guard(device_id_); CUDADeviceGuard guard(device_id_);
...@@ -287,35 +290,35 @@ class DRAGON_API CUDAContext { ...@@ -287,35 +290,35 @@ class DRAGON_API CUDAContext {
return curand_generator_; return curand_generator_;
} }
/*! \brief Return the internal cudnn handle */ /*! \brief Return the cudnn handle */
#ifdef USE_CUDNN #ifdef USE_CUDNN
cudnnHandle_t cudnn_handle() { cudnnHandle_t cudnn_handle() {
return object()->cudnn_handle(device_id_, stream_id_); return object()->cudnn_handle(device_id_, stream_id_);
} }
#endif #endif
/*! \brief Return the device index of this context */ /*! \brief Return the device index */
int device_id() const { int device() const {
return device_id_; return device_id_;
} }
/*! \brief Return the stream index */
int stream() const {
return stream_id_;
}
/*! \brief Return the device index of current thread */ /*! \brief Return the device index of current thread */
static int current_device() { static int current_device() {
return CUDA_GET_DEVICE(); return CUDA_GET_DEVICE();
} }
/*! \brief Return the stream id */ /*! \brief Return the shared context mutex */
int stream_id() const {
return stream_id_;
}
/*! \brief Return the global context locker */
static std::mutex& mutex(); static std::mutex& mutex();
/*! \brief Return the thread local cuda object */ /*! \brief Return the thread-local cuda object */
static CUDAObject* object(); static CUDAObject* object();
/*! \brief Return the internal random generator */ /*! \brief Return the random generator */
std::mt19937* rand_generator() { std::mt19937* rand_generator() {
if (!rand_generator_.get()) { if (!rand_generator_.get()) {
rand_generator_.reset(new std::mt19937(random_seed_)); rand_generator_.reset(new std::mt19937(random_seed_));
...@@ -323,9 +326,9 @@ class DRAGON_API CUDAContext { ...@@ -323,9 +326,9 @@ class DRAGON_API CUDAContext {
return rand_generator_.get(); return rand_generator_.get();
} }
/*! \brief Set the stream id */ /*! \brief Set the stream index */
void set_stream_id(int stream_id) { void set_stream(int stream) {
stream_id_ = stream_id; stream_id_ = stream;
} }
private: private:
...@@ -338,46 +341,51 @@ class DRAGON_API CUDAContext { ...@@ -338,46 +341,51 @@ class DRAGON_API CUDAContext {
class DRAGON_API CUDAContext { class DRAGON_API CUDAContext {
public: public:
/*! \brief Default Constructor */ /*! \brief Default constructor */
explicit CUDAContext(const DeviceOption& option) { explicit CUDAContext() {
CUDA_NOT_COMPILED;
}
/*! \brief Constructor with the device index */
explicit CUDAContext(int device) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Constructor with the specified device id */ /*! \brief Constructor with the device option */
explicit CUDAContext(const int device_id = 0) { explicit CUDAContext(const DeviceOption& option) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Alloc the memory */ /*! \brief Allocate a block of memory */
static void* New(size_t nbytes) { static void* New(size_t nbytes) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
return nullptr; return nullptr;
} }
/*! \brief Zero-Reset the memory */ /*! \brief Set a memory block to the given value */
static void Memset(size_t nbytes, void* ptr) { static void Memset(size_t nbytes, void* ptr, int value = 0) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Copy the memory */ /*! \brief Set a memory block to the given value asynchronously */
template <class DestContext, class SrcContext> void MemsetAsync(size_t nbytes, void* ptr, int value = 0) {
static void Memcpy(size_t nbytes, void* dest, const void* src) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Copy the memory using specific stream */ /*! \brief Copy a memory block to the destination */
template <class DestContext, class SrcContext> template <class DestContext, class SrcContext>
static void Memcpy(size_t nbytes, void* dst, const void* src, int device_id) { static void Memcpy(size_t nbytes, void* dest, const void* src) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Free the memory */ /*! \brief Copy a memory block to the destination using given device */
static void Delete(void* data) { template <class DestContext, class SrcContext>
static void Memcpy(size_t nbytes, void* dst, const void* src, int device_id) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Zero-Reset the memory asynchronously */ /*! \brief Deallocate a memory block */
void MemsetAsync(size_t nbytes, void* ptr) { static void Delete(void* ptr) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
...@@ -387,23 +395,23 @@ class DRAGON_API CUDAContext { ...@@ -387,23 +395,23 @@ class DRAGON_API CUDAContext {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Switch to the device with the given stream */ /*! \brief Switch to the device in current thread */
void SwitchToDevice(int stream_id) { void SwitchToDevice() {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Switch to the device of this context */ /*! \brief Switch to the device and select given stream in current thread */
void SwitchToDevice() { void SwitchToDevice(int stream_id) {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Synchronize the dispatched operations */ /*! \brief Wait for the dispatched computation to complete */
void FinishDeviceComputation() { void FinishDeviceComputation() {
CUDA_NOT_COMPILED; CUDA_NOT_COMPILED;
} }
/*! \brief Return the device index of this context */ /*! \brief Return the device index */
int device_id() const { int device() const {
return 0; return 0;
} }
...@@ -412,13 +420,10 @@ class DRAGON_API CUDAContext { ...@@ -412,13 +420,10 @@ class DRAGON_API CUDAContext {
return 0; return 0;
} }
/*! \brief Return the stream id */ /*! \brief Return the stream index */
int stream_id() const { int stream() const {
return 0; return 0;
} }
/*! \brief Set the stream id */
void set_stream_id(int stream_id) {}
}; };
#endif // USE_CUDA #endif // USE_CUDA
......
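A hedged sketch of moving data through the refreshed CUDAContext API above. It requires a CUDA build of dragon; the `dragon/core/context_cuda.h` path and the presence of device 0 are assumptions.

```cpp
#include <iostream>
#include <vector>
#include "dragon/core/context.h"       // CPUContext (assumed header path)
#include "dragon/core/context_cuda.h"  // CUDAContext (assumed header path)

int main() {
  using dragon::CPUContext;
  using dragon::CUDAContext;
  std::vector<float> host(64, 1.f), result(64, 0.f);
  const size_t nbytes = host.size() * sizeof(float);

  CUDAContext ctx(0);    // context bound to device 0
  ctx.SwitchToDevice();  // select the device and the default stream

  // Host <-> device transfers go through the same Memcpy entry point;
  // the direction is resolved by cudaMemcpyDefault internally.
  void* device_ptr = CUDAContext::New(nbytes);
  CUDAContext::Memcpy<CUDAContext, CPUContext>(nbytes, device_ptr, host.data());
  CUDAContext::Memcpy<CPUContext, CUDAContext>(nbytes, result.data(), device_ptr);
  ctx.FinishDeviceComputation();  // block until the dispatched work is done

  std::cout << result[0] << std::endl;  // 1
  CUDAContext::Delete(device_ptr);
  return 0;
}
```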
#include <regex>
#include "dragon/core/graph.h" #include "dragon/core/graph.h"
#include "dragon/core/graph_gradient.h" #include "dragon/core/graph_gradient.h"
#include "dragon/core/graph_optimizer.h" #include "dragon/core/graph_optimizer.h"
...@@ -46,8 +48,8 @@ GraphBase::GraphBase(const GraphDef& def, Workspace* ws) ...@@ -46,8 +48,8 @@ GraphBase::GraphBase(const GraphDef& def, Workspace* ws)
} }
} }
bool Graph::Create(const GraphDef& def, Workspace* ws) { bool Graph::Create(const GraphDef& def) {
this->opt_def_ = def; // Store for debugging this->optimized_def_ = def; // Store for debugging
bool has_device_option = def.has_device_option(); bool has_device_option = def.has_device_option();
for (int i = 0; i < def.op_size(); i++) { for (int i = 0; i < def.op_size(); i++) {
auto op_def(def.op(i)); auto op_def(def.op(i));
...@@ -63,7 +65,7 @@ bool Graph::Create(const GraphDef& def, Workspace* ws) { ...@@ -63,7 +65,7 @@ bool Graph::Create(const GraphDef& def, Workspace* ws) {
arg.set_i(1); arg.set_i(1);
op_def.add_arg()->CopyFrom(arg); op_def.add_arg()->CopyFrom(arg);
} }
cached_ops_.push_back(NewOperator(op_def, ws)); cached_ops_.push_back(NewOperator(op_def, ws_));
cached_ops_.back()->set_output_aliases(output_aliases_); cached_ops_.back()->set_output_aliases(output_aliases_);
} }
return true; return true;
...@@ -71,25 +73,25 @@ bool Graph::Create(const GraphDef& def, Workspace* ws) { ...@@ -71,25 +73,25 @@ bool Graph::Create(const GraphDef& def, Workspace* ws) {
Graph::Graph(const GraphDef& def, Workspace* ws) : GraphBase(def, ws) { Graph::Graph(const GraphDef& def, Workspace* ws) : GraphBase(def, ws) {
// Apply the optimizations // Apply the optimizations
GraphDef opt_def = def; GraphDef def_v2(def);
GraphOptimizer graph_optim(ws); GraphOptimizer graph_optimizer(ws);
GraphGradientMaker gradient_maker; GraphGradientMaker gradient_maker;
Map<string, vec32_t> subgraph_indices; Map<string, vec32_t> subgraph_indices;
int opt = 3; // defaults: O3 int opt = 3; // default: O3
if (args().count("optimization")) opt = arg("optimization").i(); if (args().count("optimization")) opt = arg("optimization").i();
if (opt >= 1) opt_def = graph_optim.PruneNodes(def); if (opt >= 1) def_v2 = graph_optimizer.PruneNodes(def);
if (opt >= 2) graph_optim.AddInplace(opt_def, output_aliases_); if (opt >= 2) graph_optimizer.AddInplace(def_v2, output_aliases_);
if (opt >= 3) { if (opt >= 3) {
if (phase() == "TRAIN") { if (phase() == "TRAIN") {
opt_def = graph_optim.MirrorStage(opt_def, subgraph_indices); def_v2 = graph_optimizer.MirrorStage(def_v2, subgraph_indices);
opt_def = gradient_maker.Share(opt_def); def_v2 = gradient_maker.Share(def_v2);
} else { } else {
opt_def = graph_optim.SimulateGC(opt_def); def_v2 = graph_optimizer.SimulateGC(def_v2);
} }
} }
// Create // Create
Create(opt_def, ws); Create(def_v2);
// Recomputation and SubGraph // Recomputation and SubGraph
if (subgraph_indices.size() > 0) { if (subgraph_indices.size() > 0) {
...@@ -105,11 +107,14 @@ Graph::Graph(const GraphDef& def, Workspace* ws) : GraphBase(def, ws) { ...@@ -105,11 +107,14 @@ Graph::Graph(const GraphDef& def, Workspace* ws) : GraphBase(def, ws) {
} }
} }
bool Graph::Run(const string& include, const string& exclude, int stream) { bool Graph::Run(int stream, const string& include, const string& exclude) {
unique_ptr<std::regex> regex_incl, regex_excl;
if (!include.empty()) regex_incl.reset(new std::regex(include));
if (!exclude.empty()) regex_excl.reset(new std::regex(exclude));
LOG(DEBUG) << "Run Graph: " << name(); LOG(DEBUG) << "Run Graph: " << name();
for (auto* op : cached_ops_) { for (auto* op : cached_ops_) {
if (!include.empty() && !str::find(op->type(), include)) continue; if (regex_incl && !regex_match(op->type(), *regex_incl)) continue;
if (!exclude.empty() && str::find(op->type(), exclude)) continue; if (regex_excl && regex_match(op->type(), *regex_excl)) continue;
op->SwitchToPhase(phase()); op->SwitchToPhase(phase());
LOG(DEBUG) << "Run Op: " << op->name(); LOG(DEBUG) << "Run Op: " << op->name();
op->Run(stream); op->Run(stream);
......
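`Graph::Run` now interprets `include`/`exclude` as regular expressions matched against each operator type rather than substrings. The standalone sketch below mirrors that filtering logic; `ShouldRun` and the op type strings are made up for illustration and compile with any C++11 toolchain.

```cpp
#include <iostream>
#include <memory>
#include <regex>
#include <string>
#include <vector>

// Mirrors the include/exclude check: a full regex match is required, and the
// include filter is applied before the exclude filter.
static bool ShouldRun(const std::string& op_type,
                      const std::string& include,
                      const std::string& exclude) {
  std::unique_ptr<std::regex> regex_incl, regex_excl;
  if (!include.empty()) regex_incl.reset(new std::regex(include));
  if (!exclude.empty()) regex_excl.reset(new std::regex(exclude));
  if (regex_incl && !std::regex_match(op_type, *regex_incl)) return false;
  if (regex_excl && std::regex_match(op_type, *regex_excl)) return false;
  return true;
}

int main() {
  const std::vector<std::string> types = {"Conv2d", "Relu", "ConvTranspose2d"};
  for (const auto& type : types) {
    // Run convolution ops only, skipping the transposed variant.
    std::cout << type << " -> "
              << ShouldRun(type, "Conv.*", ".*Transpose.*") << std::endl;
  }
  return 0;
}
```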
...@@ -18,26 +18,32 @@ ...@@ -18,26 +18,32 @@
namespace dragon { namespace dragon {
/*!
* \brief The base graph class.
*/
class DRAGON_API GraphBase { class DRAGON_API GraphBase {
public: public:
/*! \brief Default constructor */ /*! \brief Constructor with the def and workspace */
GraphBase(const GraphDef&, Workspace*); GraphBase(const GraphDef& def, Workspace* ws);
/*! \brief Default Destructor */ /*! \brief Destructor */
virtual ~GraphBase() {} virtual ~GraphBase() {}
/*! \brief Create a graph from the optimized def */ /*! \brief Create graph in the workspace */
virtual bool Create(const GraphDef&, Workspace*) = 0; virtual bool Create(const GraphDef& def) = 0;
/*! \brief Run the graph once synchronously */ /*! \brief Run graph on the given stream */
virtual bool Run(const string&, const string&, int = 0) = 0; virtual bool Run(
int stream = 0,
const string& include = "",
const string& exclude = "") = 0;
/*! \brief Return the graph name */ /*! \brief Return the graph name */
const string& name() const { const string& name() const {
return name_; return name_;
} }
/*! \brief Return the defined running phase */ /*! \brief Return the executing phase */
const string& phase() const { const string& phase() const {
return phase_; return phase_;
} }
...@@ -47,19 +53,19 @@ class DRAGON_API GraphBase { ...@@ -47,19 +53,19 @@ class DRAGON_API GraphBase {
return *(args_[name]); return *(args_[name]);
} }
/*! \brief Return the argument map */ /*! \brief Return all the arguments */
const Map<string, const Argument*>& args() { const Map<string, const Argument*>& args() {
return args_; return args_;
} }
/*! \brief Return the stored raw def */ /*! \brief Return the graph def */
const GraphDef& def() const { const GraphDef& def() const {
return def_; return def_;
} }
/*! \brief Return the stored opt def */ /*! \brief Return the optimized graph def */
const GraphDef& opt_def() const { const GraphDef& optimized_def() const {
return opt_def_; return optimized_def_;
} }
/*! \brief Return the parent workspace */ /*! \brief Return the parent workspace */
...@@ -68,42 +74,53 @@ class DRAGON_API GraphBase { ...@@ -68,42 +74,53 @@ class DRAGON_API GraphBase {
} }
protected: protected:
/*! \brief Store the name and running phase */ /*! \brief The name and executing phase */
string name_, phase_; string name_, phase_;
/*! \brief Store the defined arguments */ /*! \brief The defined arguments */
Map<string, const Argument*> args_; Map<string, const Argument*> args_;
/*! \brief Store the parent workspace */ /*! \brief The parent workspace */
Workspace* ws_; Workspace* ws_;
/*! \brief Store the graph definition */ /*! \brief The graph definition */
GraphDef def_, opt_def_; GraphDef def_;
/*! \brief The optimized graph definition */
GraphDef optimized_def_;
DISABLE_COPY_AND_ASSIGN(GraphBase);
}; };
/*!
* \brief Graph to execute operators sequentially.
*/
class Graph : public GraphBase { class Graph : public GraphBase {
public: public:
/*! \brief Default constructor */ /*! \brief Constructor with the def and workspace */
Graph(const GraphDef& def, Workspace* ws); Graph(const GraphDef& def, Workspace* ws);
/*! \brief Default Destructor */ /*! \brief Destructor */
virtual ~Graph() { virtual ~Graph() {
for (auto* cached_op : cached_ops_) { for (auto* cached_op : cached_ops_) {
delete cached_op; delete cached_op;
} }
} }
/*! \brief Create a graph from the optimized def */ /*! \brief Create graph in the workspace */
bool Create(const GraphDef&, Workspace*) override; bool Create(const GraphDef& def) override;
/*! \brief Run the graph once synchronously */ /*! \brief Run graph on the given stream */
bool Run(const string&, const string&, int = 0) override; bool Run(
int stream = 0,
const string& include = "",
const string& exclude = "") override;
protected: protected:
/*! \brief The cached operators */ /*! \brief The cached operators */
vector<OperatorBase*> cached_ops_; vector<OperatorBase*> cached_ops_;
/*! \brief Store the candidate output aliases */ /*! \brief The candidate output aliases */
Map<string, Set<string>> output_aliases_; Map<string, Set<string>> output_aliases_;
}; };
......
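With the reordered `Run` signature declared above, a caller sketch looks like the following. `RunTwice`, the `Conv.*` pattern, and the availability of a populated `GraphDef` and `Workspace` are assumptions for illustration only.

```cpp
#include "dragon/core/graph.h"

// A hedged usage sketch: optimizations run in the Graph constructor, and the
// stream index now comes first with empty regex filters as defaults.
void RunTwice(dragon::Workspace* ws, const dragon::GraphDef& def) {
  dragon::Graph graph(def, ws);
  graph.Run();             // stream 0, every cached operator
  graph.Run(0, "Conv.*");  // stream 0, only ops whose type matches "Conv.*"
}
```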
...@@ -72,6 +72,9 @@ class GraphOptimizer { ...@@ -72,6 +72,9 @@ class GraphOptimizer {
/* \brief Store the count of references */ /* \brief Store the count of references */
Map<string, int> reference_count_; Map<string, int> reference_count_;
private:
DISABLE_COPY_AND_ASSIGN(GraphOptimizer);
}; };
} // namespace dragon } // namespace dragon
......
...@@ -20,44 +20,45 @@ ...@@ -20,44 +20,45 @@
namespace dragon { namespace dragon {
typedef enum { typedef enum {
NCHW, NCHW = 0,
NHWC, NHWC = 1,
} StorageOrder; } StorageOrder;
/*!
* \brief Memory to manage both the host and device data.
*/
class DRAGON_API UnifiedMemory { class DRAGON_API UnifiedMemory {
public: public:
typedef enum { /*!
/*! \brief The initial state */ * \brief The device-aware state for data mutation.
UNINITIALIZED, */
/*! \brief Memory could be modified by CPUContext last time */ enum State {
STATE_AT_CPU, /*! \brief Initial state */
/*! \brief Memory could be modified by CUDAContext last time */ UNINITIALIZED = 0,
STATE_AT_CUDA, /*! \brief Data is mutable to cpu */
/*! \brief Memory could be modified by CNMLContext last time */ STATE_AT_CPU = 1,
STATE_AT_CNML, /*! \brief Data is mutable to cuda */
/*! \brief The synced state */ STATE_AT_CUDA = 2,
SYNCED, /*! \brief Data is mutable to cnml */
} State; STATE_AT_CNML = 3,
/*! \brief Data is synced between host and device */
/*! \brief Default Constructor */ SYNCED = 4,
UnifiedMemory() : cpu_ptr_(nullptr), cuda_ptr_(nullptr), cnml_ptr_(nullptr) {} };
/*! \brief Constructor with the known meta and size */ /*! \brief Default constructor */
UnifiedMemory(const TypeMeta& meta, size_t size) UnifiedMemory() {}
: meta_(meta),
size_(size), /*! \brief Constructor with the type meta and size */
cpu_ptr_(nullptr), UnifiedMemory(const TypeMeta& meta, size_t size) : meta_(meta), size_(size) {}
cuda_ptr_(nullptr),
cnml_ptr_(nullptr) {}
/*! \brief Destructor */ /*! \brief Destructor */
~UnifiedMemory(); ~UnifiedMemory();
/*! \brief Switch to the specified device */ /*! \brief Switch to the given device */
void SwitchToDevice(int device_id); void SwitchToDevice(int device);
/*! \brief Switch to the specified cuda device */ /*! \brief Switch to the given cuda device */
void SwitchToCUDADevice(int device_id); void SwitchToCUDADevice(int device);
/*! \brief Involve the state to CPUContext */ /*! \brief Involve the state to CPUContext */
void ToCPU(size_t size = 0); void ToCPU(size_t size = 0);
...@@ -65,9 +66,9 @@ class DRAGON_API UnifiedMemory { ...@@ -65,9 +66,9 @@ class DRAGON_API UnifiedMemory {
/*! \brief Involve the state to CUDAContext */ /*! \brief Involve the state to CUDAContext */
void ToCUDA(size_t size = 0); void ToCUDA(size_t size = 0);
/*! \brief Return the device index */ /*! \brief Return the memory state */
int device_id() const { State state() const {
return device_id_; return state_;
} }
/*! \brief Return the total number of bytes */ /*! \brief Return the total number of bytes */
...@@ -75,9 +76,9 @@ class DRAGON_API UnifiedMemory { ...@@ -75,9 +76,9 @@ class DRAGON_API UnifiedMemory {
return size_; return size_;
} }
/*! \brief Return the number of chunks */ /*! \brief Return the number of memory chunks */
size_t nchunks() const { size_t num_chunks() const {
return nchunks_; return num_chunks_;
} }
/*! \brief Return the storage order */ /*! \brief Return the storage order */
...@@ -85,30 +86,30 @@ class DRAGON_API UnifiedMemory { ...@@ -85,30 +86,30 @@ class DRAGON_API UnifiedMemory {
return order_; return order_;
} }
/*! \brief Return the memory state */ /*! \brief Return the device index */
State state() const { int device() const {
return state_; return device_id_;
} }
/*! \brief Return a string to describe the internal structure */ /*! \brief Return the data info */
Map<string, string> info() const; Map<string, string> info() const;
/*! \brief Return the const data pointer on CPUContext */ /*! \brief Return the const cpu data */
const void* cpu_data(size_t size = 0); const void* cpu_data(size_t size = 0);
/*! \brief Return the const data pointer on CUDAContext */ /*! \brief Return the const cuda data */
const void* cuda_data(size_t size = 0); const void* cuda_data(size_t size = 0);
/*! \brief Return the const data pointer on CNMLContext */ /*! \brief Return the const cnml data */
const void* cnml_data(); const void* cnml_data();
/*! \brief Return the mutable data pointer on CPUContext */ /*! \brief Return the mutable cpu data */
void* mutable_cpu_data(size_t size = 0); void* mutable_cpu_data(size_t size = 0);
/*! \brief Return the mutable data pointer on CUDAContext */ /*! \brief Return the mutable cuda data */
void* mutable_cuda_data(size_t size = 0); void* mutable_cuda_data(size_t size = 0);
/*! \brief Return the mutable data pointer on CNMLContext */ /*! \brief Return the mutable cnml data */
void* mutable_cnml_data(); void* mutable_cnml_data();
/*! \brief Return the binding cnml cpu tensor */ /*! \brief Return the binding cnml cpu tensor */
...@@ -117,15 +118,15 @@ class DRAGON_API UnifiedMemory { ...@@ -117,15 +118,15 @@ class DRAGON_API UnifiedMemory {
/*! \brief Return the binding cnml mlu tensor */ /*! \brief Return the binding cnml mlu tensor */
cnmlTensor_t& cnml_mlu_tensor(); cnmlTensor_t& cnml_mlu_tensor();
/*! \brief Allocate the mlu device memory */ /*! \brief Allocate the mlu device data */
void* malloc_cnml_data(); void* malloc_cnml_data();
/*! \brief Copy the mlu device memory to the host */ /*! \brief Copy the mlu device data to host */
void fetch_cnml_data(void** data); void fetch_cnml_data(void** data);
/*! \brief Set the chunks of this memory */ /*! \brief Set the number of data chunks */
void set_nchunks(size_t nchunks) { void set_num_chunks(size_t num_chunks) {
nchunks_ = nchunks; num_chunks_ = num_chunks;
} }
/*! \brief Set the storage order */ /*! \brief Set the storage order */
...@@ -133,39 +134,47 @@ class DRAGON_API UnifiedMemory { ...@@ -133,39 +134,47 @@ class DRAGON_API UnifiedMemory {
order_ = order; order_ = order;
} }
/*! \brief Set the cpu data pointer from external context */ /*! \brief Set to use an external block of cpu data */
void set_cpu_data(void* cpu_ptr, size_t size); void set_cpu_data(void* cpu_ptr, size_t size);
/*! \brief Set the cuda data pointer from external context */ /*! \brief Set to use an external block of cuda data */
void set_cuda_data(void* cuda_ptr, size_t size, int device_id); void set_cuda_data(void* cuda_ptr, size_t size, int device);
private: private:
/*! \brief The type meta */ /*! \brief The data state */
TypeMeta meta_; State state_ = UNINITIALIZED;
/*! \brief The size and number of chunks */ /*! \brief The size and number of chunks */
size_t size_ = 0, nchunks_ = 1; size_t size_ = 0, num_chunks_ = 1;
/*! \brief The type meta */
TypeMeta meta_;
/*! \brief The storage order */ /*! \brief The storage order */
StorageOrder order_ = NCHW; StorageOrder order_ = NCHW;
/*! \brief The current state */ /*! \brief The device index */
State state_ = UNINITIALIZED; int device_id_ = 0;
/*! \brief The cpu data pointer */
void* cpu_ptr_ = nullptr;
/*! \brief The data pointers */ /*! \brief The cuda data pointer */
void *cpu_ptr_, *cuda_ptr_, *cnml_ptr_; void* cuda_ptr_ = nullptr;
/*! \brief The cnml data pointer */
void* cnml_ptr_ = nullptr;
/*! \brief The ownership of data pointers */ /*! \brief The ownership of data pointers */
int own_cpu_ptr_ = 1, own_cuda_ptr_ = 1; int own_cpu_ptr_ = 1, own_cuda_ptr_ = 1;
/*! \brief The device index */
int device_id_ = 0;
/*! \brief The binding cpu tensor for cnml */ /*! \brief The binding cpu tensor for cnml */
cnmlCpuTensor_t cnml_cpu_tensor_ = nullptr; cnmlCpuTensor_t cnml_cpu_tensor_ = nullptr;
/*! \brief The binding mlu tensor for cnml */ /*! \brief The binding mlu tensor for cnml */
cnmlTensor_t cnml_mlu_tensor_ = nullptr; cnmlTensor_t cnml_mlu_tensor_ = nullptr;
DISABLE_COPY_AND_ASSIGN(UnifiedMemory);
}; };
} // namespace dragon } // namespace dragon
......
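The reworked `UnifiedMemory` exposes its state explicitly. Below is a minimal host-side sketch; the `dragon/core/memory.h` path is an assumption, and the `STATE_AT_CPU` transition after mutable host access is the expected behavior rather than something shown in this diff.

```cpp
#include <iostream>
#include "dragon/core/memory.h"  // assumed header path

int main() {
  using dragon::UnifiedMemory;
  auto meta = dragon::TypeMeta::Make<float>();
  UnifiedMemory memory(meta, 4 * sizeof(float));
  std::cout << (memory.state() == UnifiedMemory::UNINITIALIZED) << std::endl;  // 1

  // Mutable host access is expected to leave the block in STATE_AT_CPU.
  auto* data = static_cast<float*>(memory.mutable_cpu_data());
  data[0] = 1.f;
  std::cout << (memory.state() == UnifiedMemory::STATE_AT_CPU) << std::endl;
  std::cout << memory.size() << " " << memory.num_chunks() << std::endl;  // 16 1
  return 0;
}
```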
...@@ -41,12 +41,6 @@ OperatorBase::OperatorBase(const OperatorDef& def, Workspace* ws) ...@@ -41,12 +41,6 @@ OperatorBase::OperatorBase(const OperatorDef& def, Workspace* ws)
} }
} }
// template <class Context>
// Operator<Context>::Operator(const OperatorDef& def, Workspace* ws)
// : OperatorBase(def, ws),
// ctx_(def.device_option()),
// do_sync_(OpArg<bool>("do_sync", false)) {}
Tensor& OperatorBase::Input(int i) { Tensor& OperatorBase::Input(int i) {
CHECK_LT(i, (int)inputs_.size()); CHECK_LT(i, (int)inputs_.size());
CHECK_GE(i, -(int)inputs_.size()); CHECK_GE(i, -(int)inputs_.size());
...@@ -80,27 +74,17 @@ Tensor* OperatorBase::Buffer(const string& name) { ...@@ -80,27 +74,17 @@ Tensor* OperatorBase::Buffer(const string& name) {
return ws()->CreateTensor("/share/buffer/" + handle_ + "/" + name); return ws()->CreateTensor("/share/buffer/" + handle_ + "/" + name);
} }
string OperatorBase::TypeString(const Tensor& tensor, const Set<string>& types) string OperatorBase::MessageForUnsupported(
const { const string& value,
std::stringstream ss; const vector<string>& support_values,
ss << "Unsupported type of Tensor(" << tensor.name() const string& entry) const {
<< "): " << types::to_string(tensor.meta()) << "\n";
ss << "<" << type() << "Op>"
<< " supports the following types: {\n";
for (auto& type : types)
ss << " * " << type << ",\n";
ss << "}";
return ss.str();
}
string OperatorBase::TypeString(const string& dtype, const Set<string>& types)
const {
std::stringstream ss; std::stringstream ss;
ss << "Unsupported type: " << dtype << "\n"; ss << "Unsupported " << entry << ": " << value << "\n";
ss << "<" << type() << "Op>" ss << "<" << type() << "Op>"
<< " supports the following types: {\n"; << " supports the following " << entry << "(s): {\n";
for (auto& type : types) for (const auto& support_value : support_values) {
ss << " * " << type << ",\n"; ss << " * " << support_value << ",\n";
}
ss << "}"; ss << "}";
return ss.str(); return ss.str();
} }
...@@ -133,7 +117,7 @@ void Operator<Context>::Prepare() { ...@@ -133,7 +117,7 @@ void Operator<Context>::Prepare() {
flag->mutable_data<bool, CPUContext>()[0] = true; flag->mutable_data<bool, CPUContext>()[0] = true;
vector<OperatorBase*>& chain = subgraph()[name]; vector<OperatorBase*>& chain = subgraph()[name];
for (auto* op : chain) { for (auto* op : chain) {
op->Run(ctx()->stream_id()); op->Run(ctx()->stream());
} }
flag->mutable_data<bool, CPUContext>()[0] = false; flag->mutable_data<bool, CPUContext>()[0] = false;
} }
...@@ -156,12 +140,12 @@ template <class Context> ...@@ -156,12 +140,12 @@ template <class Context>
void Operator<Context>::SwitchToDevice() { void Operator<Context>::SwitchToDevice() {
for (auto* tensor : inputs_) { for (auto* tensor : inputs_) {
if (tensor->has_name()) { if (tensor->has_name()) {
tensor->SwitchToDevice(ctx()->device_id()); tensor->SwitchToDevice(ctx()->device());
} }
} }
for (auto* tensor : outputs_) { for (auto* tensor : outputs_) {
if (tensor->has_name()) { if (tensor->has_name()) {
tensor->SwitchToDevice(ctx()->device_id()); tensor->SwitchToDevice(ctx()->device());
} }
} }
} }
......
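The old pair of `TypeString` helpers is folded into one `MessageForUnsupported`. The standalone sketch below rebuilds the same message layout; the `<ReluOp>` prefix and the float type names are placeholders, since there is no `OperatorBase` instance here.

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Rebuilds the layout produced by OperatorBase::MessageForUnsupported, with
// the operator type hard-coded for illustration.
std::string MessageForUnsupported(const std::string& value,
                                  const std::vector<std::string>& support_values,
                                  const std::string& entry = "type") {
  std::stringstream ss;
  ss << "Unsupported " << entry << ": " << value << "\n";
  ss << "<ReluOp> supports the following " << entry << "(s): {\n";
  for (const auto& support_value : support_values) {
    ss << "  * " << support_value << ",\n";
  }
  ss << "}";
  return ss.str();
}

int main() {
  std::cout << MessageForUnsupported("float64", {"float16", "float32"})
            << std::endl;
  return 0;
}
```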
...@@ -28,30 +28,30 @@ class DRAGON_API OperatorBase { ...@@ -28,30 +28,30 @@ class DRAGON_API OperatorBase {
public: public:
typedef Map<string, vector<OperatorBase*>> SubGraph; typedef Map<string, vector<OperatorBase*>> SubGraph;
/*! \brief Default constructor */ /*! \brief Constructor with the def and workspace */
OperatorBase(const OperatorDef&, Workspace*); OperatorBase(const OperatorDef&, Workspace*);
/*! \brief Default Destructor */ /*! \brief Destructor */
virtual ~OperatorBase() {} virtual ~OperatorBase() {}
/*! \brief Fusion this operator into the specified graph */ /*! \brief Update operator from the given def */
virtual void Fusion(void* graph) { OperatorBase* UpdateFrom(const OperatorDef&);
/*! \brief Fuse operator into the given graph */
virtual void Fuse(void* graph) {
NOT_IMPLEMENTED; NOT_IMPLEMENTED;
} }
/*! \brief Run operator on the specified stream */ /*! \brief Run operator on the given stream */
virtual void Run(int stream = 0) { virtual void Run(int stream = 0) {
NOT_IMPLEMENTED; NOT_IMPLEMENTED;
} }
/*! \brief Switch the internal running phase */ /*! \brief Switch to the given executing phase */
void SwitchToPhase(const string& phase) { void SwitchToPhase(const string& phase) {
phase_ = phase; phase_ = phase;
} }
/*! \brief Update operator according to a new def */
OperatorBase* UpdateFrom(const OperatorDef&);
/*! \brief Return the input tensor */ /*! \brief Return the input tensor */
Tensor& Input(int i); Tensor& Input(int i);
...@@ -61,7 +61,7 @@ class DRAGON_API OperatorBase { ...@@ -61,7 +61,7 @@ class DRAGON_API OperatorBase {
/*! \brief Return the output tensor with input aliases */ /*! \brief Return the output tensor with input aliases */
Tensor* Output(int i, const vec32_t& inputs); Tensor* Output(int i, const vec32_t& inputs);
/*! \brief Return the unique named buffer */ /*! \brief Return the buffer tensor */
Tensor* Buffer(const string& name); Tensor* Buffer(const string& name);
/*! \brief Return the number of inputs */ /*! \brief Return the number of inputs */
...@@ -74,31 +74,26 @@ class DRAGON_API OperatorBase { ...@@ -74,31 +74,26 @@ class DRAGON_API OperatorBase {
return (int)outputs_.size(); return (int)outputs_.size();
} }
/*! \brief Return the value of the specified argument */ /*! \brief Return the value of single argument */
template <typename T> template <typename T>
T Arg(const string& name, const T& default_value); T Arg(const string& name, const T& default_value);
/*! \brief Return the values of the specified argument */ /*! \brief Return the value of repeated argument */
template <typename T> template <typename T>
vector<T> Args(const string& name); vector<T> Args(const string& name);
/*! \brief Return the debug string of stored def */ /*! \brief Return the message for an unsupported value */
string DebugString() const { string MessageForUnsupported(
return def_.DebugString(); const string& value,
} const vector<string>& support_values,
const string& entry = "type") const;
/*! \brief Return the debug string of tensor type */
string TypeString(const Tensor&, const Set<string>&) const;
/* \brief Return the debug string of given type */
string TypeString(const string&, const Set<string>&) const;
/*! \brief Return the specified argument */ /*! \brief Return the specified argument */
const Argument& arg(const string& name) { const Argument& arg(const string& name) {
return *(args_[name]); return *(args_[name]);
} }
/*! \brief Return the argument map */ /*! \brief Return all the arguments */
const Map<string, const Argument*>& args() { const Map<string, const Argument*>& args() {
return args_; return args_;
} }
...@@ -113,7 +108,7 @@ class DRAGON_API OperatorBase { ...@@ -113,7 +108,7 @@ class DRAGON_API OperatorBase {
return def_.type(); return def_.type();
} }
/*! \brief Return the current running phase */ /*! \brief Return the running phase */
const string& phase() const { const string& phase() const {
return phase_; return phase_;
} }
...@@ -190,12 +185,17 @@ class DRAGON_API OperatorBase { ...@@ -190,12 +185,17 @@ class DRAGON_API OperatorBase {
/*! \brief Store the defined arguments */ /*! \brief Store the defined arguments */
Map<string, const Argument*> args_; Map<string, const Argument*> args_;
DISABLE_COPY_AND_ASSIGN(OperatorBase);
}; };
/*!
* \brief The base operator class with context.
*/
template <class Context> template <class Context>
class DRAGON_API Operator : public OperatorBase { class DRAGON_API Operator : public OperatorBase {
public: public:
/*! \brief Default constructor */ /*! \brief Constructor with the def and workspace */
Operator(const OperatorDef& def, Workspace* ws) Operator(const OperatorDef& def, Workspace* ws)
: OperatorBase(def, ws), : OperatorBase(def, ws),
ctx_(def.device_option()), ctx_(def.device_option()),
...@@ -254,8 +254,7 @@ OperatorBase* NewOperator(const OperatorDef&, Workspace*); ...@@ -254,8 +254,7 @@ OperatorBase* NewOperator(const OperatorDef&, Workspace*);
using OperatorBase::Buffer; \ using OperatorBase::Buffer; \
using OperatorBase::InputSize; \ using OperatorBase::InputSize; \
using OperatorBase::OutputSize; \ using OperatorBase::OutputSize; \
using OperatorBase::DebugString; \ using OperatorBase::MessageForUnsupported; \
using OperatorBase::TypeString; \
using OperatorBase::name; \ using OperatorBase::name; \
using OperatorBase::type; \ using OperatorBase::type; \
using OperatorBase::phase; \ using OperatorBase::phase; \
...@@ -323,10 +322,9 @@ struct DispatchHelper; ...@@ -323,10 +322,9 @@ struct DispatchHelper;
struct DispatchHelper<TensorTypes<>, Args...> { \ struct DispatchHelper<TensorTypes<>, Args...> { \
template <typename Op> \ template <typename Op> \
static void Call(Op* op, const TypeMeta& meta, string& types) { \ static void Call(Op* op, const TypeMeta& meta, string& types) { \
LOG(FATAL) << "Unsupported tensor type: " << types::to_string(meta) \ LOG(FATAL) << "Unsupported type: " << types::to_string(meta) << "\n" \
<< "\n" \
<< "<" << op->type() << "Op>" \ << "<" << op->type() << "Op>" \
<< " supports the following types: {\n" \ << " supports the following type(s): {\n" \
<< types << "}"; \ << types << "}"; \
} \ } \
template <typename Op> \ template <typename Op> \
......
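A brief, hedged sketch of driving an operator through the base interface shown in this header. Producing a populated `OperatorDef` is left out since the proto helpers are not part of this diff; `RunOnce`, the `"TEST"` phase string, and deleting the returned pointer are assumptions about typical usage.

```cpp
#include "dragon/core/operator.h"

// Creates an operator from its def via the factory declared above, switches
// the executing phase, and dispatches it on stream 0.
void RunOnce(dragon::Workspace* ws, const dragon::OperatorDef& def) {
  dragon::OperatorBase* op = dragon::NewOperator(def, ws);
  op->SwitchToPhase("TEST");
  op->Run(/* stream = */ 0);
  delete op;
}
```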
...@@ -17,78 +17,93 @@ ...@@ -17,78 +17,93 @@
namespace dragon { namespace dragon {
template <class SrcType, class ObjType, class... Args> /*!
* \brief Registry to create class instances.
*/
template <class KeyType, class ObjectType, class... Args>
class Registry { class Registry {
public: public:
typedef std::function<ObjType*(Args...)> Creator; typedef std::function<ObjectType*(Args...)> Creator;
ObjType* Create(const SrcType& key, Args... args) { /*! \brief Create an instance of specified class */
ObjectType* Create(const KeyType& key, Args... args) {
CHECK(registry_.count(key)) << "\nKey(" << key << ") has not registered."; CHECK(registry_.count(key)) << "\nKey(" << key << ") has not registered.";
return registry_[key](args...); return registry_[key](args...);
} }
bool Has(const SrcType& key) { /*! \brief Return whether the specified class is registered */
bool Has(const KeyType& key) {
return (registry_.count(key)) != 0; return (registry_.count(key)) != 0;
} }
void Register(const SrcType& key, Creator creator) { /*! \brief Register a class with the creator */
void Register(const KeyType& key, Creator creator) {
CHECK(!registry_.count(key)) CHECK(!registry_.count(key))
<< "\nKey(" << key << ") has already registered."; << "\nKey(" << key << ") has already registered.";
registry_[key] = creator; registry_[key] = creator;
} }
vector<SrcType> keys() { /*! \brief Return the keys of registered classes */
vector<SrcType> ret; vector<KeyType> keys() {
for (const auto& it : registry_) vector<KeyType> ret;
for (const auto& it : registry_) {
ret.push_back(it.first); ret.push_back(it.first);
}
return ret; return ret;
} }
private: private:
Map<SrcType, Creator> registry_; /*! \brief The registry map */
Map<KeyType, Creator> registry_;
}; };
template <class SrcType, class ObjType, class... Args> /*!
* \brief Register creator into the registry.
*/
template <class KeyType, class ObjectType, class... Args>
class Registerer { class Registerer {
public: public:
/*! \brief Constructor with key and creator */
Registerer( Registerer(
const SrcType& key, const KeyType& key,
Registry<SrcType, ObjType, Args...>* registry, Registry<KeyType, ObjectType, Args...>* registry,
typename Registry<SrcType, ObjType, Args...>::Creator creator, typename Registry<KeyType, ObjectType, Args...>::Creator creator,
const string& help_msg = "") { const string& help_msg = "") {
registry->Register(key, creator); registry->Register(key, creator);
} }
/*! \brief Return the default creator */
template <class DerivedType> template <class DerivedType>
static ObjType* defaultCreator(Args... args) { static ObjectType* DefaultCreator(Args... args) {
return new DerivedType(args...); return new DerivedType(args...);
} }
}; };
// Used in *.h files // Used in *.h files
#define DECLARE_TYPED_REGISTRY(RegistryName, SrcType, ObjType, ...) \ #define DECLARE_TYPED_REGISTRY(RegistryName, KeyType, ObjectType, ...) \
DRAGON_API Registry<SrcType, ObjType, ##__VA_ARGS__>* RegistryName(); \ DRAGON_API Registry<KeyType, ObjectType, ##__VA_ARGS__>* RegistryName(); \
typedef Registerer<SrcType, ObjType, ##__VA_ARGS__> Registerer##RegistryName; typedef Registerer<KeyType, ObjectType, ##__VA_ARGS__> \
Registerer##RegistryName;
// Used in *.cc files // Used in *.cc files
#define DEFINE_TYPED_REGISTRY(RegistryName, SrcType, ObjType, ...) \ #define DEFINE_TYPED_REGISTRY(RegistryName, KeyType, ObjectType, ...) \
Registry<SrcType, ObjType, ##__VA_ARGS__>* RegistryName() { \ Registry<KeyType, ObjectType, ##__VA_ARGS__>* RegistryName() { \
static Registry<SrcType, ObjType, ##__VA_ARGS__>* registry = \ static Registry<KeyType, ObjectType, ##__VA_ARGS__>* registry = \
new Registry<SrcType, ObjType, ##__VA_ARGS__>(); \ new Registry<KeyType, ObjectType, ##__VA_ARGS__>(); \
return registry; \ return registry; \
} }
#define DECLARE_REGISTRY(RegistryName, ObjType, ...) \ #define DECLARE_REGISTRY(RegistryName, ObjectType, ...) \
DECLARE_TYPED_REGISTRY(RegistryName, string, ObjType, ##__VA_ARGS__) DECLARE_TYPED_REGISTRY(RegistryName, string, ObjectType, ##__VA_ARGS__)
#define DEFINE_REGISTRY(RegistryName, ObjType, ...) \ #define DEFINE_REGISTRY(RegistryName, ObjectType, ...) \
DEFINE_TYPED_REGISTRY(RegistryName, string, ObjType, ##__VA_ARGS__) DEFINE_TYPED_REGISTRY(RegistryName, string, ObjectType, ##__VA_ARGS__)
#define REGISTER_TYPED_CLASS(RegistryName, key, ...) \ #define REGISTER_TYPED_CLASS(RegistryName, key, ...) \
static Registerer##RegistryName ANONYMOUS_VARIABLE(g_##RegistryName)( \ static Registerer##RegistryName ANONYMOUS_VARIABLE(g_##RegistryName)( \
key, \ key, \
RegistryName(), \ RegistryName(), \
Registerer##RegistryName::defaultCreator<__VA_ARGS__>) Registerer##RegistryName::DefaultCreator<__VA_ARGS__>)
#define REGISTER_CLASS(RegistryName, key, ...) \ #define REGISTER_CLASS(RegistryName, key, ...) \
REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__) REGISTER_TYPED_CLASS(RegistryName, #key, __VA_ARGS__)
......
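To see the renamed registry pieces together, here is a self-contained sketch that registers and creates a toy class without the convenience macros. The `dragon/core/registry.h` path and the `Codec` hierarchy are illustrative assumptions.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include "dragon/core/registry.h"  // assumed header path

struct Codec {
  virtual ~Codec() {}
  virtual std::string name() const = 0;
};

struct PNGCodec : public Codec {
  std::string name() const override { return "png"; }
};

int main() {
  dragon::Registry<std::string, Codec> registry;
  registry.Register(
      "PNG", &dragon::Registerer<std::string, Codec>::DefaultCreator<PNGCodec>);
  std::cout << registry.Has("PNG") << std::endl;  // 1
  std::unique_ptr<Codec> codec(registry.Create("PNG"));
  std::cout << codec->name() << std::endl;  // png
  return 0;
}
```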
...@@ -18,28 +18,53 @@ ...@@ -18,28 +18,53 @@
namespace dragon { namespace dragon {
/*!
* \brief The base tensor class, which may or may not manage its own memory.
*
* Tensor is usually constructed with the shape info:
*
* \code{.cpp}
* auto* a = new dragon::Tensor(std::vector<int64_t>({2, 3}));
* auto* b = (new dragon::Tensor())->Reshape({2, 3}); // Equivalent
* \endcode
*
* To allocate the data, type meta and device context are also required:
*
* \code{.cpp}
* auto meta = dragon::TypeMeta::Make<float>();
* auto* raw_data = a->raw_mutable_data<dragon::CPUContext>(meta);
* auto* data = b->mutable_data<float, dragon::CPUContext>();
* \endcode
*
* Memory will be reset if the required bytes exceed the capacity:
* \code{.cpp}
* std::cout << a->nbytes() << " " << a->capacity() << std::endl; // 24, 24
* std::cout << a->Reshape({2, 4})->size() << std::endl; // 8
* std::cout << a->nbytes() << " " << a->capacity() << std::endl; // 32, 0
* a->mutable_data<float, dragon::CPUContext>();
* a->Reshape({2, 3});
* std::cout << a->nbytes() << " " << a->capacity() << std::endl; // 24, 32
* \endcode
*/
class DRAGON_API Tensor { class DRAGON_API Tensor {
public: public:
Tensor(const Tensor&) = delete; /*! \brief Default constructor */
Tensor& operator=(const Tensor&) = delete;
/*! \brief Default Constructor */
Tensor() : name_("") {} Tensor() : name_("") {}
/*! \brief Constructor with the known name */ /*! \brief Constructor with the name */
explicit Tensor(const string& name) : name_(name) {} explicit Tensor(const string& name) : name_(name) {}
/*! \brief Constructor with the known int64 dimensions */ /*! \brief Constructor with the int64 dimensions */
explicit Tensor(const vec64_t& dims) { explicit Tensor(const vec64_t& dims) {
Reshape(dims); Reshape(dims);
} }
/*! \brief Constructor with the known int32 dimensions */ /*! \brief Constructor with the int32 dimensions */
explicit Tensor(const vec32_t& dims) { explicit Tensor(const vec32_t& dims) {
Reshape(vec64_t(dims.begin(), dims.end())); Reshape(vec64_t(dims.begin(), dims.end()));
} }
/*! \brief Constructor with the known meta */ /*! \brief Constructor with the type meta */
explicit Tensor(const TypeMeta& meta) { explicit Tensor(const TypeMeta& meta) {
set_meta(meta); set_meta(meta);
} }
...@@ -54,7 +79,7 @@ class DRAGON_API Tensor { ...@@ -54,7 +79,7 @@ class DRAGON_API Tensor {
} }
} }
/*! \brief Reshape to the given dimensions */ /*! \brief Change the tensor dimensions */
Tensor* Reshape(const vec64_t& dims) { Tensor* Reshape(const vec64_t& dims) {
dims_ = dims; dims_ = dims;
strides_.resize(dims.size()); strides_.resize(dims.size());
...@@ -79,18 +104,18 @@ class DRAGON_API Tensor { ...@@ -79,18 +104,18 @@ class DRAGON_API Tensor {
return this; return this;
} }
/*! \brief Reshape the dimensions like the given tensor */ /*! \brief Change the tensor dimensions as the other */
Tensor* ReshapeLike(const Tensor& other) { Tensor* ReshapeLike(const Tensor& other) {
return Reshape(other.dims_); return Reshape(other.dims_);
} }
/*! \brief Switch the memory to the specific device */ /*! \brief Switch memory to the specific device */
void SwitchToDevice(int device_id) { void SwitchToDevice(int device_id) {
UnifiedMemory* mem = memory(); UnifiedMemory* mem = memory();
if (mem) mem->SwitchToDevice(device_id); if (mem) mem->SwitchToDevice(device_id);
} }
/*! \brief Copy memory from the tensor with context */ /*! \brief Copy memory from a tensor with context */
template <class Context> template <class Context>
Tensor* CopyFrom(const Tensor& other, Context* ctx) { Tensor* CopyFrom(const Tensor& other, Context* ctx) {
if ((void*)&other == (void*)this) return this; if ((void*)&other == (void*)this) return this;
...@@ -102,7 +127,7 @@ class DRAGON_API Tensor { ...@@ -102,7 +127,7 @@ class DRAGON_API Tensor {
return this; return this;
} }
/*! \brief Copy memory from the vector */ /*! \brief Copy memory from a vector */
template <typename TensorType, typename VectorType> template <typename TensorType, typename VectorType>
Tensor* CopyFrom(const vector<VectorType>& other) { Tensor* CopyFrom(const vector<VectorType>& other) {
if (other.size() > 0) { if (other.size() > 0) {
...@@ -115,7 +140,7 @@ class DRAGON_API Tensor { ...@@ -115,7 +140,7 @@ class DRAGON_API Tensor {
return this; return this;
} }
/*! \brief Copy memory to the vector */ /*! \brief Copy memory to a vector */
template <typename TensorType, typename VectorType> template <typename TensorType, typename VectorType>
void CopyTo(vector<VectorType>& dest) { void CopyTo(vector<VectorType>& dest) {
dest.resize(size()); dest.resize(size());
...@@ -141,7 +166,7 @@ class DRAGON_API Tensor { ...@@ -141,7 +166,7 @@ class DRAGON_API Tensor {
own_memory_ = (memory == nullptr); own_memory_ = (memory == nullptr);
} }
/*! \brief Reset all resources */ /*! \brief Reset tensor to release all resources */
void Reset() { void Reset() {
dims_.clear(); dims_.clear();
strides_.clear(); strides_.clear();
...@@ -156,13 +181,13 @@ class DRAGON_API Tensor { ...@@ -156,13 +181,13 @@ class DRAGON_API Tensor {
} }
} }
/*! \brief Whether the data type is matched */ /*! \brief Return whether the data type is matched */
template <typename T> template <typename T>
bool IsType() { bool IsType() {
return meta_.Match<T>(); return meta_.Match<T>();
} }
/*! \brief Return a string formatting the dimensions */ /*! \brief Return a string formatting the given dimensions */
static string DimString(const vector<int64_t>& dims) { static string DimString(const vector<int64_t>& dims) {
if (dims.size() == 0) return "(0,)"; if (dims.size() == 0) return "(0,)";
std::stringstream ss; std::stringstream ss;
...@@ -187,7 +212,7 @@ class DRAGON_API Tensor { ...@@ -187,7 +212,7 @@ class DRAGON_API Tensor {
return name_; return name_;
} }
/*! \brief Return true if tensor name is set */ /*! \brief Return whether the tensor name is set */
bool has_name() const { bool has_name() const {
return !name_.empty(); return !name_.empty();
} }
...@@ -207,7 +232,7 @@ class DRAGON_API Tensor { ...@@ -207,7 +232,7 @@ class DRAGON_API Tensor {
return capacity_; return capacity_;
} }
/*! \brief Return the total number of bytes */ /*! \brief Return the total number of data bytes */
size_t nbytes() const { size_t nbytes() const {
return size_ * meta_.itemsize(); return size_ * meta_.itemsize();
} }
...@@ -231,50 +256,51 @@ class DRAGON_API Tensor { ...@@ -231,50 +256,51 @@ class DRAGON_API Tensor {
return (int)dims_.size(); return (int)dims_.size();
} }
/*! \brief Return the dimension of specified axis */ /*! \brief Return the dimension of given axis */
int64_t dim(int64_t i) const { int64_t dim(int64_t i) const {
return dims_[axis(i)]; return dims_[axis(i)];
} }
/*! \brief Return the stride of specified axis */ /*! \brief Return the stride of given axis */
int64_t stride(int64_t i) const { int64_t stride(int64_t i) const {
return strides_[axis(i)]; return strides_[axis(i)];
} }
/*! \brief Return all the dimensions */ /*! \brief Return the tensor dimensions */
const vec64_t& dims() const { const vec64_t& dims() const {
return dims_; return dims_;
} }
/*! \brief Return all the strides */ /*! \brief Return the tensor strides */
const vec64_t& strides() const { const vec64_t& strides() const {
return strides_; return strides_;
} }
/*! \brief Return the number of elements along the [start, end) axes */ /*! \brief Return the total number of elements */
int64_t count() const {
return (int64_t)size_;
}
/*! \brief Return the number of elements counting along the given axes */
int64_t count(int64_t start, int64_t end) const { int64_t count(int64_t start, int64_t end) const {
int64_t nelements = 1; int64_t nelements = 1;
for (int64_t i = start; i < end; i++) for (int64_t i = start; i < end; i++) {
nelements *= dim(i); nelements *= dim(i);
return nelements;
} }
return nelements;
/*! \brief Return the total number of elements */
int64_t count() const {
return (int64_t)size_;
} }
/*! \brief Return the number of elements from the start axis */ /*! \brief Return the number of elements counting from the given axis */
int64_t count(int64_t start) const { int64_t count(int64_t start) const {
return count(start, ndim()); return count(start, ndim());
} }
/*! \brief Whether this tensor is empty */ /*! \brief Return whether the total number of elements is zero */
bool empty() const { bool empty() const {
return size_ == 0; return size_ == 0;
} }
/*! \brief Whether this tensor holds a valid memory */ /*! \brief Return whether the memory is set */
bool has_memory() const { bool has_memory() const {
return internal_memory_ != nullptr || external_memory_ != nullptr; return internal_memory_ != nullptr || external_memory_ != nullptr;
} }
...@@ -286,12 +312,12 @@ class DRAGON_API Tensor { ...@@ -286,12 +312,12 @@ class DRAGON_API Tensor {
return ptr; return ptr;
} }
/*! \brief Return the state of memory */ /*! \brief Return the memory state */
UnifiedMemory::State memory_state() const { UnifiedMemory::State memory_state() const {
return memory(true)->state(); return memory(true)->state();
} }
/*! \brief Try to get the raw const data pointer */ /*! \brief Try to return the raw const data pointer */
template <class Context> template <class Context>
const void* const_data_ptr() const { const void* const_data_ptr() const {
TypeId ctx_type = TypeMeta::Id<Context>(); TypeId ctx_type = TypeMeta::Id<Context>();
...@@ -307,7 +333,7 @@ class DRAGON_API Tensor { ...@@ -307,7 +333,7 @@ class DRAGON_API Tensor {
} }
} }
/*! \brief Try to get the raw mutable data pointer */ /*! \brief Try to return the raw mutable data pointer */
template <class Context> template <class Context>
void mutable_data_ptr(void** data_ptr) { void mutable_data_ptr(void** data_ptr) {
auto* mem = memory(); auto* mem = memory();
...@@ -327,21 +353,23 @@ class DRAGON_API Tensor { ...@@ -327,21 +353,23 @@ class DRAGON_API Tensor {
} }
} }
/*! \brief Try to allocate the raw data for memory */ /*!
* \brief Return the raw mutable data pointer.
*
 * If the memory is not set, create one to manage it with the given meta.
*/
template <class Context> template <class Context>
void* raw_mutable_data(const TypeMeta& meta) { void* raw_mutable_data(const TypeMeta& meta) {
void* data_ptr; void* data_ptr;
mutable_data_ptr<Context>(&data_ptr); mutable_data_ptr<Context>(&data_ptr);
// Return the data of memory directly // Return the data pointer directly
if (meta_ == meta && data_ptr) return data_ptr; if (meta_ == meta && data_ptr) return data_ptr;
// Create a new memory with knowned size // Create a new memory with the size and meta
CHECK_GT(size_, 0) << "\nInvalid tensor size."; CHECK_GT(size_, 0) << "\nInvalid tensor size.";
meta_ = meta; meta_ = meta;
capacity_ = size_ * meta.itemsize(); capacity_ = size_ * meta.itemsize();
internal_memory_.reset(new UnifiedMemory(meta_, capacity_)); internal_memory_.reset(new UnifiedMemory(meta_, capacity_));
// Allocate space
mutable_data_ptr<Context>(&data_ptr); mutable_data_ptr<Context>(&data_ptr);
// Call the constructor if necessary
if (meta_.ctor()) meta_.ctor()(data_ptr, size_); if (meta_.ctor()) meta_.ctor()(data_ptr, size_);
return data_ptr; return data_ptr;
} }
...@@ -360,7 +388,7 @@ class DRAGON_API Tensor { ...@@ -360,7 +388,7 @@ class DRAGON_API Tensor {
return const_data_ptr<Context>(); return const_data_ptr<Context>();
} }
/*! \brief Get the typed mutable data pointer */ /*! \brief Return the typed mutable data pointer */
template <typename T, class Context> template <typename T, class Context>
T* mutable_data() { T* mutable_data() {
void* data_ptr; void* data_ptr;
...@@ -377,7 +405,7 @@ class DRAGON_API Tensor { ...@@ -377,7 +405,7 @@ class DRAGON_API Tensor {
return static_cast<T*>(raw_mutable_data<Context>(TypeMeta::Make<T>())); return static_cast<T*>(raw_mutable_data<Context>(TypeMeta::Make<T>()));
} }
/*! \brief Get the typed const data pointer */ /*! \brief Return the typed const data pointer */
template <typename T, class Context> template <typename T, class Context>
const T* data() const { const T* data() const {
CHECK(meta_.Match<T>()) << "\nThe type of Tensor(" << name() << ") is " CHECK(meta_.Match<T>()) << "\nThe type of Tensor(" << name() << ") is "
...@@ -391,13 +419,13 @@ class DRAGON_API Tensor { ...@@ -391,13 +419,13 @@ class DRAGON_API Tensor {
version_ = version; version_ = version;
} }
/*! \brief Set the meta of data type */ /*! \brief Set the type meta */
Tensor* set_meta(const TypeMeta& meta) { Tensor* set_meta(const TypeMeta& meta) {
meta_ = meta; meta_ = meta;
return this; return this;
} }
/*! \brief Set the internal memory */ /*! \brief Set to manage the memory */
void set_memory(UnifiedMemory* memory) { void set_memory(UnifiedMemory* memory) {
if (memory != internal_memory_.get()) { if (memory != internal_memory_.get()) {
internal_memory_.reset(memory); internal_memory_.reset(memory);
...@@ -429,6 +457,8 @@ class DRAGON_API Tensor { ...@@ -429,6 +457,8 @@ class DRAGON_API Tensor {
/*! \brief The external memory indicator */ /*! \brief The external memory indicator */
bool own_memory_ = true; bool own_memory_ = true;
DISABLE_COPY_AND_ASSIGN(Tensor);
}; };
} // namespace dragon } // namespace dragon
......
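The `count()` overloads above multiply the dimensions over the half-open range `[start, end)`. A standalone sketch of that arithmetic for dims `(2, 3, 4)` is shown below; `dims` and the free `count` function are stand-ins for illustration, not the `dragon::Tensor` API itself.

```cpp
// Standalone sketch of the count() semantics shown above.
#include <cassert>
#include <cstdint>
#include <vector>

int64_t count(const std::vector<int64_t>& dims, int64_t start, int64_t end) {
  int64_t nelements = 1;
  for (int64_t i = start; i < end; ++i) {
    nelements *= dims[i];
  }
  return nelements;
}

int main() {
  const std::vector<int64_t> dims = {2, 3, 4};
  assert(count(dims, 0, dims.size()) == 24);  // count():     all elements
  assert(count(dims, 1, dims.size()) == 12);  // count(1):    from axis 1
  assert(count(dims, 1, 2) == 3);             // count(1, 2): axes [1, 2)
  return 0;
}
```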
...@@ -31,15 +31,41 @@ struct DRAGON_API TypeRegister { ...@@ -31,15 +31,41 @@ struct DRAGON_API TypeRegister {
} }
}; };
/*!
* \brief Metaclass for all types.
*
* TypeMeta is commonly used for type identification:
*
* \code{.cpp}
* auto meta1 = dragon::TypeMeta::Make<float>();
* auto meta2 = dragon::TypeMeta::Make<float>();
* std::cout << (meta1 == meta2) << std::endl; // 1
* std::cout << (meta1.id() == meta2.id()) << std::endl; // 1
* std::cout << meta1.Match<float>() << std::endl; // 1
* std::cout << (meta1.id() == dragon::TypeMeta::Id<float>()) << std::endl; // 1
* \endcode
*
* Default constructor and destructor are available for non-fundamental types:
*
* \code{.cpp}
* auto meta = dragon::TypeMeta::Make<std::string>();
* auto* raw_string_data = malloc(1 * meta.itemsize());
* meta.ctor()(raw_string_data, 1);
* auto* string_data = reinterpret_cast<std::string*>(raw_string_data);
* std::cout << string_data[0].size();
* meta.dtor()(raw_string_data, 1);
* \endcode
*/
class TypeMeta { class TypeMeta {
public: public:
typedef void (*PlacementNew)(void*, size_t); typedef void (*PlacementNew)(void*, size_t);
typedef void (*TypedCopy)(const void*, void*, size_t); typedef void (*TypedCopy)(const void*, void*, size_t);
typedef void (*TypedDestructor)(void*, size_t); typedef void (*TypedDestructor)(void*, size_t);
TypeMeta() /*! \brief Default constructor */
: id_(0), itemsize_(0), ctor_(nullptr), copy_(nullptr), dtor_(nullptr) {} TypeMeta() : id_(0), itemsize_(0) {}
/*! \brief Constructor with the other type meta */
TypeMeta(const TypeMeta& src) TypeMeta(const TypeMeta& src)
: id_(src.id_), : id_(src.id_),
itemsize_(src.itemsize_), itemsize_(src.itemsize_),
...@@ -57,32 +83,38 @@ class TypeMeta { ...@@ -57,32 +83,38 @@ class TypeMeta {
return *this; return *this;
} }
/*! \brief Return whether the two identifications are equal */
bool operator==(const TypeMeta& other) const { bool operator==(const TypeMeta& other) const {
return (id_ == other.id_); return (id_ == other.id_);
} }
/*! \brief Return whether the two identifications are not equal */
bool operator!=(const TypeMeta& other) const { bool operator!=(const TypeMeta& other) const {
return (id_ != other.id_); return (id_ != other.id_);
} }
/*! \brief Return the identification of given type */
template <typename T> template <typename T>
static TypeId Id() { static TypeId Id() {
return TypeRegister<T>::id(); return TypeRegister<T>::id();
} }
/*! \brief Return the item size of given type */
template <typename T> template <typename T>
static size_t Itemsize() { static size_t Itemsize() {
return sizeof(T); return sizeof(T);
} }
/*! \brief Call the constructor for each element */
template <typename T> template <typename T>
static void Ctor(void* ptr, size_t n) { static void Ctor(void* ptr, size_t n) {
T* typed_ptr = static_cast<T*>(ptr); T* typed_ptr = static_cast<T*>(ptr);
for (size_t i = 0; i < n; i++) { for (size_t i = 0; i < n; ++i) {
new (typed_ptr + i) T; new (typed_ptr + i) T;
} }
} }
/*! \brief Call the destructor for each element */
template <typename T> template <typename T>
static void Dtor(void* ptr, size_t n) { static void Dtor(void* ptr, size_t n) {
T* typed_ptr = static_cast<T*>(ptr); T* typed_ptr = static_cast<T*>(ptr);
...@@ -91,55 +123,66 @@ class TypeMeta { ...@@ -91,55 +123,66 @@ class TypeMeta {
} }
} }
/*! \brief Call the copy constructor for each element */
template <typename T> template <typename T>
static void Copy(const void* src, void* dst, size_t n) { static void Copy(const void* src, void* dst, size_t n) {
const T* typed_src = static_cast<const T*>(src); const T* typed_src = static_cast<const T*>(src);
T* typed_dst = static_cast<T*>(dst); T* typed_dst = static_cast<T*>(dst);
for (size_t i = 0; i < n; ++i) for (size_t i = 0; i < n; ++i) {
typed_dst[i] = typed_src[i]; typed_dst[i] = typed_src[i];
} }
}
#define FundMeta std::enable_if<std::is_fundamental<T>::value, TypeMeta>::type #define FundamentalTypeMeta \
std::enable_if<std::is_fundamental<T>::value, TypeMeta>::type
#define StructMeta \ #define StructuralTypeMeta \
std::enable_if< \ std::enable_if< \
!std::is_fundamental<T>::value && std::is_copy_assignable<T>::value, \ !std::is_fundamental<T>::value && std::is_copy_assignable<T>::value, \
TypeMeta>::type TypeMeta>::type
/*! \brief Return a type meta of given type */
template <typename T> template <typename T>
static typename FundMeta Make() { static typename FundamentalTypeMeta Make() {
return TypeMeta(Id<T>(), Itemsize<T>(), nullptr, nullptr, nullptr); return TypeMeta(Id<T>(), Itemsize<T>(), nullptr, nullptr, nullptr);
} }
/*! \brief Return a type meta of given type */
template <typename T> template <typename T>
static typename StructMeta Make() { static typename StructuralTypeMeta Make() {
return TypeMeta(Id<T>(), Itemsize<T>(), Ctor<T>, Copy<T>, Dtor<T>); return TypeMeta(Id<T>(), Itemsize<T>(), Ctor<T>, Copy<T>, Dtor<T>);
} }
#undef FundMeta #undef FundamentalTypeMeta
#undef StructMeta #undef StructuralTypeMeta
/*! \brief Return whether the meta is matched with given type */
template <typename T> template <typename T>
bool Match() const { bool Match() const {
return (id_ == Id<T>()); return (id_ == Id<T>());
} }
/*! \brief Return the type identification */
const TypeId& id() const { const TypeId& id() const {
return id_; return id_;
} }
/*! \brief Return the item size */
const size_t& itemsize() const { const size_t& itemsize() const {
return itemsize_; return itemsize_;
} }
/*! \brief Return the type constructor */
PlacementNew ctor() const { PlacementNew ctor() const {
return ctor_; return ctor_;
} }
/*! \brief Return the type destructor */
TypedDestructor dtor() const { TypedDestructor dtor() const {
return dtor_; return dtor_;
} }
/*! \brief Return the type copy constructor */
TypedCopy copy() const { TypedCopy copy() const {
return copy_; return copy_;
} }
...@@ -156,9 +199,9 @@ class TypeMeta { ...@@ -156,9 +199,9 @@ class TypeMeta {
private: private:
TypeId id_; TypeId id_;
size_t itemsize_; size_t itemsize_;
PlacementNew ctor_; PlacementNew ctor_ = nullptr;
TypedCopy copy_; TypedCopy copy_ = nullptr;
TypedDestructor dtor_; TypedDestructor dtor_ = nullptr;
}; };
} // namespace dragon } // namespace dragon
......
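The two `Make()` overloads above are selected with `std::enable_if` on the return type: fundamental types get a meta without constructor hooks, while copy-assignable classes get one with `Ctor`/`Copy`/`Dtor` registered. Below is a stripped-down sketch of that selection; `Meta` and the free `Make` function are illustrative stand-ins, not the Dragon `TypeMeta` itself.

```cpp
// Illustrative sketch of the enable_if selection used by TypeMeta::Make<T>().
#include <iostream>
#include <string>
#include <type_traits>

struct Meta {
  bool has_ctor;  // whether placement-new/copy/destructor hooks are registered
};

template <typename T>
typename std::enable_if<std::is_fundamental<T>::value, Meta>::type Make() {
  return Meta{false};
}

template <typename T>
typename std::enable_if<
    !std::is_fundamental<T>::value && std::is_copy_assignable<T>::value,
    Meta>::type
Make() {
  return Meta{true};
}

int main() {
  std::cout << Make<float>().has_ctor << std::endl;        // 0
  std::cout << Make<std::string>().has_ctor << std::endl;  // 1
  return 0;
}
```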
...@@ -128,7 +128,7 @@ void Workspace::RunGraph( ...@@ -128,7 +128,7 @@ void Workspace::RunGraph(
const int stream) { const int stream) {
CHECK(graph_map_.count(name)) CHECK(graph_map_.count(name))
<< "\nGraph(" << name << ") is not in current workspace."; << "\nGraph(" << name << ") is not in current workspace.";
graph_map_[name]->Run(include, exclude, stream); graph_map_[name]->Run(stream, include, exclude);
} }
void Workspace::RegisterAlias(const string& target, const string& alias) { void Workspace::RegisterAlias(const string& target, const string& alias) {
......
...@@ -17,55 +17,58 @@ ...@@ -17,55 +17,58 @@
namespace dragon { namespace dragon {
class Workspace { /*!
* \brief Sandbox to isolate the resources and computations.
*/
class DRAGON_API Workspace {
public: public:
/*! \brief Constructor */ /*! \brief Constructor with the name */
DRAGON_API explicit Workspace(const string& name); explicit Workspace(const string& name);
/*! \brief Merge resources from other */ /*! \brief Merge resources from other */
DRAGON_API void MergeFrom(Workspace*); void MergeFrom(Workspace* other);
/*! \brief Clear the cached resources */ /*! \brief Clear the cached resources */
DRAGON_API void Clear(); void Clear();
/* \brief Return a unique name */ /* \brief Return a unique name */
DRAGON_API string UniqueName( string UniqueName(
const string& name, const string& name,
const string& suffix, const string& suffix,
const string& scope = "", const string& scope = "",
const bool zero_based = false); const bool zero_based = false);
/* \brief Register an alias for the target */ /* \brief Register an alias for the target */
DRAGON_API void RegisterAlias(const string& target, const string& alias); void RegisterAlias(const string& target, const string& alias);
/*! \brief Return whether the tensor exists */ /*! \brief Return whether the tensor exists */
DRAGON_API bool HasTensor(const string& name, bool external = true) const { bool HasTensor(const string& name, bool external = true) const {
return TryGetTensor(name, external) == nullptr ? false : true; return TryGetTensor(name, external) == nullptr ? false : true;
} }
/*! \brief Create the tensor */ /*! \brief Create the tensor */
DRAGON_API Tensor* CreateTensor(const string&, FillerInfo* = nullptr); Tensor* CreateTensor(const string&, FillerInfo* = nullptr);
/*! \brief Try to return the tensor */ /*! \brief Try to return the tensor */
DRAGON_API Tensor* TryGetTensor(const string&, bool = true) const; Tensor* TryGetTensor(const string&, bool = true) const;
/*! \brief Return the tensor */ /*! \brief Return the tensor */
DRAGON_API Tensor* GetTensor(const string&, bool = true) const; Tensor* GetTensor(const string&, bool = true) const;
/*! \brief Reset the tensor */ /*! \brief Reset the tensor */
DRAGON_API void ResetTensor(const string&); void ResetTensor(const string&);
/*! \brief Return the filler info */ /*! \brief Return the filler info */
DRAGON_API FillerInfo* GetFillerInfo(const string&); FillerInfo* GetFillerInfo(const string&);
/*! \brief Run the operator */ /*! \brief Run the operator */
DRAGON_API void RunOperator(const OperatorDef&); void RunOperator(const OperatorDef&);
/*! \brief Create the graph */ /*! \brief Create the graph */
DRAGON_API GraphBase* CreateGraph(const GraphDef&); GraphBase* CreateGraph(const GraphDef&);
/*! \brief Run the graph */ /*! \brief Run the graph */
DRAGON_API void RunGraph( void RunGraph(
const string& graph_name, const string& graph_name,
const string& include = "", const string& include = "",
const string& exclude = "", const string& exclude = "",
...@@ -77,12 +80,12 @@ class Workspace { ...@@ -77,12 +80,12 @@ class Workspace {
} }
/*! \brief Return the name of cached tensors */ /*! \brief Return the name of cached tensors */
DRAGON_API vector<string> tensors() const; vector<string> tensors() const;
/*! \brief Return the name of cached graphs */ /*! \brief Return the name of cached graphs */
DRAGON_API vector<string> graphs() const; vector<string> graphs() const;
/*! \brief Provide a group of the shared byte data */ /*! \brief Return a group of the shared raw data */
template <class Context> template <class Context>
vector<void*> data(const vector<size_t>& segments) { vector<void*> data(const vector<size_t>& segments) {
int64_t nbytes = 0; int64_t nbytes = 0;
...@@ -96,7 +99,7 @@ class Workspace { ...@@ -96,7 +99,7 @@ class Workspace {
return ret; return ret;
} }
/*! \brief Provide a group of shared typed data */ /*! \brief Return a group of shared typed data */
template <typename T, class Context> template <typename T, class Context>
vector<T*> data(const vector<int64_t>& segments) { vector<T*> data(const vector<int64_t>& segments) {
vector<size_t> segments_in_byte; vector<size_t> segments_in_byte;
...@@ -133,6 +136,8 @@ class Workspace { ...@@ -133,6 +136,8 @@ class Workspace {
/*! \brief The cached graphs */ /*! \brief The cached graphs */
Map<string, unique_ptr<GraphBase>> graph_map_; Map<string, unique_ptr<GraphBase>> graph_map_;
DISABLE_COPY_AND_ASSIGN(Workspace);
}; };
} // namespace dragon } // namespace dragon
......
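A hedged usage sketch of the `Workspace` interface declared above follows. It assumes the dragon headers are available and that the methods behave as their briefs describe; treat it as an illustration rather than an official example.

```cpp
// Hypothetical usage of the Workspace API; not taken from the Dragon examples.
#include "dragon/core/workspace.h"

void WorkspaceSketch() {
  dragon::Workspace ws("sandbox");
  ws.CreateTensor("data");                // create (or fetch) a named tensor
  if (ws.HasTensor("data")) {
    auto* tensor = ws.GetTensor("data");  // look the tensor up again
    (void)tensor;
  }
  // Derive a unique, suffixed name inside an optional scope.
  const auto name = ws.UniqueName("data", ":0", "", false);
  (void)name;
  ws.Clear();                             // drop the cached resources
}
```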
...@@ -69,7 +69,7 @@ class NumpyFetcher : public TensorFetcherBase { ...@@ -69,7 +69,7 @@ class NumpyFetcher : public TensorFetcherBase {
tensor.nbytes(), tensor.nbytes(),
PyArray_DATA(reinterpret_cast<PyArrayObject*>(array)), PyArray_DATA(reinterpret_cast<PyArrayObject*>(array)),
tensor.raw_data<CUDAContext>(), tensor.raw_data<CUDAContext>(),
tensor.memory()->device_id()); tensor.memory()->device());
} else { } else {
CPUContext::Memcpy<CPUContext, CPUContext>( CPUContext::Memcpy<CPUContext, CPUContext>(
tensor.nbytes(), tensor.nbytes(),
......
...@@ -130,7 +130,7 @@ void RegisterModule(py::module& m) { ...@@ -130,7 +130,7 @@ void RegisterModule(py::module& m) {
#ifdef USE_CUDA #ifdef USE_CUDA
if (device_id < 0) device_id = CUDAContext::current_device(); if (device_id < 0) device_id = CUDAContext::current_device();
auto stream = CUDAContext::object()->stream(device_id, stream_id); auto stream = CUDAContext::object()->stream(device_id, stream_id);
CUDAContext::SyncStream(stream); CUDAContext::SynchronizeStream(stream);
#endif #endif
}); });
......
...@@ -50,7 +50,7 @@ class DLPackWrapper { ...@@ -50,7 +50,7 @@ class DLPackWrapper {
} else { } else {
data = memory->mutable_cuda_data(nbytes); data = memory->mutable_cuda_data(nbytes);
} }
ctx.device_id = memory->device_id(); ctx.device_id = memory->device();
ctx.device_type = DLDeviceType::kDLGPU; ctx.device_type = DLDeviceType::kDLGPU;
break; break;
} }
......
...@@ -191,9 +191,12 @@ PYBIND11_MODULE(libdragon_python, m) { ...@@ -191,9 +191,12 @@ PYBIND11_MODULE(libdragon_python, m) {
auto* graph = self->CreateGraph(graph_def); auto* graph = self->CreateGraph(graph_def);
if (verbose) { if (verbose) {
bool could_be_serialized = true; bool could_be_serialized = true;
const auto& def = graph->opt_def(); const auto& def = graph->optimized_def();
for (auto& op : def.op()) for (auto& op : def.op()) {
if (op.type() == "GivenTensorFill") could_be_serialized = false; if (op.type() == "GivenTensorFill") {
could_be_serialized = false;
}
}
if (could_be_serialized) { if (could_be_serialized) {
auto msg = string("\n") + def.DebugString(); auto msg = string("\n") + def.DebugString();
msg.pop_back(); msg.pop_back();
......
...@@ -3,6 +3,9 @@ message(STATUS "Build module: ${CMAKE_CURRENT_LIST_DIR}") ...@@ -3,6 +3,9 @@ message(STATUS "Build module: ${CMAKE_CURRENT_LIST_DIR}")
# ---[ Defines # ---[ Defines
add_definitions(-DBUILD_RUNTIME) add_definitions(-DBUILD_RUNTIME)
if (USE_MPI)
remove_definitions(-DUSE_MPI)
endif()
# ---[ Sources # ---[ Sources
set(MODULE_INCLUDES "") set(MODULE_INCLUDES "")
......
...@@ -3,28 +3,33 @@ ...@@ -3,28 +3,33 @@
namespace dragon { namespace dragon {
int type_from_string(std::string type) { namespace {
if (type == "CPU") {
int type_from_string(const std::string& device_type) {
if (device_type == "CPU") {
return 0; return 0;
} else if (type == "GPU") { } else if (device_type == "GPU") {
return 1; return 1;
} else if (type == "CUDA") { } else if (device_type == "CUDA") {
return 1; return 1;
} }
LOG(FATAL) << "Unknown device type: " << type << ", " LOG(FATAL) << "Unsupported device type: " << device_type << "\n"
<< "known device types: " << "Following device types are supported: {"
<< "CPU, " << " * CPU\n"
<< "GPU, " << " * GPU\n"
<< "CUDA"; << " * CUDA\n"
<< "}";
return -1; return -1;
} }
} // namespace
Device::Device() : device_type_(0), device_id_(0) {} Device::Device() : device_type_(0), device_id_(0) {}
Device::Device(std::string device_type, int device_id) Device::Device(const std::string& device_type, int device_id)
: device_type_(type_from_string(device_type)), device_id_(device_id) {} : device_type_(type_from_string(device_type)), device_id_(device_id) {}
Device::Device(std::string device_type) Device::Device(const std::string& device_type)
: device_type_(type_from_string(device_type)), device_id_(0) {} : device_type_(type_from_string(device_type)), device_id_(0) {}
} // namespace dragon } // namespace dragon
...@@ -40,8 +40,8 @@ typedef class Workspace* Workspace_t; ...@@ -40,8 +40,8 @@ typedef class Workspace* Workspace_t;
class DRAGON_API Device { class DRAGON_API Device {
public: public:
Device(); Device();
explicit Device(std::string device_type); explicit Device(const std::string& device_type);
Device(std::string device_type, int device_id); Device(const std::string& device_type, int device_id);
const int& device_type() const { const int& device_type() const {
return device_type_; return device_type_;
...@@ -65,7 +65,7 @@ DRAGON_API Workspace_t ResetWorkspace(Workspace_t ws); ...@@ -65,7 +65,7 @@ DRAGON_API Workspace_t ResetWorkspace(Workspace_t ws);
DRAGON_API Workspace_t ResetWorkspace(const std::string& name); DRAGON_API Workspace_t ResetWorkspace(const std::string& name);
DRAGON_API void MoveWorkspace(Workspace_t dst, Workspace_t src); DRAGON_API void MoveWorkspace(Workspace_t dest, Workspace_t src);
DRAGON_API void DestroyWorkspace(Workspace_t ws); DRAGON_API void DestroyWorkspace(Workspace_t ws);
...@@ -76,15 +76,13 @@ DRAGON_API void DestroyWorkspace(const std::string& name); ...@@ -76,15 +76,13 @@ DRAGON_API void DestroyWorkspace(const std::string& name);
*/ */
DRAGON_API std::string DRAGON_API std::string
CreateGraph(const GraphDef_t graph_def, const Device& device, Workspace_t ws); CreateGraph(const GraphDef_t def, const Device& device, Workspace_t ws);
DRAGON_API std::string CreateGraph( DRAGON_API std::string
const std::string& graph_file, CreateGraph(const std::string& file, const Device& device, Workspace_t ws);
const Device& device,
Workspace_t ws);
DRAGON_API void DRAGON_API void
RunGraph(const std::string& graph_name, Workspace_t ws, int stream_id = 0); RunGraph(const std::string& name, Workspace_t ws, int stream = 0);
/*! /*!
* Tensor API * Tensor API
...@@ -111,9 +109,9 @@ DRAGON_API T* FetchTensor( ...@@ -111,9 +109,9 @@ DRAGON_API T* FetchTensor(
* Proto API * Proto API
*/ */
DRAGON_API void CreateGraphDef(GraphDef_t* graph_def); DRAGON_API void CreateGraphDef(GraphDef_t* def);
DRAGON_API void DestroyGraphDef(GraphDef_t graph_def); DRAGON_API void DestroyGraphDef(GraphDef_t def);
/*! /*!
* Model API * Model API
...@@ -121,8 +119,8 @@ DRAGON_API void DestroyGraphDef(GraphDef_t graph_def); ...@@ -121,8 +119,8 @@ DRAGON_API void DestroyGraphDef(GraphDef_t graph_def);
DRAGON_API void LoadONNXModel( DRAGON_API void LoadONNXModel(
const std::string& model_file, const std::string& model_file,
GraphDef_t init_graph, GraphDef_t init_def,
GraphDef_t pred_graph, GraphDef_t pred_def,
std::vector<std::string>& inputs, std::vector<std::string>& inputs,
std::vector<std::string>& outputs); std::vector<std::string>& outputs);
......
#include "dragon/core/common.h" #include "dragon/core/workspace.h"
#include "dragon/modules/runtime/dragon_runtime.h" #include "dragon/modules/runtime/dragon_runtime.h"
#include "dragon/onnx/onnx_backend.h" #include "dragon/onnx/onnx_backend.h"
#include "dragon/utils/proto_utils.h" #include "dragon/utils/proto_utils.h"
...@@ -9,7 +9,7 @@ std::mutex g_mutex; ...@@ -9,7 +9,7 @@ std::mutex g_mutex;
Map<string, unique_ptr<Workspace>> g_workspaces; Map<string, unique_ptr<Workspace>> g_workspaces;
Map<string, vector<string>> sub_workspaces; Map<string, vector<string>> sub_workspaces;
Workspace* CreateWorkspace(const string& name) { Workspace_t CreateWorkspace(const string& name) {
std::unique_lock<std::mutex> lock(g_mutex); std::unique_lock<std::mutex> lock(g_mutex);
LOG(INFO) << "Create the Workspace(" << name << ")."; LOG(INFO) << "Create the Workspace(" << name << ").";
if (g_workspaces.count(name)) return g_workspaces[name].get(); if (g_workspaces.count(name)) return g_workspaces[name].get();
...@@ -19,7 +19,7 @@ Workspace* CreateWorkspace(const string& name) { ...@@ -19,7 +19,7 @@ Workspace* CreateWorkspace(const string& name) {
return g_workspaces[name].get(); return g_workspaces[name].get();
} }
Workspace* ResetWorkspace(const string& name) { Workspace_t ResetWorkspace(const string& name) {
std::unique_lock<std::mutex> lock(g_mutex); std::unique_lock<std::mutex> lock(g_mutex);
CHECK(g_workspaces.count(name)) CHECK(g_workspaces.count(name))
<< "\nWorkspace(" << name << ") does not exist." << "\nWorkspace(" << name << ") does not exist."
...@@ -34,19 +34,19 @@ Workspace* ResetWorkspace(const string& name) { ...@@ -34,19 +34,19 @@ Workspace* ResetWorkspace(const string& name) {
return g_workspaces[name].get(); return g_workspaces[name].get();
} }
Workspace* ResetWorkspace(Workspace_t ws) { Workspace_t ResetWorkspace(Workspace_t ws) {
CHECK(ws) << "\nGiven workspace is invalid."; CHECK(ws) << "\nGiven workspace is invalid.";
return ResetWorkspace(ws->name()); return ResetWorkspace(ws->name());
} }
void MoveWorkspace(Workspace_t dst, Workspace_t src) { void MoveWorkspace(Workspace_t dest, Workspace_t src) {
std::unique_lock<std::mutex> lock(g_mutex); std::unique_lock<std::mutex> lock(g_mutex);
CHECK(src) << "\nGiven source workspace is invalid."; CHECK(src) << "\nGiven source workspace is invalid.";
CHECK(dst) << "\nGiven destination workspace is invalid."; CHECK(dest) << "\nGiven destination workspace is invalid.";
dst->MergeFrom(src); dest->MergeFrom(src);
sub_workspaces[dst->name()].push_back(src->name()); sub_workspaces[dest->name()].push_back(src->name());
LOG(INFO) << "Move the Workspace(" << src->name() << ") " LOG(INFO) << "Move the Workspace(" << src->name() << ") "
<< "into the Workspace(" << dst->name() << ")."; << "into the Workspace(" << dest->name() << ").";
} }
void DestroyWorkspace(const string& name) { void DestroyWorkspace(const string& name) {
...@@ -63,27 +63,25 @@ void DestroyWorkspace(Workspace_t ws) { ...@@ -63,27 +63,25 @@ void DestroyWorkspace(Workspace_t ws) {
return DestroyWorkspace(ws->name()); return DestroyWorkspace(ws->name());
} }
string string CreateGraph(const GraphDef_t def, const Device& device, Workspace_t ws) {
CreateGraph(const GraphDef_t graph_def, const Device& device, Workspace_t ws) { auto def_v2(*def);
auto graph_def_copy(*graph_def); auto* device_option = def_v2.mutable_device_option();
// Overwritten device options
DeviceOption* device_option = graph_def_copy.mutable_device_option();
device_option->set_device_type((DeviceTypeProto)device.device_type()); device_option->set_device_type((DeviceTypeProto)device.device_type());
device_option->set_device_id(device.device_id()); device_option->set_device_id(device.device_id());
auto* graph = ws->CreateGraph(graph_def_copy); auto* graph = ws->CreateGraph(def_v2);
if (!graph) LOG(FATAL) << "Cannot create the graph."; if (!graph) LOG(FATAL) << "Cannot create the graph.";
return graph->name(); return graph->name();
} }
std::string std::string
CreateGraph(const string& graph_file, const Device& device, Workspace_t ws) { CreateGraph(const string& file, const Device& device, Workspace_t ws) {
GraphDef graph_def; GraphDef graph_def;
ParseProtoFromText(graph_file.c_str(), &graph_def); ParseProtoFromText(file.c_str(), &graph_def);
return CreateGraph(&graph_def, device, ws); return CreateGraph(&graph_def, device, ws);
} }
void RunGraph(const string& graph_name, Workspace_t ws, const int stream_id) { void RunGraph(const string& name, Workspace_t ws, int stream) {
ws->RunGraph(graph_name, "", "", stream_id); ws->RunGraph(name, "", "", stream);
} }
void CreateTensor(const string& name, Workspace_t ws) { void CreateTensor(const string& name, Workspace_t ws) {
...@@ -148,34 +146,38 @@ void FeedTensor( ...@@ -148,34 +146,38 @@ void FeedTensor(
tensor->raw_mutable_data<CPUContext>(), tensor->raw_mutable_data<CPUContext>(),
static_cast<const void*>(data)); static_cast<const void*>(data));
} else { } else {
LOG(FATAL) << "Unknown device type."; LOG(FATAL) << "Unsupported device type.";
} }
} }
DRAGON_API void CreateGraphDef(GraphDef_t* graph_def) { void CreateGraphDef(GraphDef_t* def) {
*graph_def = new GraphDef(); *def = new GraphDef();
} }
DRAGON_API void DestroyGraphDef(GraphDef_t graph_def) { void DestroyGraphDef(GraphDef_t def) {
if (graph_def) delete graph_def; if (def) {
delete def;
}
} }
void LoadONNXModel( void LoadONNXModel(
const string& model_file, const string& model_file,
GraphDef_t init_graph, GraphDef_t init_def,
GraphDef_t pred_graph, GraphDef_t pred_def,
vector<string>& inputs, vector<string>& inputs,
vector<string>& outputs) { vector<string>& outputs) {
LOG(INFO) << "Load Model: " << model_file << "......"; LOG(INFO) << "Load Model: " << model_file << "......";
LOG(INFO) << "Format: ONNX"; LOG(INFO) << "Format: ONNX";
onnx::ONNXBackend onnx_backend; onnx::ONNXBackend onnx_backend;
onnx_backend.Prepare(model_file, init_graph, pred_graph); onnx_backend.Prepare(model_file, init_def, pred_def);
inputs.clear(); inputs.clear();
outputs.clear(); outputs.clear();
for (const auto& e : pred_graph->input()) for (const auto& input : pred_def->input()) {
inputs.emplace_back(e); inputs.push_back(input);
for (const auto& e : pred_graph->output()) }
outputs.emplace_back(e); for (const auto& output : pred_def->output()) {
outputs.push_back(output);
}
} }
#define INSTANTIATE_API(T) \ #define INSTANTIATE_API(T) \
......
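Putting the runtime entry points above together, here is a hedged end-to-end sketch: load an ONNX model, build the initializer and prediction graphs on a device, run them, and release everything. The `"model.onnx"` path is a placeholder and error handling is omitted; only calls declared in the runtime header above are used.

```cpp
// Hypothetical workflow built from the runtime API declared above.
#include <string>
#include <vector>
#include "dragon/modules/runtime/dragon_runtime.h"

void RuntimeSketch() {
  auto* ws = dragon::CreateWorkspace("demo");
  dragon::Device device("CUDA", 0);

  dragon::GraphDef_t init_def, pred_def;
  dragon::CreateGraphDef(&init_def);
  dragon::CreateGraphDef(&pred_def);

  std::vector<std::string> inputs, outputs;
  dragon::LoadONNXModel("model.onnx", init_def, pred_def, inputs, outputs);

  // Run the initializer graph once, then the prediction graph.
  const auto init_name = dragon::CreateGraph(init_def, device, ws);
  const auto pred_name = dragon::CreateGraph(pred_def, device, ws);
  dragon::RunGraph(init_name, ws, 0);
  dragon::RunGraph(pred_name, ws, 0);

  dragon::DestroyGraphDef(init_def);
  dragon::DestroyGraphDef(pred_def);
  dragon::DestroyWorkspace(ws);
}
```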
...@@ -8,54 +8,56 @@ namespace dragon { ...@@ -8,54 +8,56 @@ namespace dragon {
#define ELIGIBLE_TENSOR_TYPES \ #define ELIGIBLE_TENSOR_TYPES \
{ "bool", "int8", "uint8", "int32", "int64", "float16", "float32", "float64" } { "bool", "int8", "uint8", "int32", "int64", "float16", "float32", "float64" }
#define DEFINE_TYPE_A_TO_B(Ta, type_str, Tb) \ #define DISPATCH_TYPE_TO(InputType, OutputType) \
if (dtype() == type_str) { \ if (dtype() == types::to_string<OutputType>()) { \
if (InputSize() != 0) { \ if (InputSize() != 0) { \
Output(0)->ReshapeLike(Input(0)); \ Output(0)->ReshapeLike(Input(0)); \
auto* x = Input(0).template data<Ta, Context>(); \ auto* x = Input(0).template data<InputType, Context>(); \
auto* y = Output(0)->template mutable_data<Tb, Context>(); \ auto* y = Output(0)->template mutable_data<OutputType, Context>(); \
kernel::Cast(Input(0).count(), x, y, ctx()); \ kernel::Cast(Input(0).count(), x, y, ctx()); \
} else { \ } else { \
auto n = Output(0)->count(); \ auto n = Output(0)->count(); \
auto* x = Output(0)->template data<Ta, Context>(); \ auto* x = Output(0)->template data<InputType, Context>(); \
auto* scratch = ws()->template data<Tb, Context>({n})[0]; \ auto* scratch = ws()->template data<OutputType, Context>({n})[0]; \
kernel::Cast(n, x, scratch, ctx()); \ kernel::Cast(n, x, scratch, ctx()); \
ctx()->FinishDeviceComputation(); \ ctx()->FinishDeviceComputation(); \
auto* y = Output(0)->template mutable_data<Tb, Context>(); \ auto* y = Output(0)->template mutable_data<OutputType, Context>(); \
math::Copy(n, scratch, y, ctx()); \ math::Copy(n, scratch, y, ctx()); \
} \ } \
return; \ return; \
} }
#define DEFINE_TYPE_A_TO_ALL(Ta) \ #define DISPATCH_TYPE_TO_ALL(InputType) \
DEFINE_TYPE_A_TO_B(Ta, "bool", bool); \ DISPATCH_TYPE_TO(InputType, bool); \
DEFINE_TYPE_A_TO_B(Ta, "int8", int8_t); \ DISPATCH_TYPE_TO(InputType, int8_t); \
DEFINE_TYPE_A_TO_B(Ta, "uint8", uint8_t); \ DISPATCH_TYPE_TO(InputType, uint8_t); \
DEFINE_TYPE_A_TO_B(Ta, "int32", int); \ DISPATCH_TYPE_TO(InputType, int); \
DEFINE_TYPE_A_TO_B(Ta, "int64", int64_t); \ DISPATCH_TYPE_TO(InputType, int64_t); \
DEFINE_TYPE_A_TO_B(Ta, "float16", float16); \ DISPATCH_TYPE_TO(InputType, float16); \
DEFINE_TYPE_A_TO_B(Ta, "float32", float); \ DISPATCH_TYPE_TO(InputType, float); \
DEFINE_TYPE_A_TO_B(Ta, "float64", double) DISPATCH_TYPE_TO(InputType, double); \
LOG(FATAL) << MessageForUnsupported(dtype(), ELIGIBLE_TENSOR_TYPES);
#define DISPATCH_WITH_TENSOR(X) \ #define DISPATCH_WITH_TENSOR(X) \
if (XIsType(X, bool)) { \ if (XIsType(X, bool)) { \
DEFINE_TYPE_A_TO_ALL(bool); \ DISPATCH_TYPE_TO_ALL(bool); \
} else if (XIsType(X, int8_t)) { \ } else if (XIsType(X, int8_t)) { \
DEFINE_TYPE_A_TO_ALL(int8_t); \ DISPATCH_TYPE_TO_ALL(int8_t); \
} else if (XIsType(X, uint8_t)) { \ } else if (XIsType(X, uint8_t)) { \
DEFINE_TYPE_A_TO_ALL(uint8_t); \ DISPATCH_TYPE_TO_ALL(uint8_t); \
} else if (XIsType(X, int)) { \ } else if (XIsType(X, int)) { \
DEFINE_TYPE_A_TO_ALL(int); \ DISPATCH_TYPE_TO_ALL(int); \
} else if (XIsType(X, int64_t)) { \ } else if (XIsType(X, int64_t)) { \
DEFINE_TYPE_A_TO_ALL(int64_t); \ DISPATCH_TYPE_TO_ALL(int64_t); \
} else if (XIsType(X, float16)) { \ } else if (XIsType(X, float16)) { \
DEFINE_TYPE_A_TO_ALL(float16); \ DISPATCH_TYPE_TO_ALL(float16); \
} else if (XIsType(X, float)) { \ } else if (XIsType(X, float)) { \
DEFINE_TYPE_A_TO_ALL(float); \ DISPATCH_TYPE_TO_ALL(float); \
} else if (XIsType(X, double)) { \ } else if (XIsType(X, double)) { \
DEFINE_TYPE_A_TO_ALL(double); \ DISPATCH_TYPE_TO_ALL(double); \
} else { \ } else { \
LOG(FATAL) << TypeString(X, ELIGIBLE_TENSOR_TYPES); \ LOG(FATAL) << MessageForUnsupported( \
types::to_string(X.meta()), ELIGIBLE_TENSOR_TYPES); \
} }
template <class Context> template <class Context>
...@@ -101,8 +103,8 @@ OPERATOR_SCHEMA(CastGradient) ...@@ -101,8 +103,8 @@ OPERATOR_SCHEMA(CastGradient)
REGISTER_GRADIENT(Cast, SimpleGradientMaker); REGISTER_GRADIENT(Cast, SimpleGradientMaker);
#undef ELIGIBLE_TENSOR_TYPES #undef ELIGIBLE_TENSOR_TYPES
#undef DEFINE_TYPE_A_TO_B #undef DISPATCH_TYPE_TO
#undef DEFINE_TYPE_A_TO_ALL #undef DISPATCH_TYPE_TO_ALL
#undef DISPATCH_WITH_TENSOR #undef DISPATCH_WITH_TENSOR
} // namespace dragon } // namespace dragon
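The Cast operator above chains two macro layers: `DISPATCH_WITH_TENSOR` branches on the input's runtime type, and `DISPATCH_TYPE_TO_ALL` branches on the requested dtype string. A compact, self-contained imitation of that pattern is shown below; the names, the two supported dtypes, and `CastImpl` are illustrative only and are not the Dragon macros.

```cpp
// Toy imitation of the two-level dispatch used by the Cast operator.
#include <iostream>
#include <string>

template <typename InputType, typename OutputType>
void CastImpl(size_t n) {
  std::cout << "cast " << n << " elements (" << sizeof(InputType)
            << "B -> " << sizeof(OutputType) << "B)\n";
}

#define DISPATCH_TO(OutputType, type_name) \
  if (dtype == type_name) {                \
    CastImpl<InputType, OutputType>(n);    \
    return;                                \
  }

template <typename InputType>
void DispatchToAll(const std::string& dtype, size_t n) {
  DISPATCH_TO(float, "float32");
  DISPATCH_TO(double, "float64");
  std::cout << "unsupported dtype: " << dtype << "\n";
}
#undef DISPATCH_TO

int main() {
  DispatchToAll<int>("float32", 8);    // -> CastImpl<int, float>
  DispatchToAll<float>("float64", 8);  // -> CastImpl<float, double>
  return 0;
}
```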
...@@ -52,7 +52,8 @@ void ChannelNormalizeOp<Context>::DoRunWithType() { ...@@ -52,7 +52,8 @@ void ChannelNormalizeOp<Context>::DoRunWithType() {
} else if (dtype() == "float64") { } else if (dtype() == "float64") {
DoRunWithTypeAndCast<T, double>(); DoRunWithTypeAndCast<T, double>();
} else { } else {
LOG(FATAL) << TypeString(dtype(), {"float16", "float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
dtype(), {"float16", "float32", "float64"});
} }
} }
......
...@@ -60,7 +60,7 @@ void MultinomialOp<Context>::DoRunWithType() { ...@@ -60,7 +60,7 @@ void MultinomialOp<Context>::DoRunWithType() {
template <class Context> template <class Context>
void MultinomialOp<Context>::RunOnDevice() { void MultinomialOp<Context>::RunOnDevice() {
ctx()->set_stream_id(0); // Enforce the default stream ctx()->set_stream(0); // Enforce the default stream
DispatchHelper<TensorTypes<float, double>>::Call(this, Input(0)); DispatchHelper<TensorTypes<float, double>>::Call(this, Input(0));
} }
......
...@@ -124,23 +124,24 @@ template <class Context> ...@@ -124,23 +124,24 @@ template <class Context>
void CollectiveOp<Context>::RunOnDevice() { void CollectiveOp<Context>::RunOnDevice() {
if (communication_ == "ALLREDUCE") { if (communication_ == "ALLREDUCE") {
for (int i = 0; i < InputSize(); i++) { for (int i = 0; i < InputSize(); i++) {
if (XIsType(Input(i), int8_t)) { auto& X = Input(i);
if (XIsType(X, int8_t)) {
AllReduceDispatcher<int8_t>(&Input(i)); AllReduceDispatcher<int8_t>(&Input(i));
} else if (XIsType(Input(i), uint8_t)) { } else if (XIsType(X, uint8_t)) {
AllReduceDispatcher<uint8_t>(&Input(i)); AllReduceDispatcher<uint8_t>(&Input(i));
} else if (XIsType(Input(i), int)) { } else if (XIsType(X, int)) {
AllReduceDispatcher<int>(&Input(i)); AllReduceDispatcher<int>(&Input(i));
} else if (XIsType(Input(i), int64_t)) { } else if (XIsType(X, int64_t)) {
AllReduceDispatcher<int64_t>(&Input(i)); AllReduceDispatcher<int64_t>(&Input(i));
} else if (XIsType(Input(i), float16)) { } else if (XIsType(X, float16)) {
AllReduceDispatcher<float16>(&Input(i)); AllReduceDispatcher<float16>(&Input(i));
} else if (XIsType(Input(i), float)) { } else if (XIsType(X, float)) {
AllReduceDispatcher<float>(&Input(i)); AllReduceDispatcher<float>(&Input(i));
} else if (XIsType(Input(i), double)) { } else if (XIsType(X, double)) {
AllReduceDispatcher<double>(&Input(i)); AllReduceDispatcher<double>(&Input(i));
} else { } else {
LOG(FATAL) << TypeString( LOG(FATAL) << MessageForUnsupported(
Input(i), types::to_string(X.meta()),
{"int8", {"int8",
"uint8", "uint8",
"int32", "int32",
...@@ -152,25 +153,26 @@ void CollectiveOp<Context>::RunOnDevice() { ...@@ -152,25 +153,26 @@ void CollectiveOp<Context>::RunOnDevice() {
} }
} else if (communication_ == "BROADCAST") { } else if (communication_ == "BROADCAST") {
for (int i = 0; i < InputSize(); i++) { for (int i = 0; i < InputSize(); i++) {
if (XIsType(Input(i), bool)) { auto& X = Input(i);
if (XIsType(X, bool)) {
BroadcastDispatcher<bool>(&Input(i)); BroadcastDispatcher<bool>(&Input(i));
} else if (XIsType(Input(i), int8_t)) { } else if (XIsType(X, int8_t)) {
BroadcastDispatcher<int8_t>(&Input(i)); BroadcastDispatcher<int8_t>(&Input(i));
} else if (XIsType(Input(i), uint8_t)) { } else if (XIsType(X, uint8_t)) {
BroadcastDispatcher<uint8_t>(&Input(i)); BroadcastDispatcher<uint8_t>(&Input(i));
} else if (XIsType(Input(i), int)) { } else if (XIsType(X, int)) {
BroadcastDispatcher<int>(&Input(i)); BroadcastDispatcher<int>(&Input(i));
} else if (XIsType(Input(i), int64_t)) { } else if (XIsType(X, int64_t)) {
BroadcastDispatcher<int64_t>(&Input(i)); BroadcastDispatcher<int64_t>(&Input(i));
} else if (XIsType(Input(i), float16)) { } else if (XIsType(X, float16)) {
BroadcastDispatcher<float16>(&Input(i)); BroadcastDispatcher<float16>(&Input(i));
} else if (XIsType(Input(i), float)) { } else if (XIsType(X, float)) {
BroadcastDispatcher<float>(&Input(i)); BroadcastDispatcher<float>(&Input(i));
} else if (XIsType(Input(i), double)) { } else if (XIsType(X, double)) {
BroadcastDispatcher<double>(&Input(i)); BroadcastDispatcher<double>(&Input(i));
} else { } else {
LOG(FATAL) << TypeString( LOG(FATAL) << MessageForUnsupported(
Input(i), types::to_string(X.meta()),
{"bool", {"bool",
"int8", "int8",
"uint8", "uint8",
......
...@@ -149,7 +149,7 @@ class CollectiveOpBase : public Operator<Context> { ...@@ -149,7 +149,7 @@ class CollectiveOpBase : public Operator<Context> {
ncclComm_t nccl_comm() { ncclComm_t nccl_comm() {
auto ret = CUDAContext::object()->nccl_comm( auto ret = CUDAContext::object()->nccl_comm(
this->ctx()->template device_id(), this->ctx()->template device(),
group_str_, group_str_,
nullptr, nullptr,
comm_size_, comm_size_,
...@@ -162,7 +162,7 @@ class CollectiveOpBase : public Operator<Context> { ...@@ -162,7 +162,7 @@ class CollectiveOpBase : public Operator<Context> {
} }
Broadcast((uint8_t*)&comm_uuid, sizeof(comm_uuid)); Broadcast((uint8_t*)&comm_uuid, sizeof(comm_uuid));
ret = CUDAContext::object()->nccl_comm( ret = CUDAContext::object()->nccl_comm(
this->ctx()->template device_id(), this->ctx()->template device(),
group_str_, group_str_,
&comm_uuid, &comm_uuid,
comm_size_, comm_size_,
......
...@@ -85,7 +85,8 @@ void CuDNNCTCLossOp<Context>::RunOnDevice() { ...@@ -85,7 +85,8 @@ void CuDNNCTCLossOp<Context>::RunOnDevice() {
CUDNN_CHECK(cudnnSetCTCLossDescriptor(ctc_desc_, CUDNN_DATA_FLOAT)); CUDNN_CHECK(cudnnSetCTCLossDescriptor(ctc_desc_, CUDNN_DATA_FLOAT));
DoRunWithType<float>(); DoRunWithType<float>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
......
...@@ -72,7 +72,8 @@ void NLLLossOp<Context>::RunOnDevice() { ...@@ -72,7 +72,8 @@ void NLLLossOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float32", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -80,10 +81,12 @@ void NLLLossOp<Context>::RunOnDevice() { ...@@ -80,10 +81,12 @@ void NLLLossOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float64", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
...@@ -139,7 +142,8 @@ void NLLLossGradientOp<Context>::RunOnDevice() { ...@@ -139,7 +142,8 @@ void NLLLossGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float32", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -147,10 +151,12 @@ void NLLLossGradientOp<Context>::RunOnDevice() { ...@@ -147,10 +151,12 @@ void NLLLossGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float64", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
......
...@@ -72,7 +72,8 @@ void SigmoidFocalLossOp<Context>::RunOnDevice() { ...@@ -72,7 +72,8 @@ void SigmoidFocalLossOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float32", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -80,10 +81,12 @@ void SigmoidFocalLossOp<Context>::RunOnDevice() { ...@@ -80,10 +81,12 @@ void SigmoidFocalLossOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float64", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
...@@ -139,7 +142,8 @@ void SigmoidFocalLossGradientOp<Context>::RunOnDevice() { ...@@ -139,7 +142,8 @@ void SigmoidFocalLossGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float32", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -147,10 +151,12 @@ void SigmoidFocalLossGradientOp<Context>::RunOnDevice() { ...@@ -147,10 +151,12 @@ void SigmoidFocalLossGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float64", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
......
...@@ -82,7 +82,8 @@ void SparseSoftmaxCrossEntropyOp<Context>::RunOnDevice() { ...@@ -82,7 +82,8 @@ void SparseSoftmaxCrossEntropyOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float32", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -90,10 +91,12 @@ void SparseSoftmaxCrossEntropyOp<Context>::RunOnDevice() { ...@@ -90,10 +91,12 @@ void SparseSoftmaxCrossEntropyOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float64", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
...@@ -152,7 +155,8 @@ void SparseSoftmaxCrossEntropyGradientOp<Context>::RunOnDevice() { ...@@ -152,7 +155,8 @@ void SparseSoftmaxCrossEntropyGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float32", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -160,10 +164,12 @@ void SparseSoftmaxCrossEntropyGradientOp<Context>::RunOnDevice() { ...@@ -160,10 +164,12 @@ void SparseSoftmaxCrossEntropyGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float64", "int64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
......
...@@ -45,8 +45,8 @@ void AxpbyOp<Context>::RunOnDevice() { ...@@ -45,8 +45,8 @@ void AxpbyOp<Context>::RunOnDevice() {
} else if (XIsType(X, double)) { } else if (XIsType(X, double)) {
DoRunWithType<double>(&X, Y); DoRunWithType<double>(&X, Y);
} else } else
LOG(FATAL) << TypeString( LOG(FATAL) << MessageForUnsupported(
X, types::to_string(X.meta()),
{"int8", "uint8", "int32", "int64", "float16", "float32", "float64"}); {"int8", "uint8", "int32", "int64", "float16", "float32", "float64"});
} }
} }
......
...@@ -75,8 +75,8 @@ void MomentsOp<Context>::RunOnDevice() { ...@@ -75,8 +75,8 @@ void MomentsOp<Context>::RunOnDevice() {
} else if (XIsType(X, double)) { } else if (XIsType(X, double)) {
DoRunWithType<double, double>(); DoRunWithType<double, double>();
} else { } else {
LOG(FATAL) << TypeString( LOG(FATAL) << MessageForUnsupported(
X, types::to_string(X.meta()),
{"int8", "uint8", "int32", "int64", "float16", "float32", "float64"}); {"int8", "uint8", "int32", "int64", "float16", "float32", "float64"});
} }
} }
......
...@@ -55,7 +55,8 @@ void AccuracyOp<Context>::RunOnDevice() { ...@@ -55,7 +55,8 @@ void AccuracyOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<float, int64_t>(); DoRunWithType<float, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"int64", "float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float32", "int64"});
} }
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
if (XIsType(Input(1), double)) { if (XIsType(Input(1), double)) {
...@@ -63,10 +64,12 @@ void AccuracyOp<Context>::RunOnDevice() { ...@@ -63,10 +64,12 @@ void AccuracyOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), int64_t)) { } else if (XIsType(Input(1), int64_t)) {
DoRunWithType<double, int64_t>(); DoRunWithType<double, int64_t>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"int64", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float64", "int64"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32", "float64"});
} }
} }
......
...@@ -114,7 +114,8 @@ void BatchNormOp<Context>::RunOnDevice() { ...@@ -114,7 +114,8 @@ void BatchNormOp<Context>::RunOnDevice() {
InferenceImpl<float, float>(); InferenceImpl<float, float>();
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
...@@ -190,7 +191,8 @@ void BatchNormGradientOp<Context>::RunOnDevice() { ...@@ -190,7 +191,8 @@ void BatchNormGradientOp<Context>::RunOnDevice() {
InferenceImpl<float, float>(); InferenceImpl<float, float>();
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
......
...@@ -90,7 +90,8 @@ void CuDNNBatchNormOp<Context>::RunOnDevice() { ...@@ -90,7 +90,8 @@ void CuDNNBatchNormOp<Context>::RunOnDevice() {
} else if (XIsType(Input(0), float16)) { } else if (XIsType(Input(0), float16)) {
DoRunWithType<float16>(); DoRunWithType<float16>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float16"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float16", "float32"});
} }
} }
...@@ -156,10 +157,12 @@ void CuDNNBatchNormGradientOp<Context>::RunOnDevice() { ...@@ -156,10 +157,12 @@ void CuDNNBatchNormGradientOp<Context>::RunOnDevice() {
TrainingImpl<float16>(); TrainingImpl<float16>();
} else { } else {
// We will support it some day -:) // We will support it some day -:)
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float16", "float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float16", "float32"});
} }
} }
......
...@@ -111,7 +111,8 @@ void SyncBatchNormOp<Context>::RunOnDevice() { ...@@ -111,7 +111,8 @@ void SyncBatchNormOp<Context>::RunOnDevice() {
this->template InferenceImpl<float, float>(); this->template InferenceImpl<float, float>();
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
...@@ -195,7 +196,8 @@ void SyncBatchNormGradientOp<Context>::RunOnDevice() { ...@@ -195,7 +196,8 @@ void SyncBatchNormGradientOp<Context>::RunOnDevice() {
this->template InferenceImpl<float, float>(); this->template InferenceImpl<float, float>();
} }
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
......
...@@ -60,7 +60,8 @@ void GroupNormOp<Context>::RunOnDevice() { ...@@ -60,7 +60,8 @@ void GroupNormOp<Context>::RunOnDevice() {
} else if (XIsType(Input(0), float16)) { } else if (XIsType(Input(0), float16)) {
DoRunWithType<float16, float>(); DoRunWithType<float16, float>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32", "float16"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float16", "float32"});
} }
} }
...@@ -101,7 +102,8 @@ void GroupNormGradientOp<Context>::RunOnDevice() { ...@@ -101,7 +102,8 @@ void GroupNormGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(0), float16)) { } else if (XIsType(Input(0), float16)) {
DoRunWithType<float16, float>(); DoRunWithType<float16, float>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float16", "float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float16", "float32"});
} }
} }
......
...@@ -23,7 +23,8 @@ void LSTMCellOp<Context>::RunOnDevice() { ...@@ -23,7 +23,8 @@ void LSTMCellOp<Context>::RunOnDevice() {
if (XIsType(Input(0), float)) { if (XIsType(Input(0), float)) {
DoRunWithType<float>(); DoRunWithType<float>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
...@@ -60,7 +61,8 @@ void LSTMCellGradientOp<Context>::RunOnDevice() { ...@@ -60,7 +61,8 @@ void LSTMCellGradientOp<Context>::RunOnDevice() {
if (XIsType(Input(0), float)) { if (XIsType(Input(0), float)) {
DoRunWithType<float>(); DoRunWithType<float>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float32"});
} }
} }
......
...@@ -100,7 +100,8 @@ void UpdateOpBase<Context>::RunOnDevice() { ...@@ -100,7 +100,8 @@ void UpdateOpBase<Context>::RunOnDevice() {
ComputeUpdate(dX_cast); ComputeUpdate(dX_cast);
ApplyUpdate<float>(dX_cast, X); ApplyUpdate<float>(dX_cast, X);
} else { } else {
LOG(FATAL) << TypeString(dX, {"float16", "float32"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(dX.meta()), {"float16", "float32"});
} }
} }
......
...@@ -199,7 +199,8 @@ void ResizeGradientOp<Context>::RunOnDevice() { ...@@ -199,7 +199,8 @@ void ResizeGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(0), double)) { } else if (XIsType(Input(0), double)) {
DoRunWithTypeAndCast<double>(); DoRunWithTypeAndCast<double>();
} else { } else {
LOG(FATAL) << TypeString(Input(0), {"float16", "float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(0).meta()), {"float16", "float32", "float64"});
}; };
} }
......
...@@ -95,7 +95,8 @@ void RoiAlignGradientOp<Context>::RunOnDevice() { ...@@ -95,7 +95,8 @@ void RoiAlignGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), double)) { } else if (XIsType(Input(1), double)) {
DoRunWithTypeAndCast<double>(); DoRunWithTypeAndCast<double>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float16", "float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float16", "float32", "float64"});
}; };
} }
......
...@@ -98,7 +98,8 @@ void RoiPoolGradientOp<Context>::RunOnDevice() { ...@@ -98,7 +98,8 @@ void RoiPoolGradientOp<Context>::RunOnDevice() {
} else if (XIsType(Input(1), double)) { } else if (XIsType(Input(1), double)) {
DoRunWithTypeAndCast<double>(); DoRunWithTypeAndCast<double>();
} else { } else {
LOG(FATAL) << TypeString(Input(1), {"float16", "float32", "float64"}); LOG(FATAL) << MessageForUnsupported(
types::to_string(Input(1).meta()), {"float16", "float32", "float64"});
}; };
} }
......
...@@ -52,14 +52,14 @@ def device(device_type, device_index=0): ...@@ -52,14 +52,14 @@ def device(device_type, device_index=0):
def eager_scope(data='${DATA}', graph='${GRAPH}'): def eager_scope(data='${DATA}', graph='${GRAPH}'):
"""Context-manager to nest the domain for eager resources. """Context-manager to nest the namespace for eager resources.
Parameters Parameters
---------- ----------
data : str, optional, default='${DATA}' data : str, optional, default='${DATA}'
The domain for resources traced by python. The namespace for resources traced by python.
graph : str, optional, default='${GRAPH}' graph : str, optional, default='${GRAPH}'
The domain for resources traced by graph. The namespace for resources traced by graph.
""" """
domain_tuple = (graph, data) domain_tuple = (graph, data)
......
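The `eager_scope` docstring above describes a context manager that nests namespaces for eager resources. A minimal usage sketch, assuming `eager_scope` is reachable from the top-level `dragon` package (the export path is not shown in this diff):

```python
import dragon

# Hypothetical sketch: resources created inside the block are traced under
# the 'data' and 'graph' namespaces passed to eager_scope.
with dragon.eager_scope(data='${DATA}', graph='${GRAPH}'):
    pass  # create eager tensors / run eager ops here
```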
...@@ -105,7 +105,7 @@ class Workspace(backend.Workspace): ...@@ -105,7 +105,7 @@ class Workspace(backend.Workspace):
return self._collectors return self._collectors
def as_default(self): def as_default(self):
"""Switch ``self`` as the default workspace. """Switch this workspace as the default.
Call this method with the **with** keyword. Call this method with the **with** keyword.
...@@ -114,7 +114,7 @@ class Workspace(backend.Workspace): ...@@ -114,7 +114,7 @@ class Workspace(backend.Workspace):
Returns Returns
------- -------
dragon.Workspace dragon.Workspace
The ``self``. This workspace.
""" """
return _GLOBAL_DEFAULT_WORKSPACE_STACK.get_controller(self) return _GLOBAL_DEFAULT_WORKSPACE_STACK.get_controller(self)
...@@ -273,7 +273,7 @@ class Workspace(backend.Workspace): ...@@ -273,7 +273,7 @@ class Workspace(backend.Workspace):
Returns Returns
------- -------
dragon.Workspace dragon.Workspace
The ``self``. This workspace.
""" """
self.MergeFrom(other) self.MergeFrom(other)
...@@ -302,7 +302,7 @@ class Workspace(backend.Workspace): ...@@ -302,7 +302,7 @@ class Workspace(backend.Workspace):
The tensor to reset. The tensor to reset.
""" """
return self.ResetTensor(_stringify_object(tensor)) self.ResetTensor(_stringify_object(tensor))
def run_backward( def run_backward(
self, self,
...@@ -487,8 +487,7 @@ _GLOBAL_DEFAULT_WORKSPACE_STACK = _DefaultWorkspaceStack() ...@@ -487,8 +487,7 @@ _GLOBAL_DEFAULT_WORKSPACE_STACK = _DefaultWorkspaceStack()
# Predefined graph executing stages. # Predefined graph executing stages.
_PREDEFINED_GRAPH_EXECUTING_STAGES = { _PREDEFINED_GRAPH_EXECUTING_STAGES = {
'default': {'include': '', 'exclude': ''}, 'default': {'include': '', 'exclude': ''},
'forward': {'include': '', 'exclude': 'Gradient'}, 'forward': {'include': '', 'exclude': '.*Gradient.*'},
'backward': {'include': 'Gradient', 'exclude': 'Generate'}, 'backward': {'include': '.*Gradient.*', 'exclude': 'GradientGenerate'},
'backward_v2': {'include': 'Gradient', 'exclude': ''}, 'backward_v2': {'include': '.*Gradient.*', 'exclude': ''},
'external_grads': {'include': '', 'exclude': 'Generate'},
} }
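The `Workspace` changes above reword `as_default` and `merge_from` to return "this workspace" rather than "``self``". A hedged sketch of how the two methods read in practice; the `Workspace()` constructor call is an assumption, since only the docstrings appear in this diff:

```python
import dragon

ws = dragon.Workspace()        # construction assumed; not shown in the diff
with ws.as_default():          # use the `with` keyword, as the docstring says
    pass                       # resources created here belong to `ws`

other = dragon.Workspace()
other.merge_from(ws)           # returns `other` (this workspace), so calls chain
```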
...@@ -153,7 +153,7 @@ setuptools.setup( ...@@ -153,7 +153,7 @@ setuptools.setup(
package_data={'dragon': find_package_data()}, package_data={'dragon': find_package_data()},
package_dir={'dragon': 'dragon'}, package_dir={'dragon': 'dragon'},
cmdclass={'bdist_wheel': bdist_wheel, 'install': install}, cmdclass={'bdist_wheel': bdist_wheel, 'install': install},
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*', python_requires='>=3.5',
install_requires=['numpy', 'protobuf', 'kpl-dataset'], install_requires=['numpy', 'protobuf', 'kpl-dataset'],
classifiers=[ classifiers=[
'Development Status :: 5 - Production/Stable', 'Development Status :: 5 - Production/Stable',
...@@ -162,12 +162,12 @@ setuptools.setup( ...@@ -162,12 +162,12 @@ setuptools.setup(
'Intended Audience :: Science/Research', 'Intended Audience :: Science/Research',
'License :: OSI Approved :: BSD License', 'License :: OSI Approved :: BSD License',
'Programming Language :: C++', 'Programming Language :: C++',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Topic :: Scientific/Engineering', 'Topic :: Scientific/Engineering',
'Topic :: Scientific/Engineering :: Mathematics', 'Topic :: Scientific/Engineering :: Mathematics',
'Topic :: Scientific/Engineering :: Artificial Intelligence', 'Topic :: Scientific/Engineering :: Artificial Intelligence',
......
...@@ -24,10 +24,19 @@ ...@@ -24,10 +24,19 @@
#define TLS_OBJECT __declspec(thread) #define TLS_OBJECT __declspec(thread)
#endif #endif
// Disable the copy constructor and assignment operator for a class
#define DISABLE_COPY_AND_ASSIGN(classname) \
classname(const classname&) = delete; \
classname& operator=(const classname&) = delete
// Concatenate two strings
#define CONCATENATE_IMPL(s1, s2) s1##s2 #define CONCATENATE_IMPL(s1, s2) s1##s2
#define CONCATENATE(s1, s2) CONCATENATE_IMPL(s1, s2) #define CONCATENATE(s1, s2) CONCATENATE_IMPL(s1, s2)
// Return an anonymous variable name using the line number
#define ANONYMOUS_VARIABLE(str) CONCATENATE(str, __LINE__) #define ANONYMOUS_VARIABLE(str) CONCATENATE(str, __LINE__)
#define NOT_IMPLEMENTED \
LOG(FATAL) << "This module has not been implemented yet." // Throw a fatal logging for not implemented function
#define NOT_IMPLEMENTED LOG(FATAL) << "This function is not implemented."
#endif // DRAGON_UTILS_MARCROS_H_ #endif // DRAGON_UTILS_MARCROS_H_
:: ############################################################################# :: ##############################################################
:: Example command to build on Windows for Visual Studio 2013 (VC12). :: Command file to build on Windows for Visual Studio 2013 (VC12)
:: ############################################################################# :: ##############################################################
@echo off @echo off
setlocal setlocal
SET ORIGINAL_DIR=%cd% :: Build variables
SET REPO_ROOT=%~dp0%.. set ORIGINAL_DIR=%cd%
SET DRAGON_ROOT=%REPO_ROOT%\dragon set REPO_ROOT=%~dp0%..
SET THIRD_PARTY_DIR=%REPO_ROOT%\third_party set DRAGON_ROOT=%REPO_ROOT%\dragon
SET CMAKE_GENERATOR="Visual Studio 12 2013 Win64" set THIRD_PARTY_DIR=%REPO_ROOT%\third_party
set CMAKE_GENERATOR="Visual Studio 12 2013 Win64"
:: Build options :: Build options
SET BUILD_PYTHON=ON set BUILD_PYTHON=ON
SET BUILD_RUNTIME=OFF set BUILD_RUNTIME=OFF
:: Optional libraries
set USE_CUDA=ON
set USE_CUDNN=ON
set USE_OPENMP=ON
set USE_AVX=ON
set USE_AVX2=ON
set USE_FMA=ON
:: Protobuf SDK options :: Protobuf SDK options
SET PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf set PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf
:: Protobuf Compiler options :: Protobuf Compiler options
:: Set the protobuf compiler(i.e., protoc) if necessary :: Set the protobuf compiler(i.e., protoc) if necessary.
:: If not, a compiler in the sdk or environment will be used :: If not, a compiler in the sdk or environment will be used.
SET PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc set PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc
:: Python options :: Python options
:: Set your python "interpreter" if necessary :: Set your python "interpreter" if necessary.
:: If not, a default interpreter will be used :: If not, a default interpreter will be used.
:: SET PYTHON_EXECUTABLE=X:/Anaconda3/python :: set PYTHON_EXECUTABLE=X:/Anaconda3/python
if %BUILD_PYTHON% == ON ( if %BUILD_PYTHON% == ON (
if NOT DEFINED PYTHON_EXECUTABLE ( if NOT DEFINED PYTHON_EXECUTABLE (
for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i) for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i)
...@@ -47,6 +57,12 @@ cmake .. ^ ...@@ -47,6 +57,12 @@ cmake .. ^
-G%CMAKE_GENERATOR% ^ -G%CMAKE_GENERATOR% ^
-DBUILD_PYTHON=%BUILD_PYTHON% ^ -DBUILD_PYTHON=%BUILD_PYTHON% ^
-DBUILD_RUNTIME=%BUILD_RUNTIME% ^ -DBUILD_RUNTIME=%BUILD_RUNTIME% ^
-DUSE_CUDA=%USE_CUDA% ^
-DUSE_CUDNN=%USE_CUDNN% ^
-DUSE_OPENMP=%USE_OPENMP% ^
-DUSE_AVX=%USE_AVX% ^
-DUSE_AVX2=%USE_AVX2% ^
-DUSE_FMA=%USE_FMA% ^
-DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^ -DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^
-DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^ -DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^
-DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^ -DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^
......
:: ############################################################################# :: ##############################################################
:: Example command to build on Windows for Visual Studio 2015 (VC14). :: Command file to build on Windows for Visual Studio 2015 (VC14)
:: ############################################################################# :: ##############################################################
@echo off @echo off
setlocal setlocal
SET ORIGINAL_DIR=%cd% :: Build variables
SET REPO_ROOT=%~dp0%.. set ORIGINAL_DIR=%cd%
SET DRAGON_ROOT=%REPO_ROOT%\dragon set REPO_ROOT=%~dp0%..
SET THIRD_PARTY_DIR=%REPO_ROOT%\third_party set DRAGON_ROOT=%REPO_ROOT%\dragon
SET CMAKE_GENERATOR="Visual Studio 14 2015 Win64" set THIRD_PARTY_DIR=%REPO_ROOT%\third_party
set CMAKE_GENERATOR="Visual Studio 14 2015 Win64"
:: Build options :: Build options
SET BUILD_PYTHON=ON set BUILD_PYTHON=ON
SET BUILD_RUNTIME=OFF set BUILD_RUNTIME=OFF
:: Optional libraries
set USE_CUDA=ON
set USE_CUDNN=ON
set USE_OPENMP=ON
set USE_AVX=ON
set USE_AVX2=ON
set USE_FMA=ON
:: Protobuf SDK options :: Protobuf SDK options
SET PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf set PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf
:: Protobuf Compiler options :: Protobuf Compiler options
:: Set the protobuf compiler(i.e., protoc) if necessary :: Set the protobuf compiler(i.e., protoc) if necessary.
:: If not, a compiler in the sdk or environment will be used :: If not, a compiler in the sdk or environment will be used.
SET PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc set PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc
:: Python options :: Python options
:: Set your python "interpreter" if necessary :: Set your python "interpreter" if necessary.
:: If not, a default interpreter will be used :: If not, a default interpreter will be used.
:: SET PYTHON_EXECUTABLE=X:/Anaconda3/python :: set PYTHON_EXECUTABLE=X:/Anaconda3/python
if %BUILD_PYTHON% == ON ( if %BUILD_PYTHON% == ON (
if NOT DEFINED PYTHON_EXECUTABLE ( if NOT DEFINED PYTHON_EXECUTABLE (
for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i) for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i)
...@@ -47,6 +57,12 @@ cmake .. ^ ...@@ -47,6 +57,12 @@ cmake .. ^
-G%CMAKE_GENERATOR% ^ -G%CMAKE_GENERATOR% ^
-DBUILD_PYTHON=%BUILD_PYTHON% ^ -DBUILD_PYTHON=%BUILD_PYTHON% ^
-DBUILD_RUNTIME=%BUILD_RUNTIME% ^ -DBUILD_RUNTIME=%BUILD_RUNTIME% ^
-DUSE_CUDA=%USE_CUDA% ^
-DUSE_CUDNN=%USE_CUDNN% ^
-DUSE_OPENMP=%USE_OPENMP% ^
-DUSE_AVX=%USE_AVX% ^
-DUSE_AVX2=%USE_AVX2% ^
-DUSE_FMA=%USE_FMA% ^
-DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^ -DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^
-DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^ -DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^
-DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^ -DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^
......
:: ############################################################################# :: ###############################################################
:: Example command to build on Windows for Visual Studio 2017 (VC141). :: Command file to build on Windows for Visual Studio 2017 (VC141)
:: ############################################################################# :: ###############################################################
@echo off @echo off
setlocal setlocal
SET ORIGINAL_DIR=%cd% :: Build variables
SET REPO_ROOT=%~dp0%.. set ORIGINAL_DIR=%cd%
SET DRAGON_ROOT=%REPO_ROOT%\dragon set REPO_ROOT=%~dp0%..
SET THIRD_PARTY_DIR=%REPO_ROOT%\third_party set DRAGON_ROOT=%REPO_ROOT%\dragon
SET CMAKE_GENERATOR="Visual Studio 15 2017 Win64" set THIRD_PARTY_DIR=%REPO_ROOT%\third_party
set CMAKE_GENERATOR="Visual Studio 15 2017 Win64"
:: Build options :: Build options
SET BUILD_PYTHON=ON set BUILD_PYTHON=ON
SET BUILD_RUNTIME=OFF set BUILD_RUNTIME=OFF
:: Optional libraries
set USE_CUDA=ON
set USE_CUDNN=ON
set USE_OPENMP=ON
set USE_AVX=ON
set USE_AVX2=ON
set USE_FMA=ON
:: Protobuf SDK options :: Protobuf SDK options
SET PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf set PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf
:: Protobuf Compiler options :: Protobuf Compiler options
:: Set the protobuf compiler(i.e., protoc) if necessary :: Set the protobuf compiler(i.e., protoc) if necessary.
:: If not, a compiler in the sdk or environment will be used :: If not, a compiler in the sdk or environment will be used.
SET PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc set PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc
:: Python options :: Python options
:: Set your python "interpreter" if necessary :: Set your python "interpreter" if necessary.
:: If not, a default interpreter will be used :: If not, a default interpreter will be used.
:: SET PYTHON_EXECUTABLE=X:/Anaconda3/python :: set PYTHON_EXECUTABLE=X:/Anaconda3/python
if %BUILD_PYTHON% == ON ( if %BUILD_PYTHON% == ON (
if NOT DEFINED PYTHON_EXECUTABLE ( if NOT DEFINED PYTHON_EXECUTABLE (
for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i) for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i)
...@@ -47,6 +57,12 @@ cmake .. ^ ...@@ -47,6 +57,12 @@ cmake .. ^
-G%CMAKE_GENERATOR% ^ -G%CMAKE_GENERATOR% ^
-DBUILD_PYTHON=%BUILD_PYTHON% ^ -DBUILD_PYTHON=%BUILD_PYTHON% ^
-DBUILD_RUNTIME=%BUILD_RUNTIME% ^ -DBUILD_RUNTIME=%BUILD_RUNTIME% ^
-DUSE_CUDA=%USE_CUDA% ^
-DUSE_CUDNN=%USE_CUDNN% ^
-DUSE_OPENMP=%USE_OPENMP% ^
-DUSE_AVX=%USE_AVX% ^
-DUSE_AVX2=%USE_AVX2% ^
-DUSE_FMA=%USE_FMA% ^
-DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^ -DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^
-DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^ -DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^
-DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^ -DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^
......
:: ############################################################################# :: ###############################################################
:: Example command to build on Windows for Visual Studio 2019 (VC142). :: Command file to build on Windows for Visual Studio 2019 (VC142)
:: ############################################################################# :: ###############################################################
@echo off @echo off
setlocal setlocal
SET ORIGINAL_DIR=%cd% :: Build variables
SET REPO_ROOT=%~dp0%.. set ORIGINAL_DIR=%cd%
SET DRAGON_ROOT=%REPO_ROOT%\dragon set REPO_ROOT=%~dp0%..
SET THIRD_PARTY_DIR=%REPO_ROOT%\third_party set DRAGON_ROOT=%REPO_ROOT%\dragon
SET CMAKE_GENERATOR="Visual Studio 16 2019" set THIRD_PARTY_DIR=%REPO_ROOT%\third_party
set CMAKE_GENERATOR="Visual Studio 16 2019"
:: Build options :: Build options
SET BUILD_PYTHON=ON set BUILD_PYTHON=ON
SET BUILD_RUNTIME=OFF set BUILD_RUNTIME=OFF
:: Optional libraries
set USE_CUDA=ON
set USE_CUDNN=ON
set USE_OPENMP=ON
set USE_AVX=ON
set USE_AVX2=ON
set USE_FMA=ON
:: Protobuf SDK options :: Protobuf SDK options
SET PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf set PROTOBUF_SDK_ROOT_DIR=%THIRD_PARTY_DIR%\protobuf
:: Protobuf Compiler options :: Protobuf Compiler options
:: Set the protobuf compiler(i.e., protoc) if necessary :: Set the protobuf compiler(i.e., protoc) if necessary.
:: If not, a compiler in the sdk or environment will be used :: If not, a compiler in the sdk or environment will be used.
SET PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc set PROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_SDK_ROOT_DIR%\bin\protoc
:: Python options :: Python options
:: Set your python "interpreter" if necessary :: Set your python "interpreter" if necessary.
:: If not, a default interpreter will be used :: If not, a default interpreter will be used.
:: SET PYTHON_EXECUTABLE=X:/Anaconda3/python :: set PYTHON_EXECUTABLE=X:/Anaconda3/python
if %BUILD_PYTHON% == ON ( if %BUILD_PYTHON% == ON (
if NOT DEFINED PYTHON_EXECUTABLE ( if NOT DEFINED PYTHON_EXECUTABLE (
for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i) for /F %%i in ('python -c "import sys;print(sys.executable)"') do (set PYTHON_EXECUTABLE=%%i)
...@@ -48,6 +58,12 @@ cmake .. ^ ...@@ -48,6 +58,12 @@ cmake .. ^
-Ax64 ^ -Ax64 ^
-DBUILD_PYTHON=%BUILD_PYTHON% ^ -DBUILD_PYTHON=%BUILD_PYTHON% ^
-DBUILD_RUNTIME=%BUILD_RUNTIME% ^ -DBUILD_RUNTIME=%BUILD_RUNTIME% ^
-DUSE_CUDA=%USE_CUDA% ^
-DUSE_CUDNN=%USE_CUDNN% ^
-DUSE_OPENMP=%USE_OPENMP% ^
-DUSE_AVX=%USE_AVX% ^
-DUSE_AVX2=%USE_AVX2% ^
-DUSE_FMA=%USE_FMA% ^
-DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^ -DTHIRD_PARTY_DIR=%THIRD_PARTY_DIR% ^
-DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^ -DPROTOBUF_SDK_ROOT_DIR=%PROTOBUF_SDK_ROOT_DIR% ^
-DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^ -DPROTOBUF_PROTOC_EXECUTABLE=%PROTOBUF_PROTOC_EXECUTABLE% ^
......
...@@ -529,7 +529,7 @@ def as_dtype(type_value): ...@@ -529,7 +529,7 @@ def as_dtype(type_value):
Parameters Parameters
---------- ----------
type_value : object type_value : Any
The data type. The data type.
Returns Returns
......
...@@ -28,8 +28,8 @@ try: ...@@ -28,8 +28,8 @@ try:
except ImportError: except ImportError:
from dragon.core.util import deprecation from dragon.core.util import deprecation
onnx = deprecation.NotInstalled('onnx') onnx = deprecation.NotInstalled('onnx')
Backend = deprecation.NotInstalled('onnx') Backend = object
BackendRep = deprecation.NotInstalled('onnx') BackendRep = object
Device = deprecation.NotInstalled('onnx') Device = deprecation.NotInstalled('onnx')
DeviceType = deprecation.NotInstalled('onnx') DeviceType = deprecation.NotInstalled('onnx')
......
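The hunk above replaces the `deprecation.NotInstalled` placeholders for `Backend` and `BackendRep` with plain `object`, so backend subclasses can still be defined when `onnx` is absent. A minimal sketch of that import-guard pattern; the subclass name is illustrative, not Dragon's actual one:

```python
try:
    from onnx.backend.base import Backend, BackendRep
except ImportError:
    # Fall back to `object` so the subclass below still imports cleanly;
    # code that really needs onnx will fail later, at call time.
    Backend = object
    BackendRep = object

class IllustrativeBackend(Backend):
    """Placeholder subclass to show that definition works without onnx."""
```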
...@@ -40,7 +40,7 @@ SOURCES = [t[1] for t in TESTS_AND_SOURCES] ...@@ -40,7 +40,7 @@ SOURCES = [t[1] for t in TESTS_AND_SOURCES]
def parse_args(): def parse_args():
parser = argparse.ArgumentParser( parser = argparse.ArgumentParser(
description='Run the unittests', description='run the unittests',
epilog='where TESTS is any of: {}'.format(', '.join(TESTS))) epilog='where TESTS is any of: {}'.format(', '.join(TESTS)))
parser.add_argument( parser.add_argument(
'-v', '-v',
......
...@@ -26,11 +26,10 @@ class TestJit(unittest.TestCase): ...@@ -26,11 +26,10 @@ class TestJit(unittest.TestCase):
@torch.jit.trace(example_inputs=[ @torch.jit.trace(example_inputs=[
torch.Tensor(1, dtype=torch.int64), torch.Tensor(1, dtype=torch.int64),
torch.Tensor(1, dtype=torch.int64), torch.Tensor(1, dtype=torch.int64),
torch.Tensor(1, dtype=torch.int64),
]) ])
def func1(self, a, b, c=0, **kwargs): def func1(self, a, b, **kwargs):
_ = kwargs _ = kwargs
return a + b + c return a + b
def test_trace(self): def test_trace(self):
@torch.jit.trace(example_inputs=[None, None]) @torch.jit.trace(example_inputs=[None, None])
...@@ -57,7 +56,7 @@ class TestJit(unittest.TestCase): ...@@ -57,7 +56,7 @@ class TestJit(unittest.TestCase):
self.assertEqual(func3(a, b).numpy().tolist(), [4, 6]) self.assertEqual(func3(a, b).numpy().tolist(), [4, 6])
self.assertEqual(func5(a, b).numpy().tolist(), [4, 6]) self.assertEqual(func5(a, b).numpy().tolist(), [4, 6])
self.assertEqual(m(a, b).numpy().tolist(), [4, 6]) self.assertEqual(m(a, b).numpy().tolist(), [4, 6])
self.assertEqual(self.func1(a, b, c=c).numpy().tolist(), [5, 7]) self.assertEqual(self.func1(a, b, c=c).numpy().tolist(), [4, 6])
try: try:
func4(a, b) func4(a, b)
except ValueError: except ValueError:
......
...@@ -16,7 +16,6 @@ INSTALL_PATH=$(cd "$(dirname "$0")/..";pwd) ...@@ -16,7 +16,6 @@ INSTALL_PATH=$(cd "$(dirname "$0")/..";pwd)
if [ $USE_CUDA_AWARE -eq 1 ];then if [ $USE_CUDA_AWARE -eq 1 ];then
echo "Build with cuda...." echo "Build with cuda...."
read -p "Press any key to continue." var
./configure CFLAGS=-fPIC \ ./configure CFLAGS=-fPIC \
CXXFLAGS=-fPIC \ CXXFLAGS=-fPIC \
--with-cuda \ --with-cuda \
...@@ -29,7 +28,6 @@ read -p "Press any key to continue." var ...@@ -29,7 +28,6 @@ read -p "Press any key to continue." var
--prefix=$INSTALL_PATH --prefix=$INSTALL_PATH
else else
echo "Build without cuda...." echo "Build without cuda...."
read -p "Press any key to continue." var
./configure CFLAGS=-fPIC \ ./configure CFLAGS=-fPIC \
CXXFLAGS=-fPIC \ CXXFLAGS=-fPIC \
--with-pic=PIC \ --with-pic=PIC \
......
...@@ -27,12 +27,12 @@ DEFAULT_PROTOCOL = PICKLE_MODULE.HIGHEST_PROTOCOL ...@@ -27,12 +27,12 @@ DEFAULT_PROTOCOL = PICKLE_MODULE.HIGHEST_PROTOCOL
def save(obj, f, pickle_module=PICKLE_MODULE, pickle_protocol=DEFAULT_PROTOCOL): def save(obj, f, pickle_module=PICKLE_MODULE, pickle_protocol=DEFAULT_PROTOCOL):
"""Save a pickle object. """Save an object using pickle.
Parameters Parameters
---------- ----------
obj : object_like obj : Any
The object to pickle. The object to serialize.
f : file_like f : file_like
The file object or file name. The file object or file name.
pickle_module : module, optional pickle_module : module, optional
...@@ -41,11 +41,12 @@ def save(obj, f, pickle_module=PICKLE_MODULE, pickle_protocol=DEFAULT_PROTOCOL): ...@@ -41,11 +41,12 @@ def save(obj, f, pickle_module=PICKLE_MODULE, pickle_protocol=DEFAULT_PROTOCOL):
The optional pickle protocol. The optional pickle protocol.
""" """
return _with_file_like(f, 'wb', lambda f: _save(obj, f, pickle_module, pickle_protocol)) return _with_file_like(
f, 'wb', lambda f: _save(obj, f, pickle_module, pickle_protocol))
def load(f, pickle_module=PICKLE_MODULE): def load(f, pickle_module=PICKLE_MODULE):
"""Load a pickle object. """Load an object using pickle.
Parameters Parameters
---------- ----------
...@@ -54,6 +55,11 @@ def load(f, pickle_module=PICKLE_MODULE): ...@@ -54,6 +55,11 @@ def load(f, pickle_module=PICKLE_MODULE):
pickle_module : module pickle_module : module
The optional pickle module. The optional pickle module.
Returns
-------
Any
The deserialized object.
""" """
try: try:
return _with_file_like( return _with_file_like(
......
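The serialization hunk above clarifies that `save` pickles an arbitrary object and that `load` returns the deserialized object. A short usage sketch, assuming the helpers are exposed as `torch.save`/`torch.load` under `dragon.vm.torch`, in line with the PyTorch-style API this module mirrors:

```python
from dragon.vm import torch  # import path assumed

state = {'step': 10, 'lr': 0.01}
torch.save(state, 'checkpoint.pkl')       # serialize with pickle
restored = torch.load('checkpoint.pkl')   # returns the deserialized object
assert restored['step'] == 10
```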