Commit 52402169 by Ting PAN

Clean the torch operators

1 parent d0fa332c
Showing with 1134 additions and 1401 deletions
...@@ -47,7 +47,7 @@ Override ...@@ -47,7 +47,7 @@ Override
List Brief List Brief
============================== ============================================================================= ============================== =============================================================================
`Tensor.__add__`_ x.__add__(y) <=> x + y `Tensor.__add__`_ x.__add__(y) <=> x + y
`Tensor.__radd__`_ x.__radd__(y) <=> y + x `Tensor.__radd__`_ x.__radd__(y) <=> y + x
`Tensor.__sub__`_ x.__sub__(y) <=> x - y `Tensor.__sub__`_ x.__sub__(y) <=> x - y
`Tensor.__rsub__`_ x.__rsub__(y) <=> y - x `Tensor.__rsub__`_ x.__rsub__(y) <=> y - x
`Tensor.__mul__`_ x.__mul__(y) <=> x * y `Tensor.__mul__`_ x.__mul__(y) <=> x * y
...@@ -55,7 +55,12 @@ List Brief ...@@ -55,7 +55,12 @@ List Brief
`Tensor.__div__`_ x.__div__(y) <=> x / y `Tensor.__div__`_ x.__div__(y) <=> x / y
`Tensor.__rdiv__`_ x.__rdiv__(y) <=> y / x `Tensor.__rdiv__`_ x.__rdiv__(y) <=> y / x
`Tensor.__neg__`_ x.__neg__() <=> -x `Tensor.__neg__`_ x.__neg__() <=> -x
`Tensor.__str__`_ Return the information(name/shape). `Tensor.__gt__`_ x.__gt__(y) <=> x > y
`Tensor.__ge__`_ x.__ge__(y) <=> x >= y
`Tensor.__lt__`_ x.__lt__(y) <=> x < y
`Tensor.__le__`_ x.__le__(y) <=> x <= y
`Tensor.__eq__`_ x.__eq__(y) <=> x == y
`Tensor.__repr__`_ Return the information (name/shape).
`Tensor.__getitem__`_ Return a Tensor with specific indices. `Tensor.__getitem__`_ Return a Tensor with specific indices.
`Tensor.__call__`_ Return the expressions for displaying. `Tensor.__call__`_ Return the expressions for displaying.
============================== ============================================================================= ============================== =============================================================================
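A minimal doctest-style sketch of the new comparison overloads (the tensor name is hypothetical; each expression is expected to emit a *Compare* operator, per the implementation later in this commit):
>>> import dragon as dg
>>> x = dg.Tensor('x', shape=[4], dtype='float32').Variable()
>>> mask = x > 0     # x.__gt__(0) -> Compare(operation='GT')
>>> same = x == x    # x.__eq__(x) -> Compare(operation='EQ')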
...@@ -70,6 +75,24 @@ API Reference ...@@ -70,6 +75,24 @@ API Reference
:members: :members:
.. automethod:: __init__ .. automethod:: __init__
.. automethod:: __add__
.. automethod:: __radd__
.. automethod:: __sub__
.. automethod:: __rsub__
.. automethod:: __mul__
.. automethod:: __rmul__
.. automethod:: __div__
.. automethod:: __rdiv__
.. automethod:: __neg__
.. automethod:: __gt__
.. automethod:: __ge__
.. automethod:: __lt__
.. automethod:: __le__
.. automethod:: __eq__
.. automethod:: __repr__
.. automethod:: __getitem__
.. automethod:: __call__
.. _Tensor.Variable: #dragon.core.tensor.Tensor.Variable .. _Tensor.Variable: #dragon.core.tensor.Tensor.Variable
.. _Tensor.Placeholder: #dragon.core.tensor.Tensor.Placeholder .. _Tensor.Placeholder: #dragon.core.tensor.Tensor.Placeholder
...@@ -90,8 +113,12 @@ API Reference ...@@ -90,8 +113,12 @@ API Reference
.. _Tensor.__div__: #dragon.core.tensor.Tensor.__div__ .. _Tensor.__div__: #dragon.core.tensor.Tensor.__div__
.. _Tensor.__rdiv__: #dragon.core.tensor.Tensor.__rdiv__ .. _Tensor.__rdiv__: #dragon.core.tensor.Tensor.__rdiv__
.. _Tensor.__neg__: #dragon.core.tensor.Tensor.__neg__ .. _Tensor.__neg__: #dragon.core.tensor.Tensor.__neg__
.. _Tensor.__str__: #dragon.core.tensor.Tensor.__str__ .. _Tensor.__gt__: #dragon.core.tensor.Tensor.__gt__
.. _Tensor.__getattr__: #dragon.core.tensor.Tensor.__getattr__ .. _Tensor.__ge__: #dragon.core.tensor.Tensor.__ge__
.. _Tensor.__lt__: #dragon.core.tensor.Tensor.__lt__
.. _Tensor.__le__: #dragon.core.tensor.Tensor.__le__
.. _Tensor.__eq__: #dragon.core.tensor.Tensor.__eq__
.. _Tensor.__repr__: #dragon.core.tensor.Tensor.__repr__
.. _Tensor.__getitem__: #dragon.core.tensor.Tensor.__getitem__ .. _Tensor.__getitem__: #dragon.core.tensor.Tensor.__getitem__
.. _Tensor.__call__: #dragon.core.tensor.Tensor.__call__ .. _Tensor.__call__: #dragon.core.tensor.Tensor.__call__
......
...@@ -11,7 +11,7 @@ Common ...@@ -11,7 +11,7 @@ Common
operators/data operators/data
operators/initializer operators/initializer
operators/arithmetic operators/arithmetic
operators/ndarray operators/array
operators/control_flow operators/control_flow
operators/misc operators/misc
operators/mpi operators/mpi
......
============
:mod:`Array`
============
.. toctree::
:hidden:
.. automodule:: dragon.operators.array
:members:
.. _ops.Reduce(*args, **kwargs): #dragon.operators.array.Reduce
\ No newline at end of file
==============
:mod:`NDArray`
==============
.. toctree::
:hidden:
.. automodule:: dragon.operators.ndarray
:members:
.. _ops.Reduce(*args, **kwargs): #dragon.operators.ndarray.Reduce
\ No newline at end of file
...@@ -129,8 +129,8 @@ List Brief ...@@ -129,8 +129,8 @@ List Brief
`L2Norm`_ L2 Normalization. `[Liu et al., 2015] <https://arxiv.org/abs/1506.04579>`_. `L2Norm`_ L2 Normalization. `[Liu et al., 2015] <https://arxiv.org/abs/1506.04579>`_.
================== ====================================================================== ================== ======================================================================
NDArray Array
------- -----
=============== ====================================================================== =============== ======================================================================
List Brief List Brief
=============== ====================================================================== =============== ======================================================================
...@@ -157,6 +157,7 @@ List Brief ...@@ -157,6 +157,7 @@ List Brief
`ExpandDims`_ Expand the new dimension with size 1 to specific axis. `ExpandDims`_ Expand the new dimension with size 1 to specific axis.
`Shape`_ Get the dynamic shape of a Tensor. `Shape`_ Get the dynamic shape of a Tensor.
`Arange`_ Return evenly spaced values within a given interval. `Arange`_ Return evenly spaced values within a given interval.
`Multinomial`_ Return indices sampled from the multinomial distribution.
=============== ====================================================================== =============== ======================================================================
Control Flow Control Flow
...@@ -167,7 +168,9 @@ List Brief ...@@ -167,7 +168,9 @@ List Brief
`Copy`_ Copy A to B. `Copy`_ Copy A to B.
`Equal`_ *Equal* Comparing between A and B. `Equal`_ *Equal* Comparing between A and B.
`Less`_ *Less* Comparing between A and B. `Less`_ *Less* Comparing between A and B.
`LessEqual`_ *LessEqual* Comparing between A and B.
`Greater`_ *Greater* Comparing between A and B. `Greater`_ *Greater* Comparing between A and B.
`GreaterEqual`_ *GreaterEqual* Comparing between A and B.
=============== ====================================================================== =============== ======================================================================
Misc Misc
...@@ -277,34 +280,37 @@ List Brief ...@@ -277,34 +280,37 @@ List Brief
.. _InstanceNorm: operators/norm.html#dragon.operators.norm.InstanceNorm .. _InstanceNorm: operators/norm.html#dragon.operators.norm.InstanceNorm
.. _L2Norm: operators/norm.html#dragon.operators.norm.L2Norm .. _L2Norm: operators/norm.html#dragon.operators.norm.L2Norm
.. _Gather: operators/ndarray.html#dragon.operators.ndarray.Gather .. _Gather: operators/array.html#dragon.operators.array.Gather
.. _Crop: operators/ndarray.html#dragon.operators.ndarray.Crop .. _Crop: operators/array.html#dragon.operators.array.Crop
.. _Reduce: operators/ndarray.html#dragon.operators.ndarray.Reduce .. _Reduce: operators/array.html#dragon.operators.array.Reduce
.. _Sum: operators/ndarray.html#dragon.operators.ndarray.Sum .. _Sum: operators/array.html#dragon.operators.array.Sum
.. _Mean: operators/ndarray.html#dragon.operators.ndarray.Mean .. _Mean: operators/array.html#dragon.operators.array.Mean
.. _Max: operators/ndarray.html#dragon.operators.ndarray.Max .. _Max: operators/array.html#dragon.operators.array.Max
.. _ArgMax: operators/ndarray.html#dragon.operators.ndarray.ArgMax .. _ArgMax: operators/array.html#dragon.operators.array.ArgMax
.. _Min: operators/ndarray.html#dragon.operators.ndarray.Min .. _Min: operators/array.html#dragon.operators.array.Min
.. _ArgMin: operators/ndarray.html#dragon.operators.ndarray.ArgMin .. _ArgMin: operators/array.html#dragon.operators.array.ArgMin
.. _Slice: operators/ndarray.html#dragon.operators.ndarray.Slice .. _Slice: operators/array.html#dragon.operators.array.Slice
.. _Stack: operators/ndarray.html#dragon.operators.ndarray.Stack .. _Stack: operators/array.html#dragon.operators.array.Stack
.. _Concat: operators/ndarray.html#dragon.operators.ndarray.Concat .. _Concat: operators/array.html#dragon.operators.array.Concat
.. _Transpose: operators/ndarray.html#dragon.operators.ndarray.Transpose .. _Transpose: operators/array.html#dragon.operators.array.Transpose
.. _Repeat: operators/ndarray.html#dragon.operators.ndarray.Repeat .. _Repeat: operators/array.html#dragon.operators.array.Repeat
.. _Tile: operators/ndarray.html#dragon.operators.ndarray.Tile .. _Tile: operators/array.html#dragon.operators.array.Tile
.. _Pad: operators/ndarray.html#dragon.operators.ndarray.Pad .. _Pad: operators/array.html#dragon.operators.array.Pad
.. _OneHot: operators/ndarray.html#dragon.operators.ndarray.OneHot .. _OneHot: operators/array.html#dragon.operators.array.OneHot
.. _Flatten: operators/ndarray.html#dragon.operators.ndarray.Flatten .. _Flatten: operators/array.html#dragon.operators.array.Flatten
.. _Reshape: operators/ndarray.html#dragon.operators.ndarray.Reshape .. _Reshape: operators/array.html#dragon.operators.array.Reshape
.. _Squeeze: operators/ndarray.html#dragon.operators.ndarray.Squeeze .. _Squeeze: operators/array.html#dragon.operators.array.Squeeze
.. _ExpandDims: operators/ndarray.html#dragon.operators.ndarray.ExpandDims .. _ExpandDims: operators/array.html#dragon.operators.array.ExpandDims
.. _Shape: operators/ndarray.html#dragon.operators.ndarray.Shape .. _Shape: operators/array.html#dragon.operators.array.Shape
.. _Arange: operators/ndarray.html#dragon.operators.ndarray.Arange .. _Arange: operators/array.html#dragon.operators.array.Arange
.. _Multinomial: operators/array.html#dragon.operators.array.Multinomial
.. _Copy: operators/control_flow.html#dragon.operators.control_flow.Copy .. _Copy: operators/control_flow.html#dragon.operators.control_flow.Copy
.. _Equal: operators/control_flow.html#dragon.operators.control_flow.Equal .. _Equal: operators/control_flow.html#dragon.operators.control_flow.Equal
.. _Less: operators/control_flow.html#dragon.operators.control_flow.Less .. _Less: operators/control_flow.html#dragon.operators.control_flow.Less
.. _LessEqual: operators/control_flow.html#dragon.operators.control_flow.LessEqual
.. _Greater: operators/control_flow.html#dragon.operators.control_flow.Greater .. _Greater: operators/control_flow.html#dragon.operators.control_flow.Greater
.. _GreaterEqual: operators/control_flow.html#dragon.operators.control_flow.GreaterEqual
.. _Cast: operators/misc.html#dragon.operators.misc.Cast .. _Cast: operators/misc.html#dragon.operators.misc.Cast
.. _Run: operators/misc.html#dragon.operators.misc.Run .. _Run: operators/misc.html#dragon.operators.misc.Run
......
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_ARGMAX_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_ARGMAX_OP_H_
#define DRAGON_OPERATORS_NDARRAY_ARGMAX_OP_H_ #define DRAGON_OPERATORS_ARRAY_ARGMAX_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -46,4 +46,4 @@ DEFINE_ARGUMENT_WITH_DESC(int64_t, ArangeOp, step); ...@@ -46,4 +46,4 @@ DEFINE_ARGUMENT_WITH_DESC(int64_t, ArangeOp, step);
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_ARANGE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_ARANGE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_ARGREDUCE_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_ARGREDUCE_OP_H_
#define DRAGON_OPERATORS_NDARRAY_ARGREDUCE_OP_H_ #define DRAGON_OPERATORS_ARRAY_ARGREDUCE_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -39,4 +39,4 @@ class ArgReduceOp final : public Operator<Context> { ...@@ -39,4 +39,4 @@ class ArgReduceOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_ARGREDUCE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_ARGREDUCE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_CONCAT_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_CONCAT_OP_H_
#define DRAGON_OPERATORS_NDARRAY_CONCAT_OP_H_ #define DRAGON_OPERATORS_ARRAY_CONCAT_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -55,4 +55,4 @@ class ConcatGradientOp : public Operator<Context> { ...@@ -55,4 +55,4 @@ class ConcatGradientOp : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_CONCAT_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_CONCAT_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_CROP_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_CROP_OP_H_
#define DRAGON_OPERATORS_NDARRAY_CROP_OP_H_ #define DRAGON_OPERATORS_ARRAY_CROP_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -63,4 +63,4 @@ class CropGradientOp final : public Operator<Context> { ...@@ -63,4 +63,4 @@ class CropGradientOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_CROP_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_CROP_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_DIMENSION_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_DIMENSION_OP_H_
#define DRAGON_OPERATORS_NDARRAY_DIMENSION_OP_H_ #define DRAGON_OPERATORS_ARRAY_DIMENSION_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -151,4 +151,4 @@ DEFINE_DIMENSION_GRADIENT_OP(Squeeze); ...@@ -151,4 +151,4 @@ DEFINE_DIMENSION_GRADIENT_OP(Squeeze);
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_RESHAPE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_RESHAPE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_GATHER_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_GATHER_OP_H_
#define DRAGON_OPERATORS_NDARRAY_GATHER_OP_H_ #define DRAGON_OPERATORS_ARRAY_GATHER_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -52,4 +52,4 @@ class GatherGradientOp final : public Operator<Context> { ...@@ -52,4 +52,4 @@ class GatherGradientOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_GATHER_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_GATHER_OP_H_
\ No newline at end of file \ No newline at end of file
/*!
* Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
*
* Licensed under the BSD 2-Clause License.
* You should have received a copy of the BSD 2-Clause License
* along with the software. If not, see
*
* <https://opensource.org/licenses/BSD-2-Clause>
*
* ------------------------------------------------------------
*/
#ifndef DRAGON_OPERATORS_ARRAY_MULTINOMIAL_OP_H_
#define DRAGON_OPERATORS_ARRAY_MULTINOMIAL_OP_H_
#include "core/operator.h"
namespace dragon {
template <class Context>
class MultinomialOp final : public Operator<Context> {
public:
MultinomialOp(const OperatorDef& def, Workspace* ws)
: Operator<Context>(def, ws),
normalize(OperatorBase::Arg<int64_t>("normalize", 0)),
num_samples(OperatorBase::Arg<int64_t>("num_samples", 1)) {}
USE_OPERATOR_FUNCTIONS;
void SoftmaxRun();
void RunOnDevice() override;
template <typename T> void RunWithType();
protected:
Tensor* prob;
int64_t normalize, num_samples, outer_dim, axis;
unique_ptr<OperatorBase> softmax_op;
};
} // namespace dragon
#endif // DRAGON_OPERATORS_ARRAY_MULTINOMIAL_OP_H_
\ No newline at end of file
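A hedged usage sketch for the new *Multinomial* operator, assuming it is exposed as `dragon.ops.Multinomial` with the `num_samples`/`normalize` arguments declared above; per the shape inference added elsewhere in this commit, the output holds int64 indices and its last dimension becomes `num_samples`:
>>> import dragon as dg
>>> probs = dg.Tensor('probs', shape=[2, 3], dtype='float32').Uniform(0, 1)
>>> samples = dg.ops.Multinomial(probs, num_samples=4, normalize=True)
>>> samples.shape    # [2, 4], dtype 'int64'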
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_ONE_HOT_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_ONE_HOT_OP_H_
#define DRAGON_OPERATORS_NDARRAY_ONE_HOT_OP_H_ #define DRAGON_OPERATORS_ARRAY_ONE_HOT_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -36,4 +36,4 @@ class OneHotOp final : public Operator < Context > { ...@@ -36,4 +36,4 @@ class OneHotOp final : public Operator < Context > {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_ONE_HOT_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_ONE_HOT_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_PAD_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_PAD_OP_H_
#define DRAGON_OPERATORS_NDARRAY_PAD_OP_H_ #define DRAGON_OPERATORS_ARRAY_PAD_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -73,4 +73,4 @@ class PadGradientOp final : public Operator<Context> { ...@@ -73,4 +73,4 @@ class PadGradientOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_PAD_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_PAD_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_REDUCE_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_REDUCE_OP_H_
#define DRAGON_OPERATORS_NDARRAY_REDUCE_OP_H_ #define DRAGON_OPERATORS_ARRAY_REDUCE_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -59,4 +59,4 @@ class ReduceGradientOp final : public Operator<Context> { ...@@ -59,4 +59,4 @@ class ReduceGradientOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_REDUCE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_REDUCE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_REPEAT_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_REPEAT_OP_H_
#define DRAGON_OPERATORS_NDARRAY_REPEAT_OP_H_ #define DRAGON_OPERATORS_ARRAY_REPEAT_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -58,4 +58,4 @@ DEFINE_ARGUMENT_WITH_DESC(int64_t, RepeatGradientOp, repeats); ...@@ -58,4 +58,4 @@ DEFINE_ARGUMENT_WITH_DESC(int64_t, RepeatGradientOp, repeats);
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_REPEAT_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_REPEAT_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_SHAPE_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_SHAPE_OP_H_
#define DRAGON_OPERATORS_NDARRAY_SHAPE_OP_H_ #define DRAGON_OPERATORS_ARRAY_SHAPE_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -27,4 +27,4 @@ class ShapeOp final : public Operator<Context> { ...@@ -27,4 +27,4 @@ class ShapeOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif //DRAGON_OPERATORS_NDARRAY_SHAPE_OP_H_ #endif //DRAGON_OPERATORS_ARRAY_SHAPE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_SLICE_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_SLICE_OP_H_
#define DRAGON_OPERATORS_NDARRAY_SLICE_OP_H_ #define DRAGON_OPERATORS_ARRAY_SLICE_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -55,4 +55,4 @@ class SliceGradientOp final : public Operator<Context> { ...@@ -55,4 +55,4 @@ class SliceGradientOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_SLICE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_SLICE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_STACK_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_STACK_OP_H_
#define DRAGON_OPERATORS_NDARRAY_STACK_OP_H_ #define DRAGON_OPERATORS_ARRAY_STACK_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -50,4 +50,4 @@ class StackGradientOp final : public Operator<Context> { ...@@ -50,4 +50,4 @@ class StackGradientOp final : public Operator<Context> {
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_STACK_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_STACK_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_TILE_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_TILE_OP_H_
#define DRAGON_OPERATORS_NDARRAY_TILE_OP_H_ #define DRAGON_OPERATORS_ARRAY_TILE_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -60,4 +60,4 @@ DEFINE_ARGUMENTS_WITH_DESC(int64_t, TileGradientOp, multiples); ...@@ -60,4 +60,4 @@ DEFINE_ARGUMENTS_WITH_DESC(int64_t, TileGradientOp, multiples);
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_TILE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_TILE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
* ------------------------------------------------------------ * ------------------------------------------------------------
*/ */
#ifndef DRAGON_OPERATORS_NDARRAY_TRANSPOSE_OP_H_ #ifndef DRAGON_OPERATORS_ARRAY_TRANSPOSE_OP_H_
#define DRAGON_OPERATORS_NDARRAY_TRANSPOSE_OP_H_ #define DRAGON_OPERATORS_ARRAY_TRANSPOSE_OP_H_
#include "core/operator.h" #include "core/operator.h"
...@@ -56,4 +56,4 @@ DEFINE_ARGUMENTS_WITH_DESC(int64_t, TransposeGradientOp, perm); ...@@ -56,4 +56,4 @@ DEFINE_ARGUMENTS_WITH_DESC(int64_t, TransposeGradientOp, perm);
} // namespace dragon } // namespace dragon
#endif // DRAGON_OPERATORS_NDARRAY_TRANSPOSE_OP_H_ #endif // DRAGON_OPERATORS_ARRAY_TRANSPOSE_OP_H_
\ No newline at end of file \ No newline at end of file
...@@ -30,7 +30,9 @@ class CompareOp final : public Operator<Context> { ...@@ -30,7 +30,9 @@ class CompareOp final : public Operator<Context> {
void RunOnDevice() override; void RunOnDevice() override;
template <typename T> void EqualRunWithType(); template <typename T> void EqualRunWithType();
template <typename T> void LessRunWithType(); template <typename T> void LessRunWithType();
template <typename T> void LessEqualRunWithType();
template <typename T> void GreaterRunWithType(); template <typename T> void GreaterRunWithType();
template <typename T> void GreaterEqualRunWithType();
protected: protected:
string operation; string operation;
......
...@@ -192,6 +192,6 @@ DEFINE_ARGUMENTS_WITH_DESC(int64_t, InitializeOp, dims); ...@@ -192,6 +192,6 @@ DEFINE_ARGUMENTS_WITH_DESC(int64_t, InitializeOp, dims);
DEFINE_ARGUMENTS_WITH_DESC(int64_t, FillOp, dims); DEFINE_ARGUMENTS_WITH_DESC(int64_t, FillOp, dims);
DEFINE_ARGUMENTS_WITH_DESC(int64_t, GivenTensorFillOp, dims); DEFINE_ARGUMENTS_WITH_DESC(int64_t, GivenTensorFillOp, dims);
} // namespace } // namespace dragon
#endif // DRAGON_OPERATORS_MISC_INITIALIZE_OP_H_ #endif // DRAGON_OPERATORS_MISC_INITIALIZE_OP_H_
\ No newline at end of file
...@@ -351,6 +351,14 @@ void Less( ...@@ -351,6 +351,14 @@ void Less(
Context* ctx); Context* ctx);
template <typename T, class Context> template <typename T, class Context>
void LessEqual(
const int count,
const T* a,
const T* b,
bool* y,
Context* ctx);
template <typename T, class Context>
void Greater( void Greater(
const int count, const int count,
const T* a, const T* a,
...@@ -358,6 +366,14 @@ void Greater( ...@@ -358,6 +366,14 @@ void Greater(
bool* y, bool* y,
Context* ctx); Context* ctx);
template <typename T, class Context>
void GreaterEqual(
const int count,
const T* a,
const T* b,
bool* y,
Context* ctx);
/*! loss.l1_loss */ /*! loss.l1_loss */
template <typename T, class Context> template <typename T, class Context>
...@@ -574,7 +590,7 @@ void ImageData( ...@@ -574,7 +590,7 @@ void ImageData(
Ty* y, Ty* y,
Context* ctx); Context* ctx);
/*! ndarray.arange */ /*! array.arange */
template <typename T, class Context> template <typename T, class Context>
void Arange( void Arange(
...@@ -584,7 +600,7 @@ void Arange( ...@@ -584,7 +600,7 @@ void Arange(
T* y, T* y,
Context* ctx); Context* ctx);
/*! ndarray.argreduce */ /*! array.argreduce */
template <typename T, class Context> template <typename T, class Context>
void ArgMax( void ArgMax(
...@@ -608,7 +624,7 @@ void ArgMin( ...@@ -608,7 +624,7 @@ void ArgMin(
T* values, T* values,
Context* ctx); Context* ctx);
/*! ndarray.gather */ /*! array.gather */
template <typename T, class Context> template <typename T, class Context>
void Gather( void Gather(
...@@ -632,7 +648,7 @@ void GatherGrad( ...@@ -632,7 +648,7 @@ void GatherGrad(
T* dx, T* dx,
Context* ctx); Context* ctx);
/*! ndarray.concat */ /*! array.concat */
template <typename T, class Context> template <typename T, class Context>
void Concat( void Concat(
...@@ -645,7 +661,7 @@ void Concat( ...@@ -645,7 +661,7 @@ void Concat(
T* y, T* y,
Context* ctx); Context* ctx);
/*! ndarray.crop */ /*! array.crop */
template <typename T, class Context> template <typename T, class Context>
void Crop( void Crop(
...@@ -669,7 +685,7 @@ void CropGrad( ...@@ -669,7 +685,7 @@ void CropGrad(
T* dx, T* dx,
Context* ctx); Context* ctx);
/*! ndarray.pad */ /*! array.pad */
template <typename T, class Context> template <typename T, class Context>
void ConstPad( void ConstPad(
...@@ -708,7 +724,7 @@ void EdgePad( ...@@ -708,7 +724,7 @@ void EdgePad(
T* y, T* y,
Context* ctx); Context* ctx);
/*! ndarray.one_hot */ /*! array.one_hot */
template <typename T, class Context> template <typename T, class Context>
void OneHot( void OneHot(
...@@ -719,7 +735,7 @@ void OneHot( ...@@ -719,7 +735,7 @@ void OneHot(
T* y, T* y,
Context* ctx); Context* ctx);
/*! ndarray.reduce */ /*! array.reduce */
template <typename T, class Context> template <typename T, class Context>
void ReduceSum( void ReduceSum(
...@@ -744,7 +760,7 @@ void ReduceSumGrad( ...@@ -744,7 +760,7 @@ void ReduceSumGrad(
T* dx, T* dx,
Context* ctx); Context* ctx);
/*! ndarray.repeat */ /*! array.repeat */
template <typename T, class Context> template <typename T, class Context>
void Repeat( void Repeat(
...@@ -766,7 +782,7 @@ void RepeatGrad( ...@@ -766,7 +782,7 @@ void RepeatGrad(
T* dx, T* dx,
Context* ctx); Context* ctx);
/*! ndarray.slice */ /*! array.slice */
template <typename T, class Context> template <typename T, class Context>
void Slice( void Slice(
...@@ -790,7 +806,7 @@ void SliceGrad( ...@@ -790,7 +806,7 @@ void SliceGrad(
T* x, T* x,
Context* ctx); Context* ctx);
/*! ndarray.tile */ /*! array.tile */
template <typename T, class Context> template <typename T, class Context>
void Tile( void Tile(
...@@ -812,7 +828,7 @@ void TileGrad( ...@@ -812,7 +828,7 @@ void TileGrad(
T* dx, T* dx,
Context* ctx); Context* ctx);
/*! ndarray.transpose */ /*! array.transpose */
template <typename T, class Context> template <typename T, class Context>
void Transpose( void Transpose(
......
...@@ -70,12 +70,6 @@ void OnImportModule() { ...@@ -70,12 +70,6 @@ void OnImportModule() {
PYBIND11_MODULE(libdragon, m) { PYBIND11_MODULE(libdragon, m) {
/* ------------------------------------ *
* *
* Workspace *
* *
* ------------------------------------ */
/*! \brief Switch to the specific workspace */ /*! \brief Switch to the specific workspace */
m.def("SwitchWorkspace", &SwitchWorkspace); m.def("SwitchWorkspace", &SwitchWorkspace);
...@@ -133,6 +127,7 @@ PYBIND11_MODULE(libdragon, m) { ...@@ -133,6 +127,7 @@ PYBIND11_MODULE(libdragon, m) {
g_workspaces[target_workspace]->Clear(); g_workspaces[target_workspace]->Clear();
}); });
/*! \brief Copy the array data to the tensor */
m.def("FeedTensor", []( m.def("FeedTensor", [](
const string& name, const string& name,
pybind11::object value, pybind11::object value,
...@@ -150,6 +145,7 @@ PYBIND11_MODULE(libdragon, m) { ...@@ -150,6 +145,7 @@ PYBIND11_MODULE(libdragon, m) {
PyArrayObject*>(value.ptr()), tensor); PyArrayObject*>(value.ptr()), tensor);
}); });
/*! \brief Copy the tensor data to the array */
m.def("FetchTensor", [](const string& name) { m.def("FetchTensor", [](const string& name) {
if (!g_workspace->HasTensor(name)) if (!g_workspace->HasTensor(name))
LOG(FATAL) << "Tensor(" + name + ") " LOG(FATAL) << "Tensor(" + name + ") "
...@@ -169,7 +165,7 @@ PYBIND11_MODULE(libdragon, m) { ...@@ -169,7 +165,7 @@ PYBIND11_MODULE(libdragon, m) {
} }
}); });
/*! Misc */ /*! \brief Return a unique dummy name */
m.def("GetDummyName", []( m.def("GetDummyName", [](
const string& basename, const string& basename,
const string& suffix, const string& suffix,
......
...@@ -63,12 +63,6 @@ void AddProtoMethods(pybind11::module& m) { ...@@ -63,12 +63,6 @@ void AddProtoMethods(pybind11::module& m) {
[](OperatorDef* self, const vector<string>& output) { [](OperatorDef* self, const vector<string>& output) {
*(self->mutable_output()) = { output.begin(), output.end() }; *(self->mutable_output()) = { output.begin(), output.end() };
}); });
m.def("TestOperatorDefs", [](vector<OperatorDef*> defs) {
for (auto* def : defs) {
std::cout << def->DebugString() << std::endl;
}
});
} }
} // namespace python } // namespace python
......
...@@ -27,7 +27,7 @@ void AddTensorMethods(pybind11::module& m) { ...@@ -27,7 +27,7 @@ void AddTensorMethods(pybind11::module& m) {
.def_property_readonly("size", &Tensor::size) .def_property_readonly("size", &Tensor::size)
.def_property_readonly("dtype", [](Tensor* self) { .def_property_readonly("dtype", [](Tensor* self) {
return TypeMetaToString(self->meta()); return TypeMetaToString(self->meta());
}).def_property_readonly("ctx", [](Tensor* self) { }).def_property_readonly("device", [](Tensor* self) {
if (self->has_memory()) { if (self->has_memory()) {
Map<string, string> mem_info = self->memory()->info(); Map<string, string> mem_info = self->memory()->info();
return std::tuple<string, int>( return std::tuple<string, int>(
......
...@@ -41,7 +41,7 @@ from dragon.vm.theano.tensor import grad as grad ...@@ -41,7 +41,7 @@ from dragon.vm.theano.tensor import grad as grad
from dragon.core.scope import name_scope, get_default_name_scope from dragon.core.scope import name_scope, get_default_name_scope
from dragon.core.scope import phase_scope, get_default_phase from dragon.core.scope import phase_scope, get_default_phase
from dragon.core.scope import device_scope, get_default_device from dragon.core.scope import device_scope, get_default_device
from dragon.core.scope import WorkspaceScope as workspace_scope from dragon.core.scope import WorkspaceScope as ws_scope
# Version # Version
from dragon.version import version from dragon.version import version
......
...@@ -20,8 +20,9 @@ import dragon.core.logging as logging ...@@ -20,8 +20,9 @@ import dragon.core.logging as logging
option = {} option = {}
# The current device, 'CPU', 'CUDA' or 'CNML' # The current device
option['device'] = 'CPU' # enumeration in ('cpu', 'cuda', 'cnml')
option['device'] = 'cpu'
# The device index # The device index
option['device_id'] = 0 option['device_id'] = 0
...@@ -73,7 +74,7 @@ def EnableCPU(): ...@@ -73,7 +74,7 @@ def EnableCPU():
""" """
global option global option
option['device'] = 'CPU' option['device'] = 'cpu'
def EnableCUDA(gpu_id=0, use_cudnn=True): def EnableCUDA(gpu_id=0, use_cudnn=True):
...@@ -92,7 +93,7 @@ def EnableCUDA(gpu_id=0, use_cudnn=True): ...@@ -92,7 +93,7 @@ def EnableCUDA(gpu_id=0, use_cudnn=True):
""" """
global option global option
option['device'] = 'CUDA' option['device'] = 'cuda'
option['device_id'] = gpu_id option['device_id'] = gpu_id
option['use_cudnn'] = use_cudnn option['use_cudnn'] = use_cudnn
...@@ -111,7 +112,7 @@ def EnableCNML(mlu_id=0): ...@@ -111,7 +112,7 @@ def EnableCNML(mlu_id=0):
""" """
global option global option
option['device'] = 'CNML' option['device'] = 'cnml'
option['device_id'] = mlu_id option['device_id'] = mlu_id
......
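A short sketch of the lowercase device options (assuming the module is importable as `dragon.config`):
>>> import dragon.config as cfg
>>> cfg.EnableCUDA(gpu_id=0, use_cudnn=True)
>>> cfg.option['device'], cfg.option['device_id']
('cuda', 0)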
...@@ -15,7 +15,7 @@ from __future__ import absolute_import ...@@ -15,7 +15,7 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import dragon.import_c_api as C import dragon.import_c_api as _C
def IsCUDADriverSufficient(): def IsCUDADriverSufficient():
...@@ -27,7 +27,7 @@ def IsCUDADriverSufficient(): ...@@ -27,7 +27,7 @@ def IsCUDADriverSufficient():
``True`` if your device(s) support CUDA, otherwise ``False``. ``True`` if your device(s) support CUDA, otherwise ``False``.
""" """
return C.IsCUDADriverSufficient() return _C.IsCUDADriverSufficient()
def GetDevice(): def GetDevice():
...@@ -39,7 +39,7 @@ def GetDevice(): ...@@ -39,7 +39,7 @@ def GetDevice():
The device index. The device index.
""" """
return C.cudaGetDevice() return _C.cudaGetDevice()
def SynchronizeStream(device_id=None, stream_id=0): def SynchronizeStream(device_id=None, stream_id=0):
...@@ -55,5 +55,5 @@ def SynchronizeStream(device_id=None, stream_id=0): ...@@ -55,5 +55,5 @@ def SynchronizeStream(device_id=None, stream_id=0):
The stream index. The stream index.
""" """
return C.cudaStreamSynchronize( return _C.cudaStreamSynchronize(
device_id if device_id else -1, stream_id) device_id if device_id else -1, stream_id)
\ No newline at end of file
...@@ -93,7 +93,7 @@ class GraphGradientMaker(object): ...@@ -93,7 +93,7 @@ class GraphGradientMaker(object):
""" """
if forward_op.type in C.NO_GRADIENT_OPERATORS: if forward_op.type in C.NO_GRADIENT_OPERATORS:
for input in forward_op.input: blacklist.add(input) for input in forward_op.input: blacklist.add(input)
return (True, None) return True, None
# Generate virtual grads for targets if necessary # Generate virtual grads for targets if necessary
gen_grads = [] gen_grads = []
...@@ -107,11 +107,11 @@ class GraphGradientMaker(object): ...@@ -107,11 +107,11 @@ class GraphGradientMaker(object):
for output in forward_op.output: for output in forward_op.output:
if inputs_to_grads.get(output, None) is None: if inputs_to_grads.get(output, None) is None:
# check failed: skip backward # check failed: skip backward
if output in blacklist: return (True, gen_grads) if output in blacklist: return True, gen_grads
if len(forward_op.output) == 1: return (True, gen_grads) if len(forward_op.output) == 1: return True, gen_grads
# Pass, even if missing some grads # Pass, even if missing some grads
return (False, gen_grads) return False, gen_grads
@classmethod @classmethod
def Make(cls, forward_ops, targets, input_grads=None, auto_names=True): def Make(cls, forward_ops, targets, input_grads=None, auto_names=True):
......
...@@ -16,8 +16,8 @@ from __future__ import division ...@@ -16,8 +16,8 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import math import math
import numpy as np import numpy
import dragon as dg import dragon
class OperatorHelper(object): class OperatorHelper(object):
...@@ -39,11 +39,11 @@ class OperatorHelper(object): ...@@ -39,11 +39,11 @@ class OperatorHelper(object):
@classmethod @classmethod
def get_index_and_name(cls, prefix='Op'): def get_index_and_name(cls, prefix='Op'):
name = dg.workspace.GetDummyName(prefix, domain='Operator') name = dragon.workspace.GetDummyName(prefix, domain='Operator')
try: try:
_, op_idx = name.split('_') _, op_idx = name.split('_')
except: except:
name = dg.workspace.GetDummyName(prefix, domain='Operator') name = dragon.workspace.GetDummyName(prefix, domain='Operator')
_, op_idx = name.split('_') _, op_idx = name.split('_')
return int(op_idx), name return int(op_idx), name
...@@ -216,7 +216,7 @@ class OperatorHelper(object): ...@@ -216,7 +216,7 @@ class OperatorHelper(object):
for i in range(3): for i in range(3):
try: try:
if i == 0: if i == 0:
outputs[0].shape[i] = np.prod(inputs[0].shape[:axis]) outputs[0].shape[i] = numpy.prod(inputs[0].shape[:axis])
if i >= 1: if i >= 1:
outputs[0].shape[i] = inputs[0].shape[axis] outputs[0].shape[i] = inputs[0].shape[axis]
except: pass except: pass
...@@ -581,7 +581,7 @@ class OperatorHelper(object): ...@@ -581,7 +581,7 @@ class OperatorHelper(object):
if axis is None: if axis is None:
try: try:
fake_shape = inputs[0].shape[:] fake_shape = inputs[0].shape[:]
total_count = np.prod(fake_shape) total_count = numpy.prod(fake_shape)
outputs[0].shape = [total_count * repeats] outputs[0].shape = [total_count * repeats]
except: except:
outputs[0].shape = [None] outputs[0].shape = [None]
...@@ -643,7 +643,7 @@ class OperatorHelper(object): ...@@ -643,7 +643,7 @@ class OperatorHelper(object):
outputs[0].shape = [None] * len(shape) outputs[0].shape = [None] * len(shape)
n_elements, n_elements_known = None, None n_elements, n_elements_known = None, None
try: try:
n_elements = int(np.prod(inputs[0].shape)) n_elements = int(numpy.prod(inputs[0].shape))
except: except:
pass pass
for i, s in enumerate(shape): for i, s in enumerate(shape):
...@@ -654,7 +654,7 @@ class OperatorHelper(object): ...@@ -654,7 +654,7 @@ class OperatorHelper(object):
except: except:
pass pass
try: try:
n_elements_known = int(np.prod(outputs[0].shape)) n_elements_known = int(numpy.prod(outputs[0].shape))
except: except:
pass pass
for i, s in enumerate(shape): for i, s in enumerate(shape):
...@@ -738,6 +738,16 @@ class OperatorHelper(object): ...@@ -738,6 +738,16 @@ class OperatorHelper(object):
outputs[0].shape = [count] outputs[0].shape = [count]
return outputs return outputs
@classmethod
def _apply_Multinomial(cls, arguments, inputs, outputs):
outputs[0].dtype = 'int64'
try:
outputs[0].shape = inputs[0].shape[:]
outputs[0].shape[-1] = arguments['num_samples']
except:
pass
return outputs
############################################### ###############################################
# # # #
# Vision # # Vision #
......
...@@ -15,18 +15,18 @@ from __future__ import absolute_import ...@@ -15,18 +15,18 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import numpy as np import numpy
TENSOR_TYPE_TO_NP_TYPE = { TENSOR_TYPE_TO_NP_TYPE = {
'bool': np.bool, 'bool': numpy.bool,
'int8': np.int8, 'int8': numpy.int8,
'uint8': np.uint8, 'uint8': numpy.uint8,
'int32': np.int32, 'int32': numpy.int32,
'int64': np.int64, 'int64': numpy.int64,
'float16': np.float16, 'float16': numpy.float16,
'float32': np.float32, 'float32': numpy.float32,
'float64': np.float64, 'float64': numpy.float64,
} }
......
...@@ -15,7 +15,7 @@ from __future__ import absolute_import ...@@ -15,7 +15,7 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import dragon.import_c_api as C import dragon.import_c_api as _C
_GLOBAL_MPI_IS_INIT = False _GLOBAL_MPI_IS_INIT = False
...@@ -40,7 +40,7 @@ def Init(): ...@@ -40,7 +40,7 @@ def Init():
This function can only be called once. This function can only be called once.
""" """
C.MPIInit() _C.MPIInit()
global _GLOBAL_MPI_IS_INIT global _GLOBAL_MPI_IS_INIT
global _GLOBAL_MPI_SNAPSHOT_RANKS global _GLOBAL_MPI_SNAPSHOT_RANKS
_GLOBAL_MPI_IS_INIT = True _GLOBAL_MPI_IS_INIT = True
...@@ -68,7 +68,7 @@ def Rank(): ...@@ -68,7 +68,7 @@ def Rank():
""" """
_check_init() _check_init()
return C.MPIRank() return _C.MPIRank()
def Size(): def Size():
...@@ -81,7 +81,7 @@ def Size(): ...@@ -81,7 +81,7 @@ def Size():
""" """
_check_init() _check_init()
return C.MPISize() return _C.MPISize()
def CreateGroup(root=0, incl=[], excl=[]): def CreateGroup(root=0, incl=[], excl=[]):
...@@ -103,7 +103,7 @@ def CreateGroup(root=0, incl=[], excl=[]): ...@@ -103,7 +103,7 @@ def CreateGroup(root=0, incl=[], excl=[]):
""" """
_check_init() _check_init()
return C.MPICreateGroup(root, incl, excl) return _C.MPICreateGroup(root, incl, excl)
def Snapshot(incl): def Snapshot(incl):
...@@ -226,4 +226,4 @@ def Finalize(): ...@@ -226,4 +226,4 @@ def Finalize():
""" """
_check_init() _check_init()
C.MPIFinalize() _C.MPIFinalize()
\ No newline at end of file \ No newline at end of file
...@@ -21,7 +21,7 @@ import numpy as np ...@@ -21,7 +21,7 @@ import numpy as np
from google.protobuf.message import Message from google.protobuf.message import Message
import dragon.config as cfg import dragon.config as cfg
import dragon.import_c_api as C import dragon.import_c_api as _C
from dragon.proto import dragon_pb2 as pb from dragon.proto import dragon_pb2 as pb
from dragon.core.scope import get_default_device from dragon.core.scope import get_default_device
...@@ -97,7 +97,7 @@ def MakeCXXOperatorDef( ...@@ -97,7 +97,7 @@ def MakeCXXOperatorDef(
op_type, inputs=(), outputs=(), op_type, inputs=(), outputs=(),
name='', uid=None, device_option=None, name='', uid=None, device_option=None,
arg=None, engine=None, **kwargs): arg=None, engine=None, **kwargs):
c_def = C.OperatorDef() c_def = _C.OperatorDef()
py_def = MakeOperatorDef( py_def = MakeOperatorDef(
op_type, inputs, outputs, name, uid, op_type, inputs, outputs, name, uid,
device_option, arg, engine, **kwargs) device_option, arg, engine, **kwargs)
...@@ -118,7 +118,7 @@ def MakeDeviceOption( ...@@ -118,7 +118,7 @@ def MakeDeviceOption(
_PREDEFINED_DEVICE_LIMITS = 16 _PREDEFINED_DEVICE_LIMITS = 16
_PREDEFINED_DEVICE_ENGINES = ['', 'CUDNN'] _PREDEFINED_DEVICE_ENGINES = ['', 'CUDNN']
_PREDEFINED_DEVICE_DICT = {'CPU': 0, 'CUDA': 1, 'CNML': 2} _PREDEFINED_DEVICE_DICT = {'cpu': 0, 'cuda': 1, 'cnml': 2}
_PREDEFINED_DEVICE_OPTION_DICT = {} _PREDEFINED_DEVICE_OPTION_DICT = {}
...@@ -127,8 +127,8 @@ for i in range(_PREDEFINED_DEVICE_LIMITS): ...@@ -127,8 +127,8 @@ for i in range(_PREDEFINED_DEVICE_LIMITS):
for engine in _PREDEFINED_DEVICE_ENGINES: for engine in _PREDEFINED_DEVICE_ENGINES:
_PREDEFINED_DEVICE_OPTION_DICT[(device, i, engine)] = \ _PREDEFINED_DEVICE_OPTION_DICT[(device, i, engine)] = \
MakeDeviceOption(identify, i, engine) MakeDeviceOption(identify, i, engine)
if device == 'CUDA': if device == 'cuda':
_PREDEFINED_DEVICE_OPTION_DICT[('CUDA', i)] = \ _PREDEFINED_DEVICE_OPTION_DICT[('cuda', i)] = \
MakeDeviceOption(identify, i, 'CUDNN') MakeDeviceOption(identify, i, 'CUDNN')
......
...@@ -14,7 +14,7 @@ from __future__ import division ...@@ -14,7 +14,7 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import threading import threading
import dragon.import_c_api as C import dragon.import_c_api as _C
from contextlib import contextmanager from contextlib import contextmanager
...@@ -76,7 +76,7 @@ class WorkspaceScope(object): ...@@ -76,7 +76,7 @@ class WorkspaceScope(object):
-------- --------
>>> import dragon as dg >>> import dragon as dg
>>> with WorkspaceScope('session1'): pass >>> with WorkspaceScope('session1'): pass
>>> with dg.workspace_scope('session2'): pass >>> with dg.ws_scope('session2'): pass
""" """
def __init__(self, ws_name): def __init__(self, ws_name):
...@@ -88,11 +88,11 @@ class WorkspaceScope(object): ...@@ -88,11 +88,11 @@ class WorkspaceScope(object):
self.prev = 'default' self.prev = 'default'
def __enter__(self): def __enter__(self):
self.prev = C.CurrentWorkspace() self.prev = _C.CurrentWorkspace()
C.SwitchWorkspace(self.ws, True) _C.SwitchWorkspace(self.ws, True)
def __exit__(self, type, value, traceback): def __exit__(self, type, value, traceback):
C.SwitchWorkspace(self.prev, True) _C.SwitchWorkspace(self.prev, True)
_GLOBAL_TENSOR_STACK = _ThreadLocalStack() _GLOBAL_TENSOR_STACK = _ThreadLocalStack()
...@@ -133,7 +133,7 @@ def device_scope(device_type, device_id=0, engine='AUTO'): ...@@ -133,7 +133,7 @@ def device_scope(device_type, device_id=0, engine='AUTO'):
Parameters Parameters
---------- ----------
device_type : {'CPU', 'GPU', 'CUDA', 'CNML'}, required device_type : {'cpu', 'gpu', 'cuda', 'cnml'}, required
The type of device. The type of device.
device_id : int, optional device_id : int, optional
The index of the device. The index of the device.
...@@ -143,9 +143,9 @@ def device_scope(device_type, device_id=0, engine='AUTO'): ...@@ -143,9 +143,9 @@ def device_scope(device_type, device_id=0, engine='AUTO'):
""" """
device_type, device_id, device_engine = \ device_type, device_id, device_engine = \
device_type.upper(), device_id, engine.upper() device_type.lower(), device_id, engine.upper()
assert device_type in ['CPU', 'GPU', 'CUDA', 'CNML'] assert device_type in ['cpu', 'gpu', 'cuda', 'cnml']
# Default names # Default names
if device_type == 'GPU': device_type = 'CUDA' if device_type == 'gpu': device_type = 'cuda'
if device_engine == 'AUTO': device_engine = 'CUDNN' if device_engine == 'AUTO': device_engine = 'CUDNN'
return _GLOBAL_DEVICE_STACK.get_controller({ return _GLOBAL_DEVICE_STACK.get_controller({
'device_type': device_type, 'device_type': device_type,
......
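A sketch of the renamed workspace scope together with the lowercase device scope (names follow the exports shown earlier in this commit; 'gpu' is normalized to 'cuda'):
>>> import dragon as dg
>>> with dg.ws_scope('session1'):            # formerly dg.workspace_scope
...     with dg.device_scope('gpu', device_id=0, engine='AUTO'):
...         pass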
...@@ -45,11 +45,11 @@ class Tensor(object): ...@@ -45,11 +45,11 @@ class Tensor(object):
Parameters Parameters
---------- ----------
name : None or str name : str, optional
The name of Tensor. The name of Tensor.
shape : None or list shape : list, optional
The shape of Tensor. The shape of Tensor.
dtype : None or str dtype : str, optional
The type of Tensor. The type of Tensor.
Returns Returns
...@@ -94,7 +94,7 @@ class Tensor(object): ...@@ -94,7 +94,7 @@ class Tensor(object):
Parameters Parameters
---------- ----------
value : number value : number, optional, default=0
The constant value. The constant value.
Returns Returns
...@@ -105,14 +105,14 @@ class Tensor(object): ...@@ -105,14 +105,14 @@ class Tensor(object):
""" """
return self.Fill('constant', value=value) return self.Fill('constant', value=value)
def Uniform(self, low=-1, high=1): def Uniform(self, low=0, high=1):
"""Register as a variable with uniform initializer. """Register as a variable with uniform initializer.
Parameters Parameters
---------- ----------
low : number low : number, optional, default=0
The lower bound of uniform distribution. The lower bound of uniform distribution.
high : number high : number, optional, default=1
The higher bound of uniform distribution. The higher bound of uniform distribution.
Returns Returns
...@@ -128,9 +128,9 @@ class Tensor(object): ...@@ -128,9 +128,9 @@ class Tensor(object):
Parameters Parameters
---------- ----------
mu : number mu : number, optional, default=0
The mu of normal distribution. The mu of normal distribution.
sigma : number sigma : number, optional, default=1
The sigma of normal distribution. The sigma of normal distribution.
Returns Returns
...@@ -146,9 +146,9 @@ class Tensor(object): ...@@ -146,9 +146,9 @@ class Tensor(object):
Parameters Parameters
---------- ----------
mu : number mu : number, optional, default=0
The mu of normal distribution. The mu of normal distribution.
sigma : number sigma : number, optional, default=1
The sigma of normal distribution. The sigma of normal distribution.
Returns Returns
...@@ -164,9 +164,9 @@ class Tensor(object): ...@@ -164,9 +164,9 @@ class Tensor(object):
Parameters Parameters
---------- ----------
mean : number mean : number, optional, default=0
The mean(mu) of normal distribution. The mean(mu) of normal distribution.
std : number std : number, optional, default=1
The std(sigma) of normal distribution. The std(sigma) of normal distribution.
Returns Returns
...@@ -177,12 +177,12 @@ class Tensor(object): ...@@ -177,12 +177,12 @@ class Tensor(object):
""" """
return self.Normal(mu=mean, sigma=std) return self.Normal(mu=mean, sigma=std)
def GlorotUniform(self, scale=3.0): def GlorotUniform(self, scale=3.):
"""Register as a variable with glorot uniform initializer. """Register as a variable with glorot uniform initializer.
Parameters Parameters
---------- ----------
scale : number scale : number, optional, default=3.
The scale factor. The scale factor.
Returns Returns
...@@ -193,12 +193,12 @@ class Tensor(object): ...@@ -193,12 +193,12 @@ class Tensor(object):
""" """
return self.Fill('glorot_uniform', scale=scale) return self.Fill('glorot_uniform', scale=scale)
def GlorotNormal(self, scale=2.0): def GlorotNormal(self, scale=2.):
"""Register as a variable with glorot normal initializer. """Register as a variable with glorot normal initializer.
Parameters Parameters
---------- ----------
scale : number scale : number, optional, default=2.
The scale factor. The scale factor.
Returns Returns
...@@ -244,7 +244,7 @@ class Tensor(object): ...@@ -244,7 +244,7 @@ class Tensor(object):
Parameters Parameters
---------- ----------
value : None or str value : str
The name to set. The name to set.
Returns Returns
...@@ -270,7 +270,7 @@ class Tensor(object): ...@@ -270,7 +270,7 @@ class Tensor(object):
Parameters Parameters
---------- ----------
str name : str
The name. The name.
Returns Returns
...@@ -284,6 +284,11 @@ class Tensor(object): ...@@ -284,6 +284,11 @@ class Tensor(object):
def shape(self): def shape(self):
"""Return or Set the shape. """Return or Set the shape.
Parameters
----------
value : sequence of int
The shape to set.
Returns Returns
------- -------
sequence of int sequence of int
...@@ -344,7 +349,7 @@ class Tensor(object): ...@@ -344,7 +349,7 @@ class Tensor(object):
---------- ----------
dtype : str dtype : str
The specific dtype. The specific dtype.
inplace : boolean inplace : boolean, optional, default=False
Whether to modify the inputs. Whether to modify the inputs.
Returns Returns
...@@ -651,6 +656,99 @@ class Tensor(object): ...@@ -651,6 +656,99 @@ class Tensor(object):
""" """
return self.__mul__(-1.0) return self.__mul__(-1.0)
def __gt__(self, other):
"""Compute *self* > *other* element-wise.
Parameters
----------
other : Tensor or number
The other tensor.
Returns
-------
Tensor
The output tensor.
"""
if not isinstance(other, Tensor):
other = self._from_constants(other)
return self.CreateOperator('Compare', [self, other], operation='GT')
def __ge__(self, other):
"""Compute *self* > *other* element-wise.
Parameters
----------
other : Tensor or number
The other tensor.
Returns
-------
Tensor
The output tensor.
"""
if not isinstance(other, Tensor):
other = self._from_constants(other)
return self.CreateOperator('Compare', [self, other], operation='GE')
def __lt__(self, other):
"""Compute *self* < *other* element-wise.
Parameters
----------
other : Tensor or number
The other tensor.
Returns
-------
Tensor
The output tensor.
"""
if not isinstance(other, Tensor):
other = self._from_constants(other)
return self.CreateOperator('Compare', [self, other], operation='LT')
def __le__(self, other):
"""Compute *self* <= *other* element-wise.
Parameters
----------
other : Tensor or number
The other tensor.
Returns
-------
Tensor
The output tensor.
"""
if not isinstance(other, Tensor):
other = self._from_constants(other)
return self.CreateOperator('Compare', [self, other], operation='LE')
def __eq__(self, other):
"""Compute *self* == *other* element-wise.
Parameters
----------
other : Tensor or number
The other tensor.
Returns
-------
Tensor
The output tensor.
"""
if not isinstance(other, Tensor):
other = self._from_constants(other)
return self.CreateOperator('Compare', [self, other], operation='EQ')
def __hash__(self):
return id(self)
def __call__(self, *args, **kwargs): def __call__(self, *args, **kwargs):
"""Print the expressions. """Print the expressions.
...@@ -984,7 +1082,7 @@ class Tensor(object): ...@@ -984,7 +1082,7 @@ class Tensor(object):
---------- ----------
value : number or Tensor value : number or Tensor
The value to convert. The value to convert.
dtype : str, optional dtype : str, optional, default='float32'
The data type of the tensor. The data type of the tensor.
Returns Returns
......
...@@ -15,11 +15,8 @@ from __future__ import absolute_import ...@@ -15,11 +15,8 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import numpy as np import numpy
import dragon as dg import dragon
from google.protobuf.message import Message
import dragon.import_c_api as C
from dragon.core.tensor import Tensor from dragon.core.tensor import Tensor
from dragon.core.proto_utils import GetDeviceOption from dragon.core.proto_utils import GetDeviceOption
...@@ -50,7 +47,7 @@ def FromShape(shape, dtype='float32', name=None): ...@@ -50,7 +47,7 @@ def FromShape(shape, dtype='float32', name=None):
tensor.shape = list(shape) tensor.shape = list(shape)
if not isinstance(shape, (tuple, list)): if not isinstance(shape, (tuple, list)):
raise TypeError('The shape should be a tuple or list.') raise TypeError('The shape should be a tuple or list.')
C.TensorFromShape( dragon.C.TensorFromShape(
_stringify_tensor(tensor), _stringify_tensor(tensor),
list(shape), dtype) list(shape), dtype)
return tensor return tensor
...@@ -73,7 +70,7 @@ def SetShape(tensor, shape, dtype='float32'): ...@@ -73,7 +70,7 @@ def SetShape(tensor, shape, dtype='float32'):
None None
""" """
C.TensorFromShape(_stringify_tensor(tensor), shape, dtype) dragon.C.TensorFromShape(_stringify_tensor(tensor), shape, dtype)
def FromTensor(src, src_ctx=None, name=None, ctx=None): def FromTensor(src, src_ctx=None, name=None, ctx=None):
...@@ -100,9 +97,9 @@ def FromTensor(src, src_ctx=None, name=None, ctx=None): ...@@ -100,9 +97,9 @@ def FromTensor(src, src_ctx=None, name=None, ctx=None):
""" """
tensor = _try_get_tensor(name) tensor = _try_get_tensor(name)
if src_ctx is None: src_ctx = GetDeviceOption('CPU') if src_ctx is None: src_ctx = GetDeviceOption('cpu')
if ctx is None: ctx = GetDeviceOption('CPU') if ctx is None: ctx = GetDeviceOption('cpu')
C.TensorFromTensor( dragon.C.TensorFromTensor(
_stringify_tensor(tensor), _stringify_tensor(src), _stringify_tensor(tensor), _stringify_tensor(src),
_stringify_proto(ctx), _stringify_proto(src_ctx)) _stringify_proto(ctx), _stringify_proto(src_ctx))
return tensor return tensor
...@@ -130,9 +127,9 @@ def FromPyArray(array, name=None): ...@@ -130,9 +127,9 @@ def FromPyArray(array, name=None):
""" """
tensor = _try_get_tensor(name) tensor = _try_get_tensor(name)
if not isinstance(array, np.ndarray): if not isinstance(array, numpy.ndarray):
raise TypeError('The given nd-array should be numpy.ndarray.') raise TypeError('The given nd-array should be numpy.ndarray.')
C.TensorFromPyArray(_stringify_tensor(tensor), array) dragon.C.TensorFromPyArray(_stringify_tensor(tensor), array)
return tensor return tensor
...@@ -157,7 +154,7 @@ def SetPyArray(tensor, array): ...@@ -157,7 +154,7 @@ def SetPyArray(tensor, array):
The wrapper of ``TensorFromPyArrayCC``. The wrapper of ``TensorFromPyArrayCC``.
""" """
C.TensorFromPyArray(_stringify_tensor(tensor), array) dragon.C.TensorFromPyArray(_stringify_tensor(tensor), array)
def ToPyArray(tensor, readonly=False): def ToPyArray(tensor, readonly=False):
...@@ -178,7 +175,7 @@ def ToPyArray(tensor, readonly=False): ...@@ -178,7 +175,7 @@ def ToPyArray(tensor, readonly=False):
The array sharing the memory with original tensor. The array sharing the memory with original tensor.
""" """
return C.TensorToPyArray(_stringify_tensor(tensor), readonly) return dragon.C.TensorToPyArray(_stringify_tensor(tensor), readonly)
def GetStorage(tensor): def GetStorage(tensor):
...@@ -196,8 +193,8 @@ def GetStorage(tensor): ...@@ -196,8 +193,8 @@ def GetStorage(tensor):
""" """
tensor = _stringify_tensor(tensor) tensor = _stringify_tensor(tensor)
if not dg.workspace.HasTensor(tensor): return None if not dragon.workspace.HasTensor(tensor): return None
return C.GetTensor(tensor) return dragon.C.GetTensor(tensor)
def _stringify_proto(obj): def _stringify_proto(obj):
...@@ -213,9 +210,5 @@ def _stringify_tensor(obj): ...@@ -213,9 +210,5 @@ def _stringify_tensor(obj):
def _try_get_tensor(name=None): def _try_get_tensor(name=None):
"""Try to create or get a tensor""" """Try to create or get a tensor"""
if name is None or name == '': if name is None or name == '': return Tensor()
return Tensor() else: return Tensor.Ref(name)
else: \ No newline at end of file
tensor = Tensor('')
tensor.set_name(name)
return tensor
\ No newline at end of file
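For orientation, a minimal usage sketch of the helpers above, assuming they stay importable from ``dragon.core.tensor_utils`` (as elsewhere in this diff) and that a workspace is active; the tensor names and values are illustrative:
>>> import numpy
>>> from dragon.core import tensor_utils
>>> x = tensor_utils.FromShape((2, 3), dtype='float32', name='sketch/x')
>>> y = tensor_utils.FromPyArray(numpy.ones((2, 3), 'float32'), name='sketch/y')
>>> array = tensor_utils.ToPyArray(y)  # shares memory with the backend tensor
>>> array.shape
>>> (2, 3)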
...@@ -25,20 +25,16 @@ from __future__ import division ...@@ -25,20 +25,16 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import os import os
import numpy as np import numpy
import threading import threading
import six.moves.cPickle as pickle import six.moves.cPickle as pickle
from google.protobuf.message import Message import dragon.import_c_api as _C
import dragon.import_c_api as C
import dragon.core.logging as logging import dragon.core.logging as logging
import dragon.proto.dragon_pb2 as pb
from dragon.config import GetGlobalOptions from dragon.config import GetGlobalOptions
import dragon.core.mpi as mpi from dragon.core import mpi, mapping, proto_utils
import dragon.proto.dragon_pb2 as pb
import dragon.core.proto_utils as pb_utils
import dragon.core.mapping as mapping
def CurrentWorkspace(): def CurrentWorkspace():
...@@ -50,7 +46,7 @@ def CurrentWorkspace(): ...@@ -50,7 +46,7 @@ def CurrentWorkspace():
The workspace name. The workspace name.
""" """
return C.CurrentWorkspace() return _C.CurrentWorkspace()
def SwitchWorkspace(workspace_name, create_if_missing=True): def SwitchWorkspace(workspace_name, create_if_missing=True):
...@@ -70,7 +66,7 @@ def SwitchWorkspace(workspace_name, create_if_missing=True): ...@@ -70,7 +66,7 @@ def SwitchWorkspace(workspace_name, create_if_missing=True):
""" """
if workspace_name == '': if workspace_name == '':
raise ValueError('The workspace name should not be empty.') raise ValueError('The workspace name should not be empty.')
C.SwitchWorkspace(workspace_name, create_if_missing) _C.SwitchWorkspace(workspace_name, create_if_missing)
def MoveWorkspace(target_ws, source_ws): def MoveWorkspace(target_ws, source_ws):
...@@ -90,7 +86,7 @@ def MoveWorkspace(target_ws, source_ws): ...@@ -90,7 +86,7 @@ def MoveWorkspace(target_ws, source_ws):
""" """
if target_ws == '' or source_ws == '': if target_ws == '' or source_ws == '':
raise ValueError('The target or source name can not be empty.') raise ValueError('The target or source name can not be empty.')
C.MoveWorkspace(target_ws, source_ws) _C.MoveWorkspace(target_ws, source_ws)
def ResetWorkspace(workspace_name=''): def ResetWorkspace(workspace_name=''):
...@@ -110,7 +106,7 @@ def ResetWorkspace(workspace_name=''): ...@@ -110,7 +106,7 @@ def ResetWorkspace(workspace_name=''):
None None
""" """
C.ResetWorkspace(workspace_name) _C.ResetWorkspace(workspace_name)
def ClearWorkspace(workspace_name=''): def ClearWorkspace(workspace_name=''):
...@@ -130,7 +126,7 @@ def ClearWorkspace(workspace_name=''): ...@@ -130,7 +126,7 @@ def ClearWorkspace(workspace_name=''):
None None
""" """
C.ClearWorkspace(workspace_name) _C.ClearWorkspace(workspace_name)
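As a hedged sketch of how these management helpers compose (assuming they are re-exported under ``dragon.workspace`` like ``FeedTensor`` below; the workspace name and output are illustrative):
>>> import dragon
>>> dragon.workspace.SwitchWorkspace('scratch', create_if_missing=True)
>>> dragon.workspace.CurrentWorkspace()
>>> 'scratch'
>>> dragon.workspace.ClearWorkspace('scratch')
>>> dragon.workspace.ResetWorkspace('scratch')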
def CreateGraph(graph_def): def CreateGraph(graph_def):
...@@ -150,7 +146,7 @@ def CreateGraph(graph_def): ...@@ -150,7 +146,7 @@ def CreateGraph(graph_def):
option = GetGlobalOptions() option = GetGlobalOptions()
LogMetaGraph(graph_def) LogMetaGraph(graph_def)
ExportMetaGraph(graph_def) ExportMetaGraph(graph_def)
return C.CreateGraph( return _C.CreateGraph(
_stringify_proto(graph_def), _stringify_proto(graph_def),
option['log_optimized_graph'], option['log_optimized_graph'],
) )
...@@ -173,7 +169,7 @@ def RunOperator(op_def, verbose=False): ...@@ -173,7 +169,7 @@ def RunOperator(op_def, verbose=False):
""" """
if isinstance(op_def, pb.OperatorDef): if isinstance(op_def, pb.OperatorDef):
op_def = op_def.SerializeToString() op_def = op_def.SerializeToString()
C.RunOperator(op_def, verbose) _C.RunOperator(op_def, verbose)
def HasTensor(tensor): def HasTensor(tensor):
...@@ -190,7 +186,7 @@ def HasTensor(tensor): ...@@ -190,7 +186,7 @@ def HasTensor(tensor):
The query result. The query result.
""" """
return C.HasTensor(_stringify_tensor(tensor)) return _C.HasTensor(_stringify_tensor(tensor))
def CreateTensor(tensor): def CreateTensor(tensor):
...@@ -206,7 +202,7 @@ def CreateTensor(tensor): ...@@ -206,7 +202,7 @@ def CreateTensor(tensor):
None None
""" """
return C.CreateTensor(_stringify_tensor(tensor)) return _C.CreateTensor(_stringify_tensor(tensor))
def CreateFiller(filler_def): def CreateFiller(filler_def):
...@@ -229,7 +225,7 @@ def CreateFiller(filler_def): ...@@ -229,7 +225,7 @@ def CreateFiller(filler_def):
""" """
filler_def = filler_def if isinstance(filler_def, str) \ filler_def = filler_def if isinstance(filler_def, str) \
else filler_def.SerializePartialToString() else filler_def.SerializePartialToString()
C.CreateFiller(filler_def) _C.CreateFiller(filler_def)
def GetFillerType(tensor): def GetFillerType(tensor):
...@@ -250,7 +246,7 @@ def GetFillerType(tensor): ...@@ -250,7 +246,7 @@ def GetFillerType(tensor):
The filler type. The filler type.
""" """
return C.GetFillerType(_stringify_tensor(tensor)) return _C.GetFillerType(_stringify_tensor(tensor))
def GetTensorName(tensor): def GetTensorName(tensor):
...@@ -271,7 +267,7 @@ def GetTensorName(tensor): ...@@ -271,7 +267,7 @@ def GetTensorName(tensor):
The query result may be different from the one used in the frontend. The query result may be different from the one used in the frontend.
""" """
return C.GetTensorName(_stringify_tensor(tensor)) return _C.GetTensorName(_stringify_tensor(tensor))
def SetTensorAlias(tensor, alias): def SetTensorAlias(tensor, alias):
...@@ -289,7 +285,7 @@ def SetTensorAlias(tensor, alias): ...@@ -289,7 +285,7 @@ def SetTensorAlias(tensor, alias):
None None
""" """
return C.SetTensorAlias(_stringify_tensor(tensor), alias) return _C.SetTensorAlias(_stringify_tensor(tensor), alias)
def FetchTensor(tensor): def FetchTensor(tensor):
...@@ -306,7 +302,7 @@ def FetchTensor(tensor): ...@@ -306,7 +302,7 @@ def FetchTensor(tensor):
The values copied from the backend. The values copied from the backend.
""" """
return C.FetchTensor(_stringify_tensor(tensor)) return _C.FetchTensor(_stringify_tensor(tensor))
def FeedTensor(tensor, array, force_cpu=False, dtype=None): def FeedTensor(tensor, array, force_cpu=False, dtype=None):
...@@ -329,14 +325,14 @@ def FeedTensor(tensor, array, force_cpu=False, dtype=None): ...@@ -329,14 +325,14 @@ def FeedTensor(tensor, array, force_cpu=False, dtype=None):
Examples Examples
-------- --------
>>> import dragon as dg >>> import dragon
>>> a = dg.Tensor().Variable() >>> a = dragon.Tensor().Variable()
>>> dg.workspace.FeedTensor(a, 1) >>> dragon.workspace.FeedTensor(a, 1)
>>> a_value = dg.workspace.FetchTensor(a) >>> a_value = dragon.workspace.FetchTensor(a)
>>> a_value, a_value.dtype >>> a_value, a_value.dtype
>>> [ 1.], "float32" >>> [ 1.], "float32"
>>> dg.workspace.FeedTensor(a, [[1, 2, 3]], dtype='float16') >>> dragon.workspace.FeedTensor(a, [[1, 2, 3]], dtype='float16')
>>> a_value = a.get_value() >>> a_value = a.get_value()
>>> a_value, a_value.dtype >>> a_value, a_value.dtype
>>> [[ 1. 2. 3.]], "float16" >>> [[ 1. 2. 3.]], "float16"
...@@ -344,13 +340,13 @@ def FeedTensor(tensor, array, force_cpu=False, dtype=None): ...@@ -344,13 +340,13 @@ def FeedTensor(tensor, array, force_cpu=False, dtype=None):
""" """
name = tensor.name if hasattr(tensor, 'name') else str(tensor) name = tensor.name if hasattr(tensor, 'name') else str(tensor)
if force_cpu is True: if force_cpu is True:
dev = pb_utils.GetDeviceOption('CPU') dev = proto_utils.GetDeviceOption('cpu')
else: else:
dev = pb_utils.GetDefaultDeviceOption() dev = proto_utils.GetDefaultDeviceOption()
if dev is None: dev = pb_utils.GetGlobalDeviceOption() if dev is None: dev = proto_utils.GetGlobalDeviceOption()
if not isinstance(array, np.ndarray): if not isinstance(array, numpy.ndarray):
auto_data_type = np.float32 if dtype is None else dtype auto_data_type = numpy.float32 if dtype is None else dtype
else: else:
auto_data_type = array.dtype if dtype is None else dtype auto_data_type = array.dtype if dtype is None else dtype
...@@ -365,8 +361,8 @@ def FeedTensor(tensor, array, force_cpu=False, dtype=None): ...@@ -365,8 +361,8 @@ def FeedTensor(tensor, array, force_cpu=False, dtype=None):
format(preset_data_type, dtype)) format(preset_data_type, dtype))
auto_data_type = preset_data_type auto_data_type = preset_data_type
nd_array = np.array(array, dtype=auto_data_type, copy=False) nd_array = numpy.array(array, dtype=auto_data_type, copy=False)
C.FeedTensor(name, nd_array, _stringify_proto(dev)) _C.FeedTensor(name, nd_array, _stringify_proto(dev))
def ResetTensor(tensor): def ResetTensor(tensor):
...@@ -384,7 +380,7 @@ def ResetTensor(tensor): ...@@ -384,7 +380,7 @@ def ResetTensor(tensor):
None None
""" """
return C.ResetTensor(_stringify_tensor(tensor)) return _C.ResetTensor(_stringify_tensor(tensor))
def RunGraph( def RunGraph(
...@@ -427,7 +423,7 @@ def RunGraph( ...@@ -427,7 +423,7 @@ def RunGraph(
# Run the graph according to the specified include/exclude rule # Run the graph according to the specified include/exclude rule
runtime_stage = stage if stage else 'default' runtime_stage = stage if stage else 'default'
rule = _PREDEFINED_GRAPH_RUNTIME_STAGES[runtime_stage] rule = _PREDEFINED_GRAPH_RUNTIME_STAGES[runtime_stage]
C.RunGraph(str(graph_name), str(rule['include']), str(rule['exclude'])) _C.RunGraph(str(graph_name), str(rule['include']), str(rule['exclude']))
# Try to return the outputs # Try to return the outputs
# Force to return may lead to asserts if outputs are not computed # Force to return may lead to asserts if outputs are not computed
...@@ -462,7 +458,7 @@ def FlowGradients(inputs, targets, input_grads=None, ignored_grads=None): ...@@ -462,7 +458,7 @@ def FlowGradients(inputs, targets, input_grads=None, ignored_grads=None):
if (option['log_optimized_graph'] or if (option['log_optimized_graph'] or
option['log_meta_graph']) else False option['log_meta_graph']) else False
C.FlowGradients( _C.FlowGradients(
inputs, targets, inputs, targets,
input_grads if input_grads else [], input_grads if input_grads else [],
ignored_grads if ignored_grads else [], ignored_grads if ignored_grads else [],
...@@ -520,8 +516,7 @@ def ExportMetaGraph(graph_def): ...@@ -520,8 +516,7 @@ def ExportMetaGraph(graph_def):
def Snapshot( def Snapshot(
tensors, filename, tensors, filename,
prefix='', suffix='.bin', prefix='', suffix='.bin',
format='default', format='default'):
):
"""Snapshot tensors into a binary file. """Snapshot tensors into a binary file.
Parameters Parameters
...@@ -566,7 +561,7 @@ def Snapshot( ...@@ -566,7 +561,7 @@ def Snapshot(
logging.info('Model Format: Pickle') logging.info('Model Format: Pickle')
elif format is 'caffe': elif format is 'caffe':
names = [tensor.name for tensor in tensors] names = [tensor.name for tensor in tensors]
C.Snapshot(file_path, names, 1) _C.Snapshot(file_path, names, 1)
else: raise TypeError('Unknown binary format: {}'.format(format)) else: raise TypeError('Unknown binary format: {}'.format(format))
...@@ -606,7 +601,7 @@ def Restore(binary_file, format='default'): ...@@ -606,7 +601,7 @@ def Restore(binary_file, format='default'):
elif format == 'caffe': elif format == 'caffe':
# Caffe models can't save the tensor name # Caffe models can't save the tensor name
# We simply use "layer_name/param:X" # We simply use "layer_name/param:X"
C.Restore(binary_file, 1) _C.Restore(binary_file, 1)
else: else:
raise TypeError('Unknown binary format: {}'.format(format)) raise TypeError('Unknown binary format: {}'.format(format))
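A hedged round-trip sketch using only the signatures shown above (assuming the helpers are exposed as ``dragon.workspace.Snapshot``/``Restore``; the tensor, values, and path are illustrative):
>>> import dragon
>>> w = dragon.Tensor().Variable()
>>> dragon.workspace.FeedTensor(w, [1., 2., 3.])
>>> dragon.workspace.Snapshot([w], filename='net', prefix='models/', suffix='.bin')
>>> dragon.workspace.Restore('models/net.bin')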
...@@ -636,7 +631,7 @@ def GetDummyName(basename, suffix='', domain='', zero_based=True): ...@@ -636,7 +631,7 @@ def GetDummyName(basename, suffix='', domain='', zero_based=True):
The unique dummy name. The unique dummy name.
""" """
return C.GetDummyName(basename, suffix, domain, zero_based) return _C.GetDummyName(basename, suffix, domain, zero_based)
def _stringify_proto(obj): def _stringify_proto(obj):
......
...@@ -69,9 +69,15 @@ class OpSchema(object): ...@@ -69,9 +69,15 @@ class OpSchema(object):
def Impl(*args, **kwargs): def Impl(*args, **kwargs):
inputs = args[0] inputs = args[0]
if isinstance(inputs, (list, tuple)): if isinstance(inputs, (list, tuple)):
dtype = None
for idx, input in enumerate(inputs):
if isinstance(input, Tensor) and \
input.dtype is not None:
dtype = input.dtype
break
for idx, input in enumerate(inputs): for idx, input in enumerate(inputs):
if not isinstance(input, Tensor): if not isinstance(input, Tensor):
inputs[idx] = Tensor.Convert(input, dtype=None) inputs[idx] = Tensor.Convert(input, dtype=dtype)
return op_func(inputs + list(args[1:]), **kwargs) return op_func(inputs + list(args[1:]), **kwargs)
else: else:
if not isinstance(inputs, Tensor): if not isinstance(inputs, Tensor):
......
...@@ -752,8 +752,8 @@ def Arange(start, stop=None, step=1, dtype='float32', **kwargs): ...@@ -752,8 +752,8 @@ def Arange(start, stop=None, step=1, dtype='float32', **kwargs):
Parameters Parameters
---------- ----------
start : int or Tensor inputs : Tensor
The start of the range. The input tensor.
stop : int or Tensor, optional stop : int or Tensor, optional
The stop of range. The stop of range.
step : int or Tensor, optional step : int or Tensor, optional
...@@ -769,4 +769,34 @@ def Arange(start, stop=None, step=1, dtype='float32', **kwargs): ...@@ -769,4 +769,34 @@ def Arange(start, stop=None, step=1, dtype='float32', **kwargs):
""" """
arguments = ParseArgs(locals()) arguments = ParseArgs(locals())
arguments['dtype'] = arguments['dtype'].lower() arguments['dtype'] = arguments['dtype'].lower()
return Tensor.CreateOperator('Arange', [], **arguments) return Tensor.CreateOperator('Arange', [], **arguments)
\ No newline at end of file
@OpSchema.Inputs(1)
def Multinomial(inputs, num_samples=1, normalize=False, **kwargs):
"""Return a tensor where each row contains ``num_samples``,
sampled from the multinomial distribution.
If ``normalize`` is *True*, negative inputs is accepted,
and will be normalized by a softmax function. (*TensorFlow* Style).
Otherwise, inputs should be non-negative. (*Torch* Style).
**Type Constraints**: (*int8*, *uint8*, *int32*, *int64*, *float32*, *float64*)
Parameters
----------
inputs : Tensor
The input tensor.
num_samples : int, optional, default=1
The number of samples.
normalize : boolean, optional, default=False
Whether to normalize the inputs.
Returns
-------
Tensor
An *int64* tensor containing the sampled indices.
"""
return Tensor.CreateOperator('Multinomial', **ParseArgs(locals()))
\ No newline at end of file
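A hedged, symbolic usage sketch of the new operator (assuming the alias registered later in this diff is reachable as ``dragon.ops.Multinomial``; the input tensor is illustrative and the call only builds the expression):
>>> import dragon
>>> logits = dragon.Tensor(shape=[2, 3]).Variable()
>>> samples = dragon.ops.Multinomial(logits, num_samples=4, normalize=True)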
...@@ -39,6 +39,7 @@ def Copy(inputs, **kwargs): ...@@ -39,6 +39,7 @@ def Copy(inputs, **kwargs):
return Tensor.CreateOperator('Copy', **arguments) return Tensor.CreateOperator('Copy', **arguments)
@OpSchema.ConvertConstantInputs()
@OpSchema.Inputs(2) @OpSchema.Inputs(2)
def Equal(inputs, to_uint8=False, **kwargs): def Equal(inputs, to_uint8=False, **kwargs):
"""``Equal`` comparing between A and B. """``Equal`` comparing between A and B.
...@@ -61,9 +62,10 @@ def Equal(inputs, to_uint8=False, **kwargs): ...@@ -61,9 +62,10 @@ def Equal(inputs, to_uint8=False, **kwargs):
""" """
arguments = ParseArgs(locals()) arguments = ParseArgs(locals())
return Tensor.CreateOperator('Compare', operation='EQUAL', **arguments) return Tensor.CreateOperator('Compare', operation='EQ', **arguments)
@OpSchema.ConvertConstantInputs()
@OpSchema.Inputs(2) @OpSchema.Inputs(2)
def Less(inputs, to_uint8=False, **kwargs): def Less(inputs, to_uint8=False, **kwargs):
"""``Less`` comparing between A and B. """``Less`` comparing between A and B.
...@@ -86,12 +88,65 @@ def Less(inputs, to_uint8=False, **kwargs): ...@@ -86,12 +88,65 @@ def Less(inputs, to_uint8=False, **kwargs):
""" """
arguments = ParseArgs(locals()) arguments = ParseArgs(locals())
return Tensor.CreateOperator('Compare', operation='LESS', **arguments) return Tensor.CreateOperator('Compare', operation='LT', **arguments)
@OpSchema.ConvertConstantInputs()
@OpSchema.Inputs(2)
def LessEqual(inputs, to_uint8=False, **kwargs):
"""``LessEqual`` comparing between A and B.
Set ``to_uint8`` if you expect the ``uint8`` results instead of ``bool``.
**Type Constraints**: (*bool*, *int8*, *uint8*, *int32*, *int64*, *float16*, *float32*, *float64*)
Parameters
----------
inputs : sequence of Tensor
The inputs, represent A and B respectively.
to_uint8 : bool
``True`` to convert to ``uint8`` results.
Returns
-------
Tensor
The comparing results.
"""
arguments = ParseArgs(locals())
return Tensor.CreateOperator('Compare', operation='LE', **arguments)
@OpSchema.ConvertConstantInputs()
@OpSchema.Inputs(2) @OpSchema.Inputs(2)
def Greater(inputs, to_uint8=False, **kwargs): def Greater(inputs, to_uint8=False, **kwargs):
"""``Less`` comparing between A and B. """``Greater`` comparing between A and B.
Set ``to_uint8`` if you expect the ``uint8`` results instead of ``bool``.
**Type Constraints**: (*bool*, *int8*, *uint8*, *int32*, *int64*, *float16*, *float32*, *float64*)
Parameters
----------
inputs : sequence of Tensor
The inputs, represent A and B respectively.
to_uint8 : bool
``True`` to convert to ``uint8`` results.
Returns
-------
Tensor
The comparing results.
"""
arguments = ParseArgs(locals())
return Tensor.CreateOperator('Compare', operation='GT', **arguments)
@OpSchema.ConvertConstantInputs()
@OpSchema.Inputs(2)
def GreaterEqual(inputs, to_uint8=False, **kwargs):
"""``GreaterEqual`` comparing between A and B.
Set ``to_uint8`` if you expect the ``uint8`` results instead of ``bool``. Set ``to_uint8`` if you expect the ``uint8`` results instead of ``bool``.
...@@ -111,4 +166,4 @@ def Greater(inputs, to_uint8=False, **kwargs): ...@@ -111,4 +166,4 @@ def Greater(inputs, to_uint8=False, **kwargs):
""" """
arguments = ParseArgs(locals()) arguments = ParseArgs(locals())
return Tensor.CreateOperator('Compare', operation='GREATER', **arguments) return Tensor.CreateOperator('Compare', operation='GE', **arguments)
\ No newline at end of file \ No newline at end of file
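Likewise, a hedged sketch for the comparison family added here (A and B are illustrative symbolic tensors; ``to_uint8`` selects *uint8* results instead of *bool*):
>>> import dragon
>>> a = dragon.Tensor(shape=[4]).Variable()
>>> b = dragon.Tensor(shape=[4]).Variable()
>>> mask = dragon.ops.GreaterEqual([a, b], to_uint8=True)
>>> flags = dragon.ops.LessEqual([a, b])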
...@@ -20,9 +20,8 @@ from .. import * ...@@ -20,9 +20,8 @@ from .. import *
def RNNParamSet( def RNNParamSet(
inputs, layer_id, param_id, param_type, inputs, layer_id, param_id, param_type,
rnn_mode, input_size, hidden_size, rnn_mode, input_size, hidden_size,
num_layers=1, num_directions=1, **kwargs num_layers=1, num_directions=1, **kwargs):
):
arguments = ParseArgs(locals()) arguments = ParseArgs(locals())
arguments['inputs'] = inputs[1] arguments['inputs'] = inputs[1]
arguments['existing_outputs'] = inputs[0] arguments['existing_outputs'] = inputs[0]
return Tensor.CreateOperator(op_type='RNNParamSet', **arguments) return Tensor.CreateOperator('RNNParamSet', **arguments)
\ No newline at end of file \ No newline at end of file
...@@ -13,24 +13,21 @@ from __future__ import absolute_import ...@@ -13,24 +13,21 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import numpy
import dragon
import warnings import warnings
import dragon as dg
import numpy as np
from dragon.core.tensor import Tensor from dragon.core.tensor import Tensor
from dragon.core.tensor_utils import FromShape from dragon.core.tensor_utils import FromShape
from dragon.operators.rnn.rnn_param import RNNParamSet
from .rnn_param import RNNParamSet
class RNNBase(object): class RNNBase(object):
"""A simple class wrapping general RNN ops. """A simple class wrapping general RNN ops."""
"""
def __init__(self, def __init__(self,
mode, input_size, hidden_size, num_layers=1, mode, input_size, hidden_size, num_layers=1,
bidirectional=False, dropout=0, name=None bidirectional=False, dropout=0, name=None):
):
eligible_rnn_modes = ('rnn_tanh', 'rnn_relu', 'lstm', 'gru') eligible_rnn_modes = ('rnn_tanh', 'rnn_relu', 'lstm', 'gru')
if mode.lower() not in eligible_rnn_modes: if mode.lower() not in eligible_rnn_modes:
raise ValueError('Unknown rnn mode: {}.' raise ValueError('Unknown rnn mode: {}.'
...@@ -54,26 +51,23 @@ class RNNBase(object): ...@@ -54,26 +51,23 @@ class RNNBase(object):
elif self.mode == 'gru': gate_size = 3 * self.hidden_size elif self.mode == 'gru': gate_size = 3 * self.hidden_size
else: gate_size = self.hidden_size else: gate_size = self.hidden_size
# 1. Plan weights # 1. Plan weights
self._matrix_weights = []; self._bias_weights = [] self._matrix_shape, self._bias_shape = [], []
for layer in range(self.num_layers): for layer in range(self.num_layers):
for direction in range(self.num_directions): for direction in range(self.num_directions):
layer_input_size = self.input_size if layer == 0 \ layer_input_size = self.input_size if layer == 0 \
else self.hidden_size * self.num_directions else self.hidden_size * self.num_directions
w_names = ['layer_{}/{}/{}'.format(layer, p, 'L' if direction == 0 else 'R') w_ih_shape = [gate_size, layer_input_size]
for p in ('matrix_ih', 'matrix_hh', 'bias_ih', 'bias_hh')] w_hh_shape = [gate_size, self.hidden_size]
w_ih = Tensor(name=w_names[0], shape=[gate_size, layer_input_size]) b_ih_shape, b_hh_shape = [gate_size], [gate_size]
w_hh = Tensor(name=w_names[1], shape=[gate_size, self.hidden_size])
b_ih = Tensor(name=w_names[2], shape=[gate_size,])
b_hh = Tensor(name=w_names[3], shape=[gate_size,])
# W (0 ~ 3), R (4 ~ 7) # W (0 ~ 3), R (4 ~ 7)
self._matrix_weights.extend([w_ih, w_hh]) self._matrix_shape.extend([w_ih_shape, w_hh_shape])
# Bw (0 ~ 3), Br (4 ~ 7) # Bw (0 ~ 3), Br (4 ~ 7)
self._bias_weights.extend([b_ih, b_hh]) self._bias_shape.extend([b_ih_shape, b_hh_shape])
# 2. Compute total number of parameters # 2. Compute total number of parameters
self._weights_count = 0 self._weights_count = 0
for w in self._matrix_weights + self._bias_weights: for shape in self._matrix_shape + self._bias_shape:
self._weights_count += np.prod(w.shape) self._weights_count += numpy.prod(shape)
# 3. Register the packed weights # 3. Register the packed weights
self.weights = FromShape(shape=[self._weights_count], self.weights = FromShape(shape=[self._weights_count],
...@@ -101,8 +95,8 @@ class RNNBase(object): ...@@ -101,8 +95,8 @@ class RNNBase(object):
############################################## ##############################################
def _uniform_init(self, shape, dtype='float32'): def _uniform_init(self, shape, dtype='float32'):
stdv = 1.0 / np.sqrt(self.hidden_size) stdv = 1.0 / numpy.sqrt(self.hidden_size)
return np.random.uniform(-stdv, stdv, shape).astype(dtype) return numpy.random.uniform(-stdv, stdv, shape).astype(dtype)
def _orthogonal_init(self, shape, gain=1, dtype='float32'): def _orthogonal_init(self, shape, gain=1, dtype='float32'):
num_rows = 1 num_rows = 1
...@@ -110,16 +104,16 @@ class RNNBase(object): ...@@ -110,16 +104,16 @@ class RNNBase(object):
num_cols = shape[-1] num_cols = shape[-1]
flat_shape = (num_cols, num_rows) if num_rows < num_cols \ flat_shape = (num_cols, num_rows) if num_rows < num_cols \
else (num_rows, num_cols) else (num_rows, num_cols)
W = np.random.randn(*flat_shape) W = numpy.random.randn(*flat_shape)
q, r = np.linalg.qr(W) q, r = numpy.linalg.qr(W)
# Make Q uniform # Make Q uniform
d = np.diag(r) d = numpy.diag(r)
q *= np.sign(d) q *= numpy.sign(d)
if num_rows < num_cols: q = q.T if num_rows < num_cols: q = q.T
return gain * q.reshape(shape).astype(dtype) return gain * q.reshape(shape).astype(dtype)
def _zero_init(self, shape, dtype='float32'): def _zero_init(self, shape, dtype='float32'):
return np.zeros(shape, dtype=dtype) return numpy.zeros(shape, dtype=dtype)
############################################## ##############################################
# # # #
...@@ -137,20 +131,19 @@ class RNNBase(object): ...@@ -137,20 +131,19 @@ class RNNBase(object):
raise ValueError('Unknown param type: ' + type) raise ValueError('Unknown param type: ' + type)
def _set_param(self, layer_id, param_id, param_type, param): def _set_param(self, layer_id, param_id, param_type, param):
if not isinstance(param, Tensor): if isinstance(param, numpy.ndarray):
if isinstance(param, np.ndarray): param_temp = dragon.Tensor.Ref('/tmp/rnn_param')
paramT = Tensor('/tmp/rnn_param').Variable() param_temp.set_value(param)
paramT.set_value(param) param = param_temp
param = paramT else: raise ValueError('Excepted a numpy array.')
else: raise ValueError('Excepted a tensor or numpy array.')
self.weights.expressions = dict() # Clear cached expressions self.weights.expressions = dict() # Clear cached expressions
outputs = RNNParamSet([self.weights, param], layer_id, param_id, param_type, outputs = RNNParamSet([self.weights, param], layer_id, param_id, param_type,
rnn_mode=self.mode, input_size=self.input_size, hidden_size=self.hidden_size, rnn_mode=self.mode, input_size=self.input_size, hidden_size=self.hidden_size,
num_layers=self.num_layers, num_directions=self.num_directions) num_layers=self.num_layers, num_directions=self.num_directions)
for k, v in outputs.expressions.items(): dg.workspace.RunOperator(v) for k, v in outputs.expressions.items(): dragon.workspace.RunOperator(v)
def _reset_params(self): def _reset_params(self):
np.random.seed(dg.config.GetRandomSeed()) numpy.random.seed(dragon.config.GetRandomSeed())
if self.mode == 'lstm': num_gates = 4 if self.mode == 'lstm': num_gates = 4
elif self.mode == 'gru': num_gates = 3 elif self.mode == 'gru': num_gates = 3
else: num_gates = 1 else: num_gates = 1
...@@ -166,8 +159,8 @@ class RNNBase(object): ...@@ -166,8 +159,8 @@ class RNNBase(object):
bias_init = getattr(self, '_{}_init'.format(bias_init)) bias_init = getattr(self, '_{}_init'.format(bias_init))
pseudo_layer_id = layer * self.num_directions + direction pseudo_layer_id = layer * self.num_directions + direction
packed_id = pseudo_layer_id * 2 + int(param_id / num_gates) packed_id = pseudo_layer_id * 2 + int(param_id / num_gates)
matrix_shape = self._matrix_weights[packed_id].shape[:] matrix_shape = self._matrix_shape[packed_id][:]
bias_shape = self._bias_weights[packed_id].shape[:] bias_shape = self._bias_shape[packed_id][:]
matrix_shape[0] = bias_shape[0] = int(matrix_shape[0] / num_gates) matrix_shape[0] = bias_shape[0] = int(matrix_shape[0] / num_gates)
self._set_param(layer_id=pseudo_layer_id, param_id=param_id, self._set_param(layer_id=pseudo_layer_id, param_id=param_id,
param_type='matrix', param=matrix_init(matrix_shape)) param_type='matrix', param=matrix_init(matrix_shape))
...@@ -202,6 +195,7 @@ class RNNBase(object): ...@@ -202,6 +195,7 @@ class RNNBase(object):
if not self._init_params: self._reset_params() if not self._init_params: self._reset_params()
arguments = { arguments = {
'op_type': 'Recurrent',
'inputs': [x, self.weights] + 'inputs': [x, self.weights] +
([hx] if hx else []) + ([hx] if hx else []) +
([cx] if cx else []), ([cx] if cx else []),
...@@ -213,11 +207,11 @@ class RNNBase(object): ...@@ -213,11 +207,11 @@ class RNNBase(object):
'dropout_ratio': self.dropout, 'dropout_ratio': self.dropout,
} }
if required_cell: n_out = 3 if required_cell: num_outputs = 3
elif required_hidden: n_out = 2 elif required_hidden: num_outputs = 2
else: n_out = 1 else: num_outputs = 1
return Tensor.CreateOperator(num_outputs=n_out, op_type='Recurrent', **arguments) return Tensor.CreateOperator(num_outputs=num_outputs, **arguments)
def __call__(self, *args, **kwargs): def __call__(self, *args, **kwargs):
return self.create(*args, **kwargs) return self.create(*args, **kwargs)
\ No newline at end of file
...@@ -24,7 +24,7 @@ from .operators import arithmetic as math_ops ...@@ -24,7 +24,7 @@ from .operators import arithmetic as math_ops
from .operators import control_flow as control_flow_ops from .operators import control_flow as control_flow_ops
from .operators import misc as misc_ops from .operators import misc as misc_ops
from .operators import mpi as mpi_ops from .operators import mpi as mpi_ops
from .operators import ndarray as array_ops from .operators import array as array_ops
from .operators import norm as norm_ops from .operators import norm as norm_ops
from .operators import recurrent as recurrent_ops from .operators import recurrent as recurrent_ops
from .operators import contrib as contrib_ops from .operators import contrib as contrib_ops
...@@ -137,12 +137,15 @@ ExpandDims = array_ops.ExpandDims ...@@ -137,12 +137,15 @@ ExpandDims = array_ops.ExpandDims
Squeeze = array_ops.Squeeze Squeeze = array_ops.Squeeze
Shape = array_ops.Shape Shape = array_ops.Shape
Arange = array_ops.Arange Arange = array_ops.Arange
Multinomial = array_ops.Multinomial
# Control Flow # Control Flow
Copy = control_flow_ops.Copy Copy = control_flow_ops.Copy
Equal = control_flow_ops.Equal Equal = control_flow_ops.Equal
Less = control_flow_ops.Less Less = control_flow_ops.Less
Grater = control_flow_ops.Greater LessEqual = control_flow_ops.LessEqual
Greater = control_flow_ops.Greater
GreaterEqual = control_flow_ops.GreaterEqual
# Misc # Misc
Cast = AsType = misc_ops.Cast Cast = AsType = misc_ops.Cast
......
...@@ -22,7 +22,7 @@ from __future__ import print_function ...@@ -22,7 +22,7 @@ from __future__ import print_function
import pprint import pprint
import dragon.core.workspace as ws from dragon.core import workspace
from dragon.core.tensor import Tensor from dragon.core.tensor import Tensor
...@@ -93,7 +93,7 @@ class BaseUpdater(object): ...@@ -93,7 +93,7 @@ class BaseUpdater(object):
defaults = self.__dict__.get('_defaults') defaults = self.__dict__.get('_defaults')
if item in defaults: if item in defaults:
if self._registered: if self._registered:
return ws.FetchTensor(self._slot + '/' + item) return workspace.FetchTensor(self._slot + '/' + item)
else: return defaults[item] else: return defaults[item]
return self.__dict__[item] return self.__dict__[item]
...@@ -101,7 +101,7 @@ class BaseUpdater(object): ...@@ -101,7 +101,7 @@ class BaseUpdater(object):
defaults = self.__dict__.get('_defaults') defaults = self.__dict__.get('_defaults')
if defaults is not None and key in defaults: if defaults is not None and key in defaults:
if self._registered: if self._registered:
ws.FeedTensor(self._slot + '/' + key, value, workspace.FeedTensor(self._slot + '/' + key, value,
dtype='float32', force_cpu=True) dtype='float32', force_cpu=True)
else: else:
self._defaults[key] = value self._defaults[key] = value
...@@ -111,7 +111,7 @@ class BaseUpdater(object): ...@@ -111,7 +111,7 @@ class BaseUpdater(object):
def register_in_workspace(self): def register_in_workspace(self):
if not self._registered: if not self._registered:
for k, v in self._defaults.items(): for k, v in self._defaults.items():
ws.FeedTensor(self._slot + "/" + k, v, workspace.FeedTensor(self._slot + "/" + k, v,
dtype='float32', force_cpu=True) dtype='float32', force_cpu=True)
self._registered = True self._registered = True
if self._verbose: if self._verbose:
......
...@@ -13,7 +13,6 @@ from __future__ import absolute_import ...@@ -13,7 +13,6 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import sys
import numpy as np import numpy as np
import numpy.random as npr import numpy.random as npr
from multiprocessing import Process from multiprocessing import Process
...@@ -105,18 +104,8 @@ class DataTransformer(Process): ...@@ -105,18 +104,8 @@ class DataTransformer(Process):
self._max_random_scale - self._min_random_scale) \ self._max_random_scale - self._min_random_scale) \
+ self._min_random_scale + self._min_random_scale
if random_scale != 1.0: if random_scale != 1.0:
if sys.version_info >= (3, 0): im = cv2.resize(im, None, fx=random_scale,
im = cv2.resize(im, None, interpolation=cv2.INTER_LINEAR, fy=random_scale, interpolation=cv2.INTER_LINEAR)
fx=random_scale, fy=random_scale)
else:
# Fuck Fuck Fuck opencv-python2, it always has a BUG
# that leads to duplicate cuDA handles created at gpu:0
new_shape = (
int(np.ceil(im.shape[1] * random_scale)),
int(np.ceil(im.shape[0] * random_scale)))
im = PIL.Image.fromarray(im)
im = im.resize(new_shape, PIL.Image.BILINEAR)
im = np.array(im)
# Padding # Padding
if self._padding > 0: if self._padding > 0:
......
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# Codes are based on:
#
# <https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/utils/timer.py>
#
# ------------------------------------------------------------
import time
class Timer(object):
"""A simple timer."""
def __init__(self):
self.total_time = 0.
self.calls = 0
self.start_time = 0.
self.diff = 0.
self.average_time = 0.
def tic(self):
self.start_time = time.time()
def toc(self, average=True):
self.diff = time.time() - self.start_time
self.total_time += self.diff
self.calls += 1
self.average_time = self.total_time / self.calls
if average:
return self.average_time
else:
return self.diff
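The class above is self-contained; a usage sketch (the timed section is illustrative):
>>> timer = Timer()
>>> timer.tic()
>>> # ... the code being timed ...
>>> elapsed = timer.toc(average=False)  # seconds for this tic/toc pair
>>> timer.average_time                  # total_time / calls so far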
...@@ -89,7 +89,7 @@ def native_run_graph(graph_def, inputs, initializer, init_func=None): ...@@ -89,7 +89,7 @@ def native_run_graph(graph_def, inputs, initializer, init_func=None):
# Create an anonymous workspace # Create an anonymous workspace
ws = Workspace() ws = Workspace()
with dg.workspace_scope(ws.name): with dg.ws_scope(ws.name):
# Register all the initializer before feeding them # Register all the initializer before feeding them
for name in initializer: for name in initializer:
dg.Tensor(name=name).Variable() dg.Tensor(name=name).Variable()
......
...@@ -27,7 +27,7 @@ class Workspace(object): ...@@ -27,7 +27,7 @@ class Workspace(object):
def __getattr__(self, attr): def __getattr__(self, attr):
def f(*args, **kwargs): def f(*args, **kwargs):
with dg.workspace_scope(self.name, ): with dg.ws_scope(self.name, ):
return getattr(dg.workspace, attr)(*args, **kwargs) return getattr(dg.workspace, attr)(*args, **kwargs)
return f return f
......
...@@ -290,7 +290,7 @@ class _DefaultGraphStack(_DefaultStack): ...@@ -290,7 +290,7 @@ class _DefaultGraphStack(_DefaultStack):
@tf_contextlib.contextmanager @tf_contextlib.contextmanager
def get_controller(self, default): def get_controller(self, default):
with super(_DefaultGraphStack, self).get_controller(default) as g: with super(_DefaultGraphStack, self).get_controller(default) as g:
with dragon.workspace_scope(g._workspace): with dragon.ws_scope(g._workspace):
yield g yield g
......
...@@ -121,7 +121,6 @@ class _CosineDecayRestarts(_DecayBase): ...@@ -121,7 +121,6 @@ class _CosineDecayRestarts(_DecayBase):
def run(self, inputs, outputs): def run(self, inputs, outputs):
gs = self.get(inputs[0]) gs = self.get(inputs[0])
global_step = min(gs - self.last_steps, self.decay_steps) global_step = min(gs - self.last_steps, self.decay_steps)
print(global_step, self.decay_steps)
cosine_decay = 0.5 * (1 + math.cos(math.pi * global_step / self.decay_steps)) cosine_decay = 0.5 * (1 + math.cos(math.pi * global_step / self.decay_steps))
decayed = (1. - self.alpha) * cosine_decay + self.alpha decayed = (1. - self.alpha) * cosine_decay + self.alpha
new_lr = self.learning_rate * decayed new_lr = self.learning_rate * decayed
......
...@@ -178,12 +178,12 @@ def GraphDef_Device(graph_def): ...@@ -178,12 +178,12 @@ def GraphDef_Device(graph_def):
""" """
from dragon.config import option from dragon.config import option
if option['device'] is not 'None': if option['device'] is not 'None':
supports = {'CPU': 0, 'CUDA': 1, 'CNML': 2} supports = {'cpu': 0, 'cuda': 1, 'cnml': 2}
device_option = pb.DeviceOption() device_option = pb.DeviceOption()
device_option.device_type = supports[option['device']] device_option.device_type = supports[option['device']]
device_option.device_id = option['device_id'] device_option.device_id = option['device_id']
device_option.random_seed = option['random_seed'] device_option.random_seed = option['random_seed']
if option['device'] == 'CUDA': if option['device'] == 'cuda':
if option['use_cudnn']: device_option.engine = 'CUDNN' if option['use_cudnn']: device_option.engine = 'CUDNN'
graph_def.device_option.CopyFrom(device_option) graph_def.device_option.CopyFrom(device_option)
......
...@@ -14,17 +14,16 @@ from __future__ import division ...@@ -14,17 +14,16 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
# Import Dynamic Methods # Import Dynamic Methods
import dragon.vm.torch.ops.builtin import dragon.vm.torch.ops.tensor
# Import Core Methods # Import Core Methods
from dragon.vm.torch.tensor import * from dragon.vm.torch.tensor import *
from dragon.vm.torch.tensor_uitls import from_numpy from dragon.vm.torch.c_api import Size, from_numpy
from dragon.vm.torch.c_api import Size
from dragon.vm.torch.serialization import save, load from dragon.vm.torch.serialization import save, load
# Import Subpackages # Import Subpackages
import dragon.vm.torch.cuda import dragon.vm.torch.cuda
from dragon.vm.torch.ops import * from dragon.vm.torch.ops.builtin import *
from dragon.vm.torch.autograd import * from dragon.vm.torch.autograd import *
import dragon.vm.torch.nn import dragon.vm.torch.nn
import dragon.vm.torch.optim import dragon.vm.torch.optim
......
...@@ -14,6 +14,10 @@ from __future__ import division ...@@ -14,6 +14,10 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
import copy import copy
import numpy
import importlib
from dragon.core import mapping, tensor_utils
class Size(tuple): class Size(tuple):
...@@ -27,30 +31,68 @@ class Size(tuple): ...@@ -27,30 +31,68 @@ class Size(tuple):
return 'torch.Size([{}])'.format(', '.join([str(s) for s in self])) return 'torch.Size([{}])'.format(', '.join([str(s) for s in self]))
class Context(object): class device(object):
def __init__(self, device_type='CPU', device_id=0): def __init__(self, type='cpu', index=0):
self._device_type = device_type self.type, self.index = type, index
self._device_id = device_id
@property def copy(self):
def device_type(self): return copy.deepcopy(self)
return self._device_type
@device_type.setter def __eq__(self, other):
def device_type(self, value): return self.type == other.type and \
self._device_type = value self.index == other.index
@property def __str__(self):
def device_id(self): return '{}:{}'.format(self.type, self.index)
return self._device_id
@device_id.setter def __repr__(self):
def device_id(self, value): return 'device(type={}, index={})'.format(self.type, self.index)
self._device_id = value
def copy(self):
return copy.deepcopy(self)
def __str__(self): def from_numpy(data):
return '{}:{}'.format( """Create a tensor from the given numpy array.
self._device_type, self._device_id)
\ No newline at end of file Parameters
----------
data : numpy.ndarray
The array with any supported data type.
Returns
-------
dragon.vm.torch.Tensor
The torch tensor.
"""
if not isinstance(data, numpy.ndarray):
raise TypeError('The data should be a numpy.ndarray.')
if str(data.dtype) not in mapping.TENSOR_TYPE_TO_TORCH_TENSOR:
raise ValueError('Unsupported type({}) to torch tensor.'.format(data.dtype))
module = importlib.import_module('dragon.vm.torch.tensor')
return getattr(module, mapping.TENSOR_TYPE_TO_TORCH_TENSOR[str(data.dtype)])(data)
def from_dragon(tensor, own_storage=False):
"""Create a torch tensor from a existing dragon tensor.
Set ``own_storage`` as ``True`` for automatically releasing the storage.
Parameters
----------
tensor : Tensor or str
The dragon tensor.
own_storage : boolean
Whether to release the storage when the torch tensor is destructed.
Returns
-------
dragon.vm.torch.Tensor
The torch tensor.
"""
storage = tensor_utils.GetStorage(tensor)
if storage is None: return None
module = importlib.import_module('dragon.vm.torch.tensor')
T = getattr(module, mapping.TENSOR_TYPE_TO_TORCH_TENSOR[storage.dtype])()
T._storage, T._own_storage, T._tensor = storage, own_storage, tensor
T._device = device(*storage.device)
return T
\ No newline at end of file
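A hedged sketch of the conversion helpers above (assuming ``from_numpy`` is re-exported from ``dragon.vm.torch`` as the updated ``__init__`` in this diff suggests; ``from_dragon`` is normally called internally, and the array is illustrative):
>>> import numpy
>>> import dragon.vm.torch as torch
>>> t = torch.from_numpy(numpy.zeros((2, 3), 'float32'))
>>> from dragon.vm.torch.c_api import device
>>> device('cuda', 0) == device('cuda', 0)
>>> True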
...@@ -32,10 +32,10 @@ import dragon as dg ...@@ -32,10 +32,10 @@ import dragon as dg
import dragon.import_c_api as C import dragon.import_c_api as C
from dragon.config import option from dragon.config import option
from .c_api import Context from .c_api import device as _Device
from .jit import JITRecorder, is_jit_enforced from .jit import JITRecorder, is_jit_enforced
from .autograd.grad_mode import is_grad_enabled from .autograd.grad_mode import is_grad_enabled
from .tensor import RuntimeTensor from .tensor import _RuntimeTensor
from .pool import TensorPool from .pool import TensorPool
...@@ -66,9 +66,9 @@ def RunOperator( ...@@ -66,9 +66,9 @@ def RunOperator(
outputs_name.append(output) outputs_name.append(output)
else: else:
# Legacy mode, a torch tensor is excepted # Legacy mode, a torch tensor is excepted
if isinstance(output, Context): if isinstance(output, _Device):
name = TensorPool.get('${JOIN}' if requires_grad else '${DETACH}') name = TensorPool.get('${JOIN}' if requires_grad else '${DETACH}')
outputs[ix] = RuntimeTensor(name, ctx=output) outputs[ix] = _RuntimeTensor(name, device=output)
outputs_name.append(outputs[ix].name) outputs_name.append(outputs[ix].name)
# Key + Inputs + Outputs => Op # Key + Inputs + Outputs => Op
......
...@@ -19,16 +19,15 @@ from __future__ import absolute_import ...@@ -19,16 +19,15 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import numpy
import dragon
import warnings
from collections import OrderedDict from collections import OrderedDict
import numpy as np from dragon.core import proto_utils, logging
import dragon as dg
import dragon.core.proto_utils as pb_utils
import dragon.core.logging as logging
from dragon.core.scope import get_default_name_scope from dragon.core.scope import get_default_name_scope
from dragon.vm.torch.c_api import Context from dragon.vm.torch.c_api import device as Device
from dragon.vm.torch.tensor import Tensor, Parameter from dragon.vm.torch.tensor import Tensor, Parameter
from dragon.vm.torch.execution import RunOperator from dragon.vm.torch.execution import RunOperator
from dragon.vm.torch.environ import add_submodule, get_module_name from dragon.vm.torch.environ import add_submodule, get_module_name
...@@ -39,8 +38,9 @@ class Module(object): ...@@ -39,8 +38,9 @@ class Module(object):
self._modules = OrderedDict() self._modules = OrderedDict()
self._parameters = OrderedDict() self._parameters = OrderedDict()
self._buffers = OrderedDict() self._buffers = OrderedDict()
self._module_key = self._def = None self._device = Device()
self._ctx = Context() self._module_key = None
self._module_def = None
self.training = True self.training = True
def __getattr__(self, item): def __getattr__(self, item):
...@@ -106,20 +106,8 @@ class Module(object): ...@@ -106,20 +106,8 @@ class Module(object):
module.state_dict(destination, prefix + name + '.', to_numpy=to_numpy) module.state_dict(destination, prefix + name + '.', to_numpy=to_numpy)
return destination return destination
def _load_state_dict_key_mismatch(self, full_name, name, is_missing): pass
def load_state_dict(self, state_dict, strict=True, verbose=True): def load_state_dict(self, state_dict, strict=True, verbose=True):
if verbose: logging.info('Load the state dict.') if verbose: logging.info('Load the state dict.')
def submodule_key_mismatch(full_name, is_missing):
module = self
names = full_name.split(".")
for module_name in names[:-1]:
if module_name in module._modules:
module = module._modules[module_name]
else:
return
module._load_state_dict_key_mismatch(full_name, names[-1], is_missing)
unexpected = [] unexpected = []
own_state = self.state_dict() own_state = self.state_dict()
for name, param in state_dict.items(): for name, param in state_dict.items():
...@@ -133,28 +121,24 @@ class Module(object): ...@@ -133,28 +121,24 @@ class Module(object):
', '.join([str(d) for d in param_shape]))) ', '.join([str(d) for d in param_shape])))
if isinstance(param, Tensor): if isinstance(param, Tensor):
own_state[name].copy_(param) own_state[name].copy_(param)
elif isinstance(param, np.ndarray): elif isinstance(param, numpy.ndarray):
dg.tensor_utils.SetPyArray(own_state[name], param) dragon.tensor_utils.SetPyArray(own_state[name], param)
else: else:
raise ValueError('Excepted the type of source state is either ' raise ValueError('Excepted the type of source state is either '
'dragon.vm.torch.Tensor or numpy.ndarray, got {}.'.format(type(param))) 'dragon.vm.torch.Tensor or numpy.ndarray, got {}.'.format(type(param)))
if verbose: if verbose:
logging.info('Tensor({}) loaded, Size: ({})'.format(name, logging.info('Tensor({}) loaded, Size: ({})'.format(name,
', '.join([str(d) for d in param_shape]))) ', '.join([str(d) for d in param_shape])))
else:
unexpected.append(name)
if strict: if strict:
missing = set(own_state.keys()) - set(state_dict.keys()) missing = set(own_state.keys()) - set(state_dict.keys())
# pass the mismatch info to submodules so that they have a chance to
# raise a custom class-specific error
for name in unexpected:
submodule_key_mismatch(name, False)
for name in missing:
submodule_key_mismatch(name, True)
error_msg = '' error_msg = ''
if len(unexpected) > 0: if len(unexpected) > 0:
error_msg += 'Unexpected key(s) in state_dict: {}. '.format( error_msg += 'Unexpected key(s) in state_dict: {}.\n'.format(
', '.join('"{}"'.format(k) for k in unexpected)) ', '.join('"{}"'.format(k) for k in unexpected))
if len(missing) > 0: if len(missing) > 0:
error_msg += 'Missing key(s) in state_dict: {}. '.format( error_msg += 'Missing key(s) in state_dict: {}.'.format(
', '.join('"{}"'.format(k) for k in missing)) ', '.join('"{}"'.format(k) for k in missing))
if len(error_msg) > 0: if len(error_msg) > 0:
raise KeyError(error_msg) raise KeyError(error_msg)
...@@ -201,7 +185,7 @@ class Module(object): ...@@ -201,7 +185,7 @@ class Module(object):
add_submodule(module, name_v2 if name_v2 else name) add_submodule(module, name_v2 if name_v2 else name)
def __call__(self, *args, **kwargs): def __call__(self, *args, **kwargs):
with dg.name_scope(get_module_name(self)): with dragon.name_scope(get_module_name(self)):
return self.forward(*args, **kwargs) return self.forward(*args, **kwargs)
def forward(self, *inputs, **kwargs): def forward(self, *inputs, **kwargs):
...@@ -209,7 +193,10 @@ class Module(object): ...@@ -209,7 +193,10 @@ class Module(object):
def name_scope(self, remove_separator=True): def name_scope(self, remove_separator=True):
scope = get_default_name_scope() scope = get_default_name_scope()
if remove_separator and scope[-1] == '/': scope = scope[:-1] if remove_separator and \
len(scope) > 0 and \
scope[-1] == '/':
scope = scope[:-1]
return scope return scope
def children(self): def children(self):
...@@ -281,17 +268,17 @@ class Module(object): ...@@ -281,17 +268,17 @@ class Module(object):
return self return self
def cpu(self): def cpu(self):
self._ctx = Context() self._device = Device()
# Remove key and op to re-create a one with new ctx # Remove key and op to re-create a one with new device
self._module_key = self._def = None self._module_key = self._module_def = None
return self._apply(lambda t: t.cpu(), return self._apply(lambda t: t.cpu(),
lambda m: m.cpu()) lambda m: m.cpu())
def cuda(self, device=None): def cuda(self, device=None):
if device is None: device = dg.config.GetGPU() if device is None: device = dragon.config.GetGPU()
self._ctx = Context('CUDA', device) self._device = Device('cuda', device)
# Remove key and op to re-create a one with new ctx # Remove key and op to re-create a one with new device
self._module_key = self._def = None self._module_key = self._module_def = None
return self._apply(lambda t: t.cuda(device), return self._apply(lambda t: t.cuda(device),
lambda m: m.cuda(device)) lambda m: m.cuda(device))
...@@ -312,7 +299,7 @@ class Module(object): ...@@ -312,7 +299,7 @@ class Module(object):
def _gen_module_key(self): def _gen_module_key(self):
self._module_key = '{}{}'.format( self._module_key = '{}{}'.format(
self.name_scope(False), self._ctx) self.name_scope(False), self._device)
@property @property
def module_key(self): def module_key(self):
...@@ -320,37 +307,37 @@ class Module(object): ...@@ -320,37 +307,37 @@ class Module(object):
self._gen_module_key() self._gen_module_key()
return self._module_key return self._module_key
def _gen_def(self): def _gen_module_def(self):
self._def = pb_utils.MakeCXXOperatorDef( self._module_def = \
name='runtime', proto_utils.MakeCXXOperatorDef(
uid=self.module_key, name='runtime',
op_type=self.op_meta['op_type'], uid=self.module_key,
device_option=pb_utils.GetDeviceOption( op_type=self.op_meta['op_type'],
self._ctx.device_type, device_option=proto_utils.
self._ctx.device_id, GetDeviceOption(
engine='CUDNN'), self._device.type,
**self.op_meta['arguments'] self._device.index,
) engine='CUDNN'),
**self.op_meta['arguments']
def register_op(self): pass )
def register_op(self):
pass
def register_output(self): def register_output(self):
return self._ctx.copy() return self._device.copy()
def unify_devices(self, inputs): def unify_devices(self, inputs):
for ix, t in enumerate(inputs): for ix, t in enumerate(inputs):
if t._ctx.device_type != self._ctx.device_type or \ if t._device != self._device:
t._ctx.device_id != self._ctx.device_id: raise ValueError('Module({}) is defined at {}, '
print(self._ctx, self.module_key) '\nFound Input({}) is at {}.'.format(
raise ValueError('Module({}) is defined at {}:{}, ' self.name_scope(True),
'\nFound Input({}) is at {}:{}.'.format( self._device, ix, t._device))
self.name_scope(True),
self._ctx.device_type, self._ctx.device_id,
ix, t._ctx.device_type, t._ctx.device_id))
def run(self, inputs, outputs, auto_grad=True, callback=None): def run(self, inputs, outputs, auto_grad=True, callback=None):
if self._def is None: self._gen_def() if self._module_def is None: self._gen_module_def()
meta = (self.module_key, self._def) meta = (self.module_key, self._module_def)
return RunOperator( return RunOperator(
inputs, outputs, meta, inputs, outputs, meta,
auto_grad=auto_grad, auto_grad=auto_grad,
...@@ -366,7 +353,7 @@ class Module(object): ...@@ -366,7 +353,7 @@ class Module(object):
return self.train(False) return self.train(False)
def zero_grad(self): def zero_grad(self):
raise NotImplementedError('Deprecated. ' warnings.warn('Module.zero_grad() is deprecated. '
'Use ``torch.optim.Optimizer.zero_grad()`` instead.') 'Use ``torch.optim.Optimizer.zero_grad()`` instead.')
def extra_repr(self): def extra_repr(self):
......
...@@ -21,14 +21,13 @@ from dragon.vm.torch.tensor import Parameter ...@@ -21,14 +21,13 @@ from dragon.vm.torch.tensor import Parameter
from .modules.conv import Conv2d, ConvTranspose2d from .modules.conv import Conv2d, ConvTranspose2d
from .modules.depthwise_conv import DepthwiseConv2d from .modules.depthwise_conv import DepthwiseConv2d
from .modules.pooling import MaxPool2d, AvgPool2d from .modules.pooling import MaxPool2d, AvgPool2d
from .modules.linear import Linear
from .modules.activation import ( from .modules.activation import (
ReLU, LeakyReLU, ELU, SELU, ReLU, LeakyReLU, ELU, SELU,
Tanh, Sigmoid, Softmax, Tanh, Sigmoid, Softmax,
) )
from .modules.linear import Linear
from .modules.loss import ( from .modules.loss import (
BCEWithLogitsLoss, BCEWithLogitsLoss,
NLLLoss, CrossEntropyLoss, NLLLoss, CrossEntropyLoss,
...@@ -36,11 +35,16 @@ from .modules.loss import ( ...@@ -36,11 +35,16 @@ from .modules.loss import (
SigmoidFocalLoss, SoftmaxFocalLoss, SigmoidFocalLoss, SoftmaxFocalLoss,
) )
from .modules.rnn import (
RNNBase, RNNCellBase,
RNN, LSTM, GRU,
LSTMCell,
)
from .modules.container import Container, Sequential, ModuleList from .modules.container import Container, Sequential, ModuleList
from .modules.batchnorm import BatchNorm1d, BatchNorm2d, BatchNorm3d from .modules.batchnorm import BatchNorm1d, BatchNorm2d, BatchNorm3d
from .modules.groupnorm import GroupNorm1d, GroupNorm2d, GroupNorm3d from .modules.groupnorm import GroupNorm1d, GroupNorm2d, GroupNorm3d
from .modules.affine import Affine from .modules.affine import Affine
from .modules.dropout import Dropout, Dropout2d, Dropout3d from .modules.dropout import Dropout, Dropout2d, Dropout3d
from .modules.dropblock import DropBlock2d from .modules.dropblock import DropBlock2d
from .modules.rnn import RNNBase, RNN, LSTM, GRU
from . import init from . import init
\ No newline at end of file
...@@ -14,7 +14,7 @@ from __future__ import division ...@@ -14,7 +14,7 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
from dragon.vm.torch.nn import Module, Parameter from dragon.vm.torch.nn import Module, Parameter
from dragon.vm.torch.ops.creation import zeros, ones from dragon.vm.torch.ops.builtin import zeros, ones
class Affine(Module): class Affine(Module):
......
...@@ -15,7 +15,7 @@ from __future__ import print_function ...@@ -15,7 +15,7 @@ from __future__ import print_function
from dragon.vm.torch.tensor import Tensor from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.nn import Module, Parameter from dragon.vm.torch.nn import Module, Parameter
from dragon.vm.torch.ops.creation import zeros, ones from dragon.vm.torch.ops.builtin import zeros, ones
from dragon.vm.torch.module import RunOperator from dragon.vm.torch.module import RunOperator
...@@ -62,10 +62,10 @@ class _BatchNorm(Module): ...@@ -62,10 +62,10 @@ class _BatchNorm(Module):
'track_running_stats={track_running_stats}'.format(**self.__dict__) 'track_running_stats={track_running_stats}'.format(**self.__dict__)
def make_meta_from_phase(self, phase): def make_meta_from_phase(self, phase):
"""Make the custom meta by referring the phase and ctx. """Make the custom meta by referring the phase and device.
We extend this method as the original module can only We extend this method as the original module can only
detect the mutation of ctx(i.e. cpu -> cuda), detect the mutation of device(i.e. cpu -> cuda),
but not the (train -> test). but not the (train -> test).
""" """
...@@ -75,8 +75,8 @@ class _BatchNorm(Module): ...@@ -75,8 +75,8 @@ class _BatchNorm(Module):
self._module_key += '/{}'.format(phase) self._module_key += '/{}'.format(phase)
self.op_meta['arguments']['use_stats'] = 0 \ self.op_meta['arguments']['use_stats'] = 0 \
if phase == 'TRAIN' else 1 if phase == 'TRAIN' else 1
self._gen_def() self._gen_module_def()
self.op_metas[phase] = (self._module_key, self._def) self.op_metas[phase] = (self._module_key, self._module_def)
if self._module_key is None: if self._module_key is None:
# Init or Context has changed # Init or Context has changed
......
...@@ -15,7 +15,7 @@ from __future__ import print_function ...@@ -15,7 +15,7 @@ from __future__ import print_function
from dragon.vm.torch.tensor import Tensor from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.nn import Module, Parameter from dragon.vm.torch.nn import Module, Parameter
from dragon.vm.torch.ops.creation import zeros, ones from dragon.vm.torch.ops.builtin import zeros, ones
class _GroupNorm(Module): class _GroupNorm(Module):
......
...@@ -17,16 +17,17 @@ from __future__ import absolute_import ...@@ -17,16 +17,17 @@ from __future__ import absolute_import
from __future__ import division from __future__ import division
from __future__ import print_function from __future__ import print_function
import math
import warnings import warnings
import numbers import numbers
import numpy as np import numpy
import dragon as dg import dragon
from dragon.vm.torch.tensor import Tensor from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.nn import Module, Parameter from dragon.vm.torch.nn import Module, Parameter
from dragon.operators.rnn.rnn_param import RNNParamSet from dragon.operators.rnn.rnn_param import RNNParamSet
from dragon.vm.torch.module import RunOperator from dragon.vm.torch.module import RunOperator
from dragon.vm.torch.autograd.grad_mode import is_grad_enabled from dragon.vm.torch.ops.builtin import zeros as Zeros, xw_plus_b
class RNNBase(Module): class RNNBase(Module):
...@@ -49,8 +50,8 @@ class RNNBase(Module): ...@@ -49,8 +50,8 @@ class RNNBase(Module):
if not bias: if not bias:
raise NotImplementedError('Bias is required.') raise NotImplementedError('Bias is required.')
if not isinstance(dropout, numbers.Number) or not 0 <= dropout <= 1 or \ if not isinstance(dropout, numbers.Number) or \
isinstance(dropout, bool): not 0 <= dropout <= 1 or isinstance(dropout, bool):
raise ValueError("dropout should be a number in range [0, 1] " raise ValueError("dropout should be a number in range [0, 1] "
"representing the probability of an element being " "representing the probability of an element being "
"zeroed") "zeroed")
...@@ -83,8 +84,8 @@ class RNNBase(Module): ...@@ -83,8 +84,8 @@ class RNNBase(Module):
_ = self.module_key _ = self.module_key
self._module_key += '/{}'.format(phase) self._module_key += '/{}'.format(phase)
self.op_meta['arguments']['phase'] = phase self.op_meta['arguments']['phase'] = phase
self._gen_def() self._gen_module_def()
self.op_metas[phase] = (self._module_key, self._def) self.op_metas[phase] = (self._module_key, self._module_def)
if self._module_key is None: if self._module_key is None:
# Init or Context has changed # Init or Context has changed
...@@ -106,45 +107,37 @@ class RNNBase(Module): ...@@ -106,45 +107,37 @@ class RNNBase(Module):
self.unify_devices(inputs) self.unify_devices(inputs)
outputs = [self.register_output() for _ in range(2)] outputs = [self.register_output() for _ in range(2)]
requires_grad = False meta = self.make_meta_from_phase(
for input in inputs: 'TRAIN' if self.training else 'TEST')
if input.requires_grad: requires_grad = True
requires_grad = requires_grad and is_grad_enabled()
meta = self.make_meta_from_phase(
'TRAIN' if requires_grad else 'TEST')
return RunOperator(inputs, outputs, meta) return RunOperator(inputs, outputs, meta)
def _plan_params(self): def _plan_params(self):
if self.mode == 'lstm': gate_size = 4 * self.hidden_size if self.mode == 'lstm': gate_size = 4 * self.hidden_size
elif self.mode == 'gru': gate_size = 3 * self.hidden_size elif self.mode == 'gru': gate_size = 3 * self.hidden_size
else: gate_size = self.hidden_size else: gate_size = self.hidden_size
# 1. plan weights # 1. Plan weights
self._matrix_weights = []; self._bias_weights = [] self._matrix_shape, self._bias_shape = [], []
for layer in range(self.num_layers): for layer in range(self.num_layers):
for direction in range(self.num_directions): for direction in range(self.num_directions):
layer_input_size = self.input_size if layer == 0 \ layer_input_size = self.input_size if layer == 0 \
else self.hidden_size * self.num_directions else self.hidden_size * self.num_directions
w_names = ['layer_{}/{}/{}'.format(layer, p, 'L' if direction == 0 else 'R') w_ih_shape = [gate_size, layer_input_size]
for p in ('matrix_ih', 'matrix_hh', 'bias_ih', 'bias_hh')] w_hh_shape = [gate_size, self.hidden_size]
w_ih = dg.Tensor(name=w_names[0], shape=[gate_size, layer_input_size]) b_ih_shape, b_hh_shape = [gate_size], [gate_size]
w_hh = dg.Tensor(name=w_names[1], shape=[gate_size, self.hidden_size])
b_ih = dg.Tensor(name=w_names[2], shape=[gate_size,])
b_hh = dg.Tensor(name=w_names[3], shape=[gate_size,])
# W (0 ~ 3), R (4 ~ 7) # W (0 ~ 3), R (4 ~ 7)
self._matrix_weights.extend([w_ih, w_hh]) self._matrix_shape.extend([w_ih_shape, w_hh_shape])
# Bw (0 ~ 3), Br (4 ~ 7) # Bw (0 ~ 3), Br (4 ~ 7)
self._bias_weights.extend([b_ih, b_hh]) self._bias_shape.extend([b_ih_shape, b_hh_shape])
# 2. compute total number of parameters # 2. Compute total number of parameters
self._weights_count = 0 self._weights_count = 0
for w in self._matrix_weights + self._bias_weights: for shape in self._matrix_shape + self._bias_shape:
self._weights_count += np.prod(w.shape) self._weights_count += numpy.prod(shape)
# 3. register the packed weights # 3. Register the packed weights
self.weights = Parameter(Tensor(int(self._weights_count))) self.weights = Parameter(Tensor(int(self._weights_count)))
# 4. create the initialization grids # 4. Create the initialization grids
if self.mode == 'lstm': num_params_per_layer = 8 if self.mode == 'lstm': num_params_per_layer = 8
elif self.mode == 'gru': num_params_per_layer = 6 elif self.mode == 'gru': num_params_per_layer = 6
else: num_params_per_layer = 2 else: num_params_per_layer = 2
...@@ -159,7 +152,7 @@ class RNNBase(Module): ...@@ -159,7 +152,7 @@ class RNNBase(Module):
for _ in range(self.num_layers) for _ in range(self.num_layers)
] ]
# 5. set the init flag # 5. Set the init flag
self._init_params = False self._init_params = False
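For reference, the packed-weight count computed in step 2 can be reproduced by hand; a small numpy check for a hypothetical single-layer, unidirectional LSTM with input_size=8 and hidden_size=16:

import numpy

gate_size = 4 * 16                      # LSTM packs 4 gates per cell
shapes = [
    [gate_size, 8],                     # W_ih for layer 0
    [gate_size, 16],                    # W_hh for layer 0
    [gate_size], [gate_size],           # b_ih, b_hh
]
count = sum(int(numpy.prod(s)) for s in shapes)
print(count)                            # 512 + 1024 + 64 + 64 = 1664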
############################################## ##############################################
...@@ -169,8 +162,8 @@ class RNNBase(Module): ...@@ -169,8 +162,8 @@ class RNNBase(Module):
############################################## ##############################################
def _uniform_init(self, shape, dtype='float32'): def _uniform_init(self, shape, dtype='float32'):
stdv = 1.0 / np.sqrt(self.hidden_size) stdv = 1.0 / numpy.sqrt(self.hidden_size)
return np.random.uniform(-stdv, stdv, shape).astype(dtype) return numpy.random.uniform(-stdv, stdv, shape).astype(dtype)
def _orthogonal_init(self, shape, gain=1, dtype='float32'): def _orthogonal_init(self, shape, gain=1, dtype='float32'):
num_rows = 1 num_rows = 1
...@@ -178,16 +171,16 @@ class RNNBase(Module): ...@@ -178,16 +171,16 @@ class RNNBase(Module):
num_cols = shape[-1] num_cols = shape[-1]
flat_shape = (num_cols, num_rows) if num_rows < num_cols \ flat_shape = (num_cols, num_rows) if num_rows < num_cols \
else (num_rows, num_cols) else (num_rows, num_cols)
W = np.random.randn(*flat_shape) W = numpy.random.randn(*flat_shape)
q, r = np.linalg.qr(W) q, r = numpy.linalg.qr(W)
# Make Q uniform # Make Q uniform
d = np.diag(r) d = numpy.diag(r)
q *= np.sign(d) q *= numpy.sign(d)
if num_rows < num_cols: q = q.T if num_rows < num_cols: q = q.T
return gain * q.reshape(shape).astype(dtype) return gain * q.reshape(shape).astype(dtype)
def _zero_init(self, shape, dtype='float32'): def _zero_init(self, shape, dtype='float32'):
return np.zeros(shape, dtype=dtype) return numpy.zeros(shape, dtype=dtype)
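The QR-based initializer above can be sanity-checked in plain numpy; a standalone rendering of the same logic (the loop hidden by the hunk is assumed to accumulate ``num_rows`` over ``shape[:-1]``) should produce near-orthonormal rows:

import numpy

def orthogonal(shape, gain=1.0, dtype='float32'):
    num_rows = int(numpy.prod(shape[:-1]))
    num_cols = shape[-1]
    flat_shape = (num_cols, num_rows) if num_rows < num_cols else (num_rows, num_cols)
    q, r = numpy.linalg.qr(numpy.random.randn(*flat_shape))
    q *= numpy.sign(numpy.diag(r))                  # make Q unique w.r.t. the sign of R's diagonal
    if num_rows < num_cols: q = q.T
    return gain * q.reshape(shape).astype(dtype)

w = orthogonal((16, 64))
print(numpy.allclose(w.dot(w.T), numpy.eye(16), atol=1e-5))    # True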
############################################## ##############################################
# # # #
...@@ -205,20 +198,19 @@ class RNNBase(Module): ...@@ -205,20 +198,19 @@ class RNNBase(Module):
raise ValueError('Unknown param type: ' + type) raise ValueError('Unknown param type: ' + type)
def _set_param(self, layer_id, param_id, param_type, param): def _set_param(self, layer_id, param_id, param_type, param):
if not isinstance(param, Tensor): if isinstance(param, numpy.ndarray):
if isinstance(param, np.ndarray): param_temp = dragon.Tensor.Ref('/tmp/rnn_param')
paramT = dg.Tensor('/tmp/rnn_param').Variable() param_temp.set_value(param)
paramT.set_value(param) param = param_temp
param = paramT else: raise ValueError('Expected a numpy array.')
else: raise ValueError('Excepted a tensor or numpy array.')
W = self.weights.dragon() W = self.weights.dragon()
outputs = RNNParamSet([W, param], layer_id, param_id, param_type, outputs = RNNParamSet([W, param], layer_id, param_id, param_type,
rnn_mode=self.mode, input_size=self.input_size, hidden_size=self.hidden_size, rnn_mode=self.mode, input_size=self.input_size, hidden_size=self.hidden_size,
num_layers=self.num_layers, num_directions=self.num_directions) num_layers=self.num_layers, num_directions=self.num_directions)
for k, v in outputs.expressions.items(): dg.workspace.RunOperator(v) for k, v in outputs.expressions.items(): dragon.workspace.RunOperator(v)
def _reset_params(self): def _reset_params(self):
np.random.seed(dg.config.GetRandomSeed()) numpy.random.seed(dragon.config.GetRandomSeed())
if self.mode == 'lstm': num_gates = 4 if self.mode == 'lstm': num_gates = 4
elif self.mode == 'gru': num_gates = 3 elif self.mode == 'gru': num_gates = 3
else: num_gates = 1 else: num_gates = 1
...@@ -233,8 +225,8 @@ class RNNBase(Module): ...@@ -233,8 +225,8 @@ class RNNBase(Module):
bias_init = getattr(self, '_{}_init'.format(bias_init)) bias_init = getattr(self, '_{}_init'.format(bias_init))
pseudo_layer_id = layer * self.num_directions + direction pseudo_layer_id = layer * self.num_directions + direction
packed_id = pseudo_layer_id * 2 + int(param_id / num_gates) packed_id = pseudo_layer_id * 2 + int(param_id / num_gates)
matrix_shape = self._matrix_weights[packed_id].shape[:] matrix_shape = self._matrix_shape[packed_id][:]
bias_shape = self._bias_weights[packed_id].shape[:] bias_shape = self._bias_shape[packed_id][:]
matrix_shape[0] = bias_shape[0] = int(matrix_shape[0] / num_gates) matrix_shape[0] = bias_shape[0] = int(matrix_shape[0] / num_gates)
self._set_param(layer_id=pseudo_layer_id, param_id=param_id, self._set_param(layer_id=pseudo_layer_id, param_id=param_id,
param_type='matrix', param=matrix_init(matrix_shape)) param_type='matrix', param=matrix_init(matrix_shape))
...@@ -375,4 +367,57 @@ class GRU(RNNBase): ...@@ -375,4 +367,57 @@ class GRU(RNNBase):
""" """
super(GRU, self).__init__('gru', input_size, hidden_size, super(GRU, self).__init__('gru', input_size, hidden_size,
num_layers, bias, batch_first, dropout, bidirectional) num_layers, bias, batch_first, dropout, bidirectional)
\ No newline at end of file
class RNNCellBase(Module):
def __init__(self, input_size, hidden_size, bias, num_chunks):
super(RNNCellBase, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.bias = bias
self.weight_ih = Parameter(Tensor(num_chunks * hidden_size, input_size))
self.weight_hh = Parameter(Tensor(num_chunks * hidden_size, hidden_size))
if bias:
self.bias_ih = Parameter(Tensor(num_chunks * hidden_size))
self.bias_hh = Parameter(Tensor(num_chunks * hidden_size))
else:
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)
self.reset_parameters()
def extra_repr(self):
s = '{input_size}, {hidden_size}'
if 'bias' in self.__dict__ and self.bias is not True:
s += ', bias={bias}'
if 'nonlinearity' in self.__dict__ and self.nonlinearity != "tanh":
s += ', nonlinearity={nonlinearity}'
return s.format(**self.__dict__)
def reset_parameters(self):
stdv = 1.0 / math.sqrt(self.hidden_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
class LSTMCell(RNNCellBase):
def __init__(self, input_size, hidden_size, bias=True):
super(LSTMCell, self).__init__(
input_size, hidden_size, bias, num_chunks=4)
self.register_op()
def register_op(self):
self.op_meta = {'op_type': 'LSTMCell', 'arguments': {}}
def forward(self, input, hx=None):
if hx is None:
zeros = Zeros(
input.size(0), self.hidden_size,
dtype=input.dtype, device=input.device)
hx = (zeros, zeros)
wx = xw_plus_b(input, self.weight_ih, self.bias_ih)
wh = xw_plus_b(hx[0], self.weight_hh, self.bias_hh)
inputs = [wx + wh, hx[1]]
self.unify_devices(inputs)
outputs = [self.register_output() for _ in range(2)]
return self.run(inputs, outputs)
\ No newline at end of file
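A hedged usage sketch for the LSTMCell added above; the export path of ``LSTMCell`` and the tuple return of ``forward`` are assumptions based on this file, not verified API:

from dragon.vm.torch.nn import LSTMCell             # assumed export path
from dragon.vm.torch.ops.builtin import ones

cell = LSTMCell(input_size=32, hidden_size=64)
x = ones(8, 32)                                     # a batch of 8 feature vectors
h, c = cell(x)                                      # hx defaults to zero states
print(h.shape, c.shape)                             # expected: (8, 64) twice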
...@@ -7,31 +7,4 @@ ...@@ -7,31 +7,4 @@
# #
# <https://opensource.org/licenses/BSD-2-Clause> # <https://opensource.org/licenses/BSD-2-Clause>
# #
# ------------------------------------------------------------ # ------------------------------------------------------------
\ No newline at end of file
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from .creation import (
zeros, zeros_like,
ones, ones_like,
rand, randn,
)
from .arithmetic import (
add, sub, mul, div,
log, exp, sqrt,
maximum, minimum, clamp,
)
from .array import (
squeeze, unsqueeze,
sum, mean, argmin, argmax, max, min, topk,
cat, gather, narrow, one_hot,
)
from .vision import (
nn_resize, bilinear_resize,
roi_pool, roi_align,
)
\ No newline at end of file
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.ops.primitive import MakeContext, WrapScalar
from dragon.vm.torch.ops.factory import get_module
from dragon.vm.torch.ops.modules.arithmetic import (
Fundamental, Log, Exp, Sqrt,
Maximum, Minimum, Clamp,
)
def _fundamental(input, value, op='Add', out=None):
if not isinstance(value, Tensor):
value = WrapScalar(value, input.dtype, input._ctx)
ctx = MakeContext(inputs=[input, value])
key = '{}/{}'.format(op, ctx)
module = get_module(Fundamental, key, ctx, op_type=op)
return module.forward(input, value, out)
def _rfundamental(input, value, op='RAdd', out=None):
if not isinstance(value, Tensor):
value = WrapScalar(value, input.dtype, input._ctx)
ctx = MakeContext(inputs=[input, value])
key = '{}/{}'.format(op, ctx)
module = get_module(Fundamental, key, ctx, op_type=op)
return module.forward(value, input, out)
def _maximum(input, other, out=None):
if not isinstance(input, Tensor):
input = WrapScalar(input, other.dtype, other._ctx)
elif not isinstance(other, Tensor):
other = WrapScalar(other, input.dtype, input._ctx)
ctx = MakeContext(inputs=[input])
key = 'Maximum/{}'.format(ctx)
module = get_module(Maximum, key, ctx)
return module.forward(input, other, out)
def _minimum(input, other, out=None):
if not isinstance(input, Tensor):
input = WrapScalar(input, other.dtype, other._ctx)
elif not isinstance(other, Tensor):
other = WrapScalar(other, input.dtype, input._ctx)
ctx = MakeContext(inputs=[input])
key = 'Minimum/{}'.format(ctx)
module = get_module(Minimum, key, ctx)
return module.forward(input, other, out)
def _clamp(input, min=None, max=None, out=None):
ctx = MakeContext(inputs=[input])
key = 'Clamp/{}/min:{}/max:{}'.format(ctx, min, max)
module = get_module(Clamp, key, ctx, min=min, max=max)
return module.forward(input, out)
def _exp(input, out=None):
ctx = MakeContext(inputs=[input])
key = 'Exp/{}'.format(ctx)
module = get_module(Exp, key, ctx)
return module.forward(input, out)
def _log(input, out=None):
ctx = MakeContext(inputs=[input])
key = 'Log/{}'.format(ctx)
module = get_module(Log, key, ctx)
return module.forward(input, out)
def _sqrt(input, out=None):
ctx = MakeContext(inputs=[input])
key = 'Sqrt/{}'.format(ctx)
module = get_module(Sqrt, key, ctx)
return module.forward(input, out)
def add(input, value, out=None):
"""Add the ``input`` and ``value`` into the output tensor.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
value : dragon.vm.torch.Tensor or number
The value tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _fundamental(input, value, out=out, op='Add')
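A hedged usage sketch (the ``ops.builtin`` re-export of these wrappers is assumed, since this commit merges them there); ``sub``, ``mul`` and ``div`` below follow the same calling convention:

from dragon.vm.torch.ops.builtin import ones, add   # assumed re-export

a, b = ones(2, 3), ones(2, 3)
c = add(a, b)               # allocate a new output tensor
add(a, 2.0, out=a)          # scalar value, written in place via ``out``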
def sub(input, value, out=None):
"""Subtract the ``input`` and ``value`` into the output tensor.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
value : dragon.vm.torch.Tensor or number
The value tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _fundamental(input, value, out=out, op='Sub')
def mul(input, value, out=None):
"""Multiply the ``input`` and ``value`` into the output tensor.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
value : dragon.vm.torch.Tensor or number
The value tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _fundamental(input, value, out=out, op='Mul')
def div(input, value, out=None):
"""Divide the ``input`` and ``value`` into the output tensor.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
value : dragon.vm.torch.Tensor or number
The value tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _fundamental(input, value, out=out, op='Div')
def maximum(input, other, out=None):
"""Return the max value of given two tensors.
Parameters
----------
input : dragon.vm.torch.Tensor or number
The input tensor.
other : dragon.vm.torch.Tensor or number
The other input.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _maximum(input, other, out)
def minimum(input, other, out=None):
"""Return the min value of given two tensors.
Parameters
----------
input : dragon.vm.torch.Tensor or number
The input tensor.
other : dragon.vm.torch.Tensor or number
The other input.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _minimum(input, other, out)
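Because either operand may be a scalar (it is wrapped into a shared tensor internally), both call orders work; a hedged sketch assuming the same ``ops.builtin`` re-export:

from dragon.vm.torch.ops.builtin import ones, maximum, minimum   # assumed re-export

a = ones(2, 3)
hi = maximum(a, 2.)          # clips values below 2 up to 2
lo = minimum(2., a)          # the scalar may appear on either side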
def clamp(input, min=None, max=None, out=None):
"""Clamp all elements into the range [min, max].
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
min : number, optional
The min value.
max : number, optional
The max value.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _clamp(input, min, max, out)
def log(input, out=None):
"""Compute the natural logarithm of input.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _log(input, out)
def exp(input, out=None):
"""Compute the exponential of input.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _exp(input, out)
def sqrt(input, out=None):
"""Compute the square-root of input.
Parameters
----------
input : dragon.vm.torch.Tensor
The input tensor.
out : dragon.vm.torch.Tensor, optional
The output tensor.
Returns
-------
dragon.vm.torch.Tensor
The output tensor.
"""
return _sqrt(input, out)
\ No newline at end of file
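A hedged sketch chaining the unary wrappers above (again assuming the ``ops.builtin`` re-export); each call returns a new tensor unless ``out`` is given:

from dragon.vm.torch.ops.builtin import rand, clamp, log, exp, sqrt   # assumed re-export

x = rand(4)
y = clamp(x, min=0.1, max=0.9)      # both bounds are optional
z = sqrt(exp(log(y)))               # round-trips y up to numerical error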
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from dragon.vm.torch.tensor import LeafTensor
from dragon.vm.torch.ops.array import (
_fill, _uniform, _normal,
)
def zeros(*sizes, **kwargs):
"""Return a float tensor with values of ``0``.
Parameters
----------
sizes : tuple, list or int
The sizes indicating the shape of the output tensor.
out : dragon.vm.torch.Tensor
The optional output tensor.
Returns
-------
vm.torch.FloatTensor
The output tensor.
"""
out = kwargs['out'] if 'out' in kwargs else None
if out is None:
out = LeafTensor(sizes, requires_grad=kwargs['requires_grad'] \
if 'requires_grad' in kwargs else False)
return _fill(out, shape=sizes, value=0)
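A hedged usage sketch; ``ones`` below and the ``*_like`` variants accept the same keyword arguments:

from dragon.vm.torch.ops.builtin import zeros       # re-exported per this commit

a = zeros(2, 3)                       # float32 zeros of shape (2, 3)
b = zeros(2, 3, requires_grad=True)   # marks the leaf for autograd
zeros(2, 3, out=a)                    # refill an existing tensor instead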
def zeros_like(input, out=None, **kwargs):
"""Return a float tensor with values of ``0``, shape as the input.
Parameters
----------
input : dragon.vm.torch.Tensor
The tensor for indicating shape.
out : dragon.vm.torch.Tensor
The optional output tensor.
Returns
-------
vm.torch.FloatTensor
The output tensor.
"""
if not hasattr(input, 'shape'):
raise ValueError('Input does not have the shape attribute.')
if out is None:
out = LeafTensor(input.shape, requires_grad=kwargs['requires_grad'] \
if 'requires_grad' in kwargs else False)
return _fill(out, shape=input.shape, value=0)
def ones(*sizes, **kwargs):
"""Return a float tensor with values of ``1``.
Parameters
----------
sizes : tuple, list or int
The sizes indicating the shape of the output tensor.
out : dragon.vm.torch.Tensor
The optional output tensor.
Returns
-------
vm.torch.FloatTensor
The output tensor.
"""
out = kwargs['out'] if 'out' in kwargs else None
if out is None:
out = LeafTensor(sizes, requires_grad=kwargs['requires_grad'] \
if 'requires_grad' in kwargs else False)
return _fill(out, shape=sizes, value=1)
def ones_like(input, out=None, **kwargs):
"""Return a float tensor with values of ``1``, shape as the input.
Parameters
----------
input : dragon.vm.torch.Tensor
The tensor for indicating shape.
out : dragon.vm.torch.Tensor
The optional output tensor.
Returns
-------
vm.torch.FloatTensor
The output tensor.
"""
if not hasattr(input, 'shape'):
raise ValueError('Input does not have the shape attribute.')
if out is None:
out = LeafTensor(input.shape, requires_grad=kwargs['requires_grad'] \
if 'requires_grad' in kwargs else False)
return _fill(out, shape=input.shape, value=1)
def rand(*sizes, **kwargs):
"""Return a float tensor with a uniform distribution of U(0, 1).
Parameters
----------
sizes : tuple, list or int
The sizes indicating the shape of the output tensor.
out : dragon.vm.torch.Tensor
The optional output tensor.
Returns
-------
vm.torch.FloatTensor
The output tensor.
"""
out = kwargs['out'] if 'out' in kwargs else None
if out is None:
out = LeafTensor(sizes, requires_grad=kwargs['requires_grad'] \
if 'requires_grad' in kwargs else False)
return _uniform(out, sizes, low=0, high=1)
def randn(*sizes, **kwargs):
"""Return a float tensor with a normal distribution of N(0, 1).
Parameters
----------
sizes : tuple, list or int
The sizes indicating the shape of the output tensor.
out : dragon.vm.torch.Tensor
The optional output tensor.
Returns
-------
vm.torch.FloatTensor
The output tensor.
"""
out = kwargs['out'] if 'out' in kwargs else None
if out is None:
out = LeafTensor(sizes, requires_grad=kwargs['requires_grad'] \
if 'requires_grad' in kwargs else False)
return _normal(out, sizes, mean=0, std=1)
\ No newline at end of file
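And a matching sketch for the random initializers, under the same assumption on the import path:

from dragon.vm.torch.ops.builtin import rand, randn   # assumed re-export

u = rand(2, 3)      # samples from U(0, 1)
n = randn(2, 3)     # samples from N(0, 1)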
...@@ -21,13 +21,13 @@ def has_module(key): ...@@ -21,13 +21,13 @@ def has_module(key):
return key in _GLOBAL_TORCH_BUILTIN_MODULES return key in _GLOBAL_TORCH_BUILTIN_MODULES
def register_module(cls, key, ctx, **kwargs): def register_module(cls, key, dev, **kwargs):
global _GLOBAL_TORCH_BUILTIN_MODULES global _GLOBAL_TORCH_BUILTIN_MODULES
module = cls(key, ctx, **kwargs) module = cls(key, dev, **kwargs)
_GLOBAL_TORCH_BUILTIN_MODULES[key] = module _GLOBAL_TORCH_BUILTIN_MODULES[key] = module
return module return module
def get_module(cls, key, ctx, **kwargs): def get_module(cls, key, dev, **kwargs):
if has_module(key): return _GLOBAL_TORCH_BUILTIN_MODULES[key] if has_module(key): return _GLOBAL_TORCH_BUILTIN_MODULES[key]
return register_module(cls, key, ctx, **kwargs) return register_module(cls, key, dev, **kwargs)
\ No newline at end of file \ No newline at end of file
...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule ...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule
class Fundamental(BaseModule): class Fundamental(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Fundamental, self).__init__(key, ctx, **kwargs) super(Fundamental, self).__init__(key, dev, **kwargs)
self.op_type = kwargs.get('op_type', 'Add') self.op_type = kwargs.get('op_type', 'Add')
self.register_op() self.register_op()
...@@ -32,8 +32,8 @@ class Fundamental(BaseModule): ...@@ -32,8 +32,8 @@ class Fundamental(BaseModule):
class Maximum(BaseModule): class Maximum(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Maximum, self).__init__(key, ctx, **kwargs) super(Maximum, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
...@@ -46,8 +46,8 @@ class Maximum(BaseModule): ...@@ -46,8 +46,8 @@ class Maximum(BaseModule):
class Minimum(BaseModule): class Minimum(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Minimum, self).__init__(key, ctx, **kwargs) super(Minimum, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
...@@ -60,8 +60,8 @@ class Minimum(BaseModule): ...@@ -60,8 +60,8 @@ class Minimum(BaseModule):
class Clamp(BaseModule): class Clamp(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Clamp, self).__init__(key, ctx, **kwargs) super(Clamp, self).__init__(key, dev, **kwargs)
self.min = kwargs.get('min', None) self.min = kwargs.get('min', None)
self.max = kwargs.get('max', None) self.max = kwargs.get('max', None)
if self.min is not None: self.min = float(self.min) if self.min is not None: self.min = float(self.min)
...@@ -84,8 +84,8 @@ class Clamp(BaseModule): ...@@ -84,8 +84,8 @@ class Clamp(BaseModule):
class Log(BaseModule): class Log(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Log, self).__init__(key, ctx, **kwargs) super(Log, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
...@@ -98,8 +98,8 @@ class Log(BaseModule): ...@@ -98,8 +98,8 @@ class Log(BaseModule):
class Exp(BaseModule): class Exp(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Exp, self).__init__(key, ctx, **kwargs) super(Exp, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
...@@ -112,8 +112,8 @@ class Exp(BaseModule): ...@@ -112,8 +112,8 @@ class Exp(BaseModule):
class Sqrt(BaseModule): class Sqrt(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Sqrt, self).__init__(key, ctx, **kwargs) super(Sqrt, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
...@@ -122,4 +122,44 @@ class Sqrt(BaseModule): ...@@ -122,4 +122,44 @@ class Sqrt(BaseModule):
def forward(self, x, y): def forward(self, x, y):
inputs = [x]; self.unify_devices(inputs) inputs = [x]; self.unify_devices(inputs)
outputs = [y] if y else [self.register_output()] outputs = [y] if y else [self.register_output()]
return self.run(inputs, outputs)
class MM(BaseModule):
def __init__(self, key, dev, **kwargs):
super(MM, self).__init__(key, dev, **kwargs)
self.transA = kwargs.get('transA', False)
self.transB = kwargs.get('transB', False)
self.register_op()
def register_op(self):
self.op_meta = {
'op_type': 'Matmul',
'arguments': {
'transA': self.transA,
'transB': self.transB,
}}
def forward(self, x1, x2, y):
inputs = [x1, x2]; self.unify_devices(inputs)
outputs = [y] if y else [self.register_output()]
return self.run(inputs, outputs)
class FullyConnected(BaseModule):
def __init__(self, key, dev, **kwargs):
super(FullyConnected, self).__init__(key, dev, **kwargs)
self.transW = kwargs.get('transW', True)
self.register_op()
def register_op(self):
self.op_meta = {
'op_type': 'FullyConnected',
'arguments': {'transW': self.transW},
}
def forward(self, x, w, b=None, y=None):
inputs = [x, w] + ([b] if b else [])
self.unify_devices(inputs)
outputs = [y] if y else [self.register_output()]
return self.run(inputs, outputs) return self.run(inputs, outputs)
\ No newline at end of file
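For orientation, a sketch of how a functional wrapper such as ``xw_plus_b`` (imported from ``ops.builtin`` by the RNN changes earlier in this commit) could dispatch to the FullyConnected module above; this mirrors the keyed-module pattern used elsewhere in the commit and is illustrative, not the actual implementation:

from dragon.vm.torch.ops.factory import get_module
from dragon.vm.torch.ops.primitive import MakeDevice
from dragon.vm.torch.ops.modules.arithmetic import FullyConnected   # assumed module path

def xw_plus_b(x, w, b=None, transW=True, out=None):
    dev = MakeDevice(inputs=[x, w] + ([b] if b else []))
    key = 'FullyConnected/{}/transW:{}'.format(dev, transW)
    module = get_module(FullyConnected, key, dev, transW=transW)
    return module.forward(x, w, b, out)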
...@@ -14,7 +14,7 @@ from __future__ import division ...@@ -14,7 +14,7 @@ from __future__ import division
from __future__ import print_function from __future__ import print_function
from dragon.vm.torch.autograd import no_grad from dragon.vm.torch.autograd import no_grad
from dragon.vm.torch.tensor import ReferenceTensor from dragon.vm.torch.tensor import _ReferenceTensor
from dragon.vm.torch.ops.modules.base import BaseModule from dragon.vm.torch.ops.modules.base import BaseModule
...@@ -25,8 +25,8 @@ class Indexing(BaseModule): ...@@ -25,8 +25,8 @@ class Indexing(BaseModule):
and the resulting memory is deep copied. and the resulting memory is deep copied.
""" """
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Indexing, self).__init__(key, ctx, **kwargs) super(Indexing, self).__init__(key, dev, **kwargs)
self.n_starts = kwargs.get('n_starts', 0) self.n_starts = kwargs.get('n_starts', 0)
self.n_sizes = kwargs.get('n_sizes', 0) self.n_sizes = kwargs.get('n_sizes', 0)
self.register_op() self.register_op()
...@@ -62,8 +62,8 @@ class Concat(BaseModule): ...@@ -62,8 +62,8 @@ class Concat(BaseModule):
Concatenate the inputs along the given axis. Concatenate the inputs along the given axis.
""" """
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Concat, self).__init__(key, ctx, **kwargs) super(Concat, self).__init__(key, dev, **kwargs)
self.axis = kwargs.get('axis', 0) self.axis = kwargs.get('axis', 0)
self.register_op() self.register_op()
...@@ -90,8 +90,8 @@ class Gather(BaseModule): ...@@ -90,8 +90,8 @@ class Gather(BaseModule):
input.shape[:axis] + indices.shape + input.shape[axis + 1:] input.shape[:axis] + indices.shape + input.shape[axis + 1:]
""" """
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Gather, self).__init__(key, ctx, **kwargs) super(Gather, self).__init__(key, dev, **kwargs)
self.axis = kwargs.get('axis', 0) self.axis = kwargs.get('axis', 0)
self.register_op() self.register_op()
...@@ -111,8 +111,8 @@ class Gather(BaseModule): ...@@ -111,8 +111,8 @@ class Gather(BaseModule):
class Reduce(BaseModule): class Reduce(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Reduce, self).__init__(key, ctx, **kwargs) super(Reduce, self).__init__(key, dev, **kwargs)
self.operation = kwargs.get('operation', 'SUM') self.operation = kwargs.get('operation', 'SUM')
self.dim = kwargs.get('dim', None) self.dim = kwargs.get('dim', None)
self.keepdim = kwargs.get('keepdim', True) self.keepdim = kwargs.get('keepdim', True)
...@@ -135,8 +135,8 @@ class Reduce(BaseModule): ...@@ -135,8 +135,8 @@ class Reduce(BaseModule):
class ArgReduce(BaseModule): class ArgReduce(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(ArgReduce, self).__init__(key, ctx, **kwargs) super(ArgReduce, self).__init__(key, dev, **kwargs)
self.operation = kwargs.get('operation', 'ARGMAX') self.operation = kwargs.get('operation', 'ARGMAX')
self.axis = kwargs.get('axis', None) self.axis = kwargs.get('axis', None)
self.keepdim = kwargs.get('keepdim', True) self.keepdim = kwargs.get('keepdim', True)
...@@ -179,8 +179,8 @@ class ArgReduce(BaseModule): ...@@ -179,8 +179,8 @@ class ArgReduce(BaseModule):
class Reshape(BaseModule): class Reshape(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Reshape, self).__init__(key, ctx, **kwargs) super(Reshape, self).__init__(key, dev, **kwargs)
self.n_dim = kwargs.get('n_dim', 0) self.n_dim = kwargs.get('n_dim', 0)
self.register_op() self.register_op()
...@@ -201,14 +201,14 @@ class Reshape(BaseModule): ...@@ -201,14 +201,14 @@ class Reshape(BaseModule):
def forward(self, x, shape): def forward(self, x, shape):
inputs = [x]; self.unify_devices(inputs) inputs = [x]; self.unify_devices(inputs)
outputs = [ReferenceTensor(x)] outputs = [_ReferenceTensor(x)]
callback = lambda A: self.update_arguments(A, shape) callback = lambda A: self.update_arguments(A, shape)
return self.run(inputs, outputs, callback=callback) return self.run(inputs, outputs, callback=callback)
class Squeeze(BaseModule): class Squeeze(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Squeeze, self).__init__(key, ctx, **kwargs) super(Squeeze, self).__init__(key, dev, **kwargs)
self.dim = kwargs.get('dim', None) self.dim = kwargs.get('dim', None)
self.register_op() self.register_op()
...@@ -220,13 +220,13 @@ class Squeeze(BaseModule): ...@@ -220,13 +220,13 @@ class Squeeze(BaseModule):
def forward(self, x, out=None): def forward(self, x, out=None):
inputs = [x]; self.unify_devices(inputs) inputs = [x]; self.unify_devices(inputs)
outputs = [out] if out else [ReferenceTensor(x)] outputs = [out] if out else [_ReferenceTensor(x)]
return self.run(inputs, outputs) return self.run(inputs, outputs)
class UnSqueeze(BaseModule): class UnSqueeze(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(UnSqueeze, self).__init__(key, ctx, **kwargs) super(UnSqueeze, self).__init__(key, dev, **kwargs)
self.dim = kwargs.get('dim', None) self.dim = kwargs.get('dim', None)
self.register_op() self.register_op()
...@@ -238,13 +238,13 @@ class UnSqueeze(BaseModule): ...@@ -238,13 +238,13 @@ class UnSqueeze(BaseModule):
def forward(self, x, out=None): def forward(self, x, out=None):
inputs = [x]; self.unify_devices(inputs) inputs = [x]; self.unify_devices(inputs)
outputs = [out] if out else [ReferenceTensor(x)] outputs = [out] if out else [_ReferenceTensor(x)]
return self.run(inputs, outputs) return self.run(inputs, outputs)
class Permute(BaseModule): class Permute(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Permute, self).__init__(key, ctx, **kwargs) super(Permute, self).__init__(key, dev, **kwargs)
self.n_perm = kwargs.get('n_perm', 0) self.n_perm = kwargs.get('n_perm', 0)
self.register_op() self.register_op()
...@@ -270,8 +270,8 @@ class Permute(BaseModule): ...@@ -270,8 +270,8 @@ class Permute(BaseModule):
class Repeat(BaseModule): class Repeat(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Repeat, self).__init__(key, ctx, **kwargs) super(Repeat, self).__init__(key, dev, **kwargs)
self.n_times = kwargs.get('n_times', 0) self.n_times = kwargs.get('n_times', 0)
self.register_op() self.register_op()
...@@ -298,8 +298,8 @@ class Repeat(BaseModule): ...@@ -298,8 +298,8 @@ class Repeat(BaseModule):
class OneHot(BaseModule): class OneHot(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(OneHot, self).__init__(key, ctx, **kwargs) super(OneHot, self).__init__(key, dev, **kwargs)
self.depth = kwargs.get('depth', 1) self.depth = kwargs.get('depth', 1)
self.register_op() self.register_op()
...@@ -318,8 +318,8 @@ class OneHot(BaseModule): ...@@ -318,8 +318,8 @@ class OneHot(BaseModule):
class Cast(BaseModule): class Cast(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Cast, self).__init__(key, ctx, **kwargs) super(Cast, self).__init__(key, dev, **kwargs)
self.dtype = kwargs.get('dtype', 'float32') self.dtype = kwargs.get('dtype', 'float32')
self.inplace = kwargs.get('inplace', False) self.inplace = kwargs.get('inplace', False)
self.register_op() self.register_op()
...@@ -343,4 +343,26 @@ class Cast(BaseModule): ...@@ -343,4 +343,26 @@ class Cast(BaseModule):
self.unify_devices([x]) self.unify_devices([x])
with no_grad(): with no_grad():
y = self.run([], [x]) y = self.run([], [x])
return y return y
\ No newline at end of file
class Multinomial(BaseModule):
def __init__(self, key, dev, **kwargs):
super(Multinomial, self).__init__(key, dev, **kwargs)
self.num_samples = kwargs.get('num_samples', 1)
self.normalize = kwargs.get('normalize', False)
self.register_op()
def register_op(self):
self.op_meta = {
'op_type': 'Multinomial',
'arguments': {
'num_samples': self.num_samples,
'normalize': self.normalize,
},
}
def forward(self, x, y):
inputs = [x]; self.unify_devices(inputs)
outputs = [y] if y else [self.register_output()]
return self.run(inputs, outputs)
\ No newline at end of file
...@@ -16,17 +16,17 @@ from __future__ import print_function ...@@ -16,17 +16,17 @@ from __future__ import print_function
import numpy as np import numpy as np
import dragon as dg import dragon as dg
from dragon.core import proto_utils as pb_utils from dragon.core import proto_utils
from dragon.vm.torch.module import Module from dragon.vm.torch.module import Module
class BaseModule(Module): class BaseModule(Module):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(BaseModule, self).__init__() super(BaseModule, self).__init__()
self._module_key = key self._module_key = key
self._ctx = ctx self._device = dev
self._args_dev = pb_utils.GetDeviceOption( self._args_dev = proto_utils.\
'CPU').SerializeToString() GetDeviceOption('cpu').SerializeToString()
def set_argument_i64(self, name, value): def set_argument_i64(self, name, value):
dg.C.FeedTensor(name, np.array( dg.C.FeedTensor(name, np.array(
......
...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule ...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule
class Copy(BaseModule): class Copy(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Copy, self).__init__(key, ctx, **kwargs) super(Copy, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
...@@ -26,4 +26,24 @@ class Copy(BaseModule): ...@@ -26,4 +26,24 @@ class Copy(BaseModule):
def forward(self, dst, src): def forward(self, dst, src):
outputs = [dst]; self.unify_devices(outputs) outputs = [dst]; self.unify_devices(outputs)
return self.run([src], outputs) return self.run([src], outputs)
\ No newline at end of file
class Compare(BaseModule):
def __init__(self, key, dev, **kwargs):
super(Compare, self).__init__(key, dev, **kwargs)
self.operation = kwargs.get('operation', 'NONE')
self.register_op()
def register_op(self):
self.op_meta = {
'op_type': 'Compare',
'arguments': {
'operation': self.operation,
'to_uint8': True,
}}
def forward(self, x1, x2, y):
inputs = [x1, x2]; self.unify_devices(inputs)
outputs = [y] if y else [self.register_output()]
return self.run(inputs, outputs)
\ No newline at end of file
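Likewise, a sketch of how the element-wise comparison wrappers (``gt``/``ge``/``lt``/``le``/``eq``, re-exported from ``ops.builtin`` later in this commit) might route to the Compare module above; the module path and the 'GT' operation string are assumptions for illustration only:

from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.ops.factory import get_module
from dragon.vm.torch.ops.primitive import MakeDevice, WrapScalar
from dragon.vm.torch.ops.modules.control_flow import Compare     # assumed module path

def _compare(input, other, operation, out=None):
    if not isinstance(other, Tensor):
        other = WrapScalar(other, input.dtype, input.device)
    dev = MakeDevice(inputs=[input, other])
    key = 'Compare/{}/{}'.format(operation, dev)
    module = get_module(Compare, key, dev, operation=operation)
    return module.forward(input, other, out)

def gt(input, other, out=None):
    return _compare(input, other, 'GT', out)          # uint8 mask output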
...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule ...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule
class _InitModule(BaseModule): class _InitModule(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(_InitModule, self).__init__(key, ctx, **kwargs) super(_InitModule, self).__init__(key, dev, **kwargs)
self.n_dim = kwargs.get('n_dim', 0) self.n_dim = kwargs.get('n_dim', 0)
self.dtype = kwargs.get('dtype', 'float32') self.dtype = kwargs.get('dtype', 'float32')
...@@ -33,8 +33,8 @@ class _InitModule(BaseModule): ...@@ -33,8 +33,8 @@ class _InitModule(BaseModule):
class Fill(_InitModule): class Fill(_InitModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Fill, self).__init__(key, ctx, **kwargs) super(Fill, self).__init__(key, dev, **kwargs)
self.value = kwargs.get('value', 0.0) self.value = kwargs.get('value', 0.0)
self.register_op() self.register_op()
...@@ -53,8 +53,8 @@ class Fill(_InitModule): ...@@ -53,8 +53,8 @@ class Fill(_InitModule):
class RandomNormal(_InitModule): class RandomNormal(_InitModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(RandomNormal, self).__init__(key, ctx, **kwargs) super(RandomNormal, self).__init__(key, dev, **kwargs)
self.mean = kwargs.get('mean', 0.0) self.mean = kwargs.get('mean', 0.0)
self.std = kwargs.get('std', 1.0) self.std = kwargs.get('std', 1.0)
self.register_op() self.register_op()
...@@ -75,8 +75,8 @@ class RandomNormal(_InitModule): ...@@ -75,8 +75,8 @@ class RandomNormal(_InitModule):
class RandomUniform(_InitModule): class RandomUniform(_InitModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(RandomUniform, self).__init__(key, ctx, **kwargs) super(RandomUniform, self).__init__(key, dev, **kwargs)
self.low = kwargs.get('low', 0.0) self.low = kwargs.get('low', 0.0)
self.high = kwargs.get('high', 1.0) self.high = kwargs.get('high', 1.0)
self.register_op() self.register_op()
......
...@@ -18,8 +18,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule ...@@ -18,8 +18,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule
class Update(BaseModule): class Update(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Update, self).__init__(key, ctx, **kwargs) super(Update, self).__init__(key, dev, **kwargs)
self.op_type = kwargs.get('op_type', 'Update') self.op_type = kwargs.get('op_type', 'Update')
self.lr_mult = kwargs.get('lr_mult', 1.0) self.lr_mult = kwargs.get('lr_mult', 1.0)
self.decay_mult = kwargs.get('decay_mult', 1.0) self.decay_mult = kwargs.get('decay_mult', 1.0)
...@@ -42,8 +42,8 @@ class Update(BaseModule): ...@@ -42,8 +42,8 @@ class Update(BaseModule):
class Collective(BaseModule): class Collective(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Collective, self).__init__(key, ctx, **kwargs) super(Collective, self).__init__(key, dev, **kwargs)
self.mode = kwargs.get('mode', None) self.mode = kwargs.get('mode', None)
if self.mode is None: if self.mode is None:
raise ValueError('Got invalid collective mode: {}'.format(self.mode)) raise ValueError('Got invalid collective mode: {}'.format(self.mode))
...@@ -71,8 +71,8 @@ class Collective(BaseModule): ...@@ -71,8 +71,8 @@ class Collective(BaseModule):
class Accumulate(BaseModule): class Accumulate(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Accumulate, self).__init__(key, ctx, **kwargs) super(Accumulate, self).__init__(key, dev, **kwargs)
self.register_op() self.register_op()
def register_op(self): def register_op(self):
......
...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule ...@@ -17,8 +17,8 @@ from dragon.vm.torch.ops.modules.base import BaseModule
class Resize2d(BaseModule): class Resize2d(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(Resize2d, self).__init__(key, ctx, **kwargs) super(Resize2d, self).__init__(key, dev, **kwargs)
self.op_type = kwargs.get('op_type', 'NNResize') self.op_type = kwargs.get('op_type', 'NNResize')
self.dsize = kwargs.get('dsize', None) self.dsize = kwargs.get('dsize', None)
self.fx = kwargs.get('fx', None) self.fx = kwargs.get('fx', None)
...@@ -51,8 +51,8 @@ class Resize2d(BaseModule): ...@@ -51,8 +51,8 @@ class Resize2d(BaseModule):
class RoIPool(BaseModule): class RoIPool(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(RoIPool, self).__init__(key, ctx, **kwargs) super(RoIPool, self).__init__(key, dev, **kwargs)
self.pool_h = kwargs.get('pooled_h', 0) self.pool_h = kwargs.get('pooled_h', 0)
self.pool_w = kwargs.get('pooled_w', 0) self.pool_w = kwargs.get('pooled_w', 0)
self.spatial_scale = kwargs.get('spatial_scale', 1.0) self.spatial_scale = kwargs.get('spatial_scale', 1.0)
...@@ -74,8 +74,8 @@ class RoIPool(BaseModule): ...@@ -74,8 +74,8 @@ class RoIPool(BaseModule):
class RoIAlign(BaseModule): class RoIAlign(BaseModule):
def __init__(self, key, ctx, **kwargs): def __init__(self, key, dev, **kwargs):
super(RoIAlign, self).__init__(key, ctx, **kwargs) super(RoIAlign, self).__init__(key, dev, **kwargs)
self.pool_h = kwargs.get('pooled_h', 0) self.pool_h = kwargs.get('pooled_h', 0)
self.pool_w = kwargs.get('pooled_w', 0) self.pool_w = kwargs.get('pooled_w', 0)
self.spatial_scale = kwargs.get('spatial_scale', 1.0) self.spatial_scale = kwargs.get('spatial_scale', 1.0)
......
...@@ -17,34 +17,33 @@ import numpy as np ...@@ -17,34 +17,33 @@ import numpy as np
import dragon as dg import dragon as dg
from dragon.vm.torch.tensor import * from dragon.vm.torch.tensor import *
from dragon.vm.torch.c_api import Context from dragon.vm.torch.c_api import device as _Device
def UnifyDevices(tensors, key='Inputs'): def UnifyDevices(tensors, key='Inputs'):
device_types = [t._ctx.device_type for t in tensors] types, indices = [t.device.type for t in tensors], [0]
device_ids = [0] if len(set(types)) != 1:
if len(set(device_types)) != 1:
raise ValueError('{} from different device type: [{}].' raise ValueError('{} from different device type: [{}].'
.format(key, ', '.join(device_types))) .format(key, ', '.join(types)))
if device_types[0] == 'CUDA': if types[0] == 'cuda':
device_ids = [t._ctx.device_id for t in tensors] indices = [t.device.index for t in tensors]
if len(set(device_ids)) != 1: if len(set(indices)) != 1:
raise ValueError('{} from different cuda device: [{}].' raise ValueError('{} from different cuda device: [{}].'
.format(key, ', '.join([str(d) for d in device_ids]))) .format(key, ', '.join([str(d) for d in indices])))
return Context(device_types[0], device_ids[0]) return _Device(types[0], indices[0])
def MakeContext(inputs=(), outputs=()): def MakeDevice(inputs=(), outputs=()):
# Case #1: [], [] -> CPU # Case #1: [], [] -> CPU
# Case #2: [...], [] -> Refer Inputs # Case #2: [...], [] -> Refer Inputs
# Case #3: [], [...] -> Refer Outputs # Case #3: [], [...] -> Refer Outputs
# Case #4: [...], [...] -> Refer Outputs # Case #4: [...], [...] -> Refer Outputs
if len(outputs) > 0: return UnifyDevices(outputs, 'Outputs') if len(outputs) > 0: return UnifyDevices(outputs, 'Outputs')
if len(inputs) > 0: return UnifyDevices(inputs, 'Inputs') if len(inputs) > 0: return UnifyDevices(inputs, 'Inputs')
return Context() return _Device()
def WrapScalar(scalar, dtype, ctx): def WrapScalar(scalar, dtype, device):
# We use (DType + Value) to hash different scalars # We use (DType + Value) to hash different scalars
# Setting a Tensor with same DType and shape will not deconstruct it # Setting a Tensor with same DType and shape will not deconstruct it
if 'float' in dtype: scalar = float(scalar) if 'float' in dtype: scalar = float(scalar)
...@@ -52,6 +51,6 @@ def WrapScalar(scalar, dtype, ctx): ...@@ -52,6 +51,6 @@ def WrapScalar(scalar, dtype, ctx):
name = '/share/scalar/{}/{}'.format(dtype, str(scalar)) name = '/share/scalar/{}/{}'.format(dtype, str(scalar))
if not dg.workspace.HasTensor(name): if not dg.workspace.HasTensor(name):
dg.workspace.FeedTensor(name, np.array(scalar, dtype=dtype)) dg.workspace.FeedTensor(name, np.array(scalar, dtype=dtype))
t = Tensor(name=name, dtype=dtype, ctx=ctx, own_storage=False) t = Tensor(name=name, dtype=dtype, device=device, own_storage=False)
t.requires_grad = False t.requires_grad = False
return t return t
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.ops.factory import get_module
from dragon.vm.torch.ops.primitive import MakeDevice
from dragon.vm.torch.ops.modules.array import Cast
from dragon.vm.torch.ops.builtin import (
_fill, _uniform, _normal, multinomial,
_fundamental, _rfundamental,
log, exp, sqrt, clamp,
_reshape, squeeze, unsqueeze,
_permute, _repeat, _indexing, narrow,
mean, sum, max, min,
gt, lt, eq, ge, le,
)
def _type_to(input, dtype='float32', inplace=False):
if dtype == input.dtype: return input
dev = MakeDevice(inputs=[input])
key = 'Cast/{}/dtype:{}/inplace:{}'.format(
dev, dtype, 'true' if inplace else 'false')
module = get_module(Cast, key, dev, dtype=dtype, inplace=inplace)
return module.forward(input)
Tensor.fill_ = lambda self, value: _fill(self, self.shape, value)
Tensor.uniform_ = lambda self, low=0, high=1: _uniform(self, self.shape, low, high)
Tensor.normal_ = lambda self, mean=0, std=1: _normal(self, self.shape, mean, std)
Tensor.multinomial = lambda *args, **kwargs: multinomial(*args, **kwargs)
Tensor.add = lambda self, value: _fundamental(self, value, 'Add')
Tensor.add_ = lambda self, value: _fundamental(self, value, 'Add', self)
Tensor.__radd__ = lambda self, value: _rfundamental(self, value, 'RAdd')
Tensor.sub = lambda self, value: _fundamental(self, value, 'Sub')
Tensor.sub_ = lambda self, value: _fundamental(self, value, 'Sub', self)
Tensor.__rsub__ = lambda self, value: _rfundamental(self, value, 'RSub')
Tensor.mul = lambda self, value: _fundamental(self, value, 'Mul')
Tensor.mul_ = lambda self, value: _fundamental(self, value, 'Mul', self)
Tensor.__rmul__ = lambda self, value: _rfundamental(self, value, 'RMul')
Tensor.div = lambda self, value: _fundamental(self, value, 'Div')
Tensor.div_ = lambda self, value: _fundamental(self, value, 'Div', self)
Tensor.__rdiv__ = lambda self, value: _rfundamental(self, value, 'RDiv')
Tensor.__rtruediv__ = lambda self, value: _rfundamental(self, value, 'RDiv')
Tensor.clamp = lambda *args, **kwargs: clamp(*args, **kwargs)
Tensor.clamp_ = lambda self, min=None, max=None: clamp(self, min, max, self)
Tensor.log = lambda *args, **kwargs: log(*args, **kwargs)
Tensor.exp = lambda *args, **kwargs: exp(*args, **kwargs)
Tensor.sqrt = lambda *args, **kwargs: sqrt(*args, **kwargs)
Tensor.squeeze = lambda *args, **kwargs: squeeze(*args, **kwargs)
Tensor.squeeze_ = lambda self, dim: squeeze(self, dim, self)
Tensor.unsqueeze = lambda *args, **kwargs: unsqueeze(*args, **kwargs)
Tensor.unsqueeze_ = lambda self, dim: unsqueeze(self, dim, self)
Tensor.view = lambda self, *shape: _reshape(self, shape)
Tensor.view_as = lambda *args, **kwargs: _reshape(*args, **kwargs)
Tensor.permute = lambda self, *dims: _permute(self, dims)
Tensor.repeat = lambda self, *args: _repeat(self, args)
Tensor.mean = lambda *args, **kwargs: mean(*args, **kwargs)
Tensor.sum = lambda *args, **kwargs: sum(*args, **kwargs)
Tensor.max = lambda *args, **kwargs: max(*args, **kwargs)
Tensor.min = lambda *args, **kwargs: min(*args, **kwargs)
Tensor.gt = lambda *args, **kwargs: gt(*args, **kwargs)
Tensor.ge = lambda *args, **kwargs: ge(*args, **kwargs)
Tensor.lt = lambda *args, **kwargs: lt(*args, **kwargs)
Tensor.le = lambda *args, **kwargs: le(*args, **kwargs)
Tensor.eq = lambda *args, **kwargs: eq(*args, **kwargs)
Tensor.narrow = lambda *args, **kwargs: narrow(*args, **kwargs)
Tensor._indexing = lambda *args, **kwargs: _indexing(*args, **kwargs)
Tensor.half = lambda self: _type_to(self, dtype='float16', inplace=False)
Tensor.half_ = lambda self: _type_to(self, dtype='float16', inplace=True)
Tensor.float = lambda self: _type_to(self, dtype='float32', inplace=False)
Tensor.float_ = lambda self: _type_to(self, dtype='float32', inplace=True)
Tensor.double = lambda self: _type_to(self, dtype='float64', inplace=False)
Tensor.double_ = lambda self: _type_to(self, dtype='float64', inplace=True)
Tensor.byte = lambda self: _type_to(self, dtype='uint8', inplace=False)
Tensor.byte_ = lambda self: _type_to(self, dtype='uint8', inplace=True)
Tensor.char = lambda self: _type_to(self, dtype='int8', inplace=False)
Tensor.char_ = lambda self: _type_to(self, dtype='int8', inplace=True)
Tensor.int = lambda self: _type_to(self, dtype='int32', inplace=False)
Tensor.int_ = lambda self: _type_to(self, dtype='int32', inplace=True)
Tensor.long = lambda self: _type_to(self, dtype='int64', inplace=False)
Tensor.long_ = lambda self: _type_to(self, dtype='int64', inplace=True)
Tensor.type = lambda self, dtype=None: _type_to(self, dtype=dtype) \
if dtype is not None else 'torch.' + self._type2str()
\ No newline at end of file
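A hedged usage sketch of the methods patched above; the exact dtypes of the results are expectations rather than verified output:

from dragon.vm.torch.ops.builtin import ones        # re-exported per this commit

x = ones(2, 3)
x.mul_(2.)                   # in-place arithmetic through the patched method
x.uniform_(-1, 1)            # refill x from U(-1, 1)
y = x.add(1.).float()        # out-of-place add, then an explicit cast
print(y.dtype)               # expected: 'float32'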
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import dragon.core.mpi as mpi
from dragon.vm.torch.ops.primitive import MakeContext
from dragon.vm.torch.ops.factory import get_module
from dragon.vm.torch.ops.modules.update import Accumulate
from dragon.vm.torch.ops.modules.update import Collective
from dragon.vm.torch.ops.modules.update import Update
def _accumulate(grads):
if len(grads) == 0: return
if not isinstance(grads, (list, tuple)): grads = [grads]
ctx = MakeContext(inputs=grads)
key = 'Accumulate/{}/alpha:1./beta:1.'.format(ctx)
module = get_module(Accumulate, key, ctx)
return module.forward(grads)
def _allreduce(grads):
if not mpi.Is_Init(): return
if not isinstance(grads, (list, tuple)): grads = [grads]
ctx = MakeContext(inputs=grads)
mode = mpi.GetParallelMode() + '_ALLREDUCE'
key = 'Collective/{}/{}'.format(ctx, mode.lower())
module = get_module(Collective, key, ctx, mode=mode)
return module.forward(grads)
def _update(param, grad, op_type, slot,
lr_mult=1.0, decay_mult=1.0):
ctx = MakeContext(inputs=[param])
key = '{}/{}/{}/{}'.format(op_type, ctx, slot, param.name)
module = get_module(Update, key, ctx, op_type=op_type,
lr_mult=lr_mult, decay_mult=decay_mult, slot=slot)
return module.forward(param, grad)
\ No newline at end of file
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from dragon.vm.torch.ops.primitive import MakeContext
from dragon.vm.torch.ops.factory import get_module
from dragon.vm.torch.ops.modules.vision import Resize2d
from dragon.vm.torch.ops.modules.vision import RoIPool, RoIAlign
def _resize_2d(input, op_type, dsize, fx, fy):
if dsize is None:
if fx < 0 or fy < 0:
raise ValueError('Set fx and fy if dsize is None.')
else:
if len(dsize) != 2:
raise ValueError('The dsize should be a list with 2 elements.')
if dsize is None and (fy == -1.0 or fx == -1.0):
raise RuntimeError('Either dsize or fx/fy should be specified.')
ctx = MakeContext(inputs=[input])
key = '{}/{}/dsize:{}/fx:{}/fy:{}'.format(
op_type, ctx, '2' if dsize else 'none', fx, fy)
module = get_module(Resize2d, key, ctx,
op_type=op_type, dsize=dsize, fx=fx, fy=fy)
return module.forward(input, dsize)
def nn_resize(input, dsize, fx=-1.0, fy=-1.0):
return _resize_2d(input, 'NNResize', dsize, fx, fy)
def bilinear_resize(input, dsize, fx=-1.0, fy=-1.0):
return _resize_2d(input, 'BilinearResize', dsize, fx, fy)
def roi_pool(feature, rois, pooled_h, pooled_w, spatial_scale):
ctx = MakeContext(inputs=[feature])
key = 'RoIPool/{}/pool_h:{}/pool_w:{}/spatial_scale:{}'.format(
ctx, pooled_h, pooled_w, spatial_scale)
module = get_module(
RoIPool, key, ctx,
pooled_h=pooled_h,
pooled_w=pooled_w,
spatial_scale=spatial_scale,
)
return module.forward(feature, rois)
def roi_align(feature, rois, pooled_h, pooled_w,
spatial_scale, sampling_ratio=2):
ctx = MakeContext(inputs=[feature])
key = 'RoIAlign/{}/pool_h:{}/pool_w:{}/' \
'spatial_scale:{}/sampling_ratio:{}'.format(
ctx, pooled_h, pooled_w, spatial_scale, sampling_ratio)
module = get_module(
RoIAlign, key, ctx,
pooled_h=pooled_h,
pooled_w=pooled_w,
spatial_scale=spatial_scale,
sampling_ratio=sampling_ratio,
)
return module.forward(feature, rois)
\ No newline at end of file
...@@ -22,7 +22,7 @@ from collections import defaultdict ...@@ -22,7 +22,7 @@ from collections import defaultdict
from dragon.vm.torch.tensor import Tensor from dragon.vm.torch.tensor import Tensor
from dragon.vm.torch.ops.update import ( from dragon.vm.torch.ops.builtin import (
_accumulate, _allreduce, _update, _accumulate, _allreduce, _update,
) )
...@@ -51,6 +51,10 @@ class Optimizer(object): ...@@ -51,6 +51,10 @@ class Optimizer(object):
for param_group in param_groups: for param_group in param_groups:
self.add_param_group(param_group) self.add_param_group(param_group)
self._update_type = None self._update_type = None
self._allow_parallel = False
if dragon.mpi.Is_Init():
local_rank, _ = dragon.mpi.AllowParallel()
if local_rank != -1: self._allow_parallel = True
self._mutable_parameters = {} self._mutable_parameters = {}
def __repr__(self): def __repr__(self):
...@@ -80,7 +84,7 @@ class Optimizer(object): ...@@ -80,7 +84,7 @@ class Optimizer(object):
return Tensor( return Tensor(
name=grad_name, name=grad_name,
own_storage=False, own_storage=False,
ctx=param._ctx) device=param.device)
return None return None
def _run_update_ops(self, group): def _run_update_ops(self, group):
...@@ -109,7 +113,7 @@ class Optimizer(object): ...@@ -109,7 +113,7 @@ class Optimizer(object):
self.feed_parameters(group) self.feed_parameters(group)
# Run an all-reduce op to accumulate grads if necessary # Run an all-reduce op to accumulate grads if necessary
_allreduce(grads) if self._allow_parallel: _allreduce(grads)
# Run regular update ops # Run regular update ops
for p, g in zip(params, grads): for p, g in zip(params, grads):
......
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import importlib
import numpy as np
import dragon.core.mapping as mapping
from dragon.core.tensor_utils import GetStorage
from dragon.vm.torch.c_api import Context


def from_numpy(data):
    """Create a tensor from the given numpy array.

    Parameters
    ----------
    data : numpy.ndarray
        The array with a supported data type.

    Returns
    -------
    dragon.vm.torch.Tensor
        The torch tensor.

    """
    if not isinstance(data, np.ndarray):
        raise TypeError('The data should be a numpy.ndarray.')
    if str(data.dtype) not in mapping.TENSOR_TYPE_TO_TORCH_TENSOR:
        raise ValueError('Unsupported type({}) to torch tensor.'.format(data.dtype))
    module = importlib.import_module('dragon.vm.torch.tensor')
    return getattr(module, mapping.TENSOR_TYPE_TO_TORCH_TENSOR[str(data.dtype)])(data)


def to_numpy(tensor):
    """Create a numpy nd-array from the given tensor.

    Parameters
    ----------
    tensor : dragon.vm.torch.Tensor
        The tensor with a supported data type.

    Returns
    -------
    numpy.ndarray
        The numpy array.

    """
    return tensor.numpy()


def from_dragon(tensor, own_storage=False):
    """Create a torch tensor from an existing dragon tensor.

    Set ``own_storage`` as ``True`` to release the storage automatically
    when the torch tensor is destructed.

    Parameters
    ----------
    tensor : Tensor or str
        The dragon tensor.
    own_storage : boolean
        Whether to release the storage on destruction.

    Returns
    -------
    dragon.vm.torch.Tensor
        The torch tensor.

    """
    storage = GetStorage(tensor)
    if storage is None: return None
    module = importlib.import_module('dragon.vm.torch.tensor')
    T = getattr(module, mapping.TENSOR_TYPE_TO_TORCH_TENSOR[storage.dtype])()
    T._storage, T._own_storage, T._tensor = storage, own_storage, tensor
    T._ctx = Context(*storage.ctx)
    return T


def to_str(tensor):
    """Return a formatted str representing the storage of a tensor.

    Parameters
    ----------
    tensor : dragon.vm.torch.Tensor
        The tensor with a supported data type.

    Returns
    -------
    str
        The formatted str.

    """
    return str(tensor)
\ No newline at end of file
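A short round-trip sketch of the helpers above, assuming this module has been imported; the exact import path is not shown in the diff:

import numpy as np

arr = np.arange(6, dtype='float32').reshape(2, 3)
t = from_numpy(arr)       # picks the torch tensor class from the dtype mapping
back = to_numpy(t)        # delegates to tensor.numpy()
assert back.shape == (2, 3) and str(back.dtype) == 'float32'

# from_dragon wraps an existing dragon tensor without copying; with
# own_storage=True the wrapper releases the dragon storage on destruction.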
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
import time


class Timer(object):
    """A simple timer that accumulates the elapsed time over calls."""

    def __init__(self):
        self.total_time = 0.
        self.calls = 0
        self.start_time = 0.
        self.diff = 0.
        self.average_time = 0.

    def tic(self):
        self.start_time = time.time()

    def toc(self, average=False, every_n=-1, name=''):
        self.diff = time.time() - self.start_time
        self.total_time += self.diff
        self.calls += 1
        self.average_time = self.total_time / self.calls
        if every_n > 0 and self.calls % every_n == 0:
            print('[{}]: total = {:.5f}s, average = {:.5f}s'.format(
                name, self.total_time, self.total_time / self.calls * every_n))
        if average:
            return self.average_time
        else:
            return self.diff
\ No newline at end of file
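A small usage sketch of the Timer above; the training-step placeholder is illustrative only:

timer = Timer()
for step in range(100):
    timer.tic()
    # ... run one training step here ...
    per_call = timer.toc(average=True, every_n=20, name='train-step')
    # Every 20 calls this prints the accumulated total and the average
    # time per block of 20 calls; the return value is the running
    # average per call (or the last interval if average=False).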
@@ -77,7 +77,7 @@ class DataTransformer(Process):
         im = im.reshape((datum.height, datum.width, datum.channels))
         if datum.channels == 3 and \
                 self.color_space == 'RGB':
             im = im[:, :, ::-1]
         # Labels
......
@@ -154,7 +154,7 @@ void ProposalOp<Context>::RunWithType() {
 template <class Context>
 void ProposalOp<Context>::RunOnDevice() {
-    ctx()->set_stream_id(0);  // Enforce SyncStream
+    ctx()->set_stream_id(0);  // Enforce DefaultStream
     num_images = Input(0).dim(0);
     CHECK_EQ(Input(-1).dim(0), num_images)
......
@@ -150,17 +150,17 @@ void MixedMemory::SwitchToCUDADevice(int device_id) {
 const Map<string, string> MixedMemory::info() const {
     static map<State, string> STATE_TO_STRING {
-        { UNINITIALIZED, "UNINITIALIZED" },
-        { STATE_AT_CPU, "CPU" },
-        { STATE_AT_CUDA, "CUDA" },
-        { STATE_AT_CNML, "CNML" },
-        { SYNCED, "DEVICE" },
+        { UNINITIALIZED, "uninitialized" },
+        { STATE_AT_CPU, "cpu" },
+        { STATE_AT_CUDA, "cuda" },
+        { STATE_AT_CNML, "cnml" },
+        { SYNCED, "device" },
     };
     Map<string, string> s2s;
     string _state_ = STATE_TO_STRING[state_];
-    if (_state_ == "DEVICE") {
-        if (cuda_ptr_) _state_ = "CUDA";
-        else if (cnml_ptr_) _state_ = "CNML";
+    if (_state_ == "device") {
+        if (cuda_ptr_) _state_ = "cuda";
+        else if (cnml_ptr_) _state_ = "cnml";
         else LOG(FATAL) << "Device activated, "
             << "but got invalid mem pointer.";
     }
......
@@ -126,7 +126,7 @@ OperatorBase* NewOperator(
         << "\nOperator failed to pass the schema checking.";
     }
     OperatorDef mutable_def(def);
-    // Heuristically makes each random seed slightly differnet
+    // Heuristically make each random seed slightly different
     static unsigned int op_seed_uuid = 0;
     mutable_def.mutable_device_option()->set_random_seed(
         op_seed_uuid + def.device_option().random_seed());
......