SeetaResearch / Dragon
Commit 46a948ea authored Mar 12, 2018 by Ting PAN
Fix the crash of sharing mem in StopGradientOp
1 parent 387a3675

Showing 7 changed files with 46 additions and 7 deletions
Dragon/python/dragon/docs/contents/vm/caffe/layer.rst
Dragon/python/dragon/docs/contents/vm/caffe/solver.rst
Dragon/python/dragon/vm/caffe/layers/__init__.py
Dragon/python/dragon/vm/caffe/layers/common.py
Dragon/python/dragon/vm/caffe/solver.py
Dragon/python/setup.py
Dragon/src/operators/misc/gradient_op.cc
Dragon/python/dragon/docs/contents/vm/caffe/layer.rst

```diff
@@ -78,7 +78,8 @@ List Brief
 `BNLayer`_               The implementation of ``BNLayer``.
 `NormalizeLayer`_        The implementation of ``NormalizeLayer``.
 `TileLayer`_             The extended implementation of ``TileLayer``.
-`ExpandDimsLayer`_       The implementation of ``ExpandDimsLayer``
+`ExpandDimsLayer`_       The implementation of ``ExpandDimsLayer``.
+`StopGradientLayer`_     The implementation of ``StopGradientLayer``.
 `ProposalLayer`_         The implementation of ``ProposalLayer``.
 ======================== =============================================================================
@@ -186,6 +187,7 @@ API Reference
 .. _NormalizeLayer: #dragon.vm.caffe.layers.common.NormalizeLayer
 .. _TileLayer: #dragon.vm.caffe.layers.common.TileLayer
 .. _ExpandDimsLayer: #dragon.vm.caffe.layers.common.ExpandDimsLayer
+.. _StopGradientLayer: #dragon.vm.caffe.layers.common.StopGradientLayer
 .. _ProposalLayer: #dragon.vm.caffe.layers.common.ProposalLayer
 .. _SoftmaxWithLossLayer: #dragon.vm.caffe.layers.loss.SoftmaxWithLossLayer
```
Dragon/python/dragon/docs/contents/vm/caffe/solver.rst

```diff
@@ -16,6 +16,7 @@ List Brief
 `Solver.net`_        Return the train net.
 `Solver.test_nets`_  Return the test nets.
 `Solver.iter`_       Return or Set the current iteration.
+`Solver.lr`_         Return or Set the current learning rate.
 ==================== =============================================================================

 API Reference
@@ -36,6 +37,7 @@ API Reference
 .. _Solver.net: #dragon.vm.caffe.solver.Solver.net
 .. _Solver.test_nets: #dragon.vm.caffe.solver.Solver.test_nets
 .. _Solver.iter: #dragon.vm.caffe.solver.Solver.iter
+.. _Solver.lr: #dragon.vm.caffe.solver.Solver.lr
 .. _[LeCun et.al, 1998]: http://yann.lecun.com/exdb/publis/#lecun-98b
 .. _[Sutskever et.al, 2012]: http://www.cs.utoronto.ca/~bonner/courses/2016s/csc321/lectures/lec6.pdf
```
Dragon/python/dragon/vm/caffe/layers/__init__.py

```diff
@@ -57,5 +57,6 @@ from .common import InnerProductLayer, \
                     TileLayer, \
                     ReductionLayer, \
                     ExpandDimsLayer, \
+                    StopGradientLayer, \
                     ProposalLayer, \
                     DenseConcatLayer
\ No newline at end of file
```
Dragon/python/dragon/vm/caffe/layers/common.py

```diff
@@ -618,6 +618,20 @@ class ExpandDimsLayer(Layer):
         return ops.ExpandDims(input, **self._param)
 
 
+class StopGradientLayer(Layer):
+    """The implementation of ``StopGradientLayer``."""
+
+    def __init__(self, LayerParameter):
+        super(StopGradientLayer, self).__init__(LayerParameter)
+
+    def Setup(self, bottom):
+        super(StopGradientLayer, self).Setup(bottom)
+        input = bottom[0] if isinstance(bottom, list) else bottom
+        return ops.StopGradient(input, **self._param)
+
+
 class ProposalLayer(Layer):
     """The implementation of ``ProposalLayer``.
```
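The new ``StopGradientLayer`` is a thin wrapper that forwards its bottom blob to ``ops.StopGradient``. As a framework-free illustration of the semantics such an op provides (an assumption-level sketch, not Dragon's actual implementation), the forward pass is the identity and the backward pass blocks the gradient entirely:

```python
# Minimal sketch of stop-gradient semantics: the forward pass returns
# the input unchanged, and the backward pass returns an all-zero
# gradient so that nothing upstream of this layer is ever updated.

def stop_gradient_forward(x):
    """Forward: pass the input through unchanged (identity)."""
    return x

def stop_gradient_backward(upstream_grad):
    """Backward: block the gradient by zeroing it out."""
    return [0.0 for _ in upstream_grad]

y = stop_gradient_forward([1.5, -2.0, 3.0])    # identical to the input
dx = stop_gradient_backward([0.1, 0.2, 0.3])   # all zeros
```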
Dragon/python/dragon/vm/caffe/solver.py

```diff
@@ -399,6 +399,26 @@ class Solver(object):
     def iter(self, value):
         self._iter = value
 
+    @property
+    def lr(self):
+        """Return or Set the current learning rate. [**Extended**]
+
+        Parameters
+        ----------
+        value : float
+            The value of learning rate to set.
+
+        Returns
+        -------
+        The current learning rate.
+
+        """
+        return self._optimizer.lr
+
+    @lr.setter
+    def lr(self, value):
+        self._optimizer.lr = value
+
 
 class SGDSolver(Solver):
     """The Momentum-SGD Solver, introduced by `[LeCun et.al, 1998]`_.
```
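The new ``lr`` property delegates to the underlying optimizer, which makes manual learning-rate schedules possible from Python. A self-contained sketch of that accessor pattern (the ``_Optimizer``/``_Solver`` stand-ins below are illustrative, not Dragon's classes):

```python
# Stand-in classes mirroring the property-delegation pattern in the
# diff above: Solver.lr reads and writes the optimizer's learning rate.

class _Optimizer(object):
    def __init__(self, lr):
        self.lr = lr

class _Solver(object):
    def __init__(self, base_lr):
        self._optimizer = _Optimizer(base_lr)

    @property
    def lr(self):
        """Return or set the current learning rate."""
        return self._optimizer.lr

    @lr.setter
    def lr(self, value):
        self._optimizer.lr = value

solver = _Solver(base_lr=0.1)
solver.lr = solver.lr * 0.1   # e.g. step decay at an iteration milestone
```

With the property in place, a training loop can drop the rate at chosen iterations without reaching into the optimizer directly.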
Dragon/python/setup.py

```diff
@@ -36,7 +36,7 @@ find_packages('dragon')
 find_modules()
 
 setup(name = 'dragon',
-      version = '0.2.1.10',
+      version = '0.2.1.11',
       description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
       url = 'https://github.com/neopenx/Dragon',
       author = 'Ting Pan',
```
Dragon/src/operators/misc/gradient_op.cc

```diff
@@ -61,15 +61,13 @@ OPERATOR_SCHEMA(GradientGather).NumOutputs(1);
 NO_GRADIENT(GradientGather);
 
 template <class Context>
-void StopGradientOp<Context>::RunOnDevice() {
-    ws()->CreateAvatar(output(0), &input(0));
-}
+void StopGradientOp<Context>::RunOnDevice() {}
 
 DEPLOY_CPU(StopGradient);
 #ifdef WITH_CUDA
 DEPLOY_CUDA(StopGradient);
 #endif
-OPERATOR_SCHEMA(StopGradient).NumInputs(1).NumOutputs(1);
+OPERATOR_SCHEMA(StopGradient).NumInputs(1).NumOutputs(1).Inplace({ { 0, 0 } });
 
 NO_GRADIENT(StopGradient);
 
 }    // namespace dragon
\ No newline at end of file
```
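This last hunk is the crash fix named in the commit message: instead of rewiring ``output(0)`` onto ``input(0)`` at every run via ``CreateAvatar`` (which crashed when the memory was shared), the operator now declares itself in-place once through the schema's ``Inplace({ { 0, 0 } })``, so the allocator aliases the two buffers and ``RunOnDevice`` becomes a no-op. A conceptual Python sketch of the difference (names hypothetical, not Dragon's C++ API):

```python
# Conceptual model of the in-place schema: with Inplace({0, 0}),
# output tensor 0 *is* input tensor 0 -- same buffer, no copy, no
# per-run rewiring. Without it, the op would have to materialize a
# separate output tensor each run.

class Tensor(object):
    def __init__(self, data):
        self.data = data

def run_stop_gradient(inplace, x):
    # In-place: return the very same tensor object (buffer aliasing).
    # Not in-place: allocate and fill a fresh output tensor.
    return x if inplace else Tensor(list(x.data))

x = Tensor([1, 2, 3])
y = run_stop_gradient(True, x)   # y shares x's buffer
```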