Summary: CopyGPUToGPU does not exist. Copy seems to do the trick. Didn't go into details of how Copy works; not sure whether it ends up triggering UVA.
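A minimal sketch of the idea (hypothetical blob names; assumes a plain Copy op placed on the destination GPU handles the cross-device transfer):
```
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2

# Hypothetical example: copy a blob living on GPU 0 to GPU 1 with a plain Copy
# op on the destination device. Whether this goes through UVA / peer-to-peer
# under the hood is exactly the open question above.
copy_op = core.CreateOperator(
    'Copy', ['X_gpu0'], ['X_gpu1'],
    device_option=core.DeviceOption(caffe2_pb2.CUDA, 1),
)
workspace.RunOperatorOnce(copy_op)
```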
Reviewed By: akyrola
Differential Revision: D5471014
fbshipit-source-id: d8bc1aed9b19070c92f3ffc76f5617bdd0054563
Summary: A quite common source of confusion is how to use StopGradient, and a typical bug is forgetting to specify input=output. This adds a sanity check to the gradient builder that checks whether any StopGradient outputs are orphaned.
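A minimal sketch of the intended in-place usage (hypothetical net and blob names):
```
from caffe2.python import core

net = core.Net('example')
x = net.AddExternalInput('x')
# Correct usage: input and output must be the same blob. If the output were a
# new blob that nothing consumes, it would be orphaned and the gradient would
# not actually be stopped - this is what the new sanity check flags.
net.StopGradient(x, x)
```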
Reviewed By: dzhulgakov
Differential Revision: D5458341
fbshipit-source-id: 056fef4f0ee53eb10e66e9be0ecb55b55f9cc3d7
Summary:
This fixes the test by querying how many instances of the optimizer have already been created.
Because OSS tests don't run in isolation, the number of already-created optimizer instances is not guaranteed to be zero.
Reviewed By: akyrola
Differential Revision: D5462433
Tags: easy
fbshipit-source-id: 7a9ab4fe5345f5d5138abb461ba7a990d9ace840
Summary:
In this revision, I mainly implemented the DRelu activation. See https://arxiv.org/pdf/1706.06978v1.pdf for details.
To sum up: unlike standard relu and prelu, which divide the input range into two parts with the boundary at zero, DRelu computes another value p to divide the activation into two parts. p is the softmax value of the output of Batch Normalization. For the f(x)=x part of relu, the analogous pattern is f(x)=px, and for the f(x)=0 part of relu, the analogous pattern is f(x)=a(1-p)x, in which a is a parameter to tune. The DRelu activation result is the sum of these two parts: f(x) = a(1-p)x + px.
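A small numpy sketch of the formula (illustrative only, not the Caffe2 layer; the softmax gating is approximated here with an element-wise sigmoid of the batch-normalized input, and a is the tunable parameter):
```
import numpy as np

def drelu(x, a=0.1, eps=1e-5):
    # Batch-normalize x per feature.
    x_bn = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # Gating value p from the normalized output (sigmoid stands in for the
    # softmax described above).
    p = 1.0 / (1.0 + np.exp(-x_bn))
    # f(x) = a(1-p)x + px
    return a * (1.0 - p) * x + p * x

y = drelu(np.random.randn(8, 4).astype(np.float32))
```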
To implement DRelu, I take BatchNormalization as the super class and then use the above formula for the computation. In order to let users choose the activation method, which usually happens when calling the add_mlp function in processor_util.py, I pass the parameter through model_option from the UI down to the details, just as dropout does. Currently, I place it in extra_option, but can move it if the AML team needs to redesign the UI.
I also added unit tests for DRelu. We check the shape of the output and also do numeric unit tests.
For the unit tests, I first check the numeric value of BatchNormalization, since there was no similar test before. I then compute the value of the DRelu outputs and compare the results with the current DRelu layer.
Reviewed By: chocjy
Differential Revision: D5341464
fbshipit-source-id: 896b4dcc49cfd5493d97a8b448401b19e9c80630
Summary: Adding None as a pooling option; with it, SparseLookup gathers the embedding for each id.
Reviewed By: kittipatv
Differential Revision: D5421667
fbshipit-source-id: 1e8e2b550893ff3869dab12f8eb1fe24a063c3d5
Summary: Allowing CPU device scope instead of enforcing no device scope in data_parallel_model and data_parallel_rendevous.
Reviewed By: akyrola
Differential Revision: D5440492
fbshipit-source-id: bcd4344d64c710ea50ec8a65e3e9d102e35c66ea
Summary: - Minor fix for error message in layer model helper file
Reviewed By: chocjy
Differential Revision: D5440768
fbshipit-source-id: df47bfe68a0caa750f0d3c8def28a5585e465ee0
Summary: This diff adds a TensorInferenceFunction for the ExpandDims operator, so that the ExpandDims layer is no longer needed (it can be handled by the functional layer).
Reviewed By: kittipatv
Differential Revision: D5430889
fbshipit-source-id: 4f895f2751663c45db4cc4f87e5114c63cda9fbb
Summary: Added support for passing remap_funcs to clone_and_bind_net, so that it can forward them to the clone method. Added other utils to ensure the RecurrentNetwork operator is correctly cloned based on remap_blob. The RecurrentNetwork operator needs special treatment because its arguments contain protos and blobs.
Reviewed By: kittipatv
Differential Revision: D5421532
fbshipit-source-id: 5de68365ce97df2de483f02ad260d78c8d35eead
Summary:
This removes/comments out/silences one or more unused parameters in the files.
We are going to enable `-Wunused-parameter` in fbcode and this fixes a case that automated tooling can't handle.
This diff is automatically generated.
Reviewers are added heuristically.
Reviewed By: dzhulgakov
Differential Revision: D5437217
fbshipit-source-id: c2fc5ed30e7ee47b8c40248f89a9f4304ce7c098
Summary: Add some comments to dag-memonger to help asaadaldien with his C++ port.
Reviewed By: asaadaldien
Differential Revision: D5435459
fbshipit-source-id: dd5d482efb017418d22f42ee79fbd4668bd31bdd
Summary:
Added operator RecurrentNetworkBlobFetcherOp that takes as input a scratch workspace name and a prefix, and copies all blobs from the scratch workspace into the global workspace. This essentially extracts all intermediate recurrent-network computations for each timestep.
Added a wrapper in recurrent.py - retrieve_step_blobs(net, prefix='rnn') - which, when called after an RNN is run, returns a list of all blobs extracted from the net.
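A rough usage sketch of the wrapper (assumes `net` contains a RecurrentNetwork op and has already been set up in the current workspace):
```
from caffe2.python import recurrent, workspace

workspace.RunNetOnce(net)
# Pull the per-timestep scratch blobs into the global workspace and list them.
step_blobs = recurrent.retrieve_step_blobs(net, prefix='rnn')
for name in step_blobs:
    print(name, workspace.FetchBlob(name).shape)
```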
Reviewed By: akyrola
Differential Revision: D5421926
fbshipit-source-id: 0f35b466d77d3c719fb0e32de7dbcafc6c0d5225
Summary: Currently the dataset cursor blob uses a fixed name. When we read from multiple input tables, the dataset cursor of each table uses the same blob. This messes up the split queue and crashes the reader pipelines (see the errors and failures in https://fb.quip.com/uzbIA7K0PgVe).
Reviewed By: dragonxlwang, rayleichen
Differential Revision: D5419863
fbshipit-source-id: 5983a3d8d2e286dc47c2ec38ed1dbbe30c7c9b49
Summary: This would allow us to inspect the binary size of the builds more easily.
Reviewed By: jonmorton
Differential Revision: D4553515
fbshipit-source-id: 95371bf67e66490a8653b874e1ff79cc987805e6
Summary: Add API model.add_loss(), which allows adding losses, e.g. for optimization and regularization. See the change in sparse_nn.py, in which 'model.loss = loss' is changed to 'model.add_loss(loss)'.
Reviewed By: xianjiec
Differential Revision: D5399056
fbshipit-source-id: 13b2ced4b75d129a5ee4a9b0e989606c04d2ca8b
Summary:
1. It was easy to pass a grad_reference that was then silently ignored because output_to_grad was missing.
2. threshold was not passed to the gradient checking logic.
Reviewed By: dzhulgakov
Differential Revision: D5425226
fbshipit-source-id: 2eb41f2601d5e356f7872e57724d08ab2e742329
Summary:
- (Split diff from Arc Cosine)
- Implemented [[ https://arxiv.org/pdf/1702.08882.pdf | Semi-Random Features ]] Layer
- Created a buck unit test for SRF Layer
Reviewed By: chocjy
Differential Revision: D5374803
fbshipit-source-id: 0293fd91ed5bc19614d418c2fce9c1cfdd1128ae
Summary: As title. This helps with (quite common) cases where data input is stuck for one reason or another, and the net execution never proceeds and is stuck forever.
Reviewed By: andrewwdye
Differential Revision: D5409885
fbshipit-source-id: 840261fd5964408f788fc0f50ece0d74193694ac
Summary: The input dimension for NHWC should be the last dimension, C. Since the batch size is omitted, its index should be 2 instead of 3.
Reviewed By: chocjy
Differential Revision: D5418538
fbshipit-source-id: a6939a863817b7566198ea2a665a1d236a2cf63d
Summary:
Fix the case where the optimizer isn't called within a device scope context.
Fix OptimizerContext lr blob names.
Reviewed By: volkhin
Differential Revision: D5421046
fbshipit-source-id: 186a0d05f40d4442c5ba5736084626da73a0c0f1
Summary: Added function _RunComparison to data_parallel_model that checks whether all shards in a given rendezvous have the same value for a given blob_name.
Reviewed By: wesolwsk
Differential Revision: D5394164
fbshipit-source-id: c2b07d0f8d5846fa9887d53b0be091a8c057f106
Summary: Fix a bug reported by dzhulgakov that occurs when an input blob is used twice in the same op: it was released to the recycled-blobs pool twice.
Reviewed By: dzhulgakov, volkhin
Differential Revision: D5414023
fbshipit-source-id: 861bb46fe901023cb9a496401736e6ecb77d5fae
Summary:
We want to be able to register subclasses of layers that are not direct children of ModelLayer.
This requires us to find subclasses of ModelLayer recursively.
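A minimal sketch of the recursive lookup (illustrative, not necessarily the exact registration code):
```
def all_subclasses(cls):
    # Collect subclasses recursively, not only the direct children returned
    # by cls.__subclasses__().
    found = set()
    for sub in cls.__subclasses__():
        found.add(sub)
        found |= all_subclasses(sub)
    return found

# e.g. all_subclasses(ModelLayer) would also pick up grandchild layers.
```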
Reviewed By: kittipatv, kennyhorror
Differential Revision: D5397120
fbshipit-source-id: cb1e03d72e3bedb960b1b865877a76e413218a71
Summary: This diff makes the functional layer return a scalar if there is only one output. It also corrects all other corresponding implementations.
Reviewed By: kittipatv
Differential Revision: D5386853
fbshipit-source-id: 1f00582f6ec23384b2a6db94e19952836755ef42
Summary:
Added device scope checks to data_parallel_model and data_parallel_rendevous
Added a test to data_parallel_model_test to verify that the checks work correctly
Fixed device_scope error in test_synchronization_barrier
Reviewed By: akyrola
Differential Revision: D5403936
fbshipit-source-id: 849c1cd7452692efbc5ef74d2d60ede090c9c017
Summary: The init method should also make _parameters_info shared between self and param_model, since params is shared. Otherwise it can cause an inconsistency between _parameters_info and params. Examples of using param_model can be found in rnn_cell.py.
Reviewed By: kennyhorror
Differential Revision: D5405327
fbshipit-source-id: ca8079058e898f529906452163cda234cb30a7df
Summary: This diff adds optimizer to param_info, along with the associated implementations for ModelHelper and brew to set the optimizer for each individual parameter.
Reviewed By: kennyhorror
Differential Revision: D5385432
fbshipit-source-id: 5d682f9d1ab077e04a5d76a24d71470f4e64fc92
Summary:
akirillov again presented me with a memonger bug: his model, which has a kind of 'back-and-forth' structure where blobs are passed left and right in a ladder-like fashion, revealed a bug in memonger: the set of free blobs should be passed as a reference, not a copy, so that the recyclings are properly accounted for. Hard to explain.
Since we have the graph verifier, we can be more confident with these changes.
I also added some helpful debug to the graph verifier.
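A toy illustration of the reference-vs-copy point (not the memonger code itself):
```
def recycle(free_blobs, blob):
    # Mark a blob as reusable by adding it to the free set.
    free_blobs.add(blob)

free = set()
recycle(set(free), 'blob_a')   # passing a copy: the update is lost afterwards
assert free == set()

recycle(free, 'blob_a')        # passing the same set: the update is visible
assert free == {'blob_a'}
```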
Differential Revision: D5396925
fbshipit-source-id: 0bffb3a0bf8532afcd6b5bc9331c779768a8c5c5
Summary: Implemented python logic and tests to create an RNNCell for GRU. Uses the preexisting GRU Unit Op code.
Reviewed By: salexspb
Differential Revision: D5364893
fbshipit-source-id: 2451d7ec8c2eacb8d8c9b7c893bfd21b65fb9d18
Summary:
Just an implementation of the forward pass of the GRU Unit Op, not the full RNNCell.
Functions were created to mimic LSTM implementation as closely as possible.
Backwards pass implementations are defined in GRU_unit_op.{h, cc}
assertGradientChecks call added to gru_cell_test.py
Reviewed By: salexspb
Differential Revision: D5364856
fbshipit-source-id: 09cff4478091827763b40cc331e4e0abf0ec258f
Summary:
Just an implementation of the forward pass of the GRU Unit Op, not the full RNNCell.
Functions were created to mimic LSTM implementation as closely as possible.
Implementation defined in GRU_unit_op.{h, cc}
Tests are in gru_cell_test.py, which imports rnn_cell_test_util.py for the sigmoid, tanh, and _prepare_rnn functions.
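For reference, a textbook single-timestep GRU in numpy (the exact gate layout and ordering of the Caffe2 GRU Unit Op may differ; this is only a sketch):
```
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wx, Wh, b):
    gx = x @ Wx + b                   # input projection for r, z, candidate
    gh = h_prev @ Wh                  # recurrent projection for r, z, candidate
    rx, zx, hx = np.split(gx, 3, axis=-1)
    rh, zh, hh = np.split(gh, 3, axis=-1)
    r = sigmoid(rx + rh)              # reset gate
    z = sigmoid(zx + zh)              # update gate
    h_tilde = np.tanh(hx + r * hh)    # candidate hidden state
    return (1.0 - z) * h_prev + z * h_tilde
```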
Reviewed By: jamesr66a
Differential Revision: D5363697
fbshipit-source-id: f9ba9fe0be01ffc868dd22027be8be4975b84998
Summary:
Moved sigmoid, tanh, and _prepare_lstm (renamed) to a util file.
Also renamed _prepare_lstm to _prepare_rnn since it is used for setting up both LSTM and GRU models.
The reason for this commit is to allow the creation of the GRU Op and testing code without copying and pasting code for sigmoid, tanh, and setting up an rnn unit op model.
Reviewed By: jamesr66a
Differential Revision: D5363675
fbshipit-source-id: 352bd70378031f1d81606c9267e625c6728b18fd
Summary: Our existing serialization routines spend a significant amount of time on large numpy arrays, verifying the type of each element in the array and converting each element to a canonical type. For large floating-point tensors, such as model parameters, this checking and converting dominates. Adding a fast-track path just for float32 arrays, as this is the most common use case to worry about.
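Illustrative sketch of the fast track (not the actual Caffe2 serializer): a contiguous float32 array can be written out wholesale instead of being checked and converted element by element.
```
import numpy as np

def serialize_float32(arr):
    # The fast path only applies to float32; everything else would fall back
    # to the generic per-element route.
    assert arr.dtype == np.float32
    return np.ascontiguousarray(arr).tobytes()
```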
Reviewed By: akyrola
Differential Revision: D5389953
fbshipit-source-id: 26f44cb2426ea3efb849e7707b27d5485f69956c
Summary:
numpy.random.rand generates samples from [0, 1), and therefore the leaky relu test cases weren't testing negative inputs. Tests still pass after the change.
Leaky relu can be used in-place, but the gradient took X rather than Y. Technically, the result is no different since it's only used for a sign test in the gradient, but it was updated to take Y to reduce confusion.
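One possible way to make the test cover negative inputs (illustrative, not necessarily the exact change made here):
```
import numpy as np

# np.random.rand samples from [0, 1); shift and scale so negative values are
# also exercised.
X = (np.random.rand(4, 3).astype(np.float32) - 0.5) * 2.0   # roughly [-1, 1)
```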
Differential Revision: D5390126
fbshipit-source-id: d0c428abbb2797eb33902a7d2a2f59d5e85daaa6
Summary: GetComputedParams tests namescopes with equality while GetParams tests with a prefix. Switching GetComputedParams to also use a prefix so that both functions have similar usages.
Reviewed By: akyrola
Differential Revision: D5389816
fbshipit-source-id: 0e43e4b491fccbad3b855b6b735dc2b91d7626c9