mirror of
https://github.com/zebrajr/pytorch.git
synced 2026-01-15 12:15:51 +00:00
Summary: This example learns Shakespeare and then generates samples one character at a time. We want this to be an example of using our LSTM, and RNNs in general.

Running the training net currently takes 4 ms per iteration on the current parameters (with batch size = 1). I don't have per-operator timing data yet, but the outer Python loop does not seem to cost much: with 1000 fake iterations inside run_net, it took ~4 s total, i.e. 4 ms per iteration, as expected.

Future work:
* fix convergence for batching
* profile at the operator level
* try it out on GPUs
* benchmark against existing char-rnn implementations
* stack LSTMs (one LSTM is different from two; one needs to take care of scoping)

Reviewed By: urikz

Differential Revision: D4430612

fbshipit-source-id: b36644fed9844683f670717d57f8527c25ad285c
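For reference, the one-character-at-a-time sampling loop described above can be sketched in plain numpy. This is an illustrative assumption of how such a loop works, not the actual Caffe2 net from this diff: the cell, gate layout, and parameter names (`W`, `U`, `b`, `Wout`) are all hypothetical.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def sample(seed_idx, n_steps, vocab, params, rng):
    """Generate n_steps characters, feeding each sampled char back in."""
    W, U, b, Wout = params
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    idx = seed_idx
    out = [vocab[idx]]
    for _ in range(n_steps):
        x = np.zeros(len(vocab))
        x[idx] = 1.0                        # one-hot encoding of the char
        h, c = lstm_step(x, h, c, W, U, b)
        logits = Wout @ h
        p = np.exp(logits - logits.max())   # softmax over the vocabulary
        p /= p.sum()
        idx = rng.choice(len(vocab), p=p)   # sample next char
        out.append(vocab[idx])
    return "".join(out)
```

With trained weights, the sampled string follows the learned character distribution; with random weights it just demonstrates the feedback loop.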