pytorch/caffe2/python
Lin Jiang 1f158adeee Add support for attention weight in SparseLookup (#26748)
Support attention weights input to SparseLookup. In attention sum pooling, if attention weights can be pre-calculated before embedding lookup,  they can be passed to SparseLookup and processed by SparseLengthsWeightedSum op. One example is id_score attention sum pooling.

Essentially the net is converted from:
  LengthsSum(Mul(Gather(keys, w), att_weight))
to:
  SparseLengthsWeightedSum(keys, w, att_weight)
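
As a rough illustration (not code from this PR), the equivalence can be checked with a small Caffe2 net; the blob names, shapes, and values below are hypothetical:

  import numpy as np
  from caffe2.python import core, workspace

  # Hypothetical embedding table, sparse ids, segment lengths, and per-id attention weights.
  workspace.FeedBlob("w", np.random.randn(10, 4).astype(np.float32))
  workspace.FeedBlob("keys", np.array([1, 3, 5, 2], dtype=np.int64))
  workspace.FeedBlob("lengths", np.array([2, 2], dtype=np.int32))
  workspace.FeedBlob("att_weight", np.array([0.9, 0.1, 0.3, 0.7], dtype=np.float32))

  # Before: gather embeddings, scale each row by its attention weight, then pool per segment.
  before = core.Net("before")
  emb = before.Gather(["w", "keys"], "emb")
  weighted = before.Mul([emb, "att_weight"], "weighted", broadcast=1, axis=0)
  before.LengthsSum([weighted, "lengths"], "pooled_before")

  # After: one fused op that applies the weights during the lookup.
  after = core.Net("after")
  after.SparseLengthsWeightedSum(["w", "att_weight", "keys", "lengths"], "pooled_after")

  workspace.RunNetOnce(before)
  workspace.RunNetOnce(after)
  assert np.allclose(workspace.FetchBlob("pooled_before"),
                     workspace.FetchBlob("pooled_after"), atol=1e-6)

The two nets should produce the same pooled output; the fused form avoids materializing the gathered and re-weighted embeddings.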

It unblocks potential efficiency gains with distributed training.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/26748

Test Plan: unit test

Reviewed By: chocjy

Differential Revision: D17553345

Pulled By: wheatkit

fbshipit-source-id: 60cc3c4b0bc1eade5459ac598e85286f3849a412
2019-10-08 20:22:25 -07:00