tensorflow - Is it possible to precompile graphs and then run them in parallel?
I am following the CS224d code for recursive neural networks (see: http://cs224d.stanford.edu/syllabus.html, pset 3 solutions).
The way that code works, a new graph is compiled for each training example, since each tree has a different shape; this is suboptimal for running minibatches, etc. I have a similar recursive graph with larger input and need a solution that is more parallelizable.
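To make the current behaviour concrete, here is a minimal sketch of the per-example pattern I mean (the variable shapes, the loss, and `training_trees` are made-up stand-ins, not the actual pset code):

    import tensorflow as tf

    def build_graph_for_tree(tree):
        """Compile a fresh graph whose structure mirrors one tree (sketch only)."""
        g = tf.Graph()
        with g.as_default():
            W = tf.get_variable("W", shape=[50, 50])        # composition weights
            # ... in the real code, one op per tree node is added recursively here ...
            loss = tf.reduce_sum(W)                          # stand-in for the tree's loss
            train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
        return g, train_op

    training_trees = range(100)  # stand-in for the parsed trees

    # The pattern I want to avoid: recompile a graph and open a new session
    # for every single training example.
    for tree in training_trees:
        g, train_op = build_graph_for_tree(tree)
        with tf.Session(graph=g) as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(train_op)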
My idea is to precompile the graphs and load them asynchronously, along with the data, onto the GPU, similar to how a queue works. Then offload a minibatch of results, compute the combined gradients, and back-propagate. That way the approach can account for variable input and still take advantage of the GPU (have my cake and eat it too).
Is this feasible?
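Roughly what I have in mind for the "combined gradients" part, as a sketch only (the shared variable, placeholder shapes, and loss below are hypothetical stand-ins, and the asynchronous/queue part is omitted):

    import tensorflow as tf

    # Shared parameters that every precompiled tree graph would read.
    W = tf.get_variable("W_shared", shape=[50, 50])

    def tree_loss(tree_embedding):
        """Stand-in for the loss one precompiled tree graph would produce."""
        return tf.reduce_sum(tf.square(tf.matmul(tree_embedding, W)))

    opt = tf.train.GradientDescentOptimizer(0.01)

    # One input per tree in the minibatch (in practice each tree differs in shape).
    minibatch = [tf.placeholder(tf.float32, shape=[1, 50]) for _ in range(8)]

    # Per-tree gradients with respect to the shared parameters.
    grads_per_tree = [tf.gradients(tree_loss(x), [W])[0] for x in minibatch]

    # Combine (average) the gradients and back-propagate once for the whole minibatch.
    avg_grad = tf.add_n(grads_per_tree) / float(len(minibatch))
    train_op = opt.apply_gradients([(avg_grad, W)])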