Batched translation speed is very slow compared to Fairseq · Issue #266 · marian-nmt/marian-dev · GitHub
![Don't load entire corpus into memory on start up (enhancement request) · Issue #148 · marian-nmt/marian-dev · GitHub](https://user-images.githubusercontent.com/2486505/173464853-7df9bf76-214d-441a-9cce-996b056c2254.png)

Don't load entire corpus into memory on start up (enhancement request) · Issue #148 · marian-nmt/marian-dev · GitHub
![\[Tuning\] Results are GPU-number and batch-size dependent · Issue #444 · tensorflow/tensor2tensor · GitHub](https://user-images.githubusercontent.com/15141326/33256270-a3795912-d351-11e7-83e4-ea941ba95dd5.png)

[Tuning] Results are GPU-number and batch-size dependent · Issue #444 · tensorflow/tensor2tensor · GitHub
![Information | Free Full-Text | Knowledge Distillation: A Method for Making Neural Machine Translation More Efficient](https://pub.mdpi-res.com/information/information-13-00088/article_deploy/html/images/information-13-00088-g003.png?1645089359)