My current work focuses on language model post-training: how reward signals shape learning and optimization, and how to build efficient systems and infrastructure for it. Since generation lies at the core of these systems, I am also interested in efficient generation methods.
I previously worked at o9 Solutions (ML team) and at Dematic (Operations Research and Applied AI team).
I obtained my engineering degree (BSc/MSc) from Arts et Métiers ParisTech and an MSc in Operations Research from Georgia Tech.
FutureFill has been accepted to ICLR 2026. See you in Rio!
2025-Aug: Our work, which provides an optimization perspective on what makes a good reward model for RLHF, has been accepted to NeurIPS 2025. See you in San Diego!
When Errors Can Be Beneficial: A Categorization of Imperfect Rewards for Policy Gradient
Shuning Shang, Hubert Strauss, Stanley Wei, Sanjeev Arora, and Noam Razin
arXiv preprint arXiv:2604.25872, 2026
@article{shang2026when,
  title   = {When Errors Can Be Beneficial: A Categorization of Imperfect Rewards for Policy Gradient},
  author  = {Shang, Shuning and Strauss, Hubert and Wei, Stanley and Arora, Sanjeev and Razin, Noam},
  journal = {arXiv preprint arXiv:2604.25872},
  year    = {2026}
}
Hardware-Efficient Attention for Fast Decoding
Ted Zadouri, Hubert Strauss, and Tri Dao
In Conference on Language Modeling (COLM), 2025
@inproceedings{zadouri2025hardwareefficientattentionfastdecoding,
  title     = {Hardware-Efficient Attention for Fast Decoding},
  author    = {Zadouri, Ted and Strauss, Hubert and Dao, Tri},
  booktitle = {Conference on Language Modeling},
  year      = {2025}
}
What Makes a Reward Model a Good Teacher? An Optimization Perspective
Noam Razin, Zixuan Wang, Hubert Strauss, Stanley Wei, Jason D. Lee, and Sanjeev Arora
In Advances in Neural Information Processing Systems (NeurIPS), 2025
@inproceedings{razin2025what,
  title     = {What Makes a Reward Model a Good Teacher? An Optimization Perspective},
  author    = {Razin, Noam and Wang, Zixuan and Strauss, Hubert and Wei, Stanley and Lee, Jason D. and Arora, Sanjeev},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025}
}
FutureFill: Fast Generation from Convolutional Sequence Models
Naman Agarwal, Xinyi Chen, Evan Dogariu, Devan Shah, Hubert Strauss, Vlad Feinberg, Daniel Suo, Peter Bartlett, and Elad Hazan
In International Conference on Learning Representations (ICLR), 2026
@inproceedings{agarwal2025futurefillfastgenerationconvolutional,
  title     = {FutureFill: Fast Generation from Convolutional Sequence Models},
  author    = {Agarwal, Naman and Chen, Xinyi and Dogariu, Evan and Shah, Devan and Strauss, Hubert and Feinberg, Vlad and Suo, Daniel and Bartlett, Peter and Hazan, Elad},
  booktitle = {International Conference on Learning Representations},
  year      = {2026}
}