I am the research lead at LightOn, a French AI start-up bringing large language models to the enterprise. I am also wrapping up my Ph.D. at École Normale Supérieure, under the supervision of Florent Krzakala and Laurent Daudet.
You can reach me at <firstname>@lolo.science.
🔥 Latest news
- 📈 I am organizing an ICML workshop on Efficient Systems for Foundation Models! Join us in Hawaii to discuss the nitty-gritty details of training and inference for large models: gpusgobrrr.com.
Research interests
My research focuses on large language models and how to make them more generally capable:
- 📈 Challenges in scaling. Scaling has been the main driver of progress in machine learning for the past few years: I am interested in how we can keep that engine churning. Specifically, I focus on the challenges brought about by ML becoming a so-called big science, with novel research directions at the crossroads of large-scale engineering and pure research.
- 💿 Data scalability. What makes some pretraining datasets better than others? How can we build quality datasets with trillions of tokens? Is the human component of RLHF truly needed, or can models bootstrap themselves? These have been central questions for my team over the past few months, and we will be sharing some exciting results soon!
- 🧠 Philosophy of mind. I am interested in how LLMs can gain human-like functions. This ranges from deliberate reasoning and planning to the acquisition of a theory of mind, and its relation to works such as Julian Jaynes' bicameral mind.
During my Ph.D., I also explored alternatives to backpropagation and the use of optical co-processors to train neural networks.
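For the curious, here is a minimal NumPy sketch of one well-known family of such alternatives, direct feedback alignment (DFA), which replaces the backward pass with fixed random projections of the output error. It is an illustration only: the network sizes, activation, and learning rate below are arbitrary assumptions, not a description of my published setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for a two-hidden-layer MLP (assumptions for this sketch).
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0.0, 0.05, (n_hid, n_in))
W2 = rng.normal(0.0, 0.05, (n_hid, n_hid))
W3 = rng.normal(0.0, 0.05, (n_out, n_hid))

# DFA's trick: fixed random feedback matrices deliver the output error
# directly to each hidden layer, instead of backpropagating through W3, W2.
B1 = rng.normal(0.0, 0.05, (n_hid, n_out))
B2 = rng.normal(0.0, 0.05, (n_hid, n_out))

def dtanh(a):
    return 1.0 - np.tanh(a) ** 2

def dfa_step(x, y, lr=1e-3):
    """One DFA update: x is (batch, n_in), y is one-hot (batch, n_out)."""
    global W1, W2, W3
    # Standard forward pass.
    a1 = x @ W1.T
    h1 = np.tanh(a1)
    a2 = h1 @ W2.T
    h2 = np.tanh(a2)
    logits = h2 @ W3.T
    # Softmax cross-entropy error at the output.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    e = (p - y) / len(x)                 # (batch, n_out)
    # DFA "backward pass": random projections of e, no weight transport.
    d2 = (e @ B2.T) * dtanh(a2)          # backprop would use e @ W3 here
    d1 = (e @ B1.T) * dtanh(a1)          # backprop would chain through W2
    # Updates; the output layer still receives its true gradient.
    W3 -= lr * e.T @ h2
    W2 -= lr * d2.T @ h1
    W1 -= lr * d1.T @ x
```

The counter-intuitive part is that B1 and B2 are never trained: the forward weights gradually align with the random feedback. That reliance on large, cheap random projections is also what makes these methods a natural fit for optical co-processors.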
Fantastic networks and where to find me
- 📟 Blue-bird-thing: twitter.com/slippylolo;
- 📚 Super-duper serious scribbles: scholar.google.com;
- 🤖 Coder-Tinder: github.com/slippylolo;
- 💼 Professional-make-believe: linkedin.com/in/julien-launay;
- 🏞️ The Gram: instagram.com/slippylolo.