I am the research lead at LightOn, a French AI start-up bringing large language models to the enterprise. I am also wrapping up my Ph.D. at École Normale Supérieure, under the supervision of Florent Krzakala and Laurent Daudet.
You can reach me at <firstname>@lolo.science.
🔥 Latest news
- 🧑‍🔬 The research team of LightOn is expanding: we have opened an office in Abu Dhabi, jointly operated with the local Technology Innovation Institute. We are actively looking for research engineers & scientists! Contact me if you are interested in joining our research team.
My research focuses on large language models, and how to make them more generally capable:
- 📈 Challenges in scaling. Scaling has been the main driver of progress in machine learning for the past few years: I am interested in how we can keep that engine churning. Specifically, I am interested in the challenges brought about by ML becoming a so-called big science, with novel research directions at the crossroads of large-scale engineering and pure research.
- 💿 Data scalability. What makes some pretraining datasets better than others? How can we build quality datasets with trillions of tokens? Is the human part in RLHF truly needed, or can models bootstrap themselves? These have been central questions for my team over the past few months, and we will be sharing some exciting results soon!
- 🧠 Philosophy of mind. I am interested in how LLMs can gain human-like capabilities, from deliberate reasoning and planning to the acquisition of a theory of mind and its relation to works such as Julian Jaynes' bicameral mind.
During my Ph.D., I also explored alternatives to backpropagation and the use of optical co-processors to train neural networks.
Fantastic networks and where to find me
- 📟 Blue-bird-thing: twitter.com/slippylolo;
- 📚 Super-duper serious scribbles: scholar.google.com;
- 🤖 Coder-Tinder: github.com/slippylolo;
- 💼 Professional-make-believe: linkedin.com/in/julien-launay;
- 🏞️ The Gram: instagram.com/slippylolo.