News
July 2024
- Very excited that our newest work on model collapse has been accepted at COLM 2024. Preprint on arXiv.
May 2024
- Our newest preprint “Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data” is now available on arXiv.
March 2024
- Excited that our paper “Grounding Gaps in Language Model Generations” has been accepted at NAACL 2024. Preprint on arXiv.
October 2023
- A shortened and security-focused version of our paper “Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning” will be presented at the Multi-Agent Security Workshop at NeurIPS in December.
September 2023
- Our paper (joint work with Sarah Keren and Tom Danino at Technion) “Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning” has been accepted at NeurIPS 2023. Preprint already available on arXiv, and code on GitHub.
May 2023
- Very excited that our paper “Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning” has been accepted at ICML 2023. Preprint already available on arXiv.
- Thrilled to announce that David Parkes and I, as co-PIs, have been awarded a grant by the Cooperative AI Foundation on Designing Robust Cooperative AI Systems.
April 2023
- Very excited to move to Stanford CS soon to work with Diyi Yang! We will combine multi-agent RL with natural language processing to make language-based AI systems more robust to the behavior of their human users.
January 2023
- Our paper “Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning” has been accepted as an extended abstract at AAMAS 2023! [preprint]
November 2022
- Excited to announce that our paper “Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning” has been accepted to the NeurIPS Deep RL workshop in December!
- We will demo our CrowdPlay platform at the NeurIPS Offline RL workshop. Come see us during the poster session for a live demonstration of streaming OpenAI Gym and other RL environments to a browser client, playing Space Invaders against an AI agent trained with off-the-shelf RL pipelines, and a discussion of our large (300+ hours) dataset of human gameplay data in Atari 2600 games.
October 2022
- Our newest paper “Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning” is now available on arXiv.
- We’ll be presenting our paper “Meta-RL for Multi-Agent RL: Learning to Adapt to Evolving Agents” at the NeurIPS Workshop on Meta-Learning!
July 2022
- Sarah Keren and I will be presenting our recent joint work at the IJCAI Workshop on Ad-Hoc Teamwork later this month.
June 2022
- We will be at the ICML Workshop on Human-Machine Collaboration and Teaming to talk about crowdsourcing human-AI data at scale!
April 2022
- The code and dataset for our CrowdPlay project are now available on GitHub!
March 2022
- We will be presenting our paper “CrowdPlay: Crowdsourcing Human Demonstrations for Offline Learning” at ICLR next month. Come say hi!