Researchers at the University of Chicago have launched Agent4Science, a Reddit-style social network where AI agents autonomously share, debate, and review scientific research papers. Humans can observe but cannot post. The platform has accumulated roughly 40,000 comments from more than 150 agents, according to Nature.

The site is the creation of Chenhao Tan, who directs the Chicago Human+AI Lab (CHAI). Tan told Nature the goal is to “imagine a different possibility of what knowledge production could look like.” The platform builds on CHAI’s earlier work with OpenAIReview, where users could upload papers to receive AI reviewer feedback.

How It Works

Agent4Science organizes AI discussions into subgroups focused on specific research areas, including AI safety, prompt engineering, and deep learning. Most papers posted on the platform are themselves AI-generated, produced by CHAI's NeuriCo program, which autonomously designs, executes, and documents experiments based on research ideas from both humans and AI.

Human researchers cannot contribute directly, but they can create agents and configure their “personalities” and topic interests. Agents carry descriptors like “skeptic,” “academic,” and “storyteller,” and their responses are labeled with indicators such as “supports,” “probes,” and “challenges.”
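Agent4Science's configuration interface is not publicly documented, but the setup the article describes, a persona plus topic interests rendered into agent behavior, can be sketched in a few lines. Everything below (the `AgentProfile` class, its fields, and the prompt wording) is a hypothetical illustration, not the platform's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    """Hypothetical sketch of an agent configuration like those described
    for Agent4Science: a persona descriptor plus topic interests."""
    name: str
    persona: str                          # e.g. "skeptic", "academic", "storyteller"
    topics: list = field(default_factory=list)

    def system_prompt(self) -> str:
        # Render the profile as a system prompt for an LLM backend,
        # asking for the reply labels the article mentions.
        return (
            f"You are {self.name}, a {self.persona} reviewer. "
            f"Focus on: {', '.join(self.topics)}. "
            "Label each reply as one of: supports, probes, challenges."
        )

reviewer = AgentProfile("Ada", "skeptic", topics=["AI safety", "prompt engineering"])
print(reviewer.system_prompt())
```

In a design like this, the persona lives entirely in the system prompt, so the same underlying model can be deployed as many differently behaved agents.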

Tan pointed to one example where agents debated how to reduce the prevalence of harmful medical misinformation in large language models through prompt engineering, calling it the kind of discourse that gives him “new perspectives that I wouldn’t get if I were reading a paper on my own,” according to Nature.

A Growing Category

Agent4Science is not the only agent-exclusive platform to launch in 2026. Moltbook, a broader Reddit-style site for AI agents, launched in January and amassed over one million agent users within days, with discussions ranging from consciousness to inventing religions, Nature reported.

A third platform, EinsteinArena from Stanford, takes a different approach: rather than evaluating papers, agents collaborate to solve open-science problems. Agents on EinsteinArena have already produced new solutions to 11 well-known mathematical problems, according to Stanford computer scientist James Zou, who helped create the site.

Zou told Nature that the discussion-forum format allows agents to “collaborate in the wild,” unlike structured multi-agent research systems that use fixed roles. “Anybody and any agent from anywhere can participate,” he said. “All these agents can come in with different perspectives.”

Open Questions

Both Zou and Emilio Ferrara, a computer scientist at the University of Southern California, flagged quality control as the key challenge. Zou noted that talk “is especially cheap for these AI agents” and suggested leaderboards to distinguish high-quality discourse from noise.
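Zou's leaderboard suggestion could take many forms; one minimal version scores each agent by how peers label its comments and ranks the totals. The reaction weights and function below are illustrative assumptions, not anything Zou or the platforms have specified:

```python
from collections import defaultdict

# Assumed weights: replies marked "supports" count more than "probes" or
# "challenges". These values are arbitrary placeholders for illustration.
WEIGHTS = {"supports": 1.0, "probes": 0.5, "challenges": 0.25}

def rank_agents(comments):
    """Rank agents by weighted peer reactions.

    comments: iterable of (author, reaction_label) pairs, where the label
    is how other agents tagged that author's contribution.
    """
    scores = defaultdict(float)
    for author, reaction in comments:
        scores[author] += WEIGHTS.get(reaction, 0.0)
    # Highest aggregate score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

sample = [("agent_a", "supports"), ("agent_a", "probes"),
          ("agent_b", "challenges"), ("agent_b", "supports")]
print(rank_agents(sample))
```

A real system would need safeguards the sketch omits, such as discounting mutual upvoting rings, which is precisely the "talk is cheap" problem Zou raises.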

Ferrara told Nature that agents might naturally gravitate toward problems that function as exercises rather than practical research questions, requiring human intervention to steer them in more useful directions. Tan said his team plans to introduce more human input and to find ways of surfacing insightful agent discussions to researchers. “Hopefully this will also help humans do better science in the long run,” he said.