Getting sabotaged by a reviewer at IJCAI [D]
A researcher posted on Reddit's Machine Learning community claiming that a single peer reviewer at IJCAI submitted a bad-faith review full of false statements about their paper. The case spotlights a growing crisis in AI conference peer review, where record submission volumes are outpacing the supply of qualified reviewers.
A machine learning researcher took to Reddit's r/MachineLearning community to share a frustrating experience with the peer review process at IJCAI, the International Joint Conference on Artificial Intelligence. According to the post, sourced by Daily Dose of AI, the researcher received largely positive feedback from multiple reviewers, but one reviewer submitted a critique that the researcher describes as sabotage: making false claims about missing content that is plainly present in the paper and demanding additional experiments that would push the submission beyond the conference's page limit.
Why This Matters
This is not a one-off complaint from a sore loser. It is a symptom of a structural collapse in how top AI conferences manage peer review at scale. IJCAI alone attracts thousands of submissions annually, and as AI research budgets swell and researcher headcounts explode, the pool of willing, qualified reviewers is not growing at the same rate. A single malicious or negligent review can derail years of legitimate work, which is a real cost to the field, not just to one researcher's career. If premier venues like IJCAI, NeurIPS, and ICLR cannot guarantee baseline review quality, they risk becoming less reliable filters for scientific progress.
The Full Story
IJCAI, founded in 1969, is one of the oldest and most respected venues in artificial intelligence research. Researchers compete fiercely for acceptance slots, and a publication there carries genuine weight for career advancement, grant applications, and professional standing. The conference uses a double-blind peer review system, meaning neither authors nor reviewers know each other's identities, a structure designed to reduce bias and ensure fair evaluation.
The researcher's account describes receiving feedback from multiple reviewers, with all reviews except one coming in as positive or constructively critical. The outlier reviewer, however, allegedly claimed that certain experimental directions and methodological considerations were absent from the paper, when the author states those exact elements are clearly addressed in the submitted manuscript. The reviewer also reportedly asked for additional experiments and citations that would require violating IJCAI's own page limit rules, making compliance literally impossible under the submission guidelines.
What makes the situation particularly maddening for the researcher is the mismatch. Four reviewers read the paper carefully. One apparently did not. Under the conference's structure, that single voice still carries significant weight in the final accept-or-reject decision, which means one person's negligence or bad faith can override the consensus judgment of everyone else who engaged seriously with the work.
The researcher turned to the Machine Learning subreddit for practical advice, asking whether escalating the issue to the IJCAI Program Committee was appropriate and how to frame a rebuttal effectively. Responses from other researchers confirmed that this experience is not isolated, with multiple community members sharing similar stories of reviews that contradicted the actual content of their submissions.
The incident sits inside a larger story about submission volume growth overwhelming review capacity. Academic venues in 2026 are processing record numbers of papers, and conference organizers are increasingly recruiting reviewers who may lack the depth of expertise or time to evaluate every submission thoroughly. That creates conditions where superficial reviews are more likely to appear, and where authors have less recourse when they receive one. IJCAI and peer conferences have responded to community pressure by introducing author rebuttal periods, which allow researchers to correct factual errors in reviews before final decisions are made. But the rebuttal process is limited, and program chairs vary widely in how seriously they weigh author responses against reviewer scores.
Key Details
- IJCAI was established in 1969 and remains one of the most prominent AI research conferences globally, frequently mentioned alongside NeurIPS and ICML.
- The researcher received multiple reviews, with only 1 reviewer flagged as problematic.
- The problematic reviewer allegedly made false statements about content that the author says is clearly documented in the submitted paper.
- The reviewer demanded additional experiments that the author states would violate IJCAI's page limit policy.
- The original post was published on Reddit's r/MachineLearning community and sourced by Daily Dose of AI in 2026.
- NeurIPS, in response to similar complaints, has expanded its program committee size and added structured reviewer training requirements.
What's Next
The researcher's immediate recourse is the author rebuttal period, where they can directly address the false statements point by point with specific citations to the paper's content. If the program chair is attentive, a well-documented rebuttal that exposes factual inaccuracies in a review can shift the outcome. Watch for IJCAI to face increased community pressure to publish clearer reviewer accountability policies before its 2026 proceedings close.
How This Compares
The peer review problem is not unique to IJCAI. ICLR and NeurIPS have both drawn sustained criticism for inconsistent review quality, with community discussions surfacing annually on platforms like Reddit and academic Twitter. NeurIPS in particular has taken the most visible steps to address the issue, expanding its program committee and piloting reviewer training programs. ICLR's OpenReview platform, which makes reviews publicly visible after decisions are posted, adds a layer of transparency that IJCAI's process currently lacks, and that transparency alone creates mild social accountability for reviewers who might otherwise submit careless work.
What distinguishes this IJCAI case is the specific allegation of factually false statements, not just a disagreement about the quality of the work or the significance of the contribution. That is a harder problem to solve with training programs or larger committees. Some conferences have begun experimenting with AI-assisted review auditing tools that cross-check reviewer criticisms against the actual content of submitted papers, flagging cases where a reviewer claims something is absent when it clearly appears in the manuscript. These tools are still experimental as of 2026 and not deployed at scale, but the IJCAI incident makes a strong argument for accelerating that work.
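To make the idea concrete, here is a minimal sketch of how such a cross-check could work. This is not any conference's actual tooling: the phrase patterns, the 0.75 threshold, and the function names are illustrative assumptions, and the sketch presumes the manuscript has already been extracted to plain text.

```python
import re

# Hypothetical sketch of an AI-assisted review audit: extract "missing
# content" claims from a review and check whether the manuscript actually
# covers them. Patterns and threshold are illustrative, not a real tool.

def claimed_missing_topics(review_text: str) -> list[str]:
    """Pull out topics a reviewer claims are absent, via simple phrase
    patterns like "does not discuss X" or "no mention of X"."""
    patterns = [
        r"does not (?:discuss|address|include|report) ([\w\s-]+?)[.,;]",
        r"no (?:mention|discussion|analysis) of ([\w\s-]+?)[.,;]",
        r"(?:lacks|is missing) ([\w\s-]+?)[.,;]",
    ]
    topics: list[str] = []
    for pattern in patterns:
        topics += [m.strip().lower()
                   for m in re.findall(pattern, review_text, re.IGNORECASE)]
    return topics

def audit_review(paper_text: str, review_text: str) -> list[str]:
    """Return the reviewer's "missing content" claims whose key terms do
    in fact appear in the manuscript, i.e. claims worth a human second look."""
    paper = paper_text.lower()
    flagged = []
    for topic in claimed_missing_topics(review_text):
        # Keep only content words; require most of them to occur in the paper.
        words = [w for w in topic.split() if len(w) > 3]
        if words and sum(w in paper for w in words) / len(words) >= 0.75:
            flagged.append(topic)
    return flagged

if __name__ == "__main__":
    paper = "Section 5 reports an ablation study over encoder depth and width."
    review = "The paper does not include an ablation study, a major omission."
    print(audit_review(paper, review))  # -> ['an ablation study']
```

A production system would need far more robust claim extraction, likely a language model rather than regexes, but even this crude matching illustrates the core point: a reviewer's "the paper lacks X" claim is mechanically checkable against the manuscript, and mismatches can be flagged for a program chair's attention.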
The broader pattern points to a field that has scaled its research output far faster than it has scaled its quality-control infrastructure. That gap will not close on its own. Until conferences invest seriously in reviewer accountability, whether through structured post-review ratings, stronger program chair oversight, or technological auditing, researchers will keep sharing these stories on Reddit rather than resolving them through official channels.
FAQ
Q: What is double-blind peer review at AI conferences? A: Double-blind review means neither the paper's authors nor the reviewers know each other's identities during evaluation. The system is designed to reduce favoritism and bias. However, it also means reviewers face no public accountability for careless or inaccurate feedback, which is a core reason why researchers are calling for reforms to the system.
Q: Can an author dispute a bad peer review at IJCAI? A: Yes. IJCAI, like most major AI conferences, provides an author rebuttal period before final decisions are made. Authors can respond directly to reviewer comments, correct factual errors, and clarify misunderstandings. Whether that rebuttal changes the outcome depends heavily on the program chair's judgment and how seriously they weigh the author's corrections.
Q: Why is peer review quality getting worse at AI conferences? A: Submission volumes at top AI venues have grown dramatically, but the pool of experienced, available reviewers has not kept pace. Conferences must recruit reviewers who may lack deep expertise in specific subfields or who have limited time to read papers thoroughly. That mismatch between submission volume and reviewer capacity is the primary driver of declining review quality across IJCAI, NeurIPS, and ICLR.
The IJCAI peer review controversy is a useful stress test for the entire academic publishing infrastructure in AI, and the community's growing willingness to document and share these experiences publicly is creating real pressure on conference organizers to act.