TMLR reviews stalled [D]
A researcher who submitted a paper to Transactions on Machine Learning Research in February 2025 is still waiting on a full set of reviews six weeks after the manuscript entered "under review" status, despite TMLR's published two-week review deadline.
A Reddit user posting under the handle Pure-Ad9079 on the r/MachineLearning subreddit flagged a troubling delay at Transactions on Machine Learning Research, better known as TMLR. The author submitted a regular-length paper in February 2025, watched it move into "under review" status six weeks before posting, and has since received exactly one review, despite TMLR's website explicitly promising that reviewers will complete their assessments within two weeks. The post asks a question that far more researchers are probably sitting on: is it acceptable to contact the Action Editor, or does that move put a target on your paper?
Why This Matters
TMLR was built specifically to fix the broken timelines that plague traditional academic publishing, and if it cannot hold its own two-week reviewer deadline on a 12-page paper, that is not a minor administrative hiccup. It is a signal that even well-designed continuous publication systems crack under the weight of machine learning's submission explosion. The machine learning research community produces thousands of papers annually, and researchers often have job offers, grant deadlines, or conference follow-ups riding on publication status. Six weeks in review against a promised two-week deadline is triple the stated window, and that deserves more than a shrug.
The Full Story
TMLR operates differently from traditional journals and most AI conferences. Instead of collecting papers into quarterly issues or tying submissions to fixed deadlines, it runs as a continuous publication venue where papers are reviewed and published on a rolling basis. The theory is sound: eliminate the artificial bottleneck of issue dates, recruit a deep bench of expert reviewers, and give authors faster, more predictable feedback. The journal even publishes an expert reviewer list that includes researchers from major organizations like Meta, covering specializations from federated learning to privacy-preserving machine learning and decentralized optimization.
The paper at the center of this particular complaint falls within TMLR's "regular submission" category, meaning it clocks in at 12 pages or fewer. For that category, TMLR's editorial guidelines set a two-week window for reviewers to complete their reports. By the time Pure-Ad9079 posted to Reddit, the paper had been sitting in review for roughly six weeks with a single completed review to show for it. The math here is straightforward and unflattering for the journal.
What makes this situation interesting beyond the individual case is the communication dilemma it surfaces. Academic publishing has long carried an unspoken rule that authors should wait patiently and not bother editors, on the theory that appearing too eager might bias the process against them. That cultural norm made some sense when review timelines were vague or informal. It makes considerably less sense when a journal publishes an explicit, specific deadline on its own website and then misses it by four weeks and counting. At that point, an author asking for a status update is not being impatient. They are asking a journal to explain why it has not met its own written commitment.
The hesitation Pure-Ad9079 feels is understandable but misplaced. Standard professional practice across most publishing venues, including journals far more traditional than TMLR, permits and often encourages polite follow-up emails when stated review timelines have lapsed. A concise note to the assigned Action Editor asking for a status update, framed factually rather than as a complaint, is well within professional norms. Action Editors are human administrators managing multiple submissions, and a gentle nudge frequently moves stuck cases forward faster than continued silence.
The broader cause of delays like this one is not unique to TMLR. Reviewer burnout, growing submission volumes in hot research areas, and the simple reality that busy researchers sometimes agree to review papers and then deprioritize the work are systemic problems across academic publishing. TMLR's continuous model was designed to reduce some of these pressures, but it does not eliminate the fundamental constraint of reviewer availability. If demand for TMLR publication is growing faster than the reviewer pool can expand, delays like this one will become more common rather than less.
Key Details
- Pure-Ad9079 submitted the paper to TMLR in February 2025, roughly eight weeks before posting the Reddit complaint.
- The paper has been in "under review" status for approximately six weeks as of the post date.
- TMLR's published policy sets a two-week review deadline for regular submissions of 12 pages or fewer.
- Only one review has been received, out of what would typically be multiple assigned reviewer reports.
- TMLR's expert reviewer list includes researchers from organizations including Meta, covering fields such as federated learning and privacy-preserving machine learning.
- The six-week review period so far is triple TMLR's stated two-week standard, a 200 percent overrun.
What's Next
Pure-Ad9079 should contact the assigned Action Editor with a short, factual status inquiry referencing the submission date and the journal's published two-week review standard, as this is the most direct path to getting the stalled review process moving again. TMLR's editorial committee contact information is publicly listed on its website, providing a backup escalation path if the Action Editor does not respond within a week or two. If delays like this one become a pattern rather than an exception, expect the machine learning research community to start publicly reassessing TMLR's value proposition relative to competing venues.
How This Compares
Compare this situation to the peer review delays that have plagued NeurIPS and ICLR over the past three years. Both conferences have faced public criticism for reviewer quality and timeliness as submission volumes scaled dramatically. NeurIPS 2023 received over 12,000 submissions, and widespread reviewer complaints about workload became a public conversation. TMLR was positioned as a corrective to exactly that kind of conference chaos, which makes even a single paper's six-week delay more damaging symbolically than it might appear numerically.
arXiv, the preprint server where most machine learning researchers post work before or alongside formal submission, sidesteps this problem entirely by not doing peer review at all. The tradeoff is that arXiv papers carry no editorial imprimatur. TMLR was supposed to offer the best of both worlds, faster turnaround than traditional journals with meaningful peer review attached. Stories like this one suggest that promise is harder to keep than it looked in design documents.
The Journal of Machine Learning Research, JMLR, operates on a similarly continuous model and has maintained a generally strong reputation for review quality over many years, though it is not immune to delays either. The difference is that JMLR has been operating long enough to have normalized reviewer expectations on both sides. TMLR is still relatively young as a venue, and its ability to enforce its own stated timelines will be a defining factor in whether researchers continue routing their best work there or treat it as a secondary option.
FAQ
Q: What is TMLR and how does it work? A: Transactions on Machine Learning Research is a peer-reviewed journal that publishes machine learning papers on a continuous rolling basis rather than in fixed issues or around conference deadlines. Authors submit papers, which are assigned to Action Editors who recruit reviewers. For regular papers of 12 pages or fewer, the journal's stated policy sets a two-week window for reviewers to complete their assessments.
Q: Is it acceptable to email an editor about a delayed review? A: Yes, reaching out to an Action Editor with a polite, factual status inquiry is professionally appropriate once a journal's stated review deadline has passed by a substantial margin. Framing the message as a straightforward request for a status update rather than a complaint avoids creating friction and often prompts editors to follow up with tardy reviewers quickly.
Q: Why do peer reviews take longer than journals say they will? A: Reviewer availability is the main constraint. Researchers who agree to review papers are juggling teaching, their own research, and other commitments, and reviews frequently slip in priority. As submission volumes in machine learning have grown sharply over the past five years, finding qualified reviewers willing to complete timely reports has become harder across nearly every major venue in the field.
Review delays are a frustrating but solvable problem, and TMLR has the editorial infrastructure to address them if the organization treats published deadlines as firm commitments rather than aspirational targets.
