[NeurIPS 2026] Will you be submitting your code alongside your submissions? [D]
NeurIPS 2026 has rolled out a formal code submission policy that strongly encourages, but does not require, researchers to submit their code alongside conference papers. The policy has sparked genuine debate in the machine learning community about whether the reproducibility benefits outweigh the risk of having unpublished ideas taken during review.
A Reddit thread in r/MachineLearning, started by user undesirable_12, has pulled back the curtain on a tension that quietly runs through every major AI conference submission season. The question is deceptively simple: should researchers submit their code alongside their NeurIPS 2026 papers? The answer, it turns out, depends on how much you trust the peer review system and how much you fear losing credit for your own ideas before your paper ever gets accepted.
Why This Matters
Code reproducibility is no longer a nice-to-have in machine learning research; it is the difference between science and storytelling. NeurIPS 2026, the fortieth edition of one of the field's two or three most prestigious venues, sets the tone for how hundreds of other conferences and journals treat this issue. When NeurIPS stops short of making code submission mandatory, that decision echoes across ICLR, ICML, and every lab that looks to those conferences for norms. The plagiarism concern is not paranoia, either: with thousands of submissions to process at NeurIPS 2026 alone, the chance that at least one reviewer acts in bad faith is effectively nonzero.
The Full Story
NeurIPS 2026 is running a three-city format this December, with events in Sydney from December 6 through 12, Atlanta from December 8 through 13, and Paris from December 9 through 13. Against that backdrop, the conference published a Code Submission Policy that tries to thread a needle between encouraging open science and not punishing researchers who have legitimate reasons to hold back their code.
The policy strongly encourages authors to submit code when a paper's main contributions depend on experimental results, and extends the same encouragement to sharing new datasets. Submissions must be anonymized and bundled into a single compressed zip file; the 100-megabyte size cap covers code and small datasets, while larger datasets can be linked through anonymous URLs to get around that limit.
Reviewers who receive code are explicitly required to keep the materials confidential and use them only for reviewing the paper in question. The policy also points authors toward the Papers with Code repository on GitHub for detailed technical specifications covering training code, evaluation code, and dependency management. For papers that get accepted, authors must de-anonymize everything when preparing camera-ready versions.
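The packaging mechanics described above (a single anonymized zip, with a 100-megabyte cap covering code and small datasets) are easy to get wrong at the last minute. The following Python sketch is not an official NeurIPS tool, just one way to bundle a submission directory and fail loudly if the archive exceeds the stated cap; the function and directory names are illustrative.

```python
import zipfile
from pathlib import Path

MAX_BYTES = 100 * 1024 * 1024  # stated NeurIPS 2026 cap for code + small datasets


def bundle_submission(src_dir: str, out_zip: str) -> int:
    """Zip src_dir into out_zip and return the archive size in bytes."""
    src = Path(src_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # Store paths relative to the source root so the archive does
                # not leak a local (possibly identifying) directory layout.
                zf.write(path, path.relative_to(src))
    size = Path(out_zip).stat().st_size
    if size > MAX_BYTES:
        raise ValueError(
            f"Archive is {size / 1e6:.1f} MB; per the policy, link large "
            "datasets through an anonymous URL instead of the zip."
        )
    return size
```

A script like this only checks size and path hygiene; a manual pass to scrub author names, git history, and identifying strings is still needed before upload, since no tool can reliably catch every de-anonymizing detail.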
The Reddit discussion that surfaced this debate is worth taking seriously. User undesirable_12 put the dilemma plainly: code submission boosts credibility, but the review window creates a gap where someone with access to your unpublished methods could use them before you receive any credit. That gap is real. Anonymity rules protect the author's identity from reviewers, but they do nothing to protect the code itself from a reviewer who decides to quietly incorporate a submitted technique into their own parallel work.
NeurIPS also made headlines earlier this year for a separate policy stumble. The initial version of the NeurIPS 2026 handbook contained a link to a US government sanctions tool that implied far broader participation restrictions than the conference actually intended. The organization acknowledged in an announcement that the error came from a miscommunication between the NeurIPS Foundation and their legal team, and they apologized, clarifying that their actual policy aligns with ACM and IEEE standards. That episode has made researchers understandably alert to any NeurIPS policy that feels like it adds friction or risk.
On the more positive side, NeurIPS announced in April 2026 a partnership with Google to give authors access to the Google Paper Assistant Tool, known as PAT. The tool provides automated feedback on manuscripts before the final deadline. NeurIPS was explicit that PAT feedback stays private to the authors and plays no role in the review process, which matters for keeping the playing field level. Program Chairs Marc Deisenroth, Finale Doshi-Velez, Nika Haghtalab, David Rolnick, and Jenna Wiens are leading that collaboration alongside Google Research and Google Cloud teams.
Key Details
- NeurIPS 2026 runs across three cities: Sydney (December 6-12), Atlanta (December 8-13), and Paris (December 9-13).
- Code submission is strongly encouraged but not required under the official Code Submission Policy.
- Zip file size limit for code and small datasets is 100 megabytes.
- The Google Paper Assistant Tool partnership was announced in April 2026.
- NeurIPS 2026 is the fortieth annual edition of the conference.
- The handbook error referencing a US government sanctions tool was acknowledged and corrected by the NeurIPS Foundation.
- The Evaluations and Datasets track, updated April 7, 2026, applies the same general policies as the main track.
What's Next
Once the final NeurIPS 2026 submission deadline passes, we will learn how many researchers actually opted into code submission under the new policy, and that data will likely shape whether future editions make code submission mandatory. Watch for post-conference reproducibility audits, which have become increasingly common at top venues: they will reveal whether the "strongly encouraged" framing actually moved the needle. If acceptance rates correlate noticeably with code submission rates, the community will have a clearer signal about whether voluntary submission is creating a two-tier system.
How This Compares
ICML 2024 and 2025 both operated under similar "encouraged but not required" frameworks for code submission, and the community response was roughly the same each time: researchers who felt secure submitted code, while those in competitive subfields often held back. NeurIPS 2026 does not structurally depart from those precedents. What makes this moment different is the three-city format and the sheer scale, which means more reviewers, more opportunities for bad actors, and more reason for researchers to feel exposed.
Compare this to the approach taken by the Journal of Machine Learning Research, which has a standing policy of requiring code for reproducibility supplements on accepted papers rather than during review. That sequencing solves the plagiarism problem almost entirely because the code only becomes public after priority is established. NeurIPS could learn from that model, but conference culture moves slowly.
The Google PAT partnership is genuinely worth watching as a separate thread. STOC and ICML both piloted similar AI-assisted manuscript tools before NeurIPS adopted one. If PAT improves submission quality in measurable ways, it could become a standard fixture at top venues, which raises its own questions about equity between researchers without strong English writing skills and those who were already polished writers. The gap between well-resourced labs and independent researchers is already wide, and AI writing assistants could widen or narrow it depending on who gets access.
FAQ
Q: Does NeurIPS 2026 require code submission with papers? A: No, NeurIPS 2026 strongly encourages code submission for papers whose main contributions depend on experimental results, but it does not mandate it. Authors who choose to submit code must anonymize it during the review phase and de-anonymize it for camera-ready versions of accepted papers.
Q: How do I protect my code from plagiarism during peer review? A: The NeurIPS policy requires reviewers to keep submitted code strictly confidential and use it only for review purposes. In practice, many researchers choose not to submit code precisely because that rule is difficult to enforce, and the conference has no technical mechanism to prevent a reviewer from reusing ideas they encounter during review.
Q: What is the Google Paper Assistant Tool for NeurIPS 2026? A: The Google Paper Assistant Tool, or PAT, is an AI-powered system that gives NeurIPS 2026 authors automated feedback on their manuscripts before the final submission deadline. The tool was announced in April 2026 through a partnership between NeurIPS and Google, and feedback from PAT is kept private to the authors and does not factor into the review process.
The NeurIPS 2026 code submission debate is ultimately a proxy for a larger question the machine learning field has not fully answered: can open science norms coexist with competitive research incentives without leaving individual researchers exposed? The conference's "strongly encouraged" stance keeps the conversation alive without resolving it, which means this debate will return in full force next submission cycle.

![[NeurIPS 2026] Will you be submitting your code alongside your submissions? [D]](https://images.pexels.com/photos/7988754/pexels-photo-7988754.jpeg?auto=compress&cs=tinysrgb&fit=crop&h=630&w=1200)
