FASE : A Fairness-Aware Spatiotemporal Event Graph Framework for Predictive Policing
Researchers have published a new AI framework called FASE that attempts to make predictive policing fairer by embedding fairness constraints directly into crime prediction and patrol allocation systems. The paper reveals that even a well-designed fairness layer cannot fully eliminate bias that re-enters the system through its own training data.
Pronob Kumar Barman, Pronoy Kumar Barman, Plaban Kumar Barman, and Rohan Mandar Salvi published the FASE framework on April 19, 2026, via arXiv (paper 2604.18644), laying out a technical approach to one of the most contested problems in applied machine learning. The work models Baltimore's crime environment using nearly 140,000 historical incidents and runs simulated deployment cycles to measure whether fairness constraints actually hold up over time. The short answer: they mostly do, but not entirely.
Why This Matters
Predictive policing is not a theoretical concern. Major U.S. police departments have deployed these systems for years, and a growing body of evidence shows they concentrate enforcement activity in communities of color in ways that compound over time. FASE's finding that a 3.5 percentage point detection rate gap between minority and non-minority areas persists even after allocation-level fairness constraints are applied should stop any department mid-deployment. The paper does not solve the feedback loop problem, but it precisely measures it, which is the first honest accounting researchers have offered at this level of technical specificity.
The Full Story
Predictive policing works by ingesting historical crime reports and producing forecasts about where crime is likely to occur next. Police departments then send patrols to those predicted hot spots. The problem is structural: heavier patrol presence in an area generates more arrests in that area, those arrests become new training data, and the algorithm doubles down on the same neighborhoods in the next cycle. Any racial skew in the original data gets amplified with each iteration, even if nobody programmed racial bias into the system explicitly.
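The amplification dynamic described above can be made concrete with a toy simulation. This is a hypothetical two-zone illustration of the feedback loop, not the FASE model or the paper's code: both zones have identical underlying crime rates, the zone with more recorded incidents is designated the hot spot each cycle, detections scale with patrol presence, and detections feed back into the data. All rates and allocation shares are invented for illustration.

```python
def simulate_hotspot_loop(cycles=6):
    """Toy runaway-feedback illustration (not the FASE model).

    Two zones with the same true crime rate. Each cycle, the zone with
    more recorded incidents is treated as the hot spot and receives most
    of the patrol budget; detections are proportional to patrol presence
    and are appended to the recorded data the next cycle trains on.
    """
    true_rate = 100.0            # identical underlying crime in both zones
    recorded = [80.0, 120.0]     # historically skewed training data
    gaps = []
    for _ in range(cycles):
        hot = 0 if recorded[0] > recorded[1] else 1
        patrol = [0.2, 0.2]
        patrol[hot] = 0.8        # hot spot gets most patrols
        for z in (0, 1):
            # detections scale with patrol presence, not with true crime
            recorded[z] += true_rate * patrol[z]
        gaps.append(recorded[1] - recorded[0])
    return gaps
```

Even though both zones generate the same amount of crime, the recorded-incident gap widens every cycle, because the over-patrolled zone keeps producing more of the data that justifies patrolling it.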
The FASE team built their framework specifically to interrupt this cycle. They chose Baltimore as their test environment and assembled 139,982 Part 1 crime incidents spanning 2017 to 2019, working at hourly resolution. They divided the city into 25 ZIP Code Tabulation Areas and modeled the relationships between those zones as a graph, allowing the system to capture how crime patterns in one neighborhood relate to patterns in adjacent ones.
The prediction engine pairs a spatiotemporal graph neural network with a multivariate Hawkes process. The Hawkes process, a statistical model originally developed for earthquake aftershock sequences, suits crime data because it captures "self-exciting" dynamics: one incident raises the short-term probability of another nearby incident. Because crime counts in any given ZIP code in any given hour are often zero and occasionally very large, the team modeled the outputs using a zero-inflated negative binomial distribution, which handles that kind of sparse, overdispersed data. The model achieved a validation loss of 0.4800 and a test loss of 0.4857.
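A zero-inflated negative binomial mixes a point mass at zero with a standard negative binomial, which is why it fits hourly crime counts that are mostly zero but occasionally spike. The sketch below shows the log-probability mass function in the mean/dispersion parametrization; the function and its parameter names are our own illustration, not code from the paper.

```python
import math

def zinb_log_pmf(k, mu, r, pi):
    """Log-probability of count k under a zero-inflated negative binomial.

    mu: NB mean; r: dispersion (smaller r = more overdispersion);
    pi: probability of a structural zero. Illustrative sketch only.
    """
    # log NB(k | mu, r) in the mean/dispersion parametrization
    log_nb = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
              + r * math.log(r / (r + mu)) + k * math.log(mu / (r + mu)))
    if k == 0:
        # a zero can come from the inflation component OR from the NB itself
        return math.log(pi + (1.0 - pi) * math.exp(log_nb))
    return math.log(1.0 - pi) + log_nb
```

With, say, mu=2, r=1, and pi=0.3, the probability of a zero count exceeds pi alone, because the negative binomial also places mass at zero.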
Patrol allocation is handled separately as a linear optimization problem. The system tries to maximize risk-weighted coverage while staying within a Demographic Impact Ratio constraint bounded by a deviation of 0.05. In plain terms, it puts patrols where the predicted risk is highest but prevents the algorithm from loading too many resources into demographically distinct areas relative to others. Across six simulated deployment cycles, the fairness ratio stayed between 0.9928 and 1.0262, and coverage ranged from 0.876 to 0.936. Those numbers look good on paper.
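An allocation problem of this shape can be written as a small linear program. The sketch below is in the spirit of FASE's optimizer but is not the paper's formulation: the risk scores, minority-zone labels, population share, and per-zone cap are all invented, and the demographic constraint is simplified to "the minority-area share of patrols must stay within 0.05 of the minority share of population."

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: five zones, predicted risk per zone, and a
# minority-area indicator. All numbers are illustrative.
risk = np.array([0.9, 0.7, 0.4, 0.6, 0.3])
minority = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
pop_share = 0.4    # minority share of population
delta = 0.05       # allowed deviation, mirroring the paper's 0.05 bound

# Maximize risk-weighted coverage  <=>  minimize -risk @ x,
# subject to: sum(x) == 1 (total patrol budget),
#             pop_share - delta <= minority @ x <= pop_share + delta,
#             0 <= x_z <= 0.5 (per-zone cap, assumed).
A_ub = np.vstack([minority, -minority])
b_ub = np.array([pop_share + delta, -(pop_share - delta)])
res = linprog(-risk, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(5)], b_eq=[1.0], bounds=[(0.0, 0.5)] * 5)
alloc = res.x
```

Because the highest-risk zones here happen to be minority areas, the optimizer pushes their allocation up against the fairness bound rather than past it, which is exactly the behavior an allocation-level constraint is designed to produce.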
But here is the uncomfortable finding buried in the results. Even with all those fairness constraints operating correctly, a detection rate gap of approximately 3.5 percentage points persisted between minority and non-minority areas throughout the simulation. The researchers are explicit about what this means: fixing allocation fairness alone does not fix the feedback loop in the training data. The bias does not live only in where you send patrols. It lives in the data that comes back once those patrols are in place. Addressing that will require fairness interventions at every stage of the pipeline, not just at the resource-allocation step.
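The detection rate gap is simple to state as code. The helper below is a hypothetical illustration of the metric behind the reported 3.5-point figure, not the paper's evaluation code: it aggregates detected and actual incidents by group and returns the signed percentage-point difference.

```python
def detection_rate_gap(detected, actual, minority_mask):
    """Signed percentage-point gap: minority minus non-minority detection rate.

    detected / actual: per-zone incident counts;
    minority_mask: True for minority-area zones.
    Illustrative evaluation helper, not code from the FASE paper.
    """
    def rate(mask):
        d = sum(x for x, m in zip(detected, mask) if m)
        a = sum(x for x, m in zip(actual, mask) if m)
        return d / a
    majority_mask = [not m for m in minority_mask]
    return 100.0 * (rate(minority_mask) - rate(majority_mask))
```

A positive gap means crimes in minority areas are detected at a higher rate, the signature of over-patrolling feeding back into the data.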
Key Details
- Paper submitted to arXiv on April 19, 2026, with identifier 2604.18644.
- Dataset covers 139,982 Part 1 crime incidents in Baltimore from 2017 to 2019.
- Baltimore modeled as a graph of 25 ZIP Code Tabulation Areas at hourly resolution.
- Prediction model achieved a test loss of 0.4857.
- Demographic Impact Ratio constraint bounded by a deviation of 0.05.
- Six simulated deployment cycles run to test fairness stability over time.
- Fairness ratio held between 0.9928 and 1.0262 across all six cycles.
- Coverage ranged from 0.876 to 0.936 across deployment cycles.
- A 3.5 percentage point detection rate gap between minority and non-minority areas persisted despite constraints.
What's Next
The immediate next step for this line of research is developing fairness interventions that operate on the retraining data itself, not just on the allocation layer. The FASE team's own conclusion points in that direction, and follow-on work will need to show whether augmenting or reweighting training data during each feedback cycle can close that 3.5 percentage point gap. Any department considering deployment of systems like this should require vendors to run closed-loop simulations comparable to FASE's six-cycle test before signing contracts.
How This Compares
Ahmed Almasoud at Prince Sultan University and Jamiu Idowu at Sahel AI published "Algorithmic Fairness in Predictive Policing" in the journal AI and Ethics in September 2024, documenting how deployed systems exhibit bias across age, race, and ethnicity dimensions. That paper diagnosed the problem. FASE attempts to engineer around it. The difference matters because documentation without intervention leaves departments with no actionable path forward, while FASE at least gives researchers a testable architecture to critique and improve.
Researchers at Rutgers Camden, including Ava Downey, Sheikh Rabiul Islam, and Md Kamruzzman Sarker, published their own fairness-aware predictive policing approach through the university's Computer Science department, emphasizing reliability alongside fairness. Where the Rutgers team focused on system trustworthiness, FASE focuses on what happens across repeated deployment cycles, which is a more operationally realistic framing. Most real deployments run for years, not a single prediction round, and the six-cycle simulation methodology is a meaningful contribution to how researchers should evaluate these systems.
What distinguishes FASE from both of those efforts is its honest accounting of failure. Most papers in this space lead with what the system gets right. FASE explicitly highlights where its own approach falls short, specifically that allocation constraints cannot reach the data pipeline problem upstream. That kind of intellectual honesty is uncommon in applied ML research, and it is exactly what practitioners need when making high-stakes deployment decisions. For anyone tracking AI tools in the public safety space, FASE sets a new benchmark for evaluation methodology even if it does not fully solve the problem it targets.
FAQ
Q: What is predictive policing and why is it controversial? A: Predictive policing uses historical crime data and machine learning models to forecast where crimes are likely to occur, guiding where police send patrols. It is controversial because historical crime data reflects past enforcement patterns, which often concentrated in communities of color, so algorithms trained on that data tend to direct police back to the same communities in a self-reinforcing cycle.
Q: What does the FASE framework actually do differently? A: FASE combines a spatiotemporal graph neural network with a Hawkes process for crime prediction, then feeds those predictions into a fairness-constrained optimization model for patrol allocation. It also runs a closed-loop simulator to test whether fairness holds across multiple deployment cycles, which most prior approaches do not.
Q: Did FASE successfully eliminate racial bias in policing predictions? A: Not entirely. Across six simulated deployment cycles in Baltimore, the framework maintained its fairness ratio within tight bounds, but a 3.5 percentage point detection rate gap between minority and non-minority areas remained. The researchers concluded that fairness constraints applied only at the allocation stage cannot eliminate bias that enters through the retraining data.
The FASE paper is unlikely to be the last word on this problem, but it moves the conversation from diagnosis to engineering in a way that earlier work did not. Watch for follow-on research addressing the retraining data pipeline directly, as that is where the real solution will have to live.




