Why the Best Risk Adjustment Tools Now Delete Codes, Not Just Add Them

The Deletion Capability That Most Systems Lack
Ask any risk adjustment vendor whether their system can identify codes to add, and the answer is always yes. Ask whether it can identify codes to remove, and the conversation gets quieter. Most tools on the market were architected for one-directional coding: find diagnoses, recommend codes, submit to CMS. The removal side of the equation was never part of the design because removing codes didn’t generate revenue.
That design choice is now a compliance gap. CMS has described its supplemental data process as a two-way street. The OIG’s February 2026 Industry-wide Compliance Program Guidance explicitly identified add-only chart reviews as a high-risk practice. The DOJ collected $117.7 million from Aetna and $556 million from Kaiser over programs that operated in one direction. The regulatory message is unambiguous: tools that only add codes enable the exact pattern that enforcement actions target.
Plans still using tools without deletion capability are running one-way programs with two-way obligations. The technology gap translates directly into regulatory exposure.
Why Deletion Is Technically Harder Than Addition
Identifying a code to add is comparatively straightforward. The system scans a note, finds a diagnosis mention, checks it against HCC mappings, and recommends submission. The evidence is right there in the chart.
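The add-side workflow described above can be sketched in a few lines. Everything here is illustrative: the term dictionary, the ICD-10-to-HCC lookup, and the exact-match scan are hypothetical stand-ins (production systems use NLP over the chart and the official CMS model mappings).

```python
# Hypothetical ICD-10 -> HCC lookup; real mappings come from the CMS-HCC model.
HCC_MAP = {
    "E11.9": "HCC: Diabetes without Complication",
    "I50.9": "HCC: Congestive Heart Failure",
}

# Hypothetical term -> ICD-10 dictionary; a real system would use clinical NLP,
# not exact string matching.
TERM_TO_ICD = {
    "type 2 diabetes": "E11.9",
    "congestive heart failure": "I50.9",
}

def find_add_candidates(note_text, previously_submitted):
    """Scan a note for diagnosis mentions that map to an HCC
    and were not already submitted."""
    text = note_text.lower()
    candidates = []
    for term, icd in TERM_TO_ICD.items():
        if term in text and icd not in previously_submitted:
            candidates.append({"icd": icd, "hcc": HCC_MAP[icd], "evidence": term})
    return candidates

note = "Assessment: type 2 diabetes, on metformin. Plan: recheck A1c in 3 months."
print(find_add_candidates(note, previously_submitted={"I50.9"}))
```

The essential point is that every input the add side needs, the diagnosis mention, is present in the chart being reviewed.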
Identifying a code to remove requires a different kind of analysis. The system needs to evaluate previously submitted diagnoses against current documentation. It needs to determine whether a condition that was coded in a prior year still has active clinical support. It needs to flag situations where a history-of condition was coded as active, where a single-occurrence diagnosis (like a stroke or MI) was carried forward without evidence of ongoing management, or where MEAT criteria that were marginally satisfied in the original review wouldn’t hold up under audit scrutiny.
This is harder because it requires the AI to assess absence rather than presence. It’s not looking for evidence that exists. It’s evaluating whether evidence that should exist is actually there. That requires more sophisticated clinical reasoning and a deeper understanding of documentation standards. Many systems simply weren’t built for this kind of analysis.
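The deletion-side checks enumerated above can be sketched as a rule set. This is a simplified illustration, not an actual audit standard: the field names, the single-occurrence code list, and the rules themselves are hypothetical placeholders for the richer clinical reasoning the text describes.

```python
# Hypothetical list of single-occurrence diagnoses (e.g., acute stroke) that
# should not be carried forward without evidence of ongoing management.
SINGLE_OCCURRENCE = {"I63.9"}

def flag_for_deletion(prior_code, current_doc):
    """Return the reasons a previously submitted code lacks current support.
    current_doc is a dict summarizing this year's documentation for the code."""
    reasons = []
    # 1. The diagnosis never appears in current-year documentation at all.
    if not current_doc.get("mentioned"):
        reasons.append("no mention in current documentation")
        return reasons
    # 2. A 'history of' condition was coded as active.
    if current_doc.get("history_of_only"):
        reasons.append("documented only as 'history of', coded as active")
    # 3. Single-occurrence diagnosis carried forward without ongoing management.
    if prior_code in SINGLE_OCCURRENCE and not current_doc.get("ongoing_management"):
        reasons.append("single-occurrence code carried forward without management")
    # 4. MEAT (Monitor, Evaluate, Assess, Treat): no element satisfied.
    if not current_doc.get("meat_elements"):
        reasons.append("no MEAT element satisfied")
    return reasons

doc = {"mentioned": True, "history_of_only": True, "meat_elements": []}
print(flag_for_deletion("I63.9", doc))
```

Note the asymmetry: the add-side check asks whether evidence exists; every rule here asks whether evidence that should exist is missing.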
What Two-Way Tools Look Like in Practice
A system with genuine two-way capability treats every chart review as both an opportunity assessment and a compliance audit. When a coder opens a chart, the system presents two parallel outputs. The first is the add set: diagnoses with strong MEAT evidence that weren’t previously submitted. The second is the delete set: previously submitted codes where current documentation doesn’t adequately support the diagnosis.
Both sets come with the same evidence structure. Each recommendation maps to specific clinical language in the note, identifies which MEAT elements are satisfied or missing, and provides explainable reasoning the coder can validate. The coder reviews both sets, validates both, and the submission reflects both directions. The net result is a coding profile that accurately reflects current clinical reality rather than drifting upward on cumulative additions.
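One way to picture the parallel outputs is as a shared schema for both directions. The field names below are illustrative, not a standard interchange format; the point is that add and delete recommendations carry the identical evidence structure the coder validates.

```python
# Hypothetical review-output schema: field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    icd_code: str
    direction: str       # "add" or "delete"
    evidence_text: str   # clinical language the recommendation maps to
    meat_satisfied: list # MEAT elements found in the documentation
    meat_missing: list   # MEAT elements absent
    reasoning: str       # explainable rationale for the coder to validate

@dataclass
class ChartReview:
    chart_id: str
    add_set: list = field(default_factory=list)
    delete_set: list = field(default_factory=list)

review = ChartReview(chart_id="chart-001")
review.add_set.append(Recommendation(
    icd_code="I50.9", direction="add",
    evidence_text="CHF stable on furosemide; will titrate dose",
    meat_satisfied=["Treat", "Monitor"], meat_missing=[],
    reasoning="Active management documented in the plan section"))
review.delete_set.append(Recommendation(
    icd_code="I63.9", direction="delete",
    evidence_text="history of stroke 2019, no residual deficits",
    meat_satisfied=[], meat_missing=["Monitor", "Evaluate", "Assess", "Treat"],
    reasoning="Documented as history only; no ongoing management"))
```

Because both sets share one structure, the coder workflow, and the downstream QA process, can treat them symmetrically.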
Quality assurance validates both directions as well. Deletion accuracy matters as much as addition accuracy. A code incorrectly flagged for removal is a missed revenue opportunity. A code incorrectly retained is audit liability. The QA process needs to be calibrated for both error types.
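A minimal sketch of direction-specific QA calibration, with invented tallies: track how often coders confirm versus overturn system recommendations separately for adds and deletes, since the two error types carry different costs.

```python
# Illustrative QA metric: the monthly counts below are invented for the example.
def qa_precision(validated, overturned):
    """Share of system recommendations in one direction that coders confirmed."""
    total = validated + overturned
    return validated / total if total else 0.0

# Hypothetical monthly tallies per direction.
add_precision = qa_precision(validated=180, overturned=20)     # 0.90
delete_precision = qa_precision(validated=45, overturned=15)   # 0.75

# An overturned deletion is a missed revenue opportunity; an overturned
# (i.e., wrongly retained) code is audit liability, so both need thresholds.
print(f"add precision: {add_precision:.2f}, delete precision: {delete_precision:.2f}")
```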
The Selection Filter
Plans evaluating technology should apply a simple filter: can this system identify, with evidence and reasoning, codes that should be removed from our submissions? Any risk adjustment tool that can’t answer yes to this question was built for a one-way world that regulators have explicitly moved past. Two-way capability isn’t an advanced feature. It’s the baseline for compliance-ready technology in 2026, and plans operating without it are carrying exposure that grows with every review cycle.