Drop the Presumption of Correctness for Flawed IRS Algorithms

Tax Notes

In this article, Robert Kovacev examines the Internal Revenue Service's (IRS) use of algorithms and artificial intelligence (AI), arguing that automation bias must be reduced to protect taxpayers and tax administration. The Inflation Reduction Act's $45.6 billion in dedicated IRS enforcement funding will likely accelerate the agency's investments in AI and data analytics. The author notes that as the IRS increasingly relies on automated processes, there is a growing danger that humans (whether IRS agents or judges) will defer to the outputs of those algorithms, even when they are explicitly told not to rely on them. AI experts refer to this phenomenon as "automation bias" or "overtrust" in the output of an automated system. Overtrust in IRS enforcement algorithms poses a serious potential harm to tax administration as well as to individual taxpayers. Kovacev describes a recent example involving one such algorithm, the reasonable cause assistant (RCA), to illustrate the dangers of blind deference to IRS algorithms. Increased oversight of the IRS's use of AI and data analytics in making enforcement decisions is imperative. The author concludes that the best remedy would be for the IRS to follow the transparency and accountability policy set forth in Executive Order 13960. Until that happens, denying the IRS a presumption of correctness for algorithmic determinations is a proportionate remedy that protects the IRS's legitimate law enforcement interests as well as the rights of taxpayers.