Tag Archives: Recovery Audit Program

GAO Report Slams RAC Operations

GAO Report Ignores RAC Statistical Methodology Problems 

In a new report “Changes Needed to Improve CMS’s Recovery Audit Program Operations and Contractor Oversight,” the U.S. Government Accountability Office (GAO) has found that as of May 2015, CMS collected “less than $10 million in improper payments, and had not approved new audit work since March 2014.”

Although CMS wrote a “statement of objectives” for how RACs should identify improper payments, the RACs are lagging behind their targets. CMS is also behind schedule in conducting regular performance evaluations of the RACs, and it has not yet established clear performance metrics that can be used to measure RAC activities.

RAC Methodology Lacking

One of the key problems is that there is still no clear, accepted methodology for RACs to determine improper payments. In fact, the methods currently used do not appear to be consistent, or even completely understood by the government.

Statistical Methodology is Absent

A glaring omission in the Report is any discussion of the statistical methodologies used to sample claims and extrapolate overpayment amounts.

Even though the term “methodology” appears nine times in the report, it is never mentioned in connection with statistical methodology.

Barraclough on RACs and Stats

Our experience has shown that much improvement is needed in how contractors carry out their statistical work. In almost every case we have examined, the work would almost never pass the equivalent of a Daubert test (the rule of evidence governing the admissibility of expert witness testimony in United States federal legal proceedings) for scientific quality.


It is hard to know whether the claim that CMS collected “less than $10 million in improper payments” is correct, since the methodology usually does not support the initial RAC claim amounts. The amount of “improper payments” is probably much lower.

In Barraclough’s review of statistical work, our team has seen everything from use of the wrong formulas, to outright fabrication of data on the part of the RAC contractors.

These practices need to stop, but for the time being at least, it appears that the GAO is unaware of the problem.

Please contact Barraclough Health (email info@barracloughllc.com) for the best statistical methodology for reversing Medicare and Medicaid audits.

RAC Audit Medicare Data Snapshot

RAC Medicare Audit Data From Senate Chairman Hatch

RAC Medicare Audits recovered over $3 billion

  • A large portion of the initial payment determinations are reversed on appeal. The Department of Health and Human Services Office of Inspector General reported that, of the 41,000 appeals made to Administrative Law Judges in FY 2012, over 60 percent were partially or fully favorable to the appellant.
  • In Fiscal Year 2014, Medicare covered health services for approximately 54 million elderly and disabled beneficiaries at a cost of $603 billion.
  • Of that figure, an estimated $60 billion, or approximately ten percent, was improperly paid, averaging more than $1,000 in improper payments for every Medicare beneficiary.
  • Because of the large number of appeals being filed, new appeals cannot be placed on the docket of the Office of Medicare Hearings and Appeals for 20-24 weeks.
  • In FY 2009, the majority of appeals were processed within 94 days. In Fiscal Year 2015, the average appeal takes 604 days.
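As a back-of-the-envelope check, the per-beneficiary figure above follows directly from the other two numbers:

```python
# Quick arithmetic check of the FY 2014 figures quoted above
total_spend   = 603e9   # Medicare outlays
improper      = 60e9    # estimated improper payments
beneficiaries = 54e6    # elderly and disabled beneficiaries

improper_rate   = improper / total_spend    # roughly ten percent
per_beneficiary = improper / beneficiaries  # just over $1,000 each
print(improper_rate, per_beneficiary)
```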

Source: Hatch Statement at Finance Markup of the Audit & Appeal Fairness, Integrity, and Reforms in Medicare Act of 2015 (June 3, 2015). Senator Hatch (R-Utah) is the Chair of the Senate Finance Committee.

The Barraclough Blog features the latest news on events and policies, as well as original Barraclough features and blogs about litigation support for Medicare and Medicaid appeals and statistical overpayment extrapolations.


The Centers for Medicare & Medicaid Services (CMS) recently released its annual report to Congress: Recovery Auditing in Medicare for Fiscal Year 2013: FY 2013 Report to Congress as Required by Section 1893(h) of the Social Security Act.

The report is full of statistics on the Medicare auditing program. It presents a picture of “profit”: the government spends less money running the auditing program than it recovers.

The report, however, does not address the discrepancies between states in the recovery (“claw back”) of Medicare claims. The calculation is shown in the figure below.

When we chart the amount recovered and compare it to the number of persons living in each state, the difference is vast. In Maine, for example, $2 was recovered per state resident. In North Dakota, however, $36 was recovered per resident.

Does this mean that the health care providers in some states are being more strictly audited than in others?   The CMS report does not give any clue to the answer.

Federal Precision Standards for Medicare and Medicaid Statistical Sampling and Extrapolations

As we have seen from other entries in this blog, Recovery Audit Contractors (RACs) operating under the Centers for Medicare & Medicaid Services’ (CMS) Recovery Audit Program, who conduct Medicare and Medicaid audits of health care providers, have been granted remarkable leeway in the accepted standards for their work. It is not uncommon to see precision far worse than +/- 20%, and even when the precision is as poor as +/- 49%, neither the Medicare Appeals Council (MAC) nor the federal courts will throw out the extrapolation.
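For concreteness, the “precision” figures quoted here are simply the confidence-interval half-width expressed as a percentage of the extrapolated point estimate. A minimal sketch (the dollar amounts below are hypothetical):

```python
def precision_pct(point_estimate, std_error, z=1.645):
    """Precision of an extrapolated overpayment: the confidence-interval
    half-width as a percentage of the point estimate. z = 1.645
    corresponds to roughly 90 percent confidence."""
    return 100 * z * std_error / point_estimate

# A hypothetical $500,000 extrapolation with a $150,000 standard error
# works out to about +/- 49% -- the sort of precision discussed above.
print(precision_pct(500_000, 150_000))
```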

So the question that arises is this: Are contractors free to employ whatever precision they wish in their work, or are there standards that have been suggested or published by the Federal Government?

As it turns out, there appears to be some guidance from two sources.

Source One:

In the May 5, 2010, report by the Acting Administrator and Chief Operating Officer of the Centers for Medicare & Medicaid Services (CMS), the section titled “Precision-level requirements” on page 3 states:

“[Office of Management and Budget (OMB)] Circular A-123, Appendix C, states that Federal agencies must produce a statistically valid error estimate that meets precision levels of plus or minus 2.5 percentage points with a 90-percent confidence interval or plus or minus 3 percentage points with a 95-percent confidence interval.”

There is a note in the document: Under these assumptions, the minimum sample size needed to meet the precision requirements can be approximated by the following formula, which is used in the examples:

n = 2.706 × P(1 − P) / (0.025)²

Where n is the required minimum sample size and P is the estimated rate (proportion) of improper payments. (Note: This sample size formula is derived from Sampling of Populations: Methods and Applications (3rd edition); Levy, P. S. & Lemeshow, S. (1999); New York: John Wiley & Sons; at page 74. The constant 2.706 is 1.645 squared.)
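The formula is easy to sketch in code (here P is expressed as a proportion, and z = 1.645 supplies the constant 2.706 noted above):

```python
import math

def min_sample_size(p, precision=0.025, z=1.645):
    """Approximate minimum sample size to estimate an improper-payment
    rate p (a proportion) to within +/- `precision` at the confidence
    level implied by z (z = 1.645 for 90 percent; z**2 = 2.706)."""
    return math.ceil(z**2 * p * (1 - p) / precision**2)

# An estimated 10 percent improper-payment rate:
print(min_sample_size(0.10))  # 390
```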

Source Two:

According to a CMS notice in the Federal Register, 72 Fed. Reg. 50490, 50495 (Aug. 31, 2007), the error estimate should meet precision levels of plus or minus 2.5 percentage points with a 90-percent confidence interval, and the State error estimates should meet precision levels of plus or minus 3 percentage points with a 95-percent confidence interval.

So it appears that these standards, which are fairly good, have been twice promulgated by the Federal Government.

The question is:  Why are they routinely ignored by Administrative Law Judges (ALJs), and the Medicare Appeals Council (MAC)?


Appeal on Statistical Sampling for Medicare Audits – “Measuring the Variables of Interest” and “Proper Procedures”

It has been our experience at Barraclough that contractors almost always skip the step of taking a probe sample when calculating the required sample size. Even so, they frequently rely on RAT-STATS to make the sample size calculation, and RAT-STATS requires as one of its crucial inputs the variation (e.g., the mean and standard deviation) of the overpayments, which is the variable being estimated. Because the contractors skip the probe sample, they plug the wrong data into RAT-STATS, calculating the sample size from the variation of the payments instead of the overpayments. This almost always results in RAT-STATS claiming that a smaller sample size is adequate.

In the MPIM, Chapter 3, Section 3.10.2, we see a sketch of what a “properly executed” sample design is. It includes:

(1) defining the universe, (2) [defining] the frame, (3) [specifying] the sampling units, (4) using proper randomization, (5) accurately measuring the variables of interest, and (6) using the correct formulas for estimation

It can be argued that taking a probe sample, so as to be able to plug the correct (and required) data into RAT-STATS, falls under the fifth category, “accurately measuring the variables of interest.” It follows that if the probe sample is not taken, then, according to the MPIM, proper procedures have not been used.

Note: RAT-STATS is a free statistical software package that providers can download to assist in a claims review. The package, created by OIG in the late 1970s, is also the primary statistical tool for OIG’s Office of Audit Services.
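To illustrate why the probe sample matters, here is a minimal sketch using the standard sample-size formula for estimating a mean. The claim amounts are hypothetical, and this illustrates the principle rather than RAT-STATS itself: the required sample size scales with the variance plugged in, so using the variation of the payments rather than of the overpayments can drastically understate the needed sample.

```python
import math
import statistics

def required_n(probe_values, half_width, z=1.645):
    """Sample size needed to estimate a mean to within +/- half_width
    dollars at ~90% confidence, using a probe sample's standard deviation."""
    s = statistics.stdev(probe_values)
    return math.ceil((z * s / half_width) ** 2)

# Hypothetical probe of six claims: payments cluster near $500, but the
# overpayments (the variable actually being estimated) are zero-inflated
# and far more variable.
payments     = [520, 480, 510, 495, 505, 490]
overpayments = [0, 0, 450, 0, 505, 0]

print(required_n(payments, half_width=25))      # tiny n -- the wrong input
print(required_n(overpayments, half_width=25))  # much larger n -- the right input
```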


The Recovery Audit program from CMS used four companies in FY 2012: HDI, CGI, Performant, and Connolly. Each of these companies is responsible for a specific part of the United States.


(Source: Barraclough analysis.)

One would assume that the audit patterns would be roughly similar, but they are not.

If we use as a basis the total number of active physicians practicing in each region(*) and compare this to the activities of the Recovery Auditors, then an uneven pattern emerges.

NOTE: (*) Source: The Henry J. Kaiser Family Foundation


It appears that HDI audits the least number of claims per physician, but in exchange recovers the greatest amount of claw backs.



It appears that if you were located in the Western Region of the United States, your chances of having a claim paid were more than twice as high as if you were in the North East, which has the lowest return rate for claims that were not paid but should have been.

Here is the data according to the Recovery Auditor:


According to the most recent CMS report on the Recovery Audit Program, a small number of underpayments were restored.

There are four Regions of Recovery Auditors in FY 2012:



The amount returned in FY 2012 through the Recovery Audit Program was $1.9 billion. To collect these claw-backs, CMS spent $313.9 million.

In other words, for each $1 spent, about $6.05 was recovered, or $5.05 returned to the taxpayer net of collection costs.
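The arithmetic behind that ratio is worth making explicit; a quick check:

```python
# Check of the FY 2012 recovery-to-cost figures quoted above
recovered = 1.9e9     # amount returned through the Recovery Audit Program
cost      = 313.9e6   # CMS spending to collect it

gross_ratio = recovered / cost            # dollars recovered per dollar spent
net_ratio   = (recovered - cost) / cost   # returned to taxpayer after costs
print(round(gross_ratio, 2), round(net_ratio, 2))
```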

Is this high or low? A roughly six-to-one recovery ratio seems standard in many settings, so the cost of collection appears to be in line.

It would be interesting to see what this ratio is for the Recovery Audit Contractors (RACs).