Nearly everything today is electronic or digital. Data, both personal and business, is being generated and collected at extraordinary rates. Until recently, most forensic accountants and business valuators were not taking advantage of the transactional, accounting, and other data being generated and collected by businesses or on behalf of individuals (e.g., bank and credit card transactions).

Historically, much of the data collected was not put to good use because analyzing it was a manual, labor-intensive, and time-consuming process. But this is changing with advances in technology and a recent focus on using this data more effectively. Businesses have started using data analytics for continuous, real-time monitoring of key business metrics. The benefits of such monitoring include improved risk assessments, increased business transparency, and reduced costs of risk management programs, to name just a few.

The recent data analytics trend is not just benefiting businesses. Data analytics is also playing a larger role in litigation, particularly in calculating damages and assessing fraud. In the past, it may have been too expensive to complete a damage calculation or fraud analysis because of the effort it would take to analyze a large data set. But current technology makes it possible to analyze those data sets more efficiently and cost-effectively, as illustrated by the following case study.

Case Study
We recently employed data analytics to manage and analyze huge datasets in connection with a dispute over collective bargaining agreements. The plaintiff claimed that the defendant, a party to a collective bargaining agreement, was obligated to make certain monthly contributions to three different funds but underpaid each of the funds over a period of three years. The defendant filed a counterclaim alleging that the plaintiff failed to account for instances of overpayments and only accounted for the underpayments, thus overstating the amount allegedly due.

We were provided thousands of records, including weekly payroll records for each employee and weekly contribution data for each of the funds. We used IDEA data analysis software and Excel to verify the accuracy of the information provided and to perform the damage calculations. The records came in a variety of formats, including Excel, PDF, and text files. To complete the analysis, we needed all of the records to be in the same format. We also needed all information related to an individual employee for the entire three-year damage period to be contained in one row, allowing us to easily match contribution detail by employee and by week with payroll data by employee and by week. Previously, our only option would have been to enter and sort the data manually in Excel, an extremely time-consuming process. But in this case, we were able to quickly and efficiently organize the data into one dataset by uploading the documents, as provided, into IDEA.
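
IDEA handled this conversion and consolidation for us, but the underlying step is easy to picture in code. The sketch below, written in Python with the pandas library, shows one way mixed-format payroll files might be normalized and stacked into a single table; the file names and column names are hypothetical, and PDF records would first require a separate text-extraction step:

    import pandas as pd

    # Read each source with the appropriate reader; data pulled from PDFs
    # would arrive here as text/CSV after a separate extraction step.
    payroll_xlsx = pd.read_excel("payroll_year1.xlsx")        # hypothetical file
    payroll_txt = pd.read_csv("payroll_year2.txt", sep="\t")  # hypothetical file

    # Normalize column names so the sources can be stacked into one table.
    payroll_txt = payroll_txt.rename(columns={
        "EMP_ID": "employee_id", "WK_END": "week_ending", "HRS": "hours",
    })

    payroll = pd.concat([payroll_xlsx, payroll_txt], ignore_index=True)
    payroll["week_ending"] = pd.to_datetime(payroll["week_ending"])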

IDEA automatically converted all of the data into the same format and allowed us to run a program that matched the payroll records to the contribution records so that all payroll and contribution data for an employee was presented in one row. IDEA also gave us the option of choosing which data to include in our analysis. That way, if there were irrelevant data points included in the documents provided, they were easily excluded from our dataset.
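
In code, the matching step IDEA performed resembles a keyed merge of two tables, followed by keeping only the relevant columns. Continuing the hypothetical pandas sketch above, with a contributions table holding one row per employee, week, and fund:

    # Match weekly contribution detail to weekly payroll so that all data
    # for an employee-week sits in a single row.
    merged = payroll.merge(
        contributions,     # hypothetical table: employee_id, week_ending,
        on=["employee_id", "week_ending"],   # fund, contribution_paid
        how="outer",       # keep unmatched rows so gaps can be investigated
        indicator=True,    # adds a _merge column flagging one-sided rows
    )

    # Keep only the fields relevant to the analysis; anything irrelevant
    # in the source documents is simply excluded here.
    merged = merged[["employee_id", "week_ending", "hours", "wages",
                     "fund", "contribution_paid", "_merge"]]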

IDEA also allowed us to run some analyses within the program. For example, we needed to know the full-time versus part-time status of each employee on a monthly basis. We were able to create a query in IDEA that automatically determined that status in each month. We also needed to know the length of employment for each employee, in months, as of each pay period. Again, that was a simple query run in IDEA, which automatically populated the information into our dataset.
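
Those two queries also translate naturally into code. In the continuing sketch below, the 130-hour full-time threshold is a placeholder (the actual test came from the agreement), and tenure is measured from each employee's first appearance in the data:

    # Full-time vs. part-time status, by employee and month.
    merged["month"] = merged["week_ending"].dt.to_period("M")
    monthly_hours = merged.groupby(["employee_id", "month"])["hours"].sum()
    status = monthly_hours.ge(130).map({True: "full-time", False: "part-time"})

    # Length of employment, in months (approximate), as of each pay period.
    first_week = merged.groupby("employee_id")["week_ending"].transform("min")
    merged["months_employed"] = (merged["week_ending"] - first_week).dt.days // 30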

Once we had all the data converted, combined, and organized in IDEA the way we wanted, we were able to export the dataset to Excel. Having the data organized in Excel allowed us to do two things:

  • Create our own analysis to determine the underpayments or overpayments to the funds, and
  • Compare our analysis to the opposing expert’s analysis (whose report happened to be provided in Excel) to easily identify the differences in our analyses.

For our damages analysis, we had to make some assumptions regarding employee start dates because that data had not been provided to us. For example, employees did not begin paying into one of the funds until they had been employed for seven months. Because we had the date of each employee's first contribution into that fund, we assumed that the employee had started seven months prior to that first contribution. We then tested this assumption against the employees for whom we did have actual start dates: the assumed start dates matched the actual start dates, confirming that the assumption was reasonable. This allowed us to confirm whether the defendant had started contributions at the right time based on each employee's length of employment. We were able to complete this analysis and testing very easily in Excel because we had effectively and efficiently consolidated and organized the data in IDEA.
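
The same inference and test can be sketched in pandas. Here "Fund A" stands in for the fund with the seven-month eligibility rule, and known_starts is a hypothetical series of the actual start dates we did have, indexed by employee:

    import pandas as pd
    from pandas.tseries.offsets import DateOffset

    # First contribution by each employee into the seven-month fund.
    fund_a = merged[merged["fund"] == "Fund A"]
    first_contrib = fund_a.groupby("employee_id")["week_ending"].min()

    # Case assumption: contributions begin after seven months of employment,
    # so the inferred start date is seven months before the first contribution.
    inferred_start = first_contrib - DateOffset(months=7)

    # Test the inference against employees with known start dates.
    test = pd.DataFrame({"inferred": inferred_start,
                         "actual": known_starts}).dropna()
    mismatches = test[test["inferred"].dt.to_period("M")
                      != test["actual"].dt.to_period("M")]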

After we tested some of the data, we were able to create simple Excel formulas to compare the amounts actually paid to the funds against the amounts that should have been paid to the funds to determine the total underpayment or overpayment amount.
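
The comparison itself reduces to a row-by-row difference between contributions actually paid and contributions owed. A minimal sketch, with hypothetical per-hour rates standing in for the actual rates in the collective bargaining agreement:

    # Hypothetical contribution rates per hour, by fund.
    rates = {"Fund A": 1.25, "Fund B": 0.75, "Fund C": 0.50}

    merged["contribution_owed"] = merged["hours"] * merged["fund"].map(rates)
    merged["difference"] = (merged["contribution_paid"]
                            - merged["contribution_owed"])

    # Negative differences are underpayments; positive, overpayments.
    total_by_fund = merged.groupby("fund")["difference"].sum()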

As for our comparison to the opposing expert's analysis, we were easily able to run simple lookup formulas to identify the differences in underpayments or overpayments for each employee, for each pay period (a sketch of this reconciliation follows the list below). This allowed us to identify the following key differences between our analyses:

  • Rounding adjustments. The plaintiff had rounded employees' hours up, resulting in overstated damages.
  • Overfunding adjustments. The plaintiff had accounted for instances of underfunding but had not accounted for instances of overfunding.
  • Source adjustments. We identified several instances where the data in the plaintiff's analysis did not match the source data provided. This is the key adjustment for purposes of this case study, as it likely resulted from hard-coding the data from the source documents into Excel or not properly using the check functions built into Excel. These errors alone accounted for $26,000 in overstated damages.
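
In Excel this reconciliation ran on lookup formulas; in the pandas sketch it is another keyed merge. Here ours is the schedule computed above and theirs is a hypothetical table rebuilt from the opposing expert's workbook, each holding a difference column per employee per week:

    # Line up the two damages schedules by employee and pay period.
    comparison = ours.merge(
        theirs,
        on=["employee_id", "week_ending"],
        how="outer",
        suffixes=("_ours", "_theirs"),
    )

    comparison["delta"] = (comparison["difference_theirs"].fillna(0)
                           - comparison["difference_ours"].fillna(0))

    # The disagreeing rows are the source of the rounding, overfunding,
    # and source adjustments described above.
    disagreements = comparison[comparison["delta"].abs() > 0.005]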

Ultimately, we determined that the plaintiff had overstated the amount due from the defendant by approximately $400,000. If this analysis had been done the old way, without relying on data analytics, a significant portion of that $400,000 would have been spent on fees to complete the analysis.

The moral of this story is that data analytics is here to stay. It allows work to be completed more efficiently, benefiting you and your employees while saving time and money for the client as well.


Article written by:
Rebekah Smith, CPA, CVA, MAFF, CFF
   Director of Forensic & Dispute Advisory Services
Mallory Ashbrook, CPA, CVA, JD
   Senior Manager, Forensic & Dispute Advisory Services

This article (or a version of it) originally appeared in The Value Examiner, January/February 2021 issue, published by the National Association of Certified Valuators and Analysts (NACVA®). All Rights Reserved. To learn more, please visit https://www.nacva.com/valueexaminer.
