Interview with Craig M. Lewis, SEC Division of Risk, Strategy, and Financial Innovation (Part II)

Craig M. Lewis is Director and Chief Economist of the Division of Risk, Strategy, and Financial Innovation (the SEC’s think tank, also known as Risk Fin or RSFI) and is on leave as a professor of finance at Vanderbilt University. In a speech he gave in December 2012, Risk Modeling At The SEC: The Accounting Quality Model, Dr. Lewis stated that the RSFI Office of Quantitative Research is “developing cutting-edge ways to integrate data analysis into risk monitoring.” The mining of XBRL data is a key part of this work.

How will this monitoring tool parse the XBRL data to select the companies whose financials need extra review?

The tool itself tries to model what is known in the financial-accounting literature as discretionary accruals. It is a predictive model that estimates how much of the total accruals a company reports is discretionary. (Total accruals are the difference between net income and cash flows.)
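For readers who want to see the mechanics, the sketch below shows the classic Jones-style decomposition the academic literature uses to estimate discretionary accruals: regress scaled total accruals on a few "normal" drivers and treat the residual as the discretionary component. The variable names and factor set here are illustrative assumptions; the Commission's actual model relies on a richer, non-public set of factors.

```python
import numpy as np

def discretionary_accruals(total_accruals, d_revenue, ppe, lagged_assets):
    """Jones-style decomposition (illustrative only; the SEC's actual
    factor set is richer and not public).

    All inputs are 1-D arrays over the firms in one industry-year.
    Total accruals = net income - cash flow from operations.
    """
    # Scale every term by lagged total assets, as in the accruals literature.
    y = total_accruals / lagged_assets
    X = np.column_stack([
        np.ones(len(y)),              # intercept
        1.0 / lagged_assets,          # inverse of lagged assets
        d_revenue / lagged_assets,    # change in revenues
        ppe / lagged_assets,          # gross property, plant & equipment
    ])

    # Fit "normal" (non-discretionary) accruals by ordinary least squares.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # The residual is the discretionary component: accruals the model
    # cannot explain with normal operating activity.
    return y - X @ beta
```

The residual-based design is what makes the measure comparable across firms: the regression absorbs the accruals a firm of that size and activity level would normally report, leaving only the portion attributable to reporting discretion.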

Those are actual XBRL tags?

Those would be consolidated tags. Net income would be an XBRL tag. You also could obtain cash flows from the XBRL filing. What we are doing is building this model from factors. One of the exercises we need to go through is to take the taxonomy and synthesize it so that we can compress the actual taxonomy choices companies make, and the way they use the taxonomy, into high-level financial statements. By converting all of the choices firms make about how to tag elements, we end up looking at a higher-level financial statement presentation. The factors we develop will be based on this stylized set of financial statements, and that gives us the ability to really compare firms.
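One way to picture that synthesis step is a mapping from the many taxonomy concepts a filer might choose onto a small set of stylized statement lines. The mapping below is purely illustrative; the specific tags listed and the line items chosen are assumptions, not the Division's actual synthesis of the taxonomy.

```python
# Illustrative mapping from taxonomy concept names to stylized statement
# lines; the tag list and line items are assumptions for this sketch.
STYLIZED_LINES = {
    "NetIncomeLoss": "net_income",
    "ProfitLoss": "net_income",
    "NetCashProvidedByUsedInOperatingActivities": "operating_cash_flow",
    "Revenues": "revenue",
    "SalesRevenueNet": "revenue",
}

def to_stylized_statement(facts):
    """Collapse one filing's tagged facts (concept -> value) into a
    comparable high-level statement (stylized line -> value)."""
    statement = {}
    for concept, value in facts.items():
        line = STYLIZED_LINES.get(concept)
        # Keep the first tag encountered for each stylized line; a real
        # implementation would apply an explicit priority order instead.
        if line is not None and line not in statement:
            statement[line] = value
    return statement
```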

Companies develop their own XBRL extensions. Does that cause a problem in your system?

To the extent that firms use unique extensions, we have to make decisions about how to collapse those extensions into the way we represent these stylized financial statements. Is it a problem? No. Is it something we are addressing in the way we actually build the model out? Yes. One of the things we have noticed is that the longer a firm has been making XBRL filings, the fewer unique extensions it tends to choose. So there is a learning curve that seems to be going on, where filers may begin by using unique extensions, but over time, as they become more comfortable with the taxonomy, the number of those unique extensions tends to collapse.

While developing this tool, have you had any concerns about the quality and consistency of XBRL tagging beyond the issue of extensions? Are there other issues with the quality of XBRL filings that are making it harder to develop this tool and the analytics around it?

What you will find is that anyone who uses structured financial statement data is required to come up with a rule-based approach to dealing with outliers. Whether there are errors in the taxonomy itself or errors in the way the XBRL data is being tagged, we have to come up with an approach that allows us to identify unusual elements. Even if you use the commercial databases, you make these same choices. There are ways of dealing with outliers in the data that are fairly standard among people who do empirical corporate finance. We are taking a similar approach: using our expertise as financial economists to create similar rules for the XBRL data. To take it a step further, with respect to the quality of the data, now that there is actual liability associated with inaccurate XBRL statements, I fully expect quality to improve. My view is that the real solution to this is inline XBRL: creating a document where the tags are embedded directly into the filing so that you do not have to maintain two separate documents. This seems to be where the industry is moving, and I fully support that.
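A common rule-based treatment in empirical corporate finance is winsorization: clamping each variable at, say, the 1st and 99th percentiles so that a handful of extreme or mis-tagged values do not dominate the estimation. The cutoffs and the function below are illustrative; the Division's actual screening rules are not public.

```python
import numpy as np

def winsorize(values, lower_pct=1.0, upper_pct=99.0):
    """Clamp extreme values to chosen percentiles, the kind of rule-based
    outlier treatment that is standard in empirical corporate finance.
    The cutoffs here are illustrative, not the Division's actual rules."""
    lo, hi = np.percentile(values, [lower_pct, upper_pct])
    return np.clip(values, lo, hi)
```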

If the industry does move to inline XBRL, will that make your model easier or harder to use?

As long as it is tagged, it is structurally the same data. The only thing that inline XBRL would do for us is to [reduce] the potential error rate that you might see in tagged data. So any time you can remove a step in the process where there is an opportunity for additional errors, it will improve the quality of the data you get.

How can companies be sure their XBRL filings are not automatically flagged in the monitoring system you are developing?

I would say, check your work. If you make a mistake in how you record an element, that would affect the score you get from the model and might make you more likely to be pulled up for a review—I would argue, correctly so. The model will tell the reviewer which factor was contributing to the score, and if one factor comes out and has a large impact on the score and can be traced back to a recording error in the XBRL data, you will be flagged because you made a mistake in providing us your XBRL data. I do not view that as a problem.
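To make the idea of factor-level attribution concrete, here is a small sketch of how a linear risk score can be broken into per-factor contributions for a reviewer. The linear form, the factor names, and the weights are assumptions for illustration only; the model's actual structure is not public.

```python
def factor_contributions(weights, factors):
    """Break a linear risk score into per-factor contributions so a
    reviewer can see what drove it.  Illustrative only: the real
    model's factors and weights are not public."""
    contributions = {name: weights[name] * value
                     for name, value in factors.items()}
    score = sum(contributions.values())
    # Largest absolute contribution first, so an inflated factor
    # (for example, one caused by a tagging error) surfaces immediately.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return score, ranked

# Hypothetical example: one mis-recorded element inflates a single factor,
# which then dominates the score and is the first thing a reviewer sees.
score, ranked = factor_contributions(
    weights={"discretionary_accruals": 2.0, "off_balance_sheet": 0.5},
    factors={"discretionary_accruals": 4.7, "off_balance_sheet": 0.1},
)
```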

Some companies look at the resources involved in preparing XBRL documents and claim it is not worth the expense and time involved because nobody is using XBRL data. What is the message you want to get out about how the SEC is and will be using XBRL?

Let me preface my remark with some observations about the data. I do not think the data has been around long enough to be an incredibly useful tool for financial statement analysis. Anybody who wants to analyze financial statements needs a time series, and a few years is an insufficient time series for meaningful analysis. The lack of uptake by people outside the SEC is simply because XBRL is still in the development phase. Once you get a long enough time series, you will find people will start to use this tool. It is a chicken-and-egg problem: you need sufficient data before you can find it useful. When people say it is not being used, they are missing the point. In its current form, it is not as useful as it will be five years from now.

What I like to say is that the SEC is using this data. It seems natural to me that we would want to use this data. But just like the observation I made about utility for individual investors, the same concern is there for us. There is a learning curve when companies start to tag this data. The early data will have errors. Over time, as those filers become more experienced with XBRL, their error rate goes down and their data becomes significantly more useful. The SEC was really just allowing firms a window to figure out how to tag data. Now that the window has shut, we are going to start to use the data. I view it as the natural outcome of giving filers the opportunity to figure out what they are doing with XBRL. One of the interesting things is that new filers have significantly lower error rates than the original filers, and that is because so many of them use third-party vendors to help with their filings.

Once a filing has been flagged by the tool, what happens next?

The tool can be used in a number of different ways. One way is to assist in scheduling which firms to examine. There is a requirement under the Sarbanes-Oxley Act that the SEC review each filer's disclosures at least once every three years. By risk-scoring the filers, you can deploy resources within the SEC efficiently, directing staff to filers that might benefit from immediate attention. That would require you to essentially develop a database of scores that you would rank. Once the schedule is set, we will generate customized, company-specific reports. The report will hopefully identify areas where we think it would be most natural to focus review time, because something unusual might be happening with respect to particular accounting choices. So [the Division of] Corporation Finance may have one use for it, which is to improve the quality of corporate disclosures, while the Division of Enforcement may have an independent need for the tool. I also see an interplay between the two. There are a lot of ways internally in which the tool can be used.
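A toy version of that scheduling idea appears below: rank filers by risk score and place the highest-scoring filers earliest in the review cycle, while still covering every filer within the three-year horizon. The capacity figure and the function itself are hypothetical; this is not how the Commission actually allocates review resources.

```python
def schedule_reviews(scores, capacity_per_year, horizon_years=3):
    """Rank filers by risk score and spread them over the review horizon
    so the highest-scoring filers are examined first, while every filer
    is still reviewed within the horizon.  A simplified sketch of one
    possible scheduling use, not an actual SEC procedure."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    schedule = {}
    for i, filer in enumerate(ranked):
        # Fill each year up to capacity; anything left over lands in the
        # final year of the horizon so the three-year requirement holds.
        year = min(i // capacity_per_year + 1, horizon_years)
        schedule[filer] = year
    return schedule
```

For example, with scores of 3.1, 0.2, and 1.7 for filers A, B, and C and a capacity of one review per year, the sketch schedules A in year one, C in year two, and B in year three.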

What is your vision for how the system will improve financial disclosures and prevent fraud?

It is a tool, not the solution. It may be used by a particular team in Corporation Finance to identify areas that warrant further attention. That could be done for all filers. If a problem turns out to be actionable, and possibly something fraudulent, I see it being referred to Enforcement for additional investigation.

NOTE: The views expressed here are entirely those of Dr. Lewis and do not necessarily reflect those of the Securities and Exchange Commission (SEC) or any other organization.

 
