Dundas BI: The Case for Business Intelligence vs. Excel [PART 1]

This is part one of a two-part series on why companies should consider replacing Excel with a Business Intelligence platform for more effective decision making.

Excel is, undeniably, the single most popular data analytics tool in the world. I’m not here to dispute that, and I’ll be the first to admit that even I’m a fan of the spreadsheet application and find myself using it frequently (along with 750,000,000+ others) to organize data. As organizations acquire greater volumes of data from a dizzying array of sources, there’s a growing need to consolidate and analyze that information as quickly and effectively as possible. And to do so, most companies turn to Excel – for good reason.


But is Excel the right tool for the job?

That’s the million-dollar question. There’s not a doubt in my mind that Excel spreadsheets can perform excellently for many small data processes (key word being small), such as calculations and simple analysis. And while many companies will argue that Excel is an indispensable tool and claim its applications are limitless, it simply cannot hold its own across all data processes – especially when it comes to Business Intelligence (BI) and reporting.

The concerns with using Excel as the main tool for BI and reporting are well documented. Many of these concerns stem from the high risks associated with performing excessive amounts of manual edits, including copy/paste operations. There are many examples highlighting use cases where Excel has failed during manual operation, perhaps the most notable being when JPMorgan Chase suffered a $6.2 billion trading loss.

Much of the debate between Excel and Business Intelligence solutions simply details the limitations of Excel. What this article focuses on, instead, are the actual processes commonly performed in Excel that should be replaced with a Business Intelligence platform for more effective decision making. We’ve grouped these processes into four categories and will take a comprehensive look into how each one manifests itself, as well as the telltale signs that should prompt change. We’ll provide use cases of these processes in Excel, along with details on how they can be optimized with Dundas Data Visualization’s Business Intelligence solution, Dundas BI.

Part one of this series will focus on two of these categories: Inefficiency and Instability.


Inefficiency

Fast analytics are crucial to the success of any sophisticated data analysis process. However, if you find yourself wasting time connecting, preparing, analyzing and distributing data, and are repeating the same steps over and over (and over!) again on a regular basis just to update your Excel reports, there’s a very good chance you’re not making the best use of your time or resources. And while your process technically works, it’s inefficient, laborious, and uninspiring (shall I go on?), with much of that effort lost to manually updating reports.

Spreadsheets Lack Agility [Use Case]

I’m willing to bet that you’ve found yourself patching together weekly Excel exports, be they from a CRM or other operational systems, via copy/paste operations from a weekly snippet into a larger “master” Excel file. You’re likely doing this to identify historical trends in your data, or simply because it’s easier to analyze and work with your data when it’s in a single file vs. multiple files/sheets. As you can imagine, and are probably already aware, this is a very manual and tedious process, with ample room for error.

Take, for example, our customer Rental One, a premier equipment rental company that maintained a mammoth Excel spreadsheet tracking company-wide financial KPIs. This spreadsheet had lived within the organization for countless years and housed all the data responsible for supporting key decisions. Prior to implementing Dundas BI, Rental One’s Systems Analysts would work for 6 business days every month constructing an Excel dashboard to display their KPIs, and an additional day determining how to accurately distribute the content. That’s a total of 7 days simply to share their KPIs with end users. To make matters worse, the KPIs were historically one month delayed: Executives and Store Managers were at the mercy of their Finance team, and couldn’t even begin the dashboard construction process until financial results were posted and published. With Dundas BI, Rental One eliminated this process entirely by replacing their Excel spreadsheet with a single dashboard that updates in real time, saving precious time and ensuring their dashboards always reflect the most current data at their disposal.

You can read more about Rental One’s success story here.

Refine the Process

Automatic Data Connection

Using Dundas BI, it’s possible to establish a connection to multiple Excel files and completely automate the process of combining them. What’s especially great is that with this connector, we’re still able to easily identify where each data point originates (i.e., the master file, a weekly workbook file, etc.). This type of connection takes care of combining all the data in the files without any additional configuration or manual operations. Exploring and analyzing that data is as simple as dragging and dropping the data we want to work with onto the Metric Set Designer, where Dundas BI will automatically recommend the best visualizations to use according to our selection. To take our analysis one step further, with a Business Intelligence solution such as Dundas BI we’re also able to perform data manipulations on the metric set while exploring it (for example, group the data to the year level and then drill down to the quarter level, or add a comparison to a previous period) – a task that, if conducted in Excel, would be time-consuming and take multiple steps. So beyond the automatic union of Excel files that Dundas BI executes with its Excel data connector, we can perform ample amounts of data analysis directly within the tool.
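To give a sense of the manual work this connector replaces, here’s a minimal Python/pandas sketch of that consolidation: read each weekly export, tag every row with its source file, and union them into a master file. The folder name, file layout, and output name are all hypothetical – this illustrates the manual process Dundas BI automates, not Dundas BI’s own API.

    from pathlib import Path
    import pandas as pd

    frames = []
    for path in sorted(Path("weekly_exports").glob("*.xlsx")):
        frame = pd.read_excel(path)
        frame["source_file"] = path.name  # keep track of where each row came from
        frames.append(frame)

    # Union all the weekly exports into a single "master" table.
    master = pd.concat(frames, ignore_index=True)
    master.to_excel("master.xlsx", index=False)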

Agile Data Preparation

While the process of connecting and combining data from Excel is managed much more efficiently in Dundas BI, the heavy lifting typically happens within Dundas BI’s data cube layer (an optional data preparation layer). Once the connection between Excel files has been established and the spreadsheets combined, a data cube (a multidimensional data model based on measures and hierarchies) containing all the ETL processes that have been set up is automatically created, and can be made visible to other users as well. This is paramount, because processes created in Excel are often very difficult to reuse. With Dundas BI, all processes that are created are not only automatically saved, but are also available for reuse by all users with the appropriate permissions.

After the initial creation and discovery of the Excel data connector, we have the ability to redefine the data structure by adding transformations to shape our data as needed. These transforms include combining data from multiple structures, filtering out rows that don’t meet specific criteria, performing data conversions, adding calculated elements (similar to how you would in Excel), and more. The data cube process also supports advanced functions that aren’t available within Excel, such as applying transformations and analysis through Python scripts or an R Language Analysis script.
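To make those transforms concrete, here’s a rough pandas analogue of the steps described above – filter rows, convert types, add a calculated column, and roll up to the year level. The column names ("order_date", "revenue", "cost") are hypothetical, and Dundas BI performs these steps visually in its data cube designer rather than in code.

    import pandas as pd

    master = pd.read_excel("master.xlsx")

    # Filter out rows that don't meet a criterion (here, rows missing revenue).
    master = master[master["revenue"].notna()]

    # Data conversion: parse the order date into a proper datetime.
    master["order_date"] = pd.to_datetime(master["order_date"])

    # Calculated element, similar to a formula column in Excel.
    master["margin"] = (master["revenue"] - master["cost"]) / master["revenue"]

    # Group to the year level – the kind of rollup a cube hierarchy provides.
    yearly = master.groupby(master["order_date"].dt.year)["revenue"].sum()
    print(yearly)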

Streamlined Data Distribution

To truly bring this process to its zenith, the data extracted from the combined Excel files should be easy to share with those who’ll be consuming it. Rather than needlessly expend energy constructing a dashboard in Excel to display results, Dundas BI enables its users to add their visualizations to a dashboard that is always available online and always up to date. To eliminate the inefficient manual email exchange that often occurs with offline Excel files, sharing of Dundas BI dashboards can be automated and scheduled in a multitude of formats (link, image, PDF, PowerPoint, etc.). These distribution schedules can also be data-driven, so consumers are alerted when, for example, sales values trend in a significantly negative pattern over a prolonged period of time. This type of proactive alert can also help with the indifference problem discussed below.
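As a bare-bones sketch of what a data-driven alert evaluates – assuming a weekly revenue series and a hypothetical rule of three consecutive down weeks – the logic amounts to something like the following. The column name, addresses, and mail relay are placeholders; Dundas BI configures these schedules without any code.

    import smtplib
    from email.message import EmailMessage
    import pandas as pd

    # Weekly revenue, ordered oldest to newest (column name is hypothetical).
    sales = pd.read_excel("master.xlsx")["revenue"]

    # Alert when revenue has fallen week-over-week three times in a row.
    if (sales.diff().tail(3) < 0).all():
        msg = EmailMessage()
        msg["Subject"] = "Alert: revenue down three straight weeks"
        msg["From"] = "reports@example.com"    # placeholder addresses
        msg["To"] = "managers@example.com"
        msg.set_content(f"Latest weekly revenue: {sales.iloc[-1]:,.0f}")
        with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
            server.send_message(msg)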

Performance Optimization

The endgame in all of this is fast analytics. For Business Intelligence and analytics projects to truly be successful and transform the way people make decisions, speed is a necessity. It’s one thing to be able to prepare and consolidate your Excel spreadsheets within a single interface, but it’s the data retrieval design that will ensure a smooth and seamless experience for consumers. Dundas BI allows its users to optimize performance by taking advantage of the different storage types available in the data cube layer. Read our blog – A Deeper Look into Dundas BI’s In-Memory Engine – for details on the different storage types.

If you find yourself repeating the same steps over and over (and over – wait, I’m repeating this line) again, and they’re routinely focused on connecting, preparing, analyzing and distributing data, there’s a good chance you’re suffering from symptoms of inefficiency and should consider looking for ways to better optimize your processes. One of the best ways to do so is by automating them end-to-end with a Business Intelligence solution such as Dundas BI.


Instability

If you’re dealing with large quantities of data in Excel and having difficulty maintaining your data process, the most likely culprit is instability. What we’ve categorized as instability is not a one-dimensional affair, but rather a problem composed of many parts, each more detrimental to your reporting needs than the last. Many of the barriers you’ll face, or are currently facing, with regard to the instability of Excel are rooted in the fact that Excel is a client-side application that in most cases is not powered by a server. It’s primarily a desktop application that attempts to handle whatever data is thrown at it. It performs well for small volumes of data and small files, but lacks the necessary toolset when scaling up to enterprise levels (more on this later).

Size Impacts Spreadsheets’ Performance [Use Case]

Here’s a brief video showing what happens when we attempt to open an Excel file with too many rows of data:

What you’re seeing is an attempt at opening a CSV file with just over 2,000,000 rows of data. By no means is this a small file, but neither is it exorbitantly large. You’ll notice Excel is processing the file (and processing, and processing), and eventually presents us with an error message indicating that the file did not load completely. Once we accept the error, our Excel file opens; however, the number of rows has been truncated to conform to Excel’s limitations, ultimately returning a partial data set. To remedy this, we’d be forced to split our source file into several smaller files, which makes the data that much more difficult to work with.

By the way, the maximum number of rows Excel allows is 1,048,576.
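For reference, the splitting workaround mentioned above boils down to something like this Python sketch, which carves an oversized CSV (the file name "big_export.csv" is hypothetical) into pieces that fit under Excel’s row ceiling – a workaround, not a fix:

    import pandas as pd

    EXCEL_ROW_LIMIT = 1_048_576  # Excel's hard ceiling; one row goes to the header

    # Stream the oversized CSV in Excel-sized chunks and write each one out.
    chunks = pd.read_csv("big_export.csv", chunksize=EXCEL_ROW_LIMIT - 1)
    for i, chunk in enumerate(chunks, start=1):
        chunk.to_csv(f"big_export_part{i}.csv", index=False)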

Refine the Process

Neutralize Excel File Sizes

The truth is that for many organizations, Excel has always played a significant role in their analytics portfolio. But consider this: in 2016, the average company expected its data volumes to grow to 247.1 terabytes over a span of 12-18 months. To further emphasize the resounding boom in data quantity, it’s predicted that by 2020, annual data production will have increased by 4,300 percent. If you aren’t already maintaining Excel spreadsheets well in excess of 2,000,000 rows, you likely will be soon. In short, if you’re struggling to store and analyze data in Excel now, you’ll be drowning in data over the next few years.

So if you find yourself in constant battle with numerous Excel file errors, or are having difficulty maintaining (or are simply unable to maintain) the programs and scripts being run in Excel, take a minute to evaluate your process. You’re presumably dealing with a case of instability, which can usually be solved by creating a much more streamlined process – ideally one that’s independent of the client’s desktop application, and that can process information on the server and send it back to the client. That way, regardless of the client’s architecture, you’ll be able to process and visualize information (even on mobile devices!) very quickly.

Rather than navigate the troubled waters of over-sized, bloated Excel files, consider using business intelligence software like Dundas BI to calm the seas. We have a sample use case similar to the one above, but instead of dealing with 2,000,000 rows of data, we go to work on 105,000,000 records – a number well beyond Excel’s thresholds. To see how quickly and effortlessly this loads in Dundas BI, you can watch this video [starting at 22:55].
