December 5, 2019

Three Surprising Ways Archiving Data Can Save Serious Money

Anna Mowry

(PongMoji/Shutterstock)

Data growth is exploding, with the amount of global data expected to reach 175 zettabytes by 2025, according to an IDC report. The rate of increase is accelerating as well: IBM has estimated that 90 percent of the data in the world in 2017 had been generated in the previous two years.

With such significant growth, the cost of managing and storing data in primary storage adds up fast for enterprises. According to customer research by Igneous, storing a single terabyte of data in primary storage costs an organization anywhere from $322 to $2,270 per year, depending on the organization’s primary storage performance and backup strategy.

Archiving data can result in serious savings, especially for organizations with a significant amount of unstructured data, which is typically text-heavy data not organized in any predefined way. Every unused terabyte that’s archived could save between $195 and $2,028 per year, a 63% to 94% reduction compared to the total cost of storing that data on a primary storage tier and backing it up.
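As a back-of-the-envelope illustration of that arithmetic, here is a minimal Python sketch. The per-terabyte figures are the ones cited above; the 100 TB input and the function name are hypothetical examples, not part of the research:

def annual_archive_savings(terabytes_archived,
                           savings_per_tb_low=195,
                           savings_per_tb_high=2028):
    """Estimated (low, high) annual savings, in dollars, from archiving."""
    return (terabytes_archived * savings_per_tb_low,
            terabytes_archived * savings_per_tb_high)

low, high = annual_archive_savings(100)  # hypothetical: 100 TB of cold data
print(f"Estimated annual savings: ${low:,.0f} to ${high:,.0f}")
# Prints: Estimated annual savings: $19,500 to $202,800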

You’re probably familiar with some of the reasons why archiving data can save money, most notably lower primary hardware and software costs. But there are other reasons why archiving data is a smart choice. Igneous’s whitepaper on the subject takes a deep dive into data storage costs; in this article, let’s examine three surprising ways archiving data can cut storage costs beyond hardware and software savings.

1. Reducing Backup Costs

Nearly all organizations today protect their primary storage with a backup solution. When data is removed from primary storage by an archive solution, there is less data on the primary tier to back up, which means lower backup hardware, software, energy, and data center costs as well.

Tape isn’t necessarily the most affordable archive medium (Full_chok/Shutterstock)

Until recently, enterprise backup solutions typically fell into one of two strategies: tape or disk-to-disk (D2D) replication. Both come with significant price tags for backing up a single terabyte of primary data.

The common misconception is that tape backup is cheap. While an individual tape might be cheap, backing up primary data with tape also requires tape libraries, servers, software, data center space, power, cooling, and management overhead. These costs add up quickly. Our research shows that backing up a single terabyte of primary data with tape can cost $138 to $1,731 per year, depending on how frequently you complete a full backup.

The other common backup solution, replication, requires backup workflows that replicate data from the primary NAS system to a secondary storage platform from the same vendor. In most cases, this means the secondary storage system is architecturally similar to the primary NAS device, requiring its own hardware, software, data center space, power, cooling, and management. Our research shows that the total cost of replication is $147 to $1,512 per terabyte of primary data per year.

This means that for every terabyte of data you archive, not only are you saving hundreds of dollars by reducing consumption of primary storage, but you are also saving hundreds or thousands of dollars by reducing backup costs!
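To see how the two savings stack, here is a hedged sketch: the backup ranges are the tape and replication figures quoted above, while the primary-storage figure is a hypothetical placeholder, not a number from this research:

# Per-terabyte annual cost avoided by archiving = primary cost + backup cost.
backup_costs = {
    "tape":        (138, 1731),  # $/TB/year, from the figures above
    "replication": (147, 1512),  # $/TB/year, from the figures above
}
primary_cost = 250  # hypothetical $/TB/year for primary capacity alone

for method, (low, high) in backup_costs.items():
    print(f"Archiving 1 TB ({method} backup): "
          f"saves ${primary_cost + low:,} to ${primary_cost + high:,} per year")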

2. The Magic of Data Compression

Another major benefit of archiving your data, either on-premises or in the cloud, is data compression.

Compression is your friend in the age of big data

Primary data is stored in its native format so that end users and applications don’t have to do any extra work to use it. Since there isn’t the same expectation for cold data, however, archived data can be compressed.

When optimizing for cost rather than performance, datasets can be crunched down to the minimum possible capacity footprint to maximize savings. Compression ratios vary from 1:1 for incompressible data up to 2.5:1 or higher for more compressible workloads.

What does that mean in terms of cost savings? If your data has a compression ratio of 2:1, consumption of archive storage is cut in half: a terabyte of cold data on primary requires just 500 gigabytes in archive storage, producing significant additional annual savings for every primary terabyte archived.
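A quick worked example of that math, using the ratios from above (a minimal sketch; the function name is ours):

def archive_footprint_tb(primary_tb, compression_ratio):
    """Archive capacity needed for a given amount of primary data."""
    return primary_tb / compression_ratio

print(archive_footprint_tb(1.0, 2.0))  # 0.5 TB (500 GB): a 50% reduction
print(archive_footprint_tb(1.0, 2.5))  # 0.4 TB at a 2.5:1 ratio
print(archive_footprint_tb(1.0, 1.0))  # 1.0 TB: incompressible data saves nothing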

3. Avoided Building Costs

By archiving data from primary storage to the cloud, companies also reduce building costs, though these costs often, incorrectly, appear to be free. Common sense suggests that the organization already owns its building and therefore incurs no cost.

However, this kind of thinking fails to account for expansion costs once all the data center space is used. Once all rack units are occupied, organizations must either build a new data center or rent space in someone else’s. Typical market rates for data center space run between $240 and $300 per rack unit per year, while building or buying a new data center will likely cost millions of dollars. Additionally, on-premises hardware requires energy both to power it and to cool the data center.
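As a rough sketch of that facility math (the rack-unit rates are from above; the 40-rack-unit system size is a hypothetical example):

rate_low, rate_high = 240, 300  # $/rack unit/year, typical market rates cited above
rack_units = 40                 # hypothetical: roughly one full rack of storage gear

print(f"Annual space cost: ${rack_units * rate_low:,} to ${rack_units * rate_high:,}")
# Prints: Annual space cost: $9,600 to $12,000
# Power and cooling come on top of this recurring space cost.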

These recurring costs add up over time and are frequently overlooked or underestimated when evaluating storage options. By archiving data to the cloud, organizations can offload these costs to cloud providers, gaining additional savings over on-premises storage solutions.

The Power of Archiving Data

With data growth rapidly increasing and primary storage so expensive, it’s crucial for organizations to adopt a sensible, effective data archiving strategy. By understanding the true, significant cost of data storage, IT administrators are in a better position to persuade data owners to archive cold and unneeded data, especially when those data owners realize that archived data can still be accessible data.

Whether they archive on-premises or in the cloud, organizations can mitigate rising costs and maintain fast access to crucial data by adopting a modern archiving solution that provides end users with direct access to archived data.

About the author: Anna Mowry is the vice president of finance and operations at Igneous. Prior to her role at Igneous, Anna was Senior Director of Finance and Sales Operations at ExtraHop, and before that, she held finance leadership roles at Isilon, DellEMC, and AWS.

 

Related Items:

Architecting Your Data Lake for Flexibility

Big Data Is Still Hard. Here’s Why

The State of Storage: Cloud, IoT, and Data Center Trends

 
