
May 31, 2013

Facebook Molds HDFS to Achieve Storage Savings

Isaac Lopez

The Hadoop file system, HDFS, has been under a lot of fire over the last year as various corners of the industry have maligned it for its perceived limitations. One big data pioneer, social giant Facebook, has tackled its big data growth with HDFS and says it is reaping the financial rewards in the form of storage savings.

Between 2008 and 2012, Facebook saw its user base grow 14-fold, turning up the volume of log data the company generates every day by 250 times over the same period. These exponential increases are expected to continue, said Facebook software engineer William Wong at Hadoop Summit Europe this spring.

Right now, says Wong, more than 500 terabytes of data are logged into the Facebook warehouse cluster, and as the company adds new features, new experiments, and new types of data, this growth shows no sign of slowing down. “As a result, we need to maintain a HDFS cluster that is 2,500 times larger, which is more than 100 petabytes of data,” he says.

Data at this scale is very expensive, and Facebook is aggressive about finding ways to control the costs. Wong explained that using Hadoop for data storage and analysis while maximizing the utility and efficiency of HDFS is a cornerstone of the company's strategy.

As part of this strategy, Facebook developed its own internal approach to HDFS federation, which provides a clean separation between namespace and storage and enables generic block storage on the cluster. One challenge to traditional HDFS scalability, says Wong, is that because all of the metadata is stored in the name node's memory, the size of that memory limits the number of files the cluster can store. In Facebook's reworked approach to federation, Wong says, multiple name nodes share a common pool of data nodes, spreading the namespace metadata across those nodes and adding further capacity for scaling.
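To make the idea concrete, here is a minimal sketch of how federation separates namespace from storage: several name nodes each hold a slice of the file metadata, while all of them allocate blocks from one shared pool of data nodes. The class and method names are illustrative only, not Facebook's actual code or the HDFS API.

```python
# Toy model of HDFS federation: each name node holds metadata for one
# namespace volume in its own memory, while every name node draws blocks
# from the same shared pool of data nodes (generic block storage).

class NameNode:
    """Holds file-to-block metadata for one namespace volume (e.g. /logs)."""
    def __init__(self, namespace):
        self.namespace = namespace
        self.file_to_blocks = {}  # metadata lives only in this node's memory

    def add_file(self, path, block_ids):
        self.file_to_blocks[path] = block_ids


class SharedBlockPool:
    """Generic block storage spread across the cluster's data nodes."""
    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.next_block_id = 0

    def allocate_blocks(self, count):
        ids = list(range(self.next_block_id, self.next_block_id + count))
        self.next_block_id += count
        return ids  # in real HDFS each block is also placed on several data nodes


# One pool of data nodes shared by multiple name nodes: metadata capacity
# now scales with the number of name nodes instead of one node's memory.
pool = SharedBlockPool(datanodes=["dn1", "dn2", "dn3", "dn4"])
nn_logs, nn_users = NameNode("/logs"), NameNode("/user")

nn_logs.add_file("/logs/2013-05-31.gz", pool.allocate_blocks(3))
nn_users.add_file("/user/alice/report.csv", pool.allocate_blocks(1))
```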

Wong says his team also employs HDFS RAID to reduce the replication factor of data in HDFS. By default, HDFS replicates each block three times across machines in the cluster to provide fault tolerance and improve read performance. Having three copies of everything, however, means the cluster needs at least three times the raw storage to hold all of the data, and that gets very expensive when data volumes are growing exponentially.

While triple replication is a valuable feature, it may not be right for every last bit of data in the system. Much of Facebook's data is rarely accessed, Wong says, making the extra copies dead weight on the cluster. To counter this, Facebook has developed strategies to get the effective replication factor down to 2.2 in some cases, and as low as 1.4 in others.

One of those strategies is a file-level XOR code which, through a series of RAID encoding passes, reduces the effective replication factor from 3 to as low as 2.2. For a more aggressive reduction, Facebook has implemented a file-level Reed-Solomon code that works along the same lines but cuts the replication factor from 3 to as little as 1.4.
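A back-of-the-envelope calculation shows where figures like 2.2 and 1.4 can come from. The stripe widths, parity counts, and copy counts below are assumptions chosen to reproduce the numbers Wong cites, not configuration details from the talk.

```python
def effective_replication(stripe_len, parity_blocks, data_copies, parity_copies):
    """Raw storage used per logical block under a RAID-style parity scheme."""
    raw = stripe_len * data_copies + parity_blocks * parity_copies
    return raw / stripe_len

# Plain HDFS: three full copies of every block.
print(effective_replication(stripe_len=1, parity_blocks=0,
                            data_copies=3, parity_copies=0))   # 3.0

# XOR-style parity: e.g. 1 parity block per 10 data blocks, keeping 2 copies
# of data and parity, yields the ~2.2 factor mentioned in the talk.
print(effective_replication(stripe_len=10, parity_blocks=1,
                            data_copies=2, parity_copies=2))   # 2.2

# Reed-Solomon: e.g. 4 parity blocks per 10 data blocks, single copies of
# each, yields the ~1.4 factor for the coldest data.
print(effective_replication(stripe_len=10, parity_blocks=4,
                            data_copies=1, parity_copies=1))   # 1.4
```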

Facebook applies these replication-reduction strategies against a data temperature model that classifies data as hot, warm, or cold. Data generated today is likely to be queried repeatedly by analysts generating reports, so it is left at the normal HDFS triple replication. After a day, the company switches to the XOR encoding, which still offers relatively good performance while using less space. Once data is three months old, the Reed-Solomon code is used to reclaim disk space while keeping the ability to rebuild the data if needed.
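As a rough illustration of that temperature policy, the sketch below maps data age to an encoding scheme. The one-day and three-month thresholds come straight from the description above; the function itself and its names are hypothetical.

```python
from datetime import date, timedelta

def encoding_for(created, today=None):
    """Pick an encoding by data age: hot -> 3x replication,
    warm -> XOR parity (~2.2x), cold -> Reed-Solomon (~1.4x)."""
    today = today or date.today()
    age = today - created
    if age <= timedelta(days=1):
        return "3x replication"        # hot: fresh data, frequently queried
    elif age <= timedelta(days=90):
        return "XOR parity (~2.2x)"    # warm: good performance, better space usage
    else:
        return "Reed-Solomon (~1.4x)"  # cold: reclaim disk, data still rebuildable

print(encoding_for(date(2013, 5, 30), today=date(2013, 5, 31)))  # 3x replication
print(encoding_for(date(2013, 4, 1),  today=date(2013, 5, 31)))  # XOR parity (~2.2x)
print(encoding_for(date(2013, 1, 1),  today=date(2013, 5, 31)))  # Reed-Solomon (~1.4x)
```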

While HDFS has taken its lumps, Facebook's example is a reminder that there is still much to be gained by enterprises willing to devote time and resources to it in order to reduce long-term infrastructure costs.

And as Wong notes, “we all want to use less money to do more things.”

 

Related Items:

How Facebook Fed Big Data Continuuity 

Baldeschwieler: Looking at the Future of Hadoop 

On Algorithm Wars and Predictive Apps 
