What Informatica’s Buyout Means to Big Data Integration
Yesterday’s news that Informatica has agreed to be bought out by private equity firms for $5.3 billion has stirred a frenzy of commentary in the big data integration community. For those working at data integration startups, the news sounded a death knell of sorts for “old-style” ETL and confirmed that newer data integration technologies and techniques are here to stay.
“Rest in Peace Old ETL,” says Darren Cunningham, a former Informatica manager who now heads up product management at SnapLogic, the big data integration startup founded by Gaurav Dhillon, who co-founded Informatica and led it for more than 12 years.
The deal comes on the heels of a similar private equity buyout of enterprise service bus (ESB) developer Tibco, which sold for $4.3 billion last year, and Cunningham took the news as an indication that the market has turned away from traditional ETL and ESB technologies.
“In a matter of months, we’ve seen the market impact of the enterprise IT shift away from legacy data management and integration technologies when it comes to dealing with today’s social, mobile, analytics/big data, cloud computing and the Internet of Things (SMACT) applications and data,” Cunningham writes on the SnapLogic blog.
Informatica, of course, isn’t dead; the company announced new products just today, in fact. But selling the company to a private equity firm is clearly not a sign of strength. While Informatica has steadily grown revenue over the years, topping $1 billion in fiscal 2014, profits have not kept pace, thanks in part to a shift toward subscription-based revenue models. And so yesterday it announced that it is selling itself to a company backed by Permira funds and the Canada Pension Plan Investment Board (CPPIB) for about $5.3 billion.
Other data integration vendors jumped on the news, too. According to Alteryx, the buyout represents a “generational shift” away from the “old guard” and toward solutions that are more self-service and agile in nature, such as those from Alteryx, ClearStory Data, Trifacta, and Paxata.
In particular, Alteryx points to a 2015 William Blair report that finds “a significant inflection” in demand for newer data prep and analytics tools, which the company took as validation that the “secular shift to self-service analytics” is building momentum. Sixty percent of net new BI investment will go toward next-generation solutions in 2015, William Blair found.
“We are witnessing a generational shift in the enterprise software market. Informatica delivered good information management software… but only to the few,” says Alteryx president and COO George Mathew. “Our focus is empowering the millions of data analysts that have been disenfranchised by the previous generation of tools.”
The folks at Talend, which also provides data integration and ETL tools, read similar signs in the tea leaves following Informatica’s buyout.
“What we’re seeing is a once-in-a-generation redefinition of the entire data-management stack,” Talend CEO Mike Tuchen told eWeek. “A growing number of companies are migrating away from legacy, premise-based integration software sold on a license basis to more agile and modern solutions optimized for Hadoop and big data, open source, and the cloud.”
Clearly there’s something in the water when it comes to data integration. Many companies still rely on ETL tools like Informatica’s flagship PowerCenter (and similar offerings from IBM and Oracle, which collectively have tens of thousands of users) to move data en masse from row-and-column relational databases into analytical data warehouses. Those systems will exist until their useful lives run out.
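To make the contrast concrete, here is a minimal sketch of the batch ETL pattern those tools embody: extract rows from an operational database, transform them into an analytical shape, and load the result into a warehouse. It uses Python’s built-in sqlite3 module as a stand-in for both systems, and the table and column names are hypothetical.

import sqlite3

# Stand-ins for the operational source database and the analytical
# warehouse. In a PowerCenter-style pipeline these would be separate
# systems; sqlite3 keeps the sketch self-contained and runnable.
source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

# Hypothetical row-and-column source data.
source.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
source.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 120.0), (2, "west", 75.5), (3, "east", 30.0)],
)

# Extract: pull the raw rows out of the operational system.
rows = source.execute("SELECT region, amount FROM orders").fetchall()

# Transform: aggregate the rows into the shape the warehouse expects.
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# Load: write the reshaped data into the analytical store.
warehouse.execute("CREATE TABLE sales_by_region (region TEXT, total REAL)")
warehouse.executemany("INSERT INTO sales_by_region VALUES (?, ?)", list(totals.items()))

print(warehouse.execute("SELECT * FROM sales_by_region").fetchall())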
But moving forward, the action will be in efficiently moving less-structured data from all sorts of sources into Hadoop or cloud-based repositories. It’s all about data “gravity,” according to SnapLogic’s Dhillon.
“Because data has gravity, you should make the choice of the deployment you want,” Dhillon said in a recent Diginomica interview. “We believe this is really a ‘connective tissue’ problem. You shouldn’t have to change the data load, or use multiple integration platforms, if you are connecting SaaS apps, or if you are connecting an analytics subsystem. It’s just data momentum. You have larger, massive data containers, sometimes moving more slowly into the data lake. In the cloud connection scenario, you have lots of small containers coming in very quickly. The right product should let you do both.”
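Dhillon’s “containers” metaphor maps onto a familiar engineering pattern: a single transformation that runs unchanged over both a massive batch bound for the data lake and a stream of small records arriving quickly from cloud apps. The Python sketch below illustrates the idea; all names are hypothetical, and it is not SnapLogic’s API.

from typing import Iterable, Iterator

def pipeline(records: Iterable[dict]) -> Iterator[dict]:
    """Apply the same transformation however the data arrives."""
    for record in records:
        yield {**record, "value": record["value"] * 2}

# Batch case: one large container moving slowly into the data lake.
big_batch = ({"id": i, "value": i} for i in range(100_000))
lake = list(pipeline(big_batch))

# Streaming case: lots of small containers coming in very quickly.
def event_stream() -> Iterator[dict]:
    for i in range(10):
        yield {"id": i, "value": i}

for processed in pipeline(event_stream()):
    print(processed)  # deliver each record as soon as it is transformed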
Related Items:
Deep Dive Into Oracle’s Emerging Big Data Stack
Startup Aims to Make Integration a ‘Snap’
Talend’s ‘Sandbox’ Aims to Speed Big Data Projects