

Data lakes are synonymous with Hadoop to many people grappling with the promise and the peril of big data. That’s not surprising, considering Hadoop’s unparalleled capability to gobble up petabytes of messy data. But for Barry Zane and other folks at Cambridge Semantics, data lakes are taking on a decidedly graph-like appearance.
Cambridge Semantics, which acquired Zane’s latest startup SPARQL City earlier this year, is beginning to talk about its concept of the smart data lake. The data lake concept is a well-worn one by now. The “smart” part, you may have guessed, owes to the semantic aspect of how the data is stored, how it’s connected to other data in the lake, and the way it impacts how people can extract meaningful information from it.
To Zane’s way of thinking, those who can get the most insights with the least amount of effort have an advantage. Of course, this has always been the case. But the telling part is that Zane, who was founder and CTO of ParAccel (acquired by Actian) and a co-founder and VP of architecture at Netezza (acquired by IBM), sees graph databases and graph analytic technology as the best way to get there for at least the next 10 years.
“We strongly believe that this is an extremely effective approach, a future-proof approach,” Zane tells Datanami. “Just as Hadoop basically came of maturity because relational just wasn’t able to work with a certain class of question and wasn’t able to work at a certain scale, we pursue those classes of questions and scale using the graph standards, at an incredible cost and performance advantage, as compared to hiring programmers for every question and analytic you want to perform.”
From Relational to Graph

Barry Zane, vice president of engineering for Cambridge Semantics
Zane, who is Cambridge Semantics vice president of engineering, sees graph databases such as the Anzo Graph Query Engine as a natural evolution from relational databases, which he says have developed some pretty powerful analytic capabilities themselves over the past 40 years.
“Without a doubt what we’re doing is educated by learning from Netezza, educated from learning from ParAccel. So I really see it as just an evolution,” Zane says. “The difference is you’re able to ask more interesting questions of your data. You’re able to find relationships that are otherwise nearly impossible to find.”
The core problem with relational database technologies, even the massively parallel processing (MPP) technologies that he championed at ParAccel (which powers Amazon’s Redshift data warehousing service) and Netezza (which IBM has renamed into something that nobody can ever remember), is the difficulty of performing advanced analytics on them, and the length of time it takes to get answers back.
“Being a longtime relational guy, one of the great things about the relational database is that you don’t need to be a programmer. You’re able to work with the database through either a set of application layer tools or in the SQL language,” he says.
“The best way to think of SPARQL and RDF is that they’re just the next evolution of relational database SQL,” he continues. “That’s the way I think about it, and that’s what got me excited, because you can have people who are not super highly trained programmers be able to pose queries of the data in a matter of minutes or hours and get back responses in a matter of seconds or minutes, as opposed to hiring very highly trained and expensive programmers for any given query.”
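To see why Zane compares SPARQL and RDF to SQL, it helps to know that RDF stores every fact as a (subject, predicate, object) triple, and a SPARQL query is essentially a pattern over those triples with variables in some positions. The toy matcher below is a minimal sketch of that idea in plain Python, not Cambridge Semantics’ actual Anzo engine, and the example data is invented:

```python
# Toy illustration of the RDF triple model behind SPARQL: every fact is a
# (subject, predicate, object) triple, and a query is a triple pattern in
# which strings starting with "?" are variables to be bound.
def match(triples, pattern):
    """Return one dict of variable bindings per triple matching the pattern."""
    results = []
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val        # variable position: bind it
            elif pat != val:
                break                     # constant position: must match exactly
        else:
            results.append(binding)
    return results

triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("bob", "worksAt", "acme"),
]

# SPARQL-style question: who does alice know?
print(match(triples, ("alice", "knows", "?who")))  # [{'?who': 'bob'}]
```

A real SPARQL engine joins many such patterns at once (alice knows ?friend, ?friend knows ?fof, and so on), which is where the relationship-finding power Zane describes comes from.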
Graph As Oracle
Zane sees graph databases giving us oracle-like powers to start with one set of facts and to drill outward to ask innumerable other questions from huge and connected data sets. That kind of power has never been available on a widespread basis, but graph is quickly making it possible, and having an impact in multiple industries.
For example, say you’re a retailer selling sweaters, and you want to know how many sweaters to stock. A graph database can let you easily add other data sets, such as weather forecasts or social network data, which can let you spot trends and adapt to changing demand.
“Graph databases, and the technology behind graph databases and graph analysis, is all about working with that kind of stuff and being able to add in additional graphs of information, like demographics, weather, geographic information, and so forth,” Zane says. “It’s obviously very relevant in the life sciences space, where you might be relating genetic aspects to drug effectiveness to drug marketing, clinical trials and so forth. Likewise in financial services around trades. In national security [it helps to] find who the bad guys are.”
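Zane’s point about layering in additional graphs, weather, demographics, and the like, follows from the same triple model: because every dataset shares one (subject, predicate, object) shape, independently built datasets merge by simple set union, and a query can then join across both. The sketch below continues the retailer example with entirely made-up names (`sales`, `weather`, the stores and predicates are illustrative, not any real schema):

```python
# Hypothetical illustration: two independently built datasets share the
# (subject, predicate, object) triple shape, so merging them is a set union.
sales = {
    ("store1", "sold", "sweaters"),
    ("store1", "locatedIn", "boston"),
    ("store2", "locatedIn", "miami"),
}
weather = {
    ("boston", "forecast", "cold-snap"),
}

graph = sales | weather  # "adding in additional graphs of information"

# Cross-dataset join: which stores should stock extra sweaters because
# their city has a cold snap in the forecast?
stock_up = {
    store
    for store, p1, city in graph if p1 == "locatedIn"
    for c, p2, f in graph if p2 == "forecast" and c == city and f == "cold-snap"
}
print(stock_up)  # {'store1'}
```

In a production graph database the union and the join are handled by the engine at scale, but the principle, that new datasets connect without schema migrations, is the same.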
Data Rich, Insight Poor
Most of us are data rich these days, but insight poor, says Cambridge Semantics vice president of marketing John Rueter.
“The explosion of data is causing a great deal of pain to organizations,” he says. “Most organizations have been very good at collecting and storing information, but really have not done a very good job of making sense of that data and then being able to perform analytics on top of it.”
End users who are accustomed to having practically limitless amounts of data available to them will eventually come to depend on the capability of graph analytics to navigate it and make sense of it, he says.
“Everybody thought that big data would make everyone’s job easier, when in fact we know it’s made everybody’s job a lot harder,” he says. “End users are demanding and asking for the ability to have interactive data they can work with and go beyond just a traditional query, which almost goes in linear fashion, whereas here with the graph technology, you’re able to traverse all of the data and on a dime spin and ask new questions…It mimics the way we think and the way we want to ask questions of our data.”
Product Positioning
While Hadoop-based data lakes compete on some level with Cambridge Semantics’ graph offering, the Anzo Graph Query Engine, the relationship is mostly complementary. In many instances, HDFS will serve as the repository for unstructured data sets before they’re loaded into the in-memory graph database.
The marriage of Cambridge Semantics and SPARQL City makes a lot of sense when viewed through a technological lens. SPARQL City provided an in-memory graph database that could scale to great heights, while Cambridge Semantics provided the tooling that made it more useful.
“As a standalone company, SPARQL City had a great massively parallel database architecture, and likewise Cambridge Semantics has a great architecture and product for doing knowledge and data management and the associated visualizations, ETL, and so forth,” Zane says. “So it was just plain very natural that we combined. That way as a single company we could provide the entire stack.”
Yesterday Cambridge Semantics announced that customers can now buy the various big data products, including the Anzo Graph Query Engine and Anzo Smart Data Manager, as stand-alone products. Users can also buy them as part of Cambridge Semantics’ Anzo Smart Data Lake offering.
Related Items:
Cambridge Semantics Buys Graph Database Specialist
The Bright Future of Semantic Graphs and Big Connected Data
Hadoop, Triple Stores, and the Semantic Data Lake