

(vladwel/Shutterstock)
Data professionals with plans to build lakehouses atop the Apache Iceberg table format have two new Iceberg services to choose from: one from Tabular, the company founded by Iceberg’s co-creator, and another from Dremio, the query engine developer holding its Subsurface 2023 conference this week.
Apache Iceberg has emerged as one of the core technologies for building a data lakehouse, in which the scalability and flexibility of data lakes are merged with the data governance, predictability, and proper SQL behavior associated with traditional data warehouses.
Originally created by engineers at Netflix and Apple to deal with data consistency issues in Hadoop clusters, among other problems, Iceberg is emerging as a de facto data storage standard for open data lakehouses that work with a wide range of analytics engines, including open source offerings like Trino, Presto, Dremio, Spark, and Flink, as well as commercial offerings from Snowflake, Starburst, Google Cloud, and AWS.
Ryan Blue, who co-created Iceberg while at Netflix, founded Tabular in 2021 to build a cloud storage service around the Iceberg core. Tabular has been in private beta for a while, but today the company announced that its Iceberg service is open for business.
According to Blue, the new Tabular service basically works as a universal table store running in AWS. “It manages Iceberg tables in a customer’s S3 bucket and allows you to connect up any of the compute engines that you want to use with that data,” he says. “It comes with the catalog you need to track what tables and metadata are there, and it comes with integrated RBAC security and access controls.”
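To make that concrete, here is a minimal sketch of what "connect up any compute engine" looks like from the Spark side, using Iceberg's standard REST catalog support. The catalog name, service URI, bucket, and credential below are placeholders for illustration, not Tabular's actual endpoints.

```python
# Hypothetical sketch: pointing Spark at an Iceberg REST catalog over S3.
# The URI, warehouse bucket, and credential are placeholders, not real endpoints.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-rest-catalog-demo")
    # Pull in the Iceberg runtime and enable Iceberg's SQL extensions
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a catalog named "lake" that talks to a REST catalog service
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "rest")
    .config("spark.sql.catalog.lake.uri", "https://catalog.example.com/")    # placeholder
    .config("spark.sql.catalog.lake.warehouse", "s3://my-bucket/warehouse")  # placeholder
    .config("spark.sql.catalog.lake.credential", "<client-id>:<secret>")     # placeholder
    .getOrCreate()
)

# Any engine configured against the same catalog sees the same tables.
spark.sql("SELECT * FROM lake.analytics.events LIMIT 10").show()
```

The same catalog settings, repeated in Trino, Flink, or another engine, are what let multiple engines share one set of tables.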
In addition to bulk and streaming data load options, Tabular provides automated management tasks for maintaining the lakehouse going forward, including compaction. According to Blue, Tabular’s compaction routines can shrink the size of customers’ Parquet files by up to 50%.
“Iceberg was the foundation for all of this and now we’re just building on top of that foundation,” says Blue, a Datanami 2022 Person to Watch. “It’s a matter of being able to detect that someone wrote 1,000 small files and clean them up for them if they’re using our compaction service, rather than relying on people, data engineers in particular, who are expected to not write a thousand small files into a table, or not write pipelines that are wasteful.”
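For comparison, this is roughly what that maintenance looks like when a data engineer runs it by hand: Iceberg ships a Spark procedure, rewrite_data_files, that compacts many small files into fewer large ones. The catalog and table names are illustrative; Tabular's pitch is that its service detects the small files and runs this kind of cleanup automatically.

```python
# Illustrative only: manual compaction with Iceberg's built-in Spark procedure.
# A managed service would schedule this after detecting many small files.
spark.sql("""
    CALL lake.system.rewrite_data_files(
        table   => 'analytics.events',
        options => map('target-file-size-bytes', '536870912')
    )
""")
```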
Tabular built its own metastore, sometimes called a catalog, which is necessary for tracking the metadata used by the various underlying compute engines. Tabular’s metastore is based on a distributed database engine, and scales better than the Apache Hive metastore, Blue says. “We’re also targeting a lot better features than what’s provided by the Hive metastore or wire-compatible Hive metastores like Glue,” he says.
Tabular’s service will also protect against the ramifications of accidentally dropping a table from the lakehouse. “It’s really easy to be in the wrong database, to drop a table, and then realize, uh oh, I’m going to break a production pipeline with what I just did!” Blue says. “How do I quickly go and restore that? Well, there is no way in the Hive metastore to quickly restore a table that you’ve dropped. What we’ve done is we’ve built a way to just keep track of dropped tables and clean them up… That way, you can go and undrop a table.”
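Stock Iceberg has no undrop command, but the general idea can be sketched: as long as the dropped table's last metadata file is retained rather than purged, a catalog can re-attach it. Iceberg's register_table Spark procedure does exactly that. This is a hedged illustration of the concept, not Tabular's implementation, and the metadata path is a placeholder.

```python
# Sketch of the general idea, not Tabular's mechanism: if the dropped table's
# last metadata file still exists in S3, re-register it with the catalog.
spark.sql("""
    CALL lake.system.register_table(
        table         => 'analytics.events',
        metadata_file => 's3://my-bucket/warehouse/analytics/events/metadata/v42.metadata.json'
    )
""")
```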
Blue, who spoke today during Dremio’s Subsurface event and timed the launch of Tabular to the event, describes Tabular as the bottom half of a data warehouse. Users get to decide for themselves what analytical engine or engines they use to populate the upper half of the warehouse, or lakehouse.
“We’re purposefully going after the storage side of the data warehouse rather than the compute side, because there’s a lot of great compute engines out there. There’s Trino, Snowflake, Spark, Dremio, Cloudera’s suite of tools. There’s a lot of things that are good at various pieces of this. We want all of those to be able to interoperate with one central repository of tables that make up your analytical data sets. We don’t want to provide any one of those. And we actually think it’s important that we separate the compute from the storage at the vendor level.”
Users can get started with the Tabular service for free and can keep using it until they hit the 1TB limit. Blue says that should give testers enough time to familiarize themselves with the service, see how it works with their data, and “fall in love” with the product. “Up to 1TB we’re managing for free,” he says. “Once you get there, we have base, professional, and enterprise plans.”
Tabular is available only on AWS today. For more information see www.tabular.io and Blue’s blog post from today.
Dremio Discusses Arctic
Meanwhile, Dremio is also embracing Iceberg as a core component of its data stack, and today during the first day of its Subsurface 2023 conference, it discussed a new Iceberg-based offering dubbed Dremio Arctic.
Arctic is a data storage offering from Dremio that’s built atop Iceberg and available on AWS. The offering brings its own metadata catalog that works with an array of analytic engines, including Dremio, Spark, and Presto, along with automated routines for cleaning up, or “vacuuming,” Iceberg tables.
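In plain Iceberg, that housekeeping maps to a pair of Spark maintenance procedures, expire_snapshots and remove_orphan_files; Arctic's pitch is automating them. The catalog and table names below are illustrative.

```python
# Illustrative manual equivalents of the "vacuuming" Arctic automates.
# Drop snapshot metadata older than the retention window...
spark.sql("""
    CALL lake.system.expire_snapshots(
        table      => 'analytics.events',
        older_than => TIMESTAMP '2023-02-01 00:00:00'
    )
""")
# ...and delete data files that no remaining snapshot references.
spark.sql("CALL lake.system.remove_orphan_files(table => 'analytics.events')")
```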
Arctic also provides fine-grained access control and data governance, according to Tomer Shiran, Dremio’s founder and chief product officer.
“You can see exactly who changed what, in what table and when, down to the level of what SQL command changed this table in the last week,” Shiran says, “or whether there was a Spark job and what the ID is that changed the data. And you can see all the history of every single table in the system.”
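That audit trail builds on commit metadata Iceberg already records for every table. In any engine you can query a table's history and snapshots metadata tables to see when each change landed and which operation produced it; Arctic layers the user-, engine-, and SQL-level detail Shiran describes on top of that commit log. The catalog and table names here are illustrative.

```python
# Every Iceberg table exposes metadata tables; these queries work in stock Spark.
spark.sql("SELECT made_current_at, snapshot_id, is_current_ancestor "
          "FROM lake.analytics.events.history").show()
spark.sql("SELECT committed_at, operation, summary "
          "FROM lake.analytics.events.snapshots").show()
```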
Arctic also enables another feature that Dremio calls “data as code.” Just as Git is used to manage source code and lets developers easily roll back to previous versions, Iceberg (via Arctic) gives data professionals the same kind of version control over their data.
Shiran says he’s very excited about the potential for data as code within Arctic. He says there are a variety of obvious use cases for treating data as code, including ensuring the quality of ETL pipelines by using “branching;” enabling experimentation by data scientists and analysts; delivering reproducibility for data science models; recovering from mistakes; and troubleshooting.
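Arctic's catalog is based on Project Nessie, which exposes Git-like branches and merges through Spark SQL extensions. The following is a hedged sketch of the branch-based ETL pattern Shiran describes; the catalog name "arctic" and the source table are placeholders, and the exact SQL syntax varies by Nessie version.

```python
# Hedged sketch of branch-based ETL with a Nessie-backed catalog
# (syntax per Nessie's Spark SQL extensions; details vary by version).
spark.sql("CREATE BRANCH IF NOT EXISTS etl IN arctic FROM main")  # fork the lakehouse state
spark.sql("USE REFERENCE etl IN arctic")                          # writes now land on the branch
spark.sql("INSERT INTO arctic.analytics.events "
          "SELECT * FROM arctic.staging.events")                  # placeholder source table
# After validation, publish the branch's changes atomically back to main.
spark.sql("MERGE BRANCH etl INTO main IN arctic")
```

Until the merge, queries against main never see the in-flight data, which is what makes the pipeline-quality and experimentation use cases work.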
“At Dremio, in terms of our product and technology, we’ve worked very hard to make Apache Iceberg easy,” Shiran says. “You don’t really need to understand any of the technology.”
Subsurface 2023 continues on Thursday, March 2. Registration is free at www.dremio.com/subsurface/live/winter2023.
Related Items:
Open Table Formats Square Off in Lakehouse Data Smackdown
Snowflake, AWS Warm Up to Apache Iceberg
Apache Iceberg: The Hub of an Emerging Data Service Ecosystem?