Datanami’s Look Back on Key Topics: 2011–2021
Check out our special Decade of Datanami series, which chronicles the major advances in the field of big data from 2011 to 2021. From the rise of Apache Spark to the rapid expansion of the cloud, Datanami has been there to provide you with the dependable coverage you need to build out your big data operation. It hasn’t always been a smooth ride, but it’s been an eventful 10 years. We invite you to take a look at the top headlines of the past decade, listed chronologically below. And as always, stay tuned to Datanami for breaking news and analysis.
2020 – COVID-19 — Kicking Digital Transformation Into Overdrive
Earth threw humanity a curveball in early 2020 with the emergence of the ultra-contagious SARS-CoV-2 virus from China and its spread around the world. Over the ensuing months, hundreds of millions of people would be infected with COVID-19, millions would die, and billions more would be impacted by unprecedented economic shutdowns imposed by governments in an attempt to stop the spread. Read more…
2019 – DataOps: A Return to Data Engineering
When the big data boom fizzled out around 2016, AI was there to carry the torch forward. But by 2019, the shine was starting to wear off AI. The culprit? Bad data, as usual.
That’s not to say that 2019 marked a return to big data. Read more…
2018 – GDPR and the Big Data Backlash
If the first seven years of Datanami’s existence were primarily about the rapidly expanding use cases for big data, advanced analytics, and AI, then the eighth year marked the first major pullback in the use of these technologies. Read more…
2017 – AI, Deep Learning, and GPUs
Around the year 2017, something funny happened: People no longer talked as much about big data. Indeed, Gartner had already dropped “big data” off its hype curve. The idea of collecting loads of data on everything we do had become pervasive, so much so that Deloitte told us that data had become “like air.”
Absent a centralizing idea or rallying cry, the community formerly known as big data eventually settled on something else: Read more…
2016 – Clouds, Clouds Everywhere
We are living in a “cloud first” world today in 2021. But roll the clock back 10 years, and that definitely wasn’t the case. When did things really begin to move the cloud’s way? In our book, the change roughly occurred sometime around 2016, give or take. Read more…
2015 – Spark Takes the Big Data World by Storm
When UC Berkeley’s AMPLab released Spark as an open source product back in 2010, nobody could have foreseen the huge impact that it would have on the big data ecosystem – an impact that continues to this day.
The original idea of Spark’s creator, Matei Zaharia, was to build a better and faster version of MapReduce, which at that time was the main execution engine in Hadoop. Read more…
2014 – NoSQL Has Its Day
For several decades starting in the late 1970s, relational database management systems (RDBMSs) were the workhorses of IT. It became a best practice for companies to store their most important structured data, such as customer names and product orders, in an RDBMS like Oracle, Db2, or SQL Server, which formed the common data foundation atop which enterprise applications ran. Read more…
2013 – The Flourishing Open Source Ecosystem
By 2013, the big data wildfire was sweeping the computing world, and open source software was the fuel helping it grow. Apache Hadoop, based upon Google technology that Doug Cutting and Mike Cafarella re-created in Java and implemented at Yahoo, was among the first and the most influential of the open source big data projects. Read more…
2012 – SSDs and the Rise of Fast Data
Hadoop-based clusters that co-located x86 processors with cheap spinning disks epitomized the JBOD (Just a Bunch Of Disks) approach to big data storage. It was a rather inelegant, brute-force approach to creating a single namespace for storing petabytes of data. Read more…