

(winyuu/Shutterstock)
It’s that time of year again – time for predictions! Thank you for patiently waiting while Datanami compiled 2021 predictions from the assorted predictors. We’ll kick things off with predictions about a most pertinent topic: data science.
If there’s one thing that the COVID-19 pandemic in 2020 made clear, it’s that organizations are relying on data more than ever before. To get the most out of that data, shops are going to need to increase their spending on data science, argues Domino Data Lab CEO Nick Elprin.
“Organizations are making dramatic budget cuts in many areas in an effort to overcome the effects of COVID-19 and keep their business viable,” Elprin says. “Yet, in 2021 we predict that many will sustain or actually increase their investment in data science to help drive the critical business decisions that may literally make the difference between survival and liquidation.”
You will see more people with the title of chief data scientist (CDS), says Ira Cohen, who is the co-founder and (naturally) CDS at Anodot. In fact, Cohen says that, by 2022, 90% of large global companies will have a CDS in place. CDSs will also allocate their time differently in 2021. “Fifty percent will be more focused on value creation and revenue generation while 28% will focus on cost savings and 22% on risk mitigation,” he says.
Josh Patterson, the senior director of RAPIDS engineering at Nvidia, says 2021 will bring empowerment to data scientists.
“For too long, enterprise data scientists have been relegated to sampling data or only pre-production development. People with titles such as data engineer and machine learning engineer are the ones who scale workflows into production, often translating code from Python to Java,” Patterson says. In 2021, “data scientists will be able to process massive amounts of data quickly, drastically reducing the need to have code translators.”
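Patterson’s point is about raw throughput: RAPIDS’ cuDF library exposes a pandas-like API that runs on the GPU, which is the kind of tooling that lets a data scientist skip the sampling step. Below is a minimal sketch of what that looks like; the file path and column names are hypothetical, not taken from any particular workload.

```python
import cudf  # RAPIDS GPU DataFrame library

# Load a large CSV straight into GPU memory -- no sampling step required
events = cudf.read_csv("events.csv")  # hypothetical file

# Familiar pandas-style operations, executed on the GPU
top_customers = (
    events[events["amount"] > 0]
    .groupby("customer_id")["amount"]
    .sum()
    .sort_values(ascending=False)
    .head(10)
)

# Convert back to a CPU pandas object only when a downstream tool needs it
print(top_customers.to_pandas())
```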
Alan Jacobson, chief data and analytics officer at Alteryx, is bullish on the potential to upskill data analysts into full-blown data scientists.
“While it is always important for companies to offer training to employees, the fields of data science and digital transformation are challenging companies to break the mold and deliver new and constantly evolving ways to upskill and deliver ROI,” Jacobson says. “Data science has evolved to the point where people don’t need to go back to college to learn. They’ll learn on the job or while at home by encountering new tools and technologies. And with a huge shortage of those with analytic skills, many will start new jobs and careers based on the new skills.”
Tracking changes in data generated by SaaS-based business applications will be the feedstock for more intelligent AI and machine learning, says Joe Gaska, CEO of GRAX.
“Organizations with a focus on artificial intelligence and machine learning will continue to hunger for meaningful training datasets that can be fed into their ML algorithms to spot cause-and-effect change patterns over time,” Gaska says. “To do this, they will turn to their ever-changing datasets in 3rd party cloud/SaaS applications as inputs into these algorithms. This will create pressure for them to capture and ingest every single change in that data over time into their DataOps ecosystem.”
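Reduced to its simplest form, the idea Gaska describes is to keep an append-only trail of every change rather than overwriting current state, so the full history is available later as training data. The sketch below is only a generic illustration of that pattern; the record IDs and field names are hypothetical and it is not GRAX’s API.

```python
from datetime import datetime, timezone

history = []  # append-only change log, e.g. destined for a data lake


def capture_change(record_id: str, field: str, old, new):
    """Record a single field change instead of overwriting the old value."""
    history.append({
        "record_id": record_id,
        "field": field,
        "old_value": old,
        "new_value": new,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    })


# Hypothetical SaaS record changing over time
capture_change("opportunity-42", "stage", "Prospecting", "Negotiation")
capture_change("opportunity-42", "amount", 10_000, 12_500)
print(history)
```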
Rachel Roumeliotis, vice president of AI and data content at O’Reilly Media, says machine learning operations, or MLOps, will be important in 2021, as organizations look to connect the last mile in data science.
“ML presents a problem for CI/CD for several reasons,” she writes. “The data that powers ML applications is as important as code, making version control difficult; outputs are probabilistic rather than deterministic, making testing difficult; training a model is processor intensive and time consuming, making rapid build/deploy cycles difficult. None of these problems are unsolvable, but developing solutions will require substantial effort over the coming years.”
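One common way teams work around the probabilistic-output problem Roumeliotis mentions is to test against a tolerance or threshold rather than an exact value. Here is a minimal sketch using scikit-learn with a pytest-style assertion; the 0.90 accuracy floor is an arbitrary illustration, not a recommended standard.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_and_score(seed: int = 0) -> float:
    """Train a small classifier on a fixed dataset and return test accuracy."""
    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model.score(X_te, y_te)


def test_model_meets_accuracy_floor():
    # Threshold-based assertion instead of an exact-match check,
    # since retraining will not reproduce identical numbers
    assert train_and_score() >= 0.90
```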

Data scientists will be in high demand in 2021, if prognostications are correct (Sergey Nivens/Shutterstock)
Instead of spending time and money building machine learning systems, in 2021, organizations will take a big step forward in terms of using ML systems, says Clemens Mewald, director of product management for machine learning and data science at Databricks.
“In the future, we’ll see enterprise customers moving away from building their own machine learning platforms, recognizing that it’s not their core competency,” Mewald says. “They’ll realize that more value comes from applying ML to business problems versus spending the time to build and maintain the tools themselves.”
We’re still a ways from having intelligent AI robots walking among us. But the conjunction of neuroscience and data science is a rich playground for new ideas, says Biju Dominic, the chief evangelist at Fractal Analytics and chairman at FinalMile Consulting.
“As AI makes rapid strides into unsupervised learning, one-shot learning, and artificial general intelligence, the field will seek inspiration and validation from system neuroscience and computational neuroscience,” Dominic says. “The interaction between the fields of AI and neuroscience will help the rapid growth of both these fields of knowledge.”
Here is a very specific data science prediction from James Bednar, senior manager of technical consulting at Anaconda: Python data visualization libraries will synch up.
“We’re finally starting to see Python data visualization libraries work together, and this work will continue in 2021,” Bednar says. “Python has had some really great visualization libraries for years, but there has been a lot of variety and confusion that make it difficult for users to choose appropriate tools. Developers at many different organizations have been working to integrate Anaconda-developed capabilities like Datashader’s server-side big data rendering and HoloViews’ linked brushing into a wide variety of plotting libraries, making more power available to a wider user base and reducing duplication of efforts. Ongoing work will further aid this synchronization in 2021 and beyond.”
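For readers unfamiliar with the pieces Bednar names: Datashader rasterizes large datasets server-side so only an image reaches the browser, and HoloViews’ link_selections ties brushing together across plots. The rough sketch below uses synthetic data and assumes a Bokeh backend; it is an illustration of the pattern, not an excerpt from any library’s documentation.

```python
import numpy as np
import pandas as pd
import holoviews as hv
from holoviews.operation.datashader import rasterize

hv.extension("bokeh")

# One million synthetic points -- more than a browser should plot directly
n = 1_000_000
df = pd.DataFrame({
    "x": np.random.standard_normal(n),
    "y": np.random.standard_normal(n),
    "z": np.random.standard_normal(n),
})

xy = hv.Points(df, kdims=["x", "y"])
xz = hv.Points(df, kdims=["x", "z"])

# rasterize() aggregates the points server-side; link_selections links
# brushing between the two rasterized views
linked = hv.link_selections.instance()
layout = linked(rasterize(xy) + rasterize(xz))
layout
```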
Which is more valuable: metadata or the data itself? The answer from Petteri Vainikka, the president of product marketing for Cognite, may surprise you.
“As the cost and value of data storage continue to gravitate towards zero, and data science teams simultaneously scramble to convert their existing data warehouses and data lakes into business value, the mountain of evidence pointing to ‘no correlation’ between volume and value of data keeps growing,” Vainikka says. “Whether through manual tagging of images, AI-driven data set matching to uncover data relationships, or OCR/NLP methods to convert unstructured data into structured data, the focus and value of metadata will exceed that of the data itself. Data contextualization will be at the centre of metadata curation.”
Big problems require big tools to solve, and that will be how data science differentiates itself in 2021, says Alicia Frame, lead data science product manager at Neo4j.
“With the computational power to crunch data getting cheaper and easier to access, and serverless technology making it easier to develop and deploy code, we’ll see data scientists getting back to focusing on the basics: solving big problems more effectively than anyone else,” she says.
Among data science practitioners, there will be a strong emphasis on the possibilities of feature engineering in 2021, predicts Ryohei Fujimaki, Ph.D., founder and CEO of dotData.
“While predictions are one of the most valuable outcomes, AI and ML must produce actionable insights beyond predictions that businesses can consume,” Fujimaki says. “AutoML 2.0 automates hypothesis generation (a.k.a. feature engineering) and explores thousands or even millions of hypothesis patterns that were never possible with the traditional manual process. AutoML 2.0 platforms that provide for automated discovery and engineering of data ‘features’ will be used to provide more clarity, transparency and insights as businesses realize that data features are not just suited for predictive analytics, but can also provide invaluable insights into past trends, events and information that adds value to the business by allowing businesses to discover the ‘unknown unknowns’: trends and data patterns that are important, but that no one had suspected would be true.”
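dotData’s AutoML 2.0 is a proprietary platform, so the snippet below is only a generic, hand-rolled illustration of what automated hypothesis (feature) enumeration can look like: deriving many candidate features from a raw transactions table for a model to rank later. The table and column names are hypothetical.

```python
import pandas as pd

# Hypothetical raw transactions table
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [20.0, 35.0, 10.0, 80.0, 5.0],
    "days_ago": [3, 40, 7, 90, 1],
})

# Enumerate simple candidate features ("hypotheses") per customer
candidates = transactions.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    mean_spend=("amount", "mean"),
    max_spend=("amount", "max"),
    recent_txn_count=("days_ago", lambda d: (d <= 30).sum()),
)
print(candidates)
```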
Stay tuned for our next batch of 2021 predictions, on advanced analytics.
Related Items:
2020: A Big Data Year in Review
Big Data Predictions: What 2020 Will Bring