Self-Driving Cars vs. Coding Copilots
Back in the mid-2010s, the world of autonomous vehicles was making great progress, and it seemed we would soon be ushered around in cars that drove themselves, leaving us free to spend our time how we wanted. That obviously hasn't happened. Instead, we've been treated to a form of AI we weren't expecting: generative AI-powered copilots.
Following the launch of ChatGPT in late 2022, the world of generative AI has been on a tear. Every company seems to be investing in large language models (LLMs) to build one of the two most visible forms of GenAI: chatbots and copilots.
AI copilots, in particular, seem to be having a moment. Software developers and data engineers are being inundated with AI assistants that can understand code, write code, and even convert code from one language to another, giving them a potentially large boost in personal productivity.
Microsoft, which spearheaded the copilot trend with development partner OpenAI through the 2021 launch of GitHub Copilot, updated its copilot offerings at its Build conference yesterday. In addition to further embedding Copilot across its Office 365 estate, the software giant is now offering the capability for Copilot to record every action you take on your Windows PC.
Even IBM is getting into the copilot act. A year after launching a watsonx copilot for its venerable System Z mainframe that can convert COBOL to Java, Big Blue execs unveiled plans this week for another watsonx-powered copilot to help developers working with its EBCDIC brother-in-crime, the Power-based IBM i server (formerly AS/400).
Now, nobody ever accused COBOL and RPG developers of being on the cutting edge of tech. After all, they've dedicated their professional lives to maintaining systems that numerous people have (wrongly) predicted would go the way of the dodo. Yet even these mainframe and midrange professionals can't resist the productivity lure of the LLM-backed coding copilot.
It's somewhat ironic that the one form of AI we were promised was just around the corner, the widespread use of self-driving cars, hasn't come to pass. Merging autonomous vehicles into the real world has turned out to be a much tougher problem to solve than was first envisioned.
"There's a lot of analogies between autonomous vehicles and generative AI," says Varun Mohan, who previously worked on self-driving cars at Nuro before founding Codeium, which is developing an AI assistant for developers that competes with Microsoft Copilot.
“In 2015, everyone was like, self driving is closer than we believe,” Mohan says. “But on the other hand, the technology is getting markedly better year over year, even though there’s a lot of promises that didn’t come to be.”
Solving the "pixels to torque" problem is hard. It requires making life-or-death decisions about moving humans, surrounded by thousands of pounds of glass, rubber, and metal, through the world, based entirely on fused data signals from an array of sensors. There are only four outputs: turn the steering wheel left or right, apply the accelerator, or apply the brake. But figuring out how to apply them in reaction to a multitude of inputs isn't straightforward.
But as complex as teaching a car to drive itself is, the potential action space for a software developer is exponentially bigger.
"Think about the distribution of things that a software developer does versus a car," Mohan says. "There aren't a lot of different things available to you as a driver. Granted, you're negotiating conflict scenarios and all these other things, but you could imagine the set of things you do writing software is significantly larger."
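To put the contrast in concrete terms, here is a minimal, purely illustrative Python sketch (not any vendor's real control stack or copilot API): a toy driving policy maps a couple of fused sensor signals onto the four control outputs Mohan describes, while the code-suggestion stub hints at an output space that is essentially unbounded.

```python
from dataclasses import dataclass

# A self-driving car ultimately reduces every decision to a tiny, fixed
# action space: steer left or right, press the accelerator, or brake.
@dataclass
class VehicleAction:
    steering: float   # -1.0 (full left) .. +1.0 (full right)
    throttle: float   # 0.0 .. 1.0
    brake: float      # 0.0 .. 1.0

def drive(obstacle_distance_m: float, lane_offset_m: float) -> VehicleAction:
    """Toy policy: map two fused sensor signals to the four control outputs.
    The difficulty lies entirely in the mapping, not in the size of the output."""
    if obstacle_distance_m < 10.0:
        return VehicleAction(steering=0.0, throttle=0.0, brake=1.0)
    return VehicleAction(steering=-0.1 * lane_offset_m, throttle=0.3, brake=0.0)

# A coding assistant, by contrast, has an effectively unbounded action space:
# any token sequence, touching any file, in any language.
def suggest_edit(repo_context: str, developer_intent: str) -> str:
    """Placeholder: a real copilot could return new functions, refactors,
    cross-file changes, or a rewrite in a different language."""
    return f"# TODO: implement '{developer_intent}' given {len(repo_context)} chars of context"

print(drive(obstacle_distance_m=8.0, lane_offset_m=0.5))
print(suggest_edit(repo_context="def add(a, b): ...", developer_intent="add type hints"))
```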
Despite the seemingly larger inherent complexity of developing software, it is software development at which AI is currently excelling.
“I would say automating developers is providing a tremendous amount of leverage for software developers,” he said. “But I think that the tail end of solving this problem is a significantly harder problem than autonomous vehicles, in my mind.”
GenAI for data engineering also shows great promise. Informatica just announced the general availability of its LLM-powered GenAI product, called CLAIRE GPT, across its entire product set. And Matillion, which develops ELT and data integration tech, is also building GenAI into its offerings.
The Matillion copilot can create SQL scripts at the direction of a data engineer. What makes it so powerful is that it doesn't change how the data engineer works, says Ciaran Dynes, the company's chief product officer.
“The beauty of this thing is that I didn’t change your working habits at all,” he said. “I just made you faster. And that’s what’s game changing about this technology. It’s that you don’t have to learn AI to make it useful in your business. You could just apply it to an existing business process. It just changes the freaking game. I think we’re on the cusp of something huge here.”
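To illustrate the kind of exchange Dynes is describing, here is a hypothetical example (the prompt, variable names, and table schema are invented for illustration and are not Matillion's actual interface): the engineer states an intent in plain English and gets back SQL to review and drop into an existing pipeline, with the workflow otherwise unchanged.

```python
# Hypothetical example: the request wording and the orders/customers schema
# are illustrative only, not Matillion's product or a real customer's data.
request = "Total 2024 revenue by customer region, highest first"

# The kind of SQL a copilot might hand back for the engineer to review,
# exactly as they would review a script written by a colleague.
generated_sql = """
SELECT c.region,
       SUM(o.amount) AS total_revenue
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.order_date >= '2024-01-01'
  AND  o.order_date <  '2025-01-01'
GROUP BY c.region
ORDER BY total_revenue DESC;
"""

print(f"Engineer asked: {request}")
print(generated_sql)
```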
That’s not to say that self-driving cars will never come to pass. Advances keep coming in the fields of computer vision and sensor fusion. Self-driving cars continue to be deployed in limited tests, and those tests are showing promise.
But in the meantime, it's language AI's time to shine. Any form of text-based communication, any type of text-based computer interface, is fair game for AI developers looking to apply the power of LLMs to automate and replicate it. That bodes well for a huge range of applications: not just customer service agents and software engineers, but journalists, stockbrokers, lawyers, and government workers too.
The future for GenAI is extremely bright, and the pace of development will only increase in the years to come, Mohan predicts.
“Generative AI has the capability to do very crazy things in the future,” he said. “I think we will overestimate what is possible over the next year and tremendously underestimate what happens over the next five years.”
Related Items:
Has Codeium Cracked the Code for AI Assistants?
Matillion Bringing AI to Data Pipelines
Informatica CEO: Good Data Management Not Optional for AI