
What’s the Future of Machine Learning in 2023 and Beyond?

Published on: Oct 11, 2022
Last Updated on: Apr 1, 2023
By: Editorial Staff

There’s no doubt that machine learning is already a powerful force driving productivity, efficiency, and innovation across almost every industry. Be it high-frequency automated trading in finance, supply-chain streamlining in manufacturing, or autonomous navigation systems in consumer and military vehicles, it’s hard to overstate the impact machine learning is having. 

In the coming years, we can only expect this impact to grow: Grand View Research projects the global AI market to expand by over 40% annually through 2028, with revenues forecast to top $1.8 trillion worldwide. But what will this growth look like?

This article will lay out what experts expect for the future of machine learning: how the rise in computing needs will be matched by quantum computing and efficiency solutions, how machine learning model innovations will drive access for smaller businesses and industries, and how the future of machine learning looks for those interested in breaking into the field. This future’s coming fast, so read on now to prepare yourself.

How will innovative machine learning models be powered in the future?

While much attention has been paid in major media to the computing power and energy required to mine cryptocurrency, processing requirements for artificial intelligence and machine learning have accelerated at similarly breakneck rates. In a 2018 analysis of historical “compute” requirements, OpenAI, an AI research lab, found that the computing power required to train the largest AI models has doubled every 3.4 months since 2012. This means that the compute requirements in 2018 were over 300,000 times what they were in 2012. When this research was published, OpenAI was confident this trend would continue.
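
Those two figures are easy to sanity-check against each other with a few lines of Python (the numbers are OpenAI’s; the arithmetic here is just ours):

```python
import math

growth = 300_000          # OpenAI's reported compute increase since 2012
doubling_months = 3.4     # OpenAI's reported doubling period

doublings = math.log2(growth)                    # ~18.2 doublings
window_years = doublings * doubling_months / 12  # ~5.2 years

print(f"{doublings:.1f} doublings over roughly {window_years:.1f} years")
```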

[Figure: AI and compute, modern era (log scale). Data visualization courtesy of OpenAI.]

Strong demand for computing power means those on the vanguard of ML development are running into hardware constraints as they continue their work. The GPUs (graphics processing units, used for machine learning because they can process large amounts of data in parallel) and TPUs (tensor processing units, developed by Google specifically for neural networks) that power training at times struggle to keep up with what’s being asked of them, and massive data centers are being built to ensure enough capacity to train the newest machine learning models.
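
For working engineers this constraint is concrete: code has to ask what hardware is present before training. Here’s a minimal PyTorch sketch of the usual pattern (the toy model and tensor shapes are purely illustrative):

```python
import torch

# Use a GPU if one is available; otherwise fall back to the CPU.
# (TPUs need the separate torch_xla package, so they're omitted here.)
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 10).to(device)  # toy model moved onto the device
batch = torch.randn(32, 1024, device=device)  # a batch of dummy inputs
logits = model(batch)                         # forward pass runs on `device`
print(f"Forward pass ran on: {device}")
```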

In the future, experts expect this ever-increasing demand for computing power to be met by two key innovations: quantum computing and algorithmic efficiency.

Quantum Computing

If you ask a machine learning engineer what the biggest game changer for machine learning will be in the next decade, there’s a good chance they’ll say quantum computing. By leveraging quantum physics, specifically the complex quantum states of atomic and subatomic particles, quantum computing can perform processes simultaneously that a classical computer would have to perform one at a time. This allows for drastic improvements in computing speed and power. In one 2019 test, researchers at Google reported that their “Sycamore” quantum computer completed in just over three minutes a task that the world’s most advanced classical computer would need 10,000 years to finish.
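
You can get a feel for the idea with a toy simulation: a quantum state holds many amplitudes at once, and simulating it classically scales exponentially. A minimal numpy sketch (purely illustrative; this is simulation, not real quantum hardware):

```python
import numpy as np

# One qubit starts in state |0>; a Hadamard gate puts it into an equal
# superposition of |0> and |1>, so both outcomes coexist until measured.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)
state = H @ np.array([1.0, 0.0])     # (|0> + |1>) / sqrt(2)

print(np.abs(state) ** 2)            # measurement probabilities -> [0.5 0.5]

# The catch for classical simulation: an n-qubit register carries 2**n
# amplitudes, so tracking the full state quickly becomes intractable.
n = 50
print(f"A {n}-qubit state needs {2**n:,} amplitudes")
```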

But whether a quantum computer can outperform a classical supercomputer depends on exactly what is asked of it. While quantum computing will certainly help machine learning engineers train complex deep learning neural networks, Google researchers caution that its utility versus classical computing depends on factors like the nature of the available data. Certain kinds of data, if available, can actually give classical computing the edge over quantum methods.

Another important factor in gauging the near- and mid-term impact of quantum computing on the future of machine learning is its commercial accessibility. While tech giants like IBM and Google already offer cloud computing services that let other companies leverage the speed and power of quantum hardware, machine learning engineers will need to wait for the cost of such services to fall and their availability to grow before writing quantum-specific machine learning algorithms. Given these access issues, those writing future machine learning models might sooner benefit from innovations in how the models themselves are written.

Algorithmic Efficiency

Even as machine learning engineers develop more and more complex machine learning algorithms, advances in how those algorithms are written continue to reduce the amount of computing power required to achieve the same level of performance. In 2020, OpenAI published research showing that improvements in algorithmic efficiency allowed a neural network to be trained to the level of AlexNet, a neural network focused on computer vision, with 44x less compute than AlexNet required when it famously won the ImageNet Large Scale Visual Recognition Challenge in 2012.
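
That 44x figure implies a steady doubling rate for algorithmic efficiency, which a couple of lines of arithmetic can back out (OpenAI itself quoted roughly 16 months; the 7-year window below is our reading of their 2012-to-2019 comparison):

```python
import math

efficiency_gain = 44   # compute reduction to reach AlexNet-level accuracy
years = 7              # roughly 2012 (AlexNet) to 2019, per OpenAI's study

doublings = math.log2(efficiency_gain)        # ~5.5 doublings of efficiency
months_per_doubling = years * 12 / doublings  # ~15 months

print(f"Efficiency doubled roughly every {months_per_doubling:.0f} months")
```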

[Figure: AI and efficiency, compute required over time. Data visualization courtesy of OpenAI.]

The researchers at OpenAI are quick to caution that the efficiency trends they identified for AlexNet-level performance can’t necessarily be generalized across all of artificial intelligence, but they suggest the trends probably hold for similar cases in deep learning. At the very least, they are likely directionally accurate for how algorithmic efficiency has progressed over the past decade.

Looking to the future, nothing suggests these trends won’t continue for machine learning algorithms in the near term, but there is, of course, a ceiling to any kind of efficiency.

The takeaway? In the next several years, improvements in algorithmic efficiency will continue to let machine learning engineers develop more complex neural networks and other machine learning models without hitting the ceiling of most companies’ compute capabilities, but there will likely be certain hard limits. As you’ll see in the next section, such limits are already being anticipated through the development of new kinds of machine learning methods that improve accessibility, including federated learning and foundation models.

How will machine learning become more accessible in the future?

Machine learning has already entered the public imagination, and consumer demand for processing capability shows in Apple’s push to market computers sporting its M1 chip as machine learning-ready. This consumer-facing push reflects a broader effort to make the benefits of artificial intelligence and machine learning more accessible, both to consumers and to smaller businesses that may lack the resources to build out extensive machine learning programs, especially given OpenAI’s estimate that “the largest training runs today employ hardware that cost in the single digit millions of dollars to purchase…” At the forefront of this accessibility movement are two innovative methods for training machine learning algorithms: federated learning and foundation models.

What is federated learning?

When reading about machine learning techniques, you’ll most frequently come across the following:

  • Supervised learning: labeled data sets are used to train ML algorithms for tasks like spam filtering (see the sketch just after this list).

  • Unsupervised learning: ML algorithms train themselves by identifying patterns and relationships in unlabeled data sets for tasks like product recommendation.

  • Reinforcement learning: ML algorithms are incentivized to learn tasks like autonomous driving and manufacturing optimization using numerical rewards. 
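
To make the first of these concrete, here’s a minimal supervised-learning sketch: a toy spam filter built with scikit-learn on a handful of hand-labeled examples (the data is invented purely for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny labeled data set: 1 = spam, 0 = not spam.
texts = ["win a free prize now", "claim your free money",
         "meeting moved to 3pm", "lunch tomorrow?"]
labels = [1, 1, 0, 0]

# Turn text into word-count features, then fit a Naive Bayes classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)

# The trained model generalizes to messages it has never seen.
test = vectorizer.transform(["free prize money"])
print(model.predict(test))   # -> [1] (spam)
```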

Though these techniques differ in what and how exactly a machine learning algorithm “learns,” they have traditionally shared a reliance on data sets stored centrally on a single server.

Enter federated learning. Instead of training on a single, centralized data set, federated learning uses decentralized training data. Each member of a federated network, often a mobile device, downloads the current prediction model, trains it on its locally stored data, and uploads the resulting improvement back to the cloud, where it is combined with updates from other devices. It’s all explained in Google’s great federated learning comic.
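
The core loop is easy to sketch. Below is a stripped-down numpy version of the federated-averaging idea (one gradient step per device per round; production systems add secure aggregation, device sampling, and much more):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One device: take a gradient step on its own private data
    (linear model, squared loss)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # ground truth behind the devices' data

# Each device holds its own local data set; the server never sees it.
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 3))
    devices.append((X, X @ true_w + 0.1 * rng.normal(size=20)))

global_weights = np.zeros(3)
for _ in range(30):
    # Devices download the model, train locally, and send back only weights.
    updates = [local_update(global_weights.copy(), X, y) for X, y in devices]
    # The server averages the updates into the next global model.
    global_weights = np.mean(updates, axis=0)

print(global_weights)   # approaches true_w without pooling any raw data
```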

Federated learning has a host of benefits: 

  • It allows access to better, more personalized training data while maintaining a user’s data privacy.

  • This personalized training data helps make smarter (and more personalized) machine learning models.

  • Because the model and training data can be stored on a user’s mobile device, these smarter, more personalized machine learning models can be used immediately and provide immediate benefit.

  • Because training occurs at the site of the device, there are significantly lower requirements for communication and data transfer between network members, lowering compute times and bandwidth requirements.

While the spotlight has been on the benefits federated learning brings to mobile devices, it is also seeing increasing application in areas like autonomous vehicle navigation, manufacturing, and healthcare, in part because of its lower bandwidth demands and heightened privacy.

What are foundation models?

Federated learning will increasingly give consumers access to machine learning solutions that are safely trained and personalized on their own data, but how will companies gain better access to machine learning, and especially deep learning, in the future? One answer is foundation models: models trained on massive amounts of data that can then be distributed and customized to fit a business’s specific needs. You might think of a foundation model a little like an ice cream sundae: everyone gets the same ice cream base, but each person can add sprinkles, whipped cream, fudge, or a cherry on top to suit their taste, without having to make the ice cream from scratch.
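
In code, “adding the toppings” typically means downloading a pretrained checkpoint and fine-tuning it on your own labels. A minimal sketch using the Hugging Face transformers library (the model choice and the two-class task are illustrative assumptions):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The pretrained checkpoint is the "ice cream base"...
name = "bert-base-uncased"           # an illustrative foundation model
tokenizer = AutoTokenizer.from_pretrained(name)

# ...and a fresh 2-class classification head is the "topping".
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# From here you'd fine-tune on your own labeled data (e.g. with the
# Trainer API or a plain PyTorch loop); the base weights come pretrained.
inputs = tokenizer("This product is fantastic!", return_tensors="pt")
logits = model(**inputs).logits      # meaningless until the head is fine-tuned
print(logits.shape)                  # torch.Size([1, 2])
```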

IBM sees foundation models as a fundamental shift in the direction of artificial intelligence R&D, away from “task-specific” artificial intelligence and machine learning models and toward models with broader applicability. IBM points to natural language processing and computer vision as two areas where this shift is already visible, through foundation models like the text-generating GPT-3 and the image-generating DALL-E 2. If you’ve been on social media recently, you are almost certainly familiar with the latter through the fascinating images users are producing by feeding a pared-down version with text prompts.

[Image: an example of the kinds of images DALL-E Mini is producing from user text prompts.]

Images like these represent the vanguard of a burgeoning artistic movement, but more importantly for the average person, their accessibility showcases the potential impact of DALL-E 2, GPT-3, and the foundation models that will surely follow. If anyone on the internet can harness natural language processing or computer vision, prompting an AI to write an essay or generate images of whatever strikes their fancy (say, “Brave Little Toaster falls in the tub”), then imagine what might be possible with foundation models trained to perform sentiment analysis on social media, identify foreign cells in the body, or streamline manufacturing efficiency, all easily customized to the user’s needs. It’s no surprise that IBM believes “foundation models will dramatically accelerate AI adoption in enterprise.” Once these capabilities are out there and affordable to use, companies will have to take advantage of them just to stay competitive.

What might the future of machine learning look like for you?

With machine learning models becoming easier to power and more accessible to companies big and small, more and more artificial intelligence and machine learning engineers will be needed to support the wider adoption of these technologies. If you’ve read this far, it means you’re likely interested in these innovations, and maybe eager to learn how you can play a part. 

If you’re new to the discipline and haven’t read our “Start Here” page, we’d suggest starting there. You’ll get an overview of artificial intelligence and machine learning, learn how companies are leveraging both today, and get some ideas for how to start down an artificial intelligence career path. If you already have some training in STEM or machine learning and are looking for an artificial intelligence or online machine learning course of study to jumpstart your career, head over to our program recommendations.

You can also subscribe to our newsletter to keep up-to-date with the newest developments in the field. We’ve outlined the future of machine learning as we see it today, but who knows what tomorrow might bring!