Accelerating machine learning (ML) and artificial intelligence (AI) development with optimized performance and cost is a key goal for Google.
Google kicked off its Next 2022 conference this week with a series of announcements about new AI capabilities in its platform, including computer vision as a service with Vertex AI Vision and the new OpenXLA open-source ML initiative. In a session at the Next 2022 event, Mikhail Chrestkha, outbound product manager at Google Cloud, discussed additional incremental AI improvements, including support for the Nvidia Merlin recommender system framework, AlphaFold batch inference and TabNet support.
Users of the new technology detailed their use cases and experiences during the session.
“Having access to strong AI infrastructure is becoming a competitive advantage to getting the most value from AI,” Chrestkha said.
Uber using TabNet to improve food delivery
TabNet is a deep tabular data learning approach that makes use of transformer techniques to help improve speed and relevancy.
Chrestkha explained that TabNet is now available in the Google Vertex AI platform, which makes it easier for users to build explainable models at large scale. He noted that Google’s implementation of TabNet will automatically select the appropriate feature transformations based on the input data, the size of the data and the prediction type to get the best results.
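The session doesn’t walk through TabNet’s internals, but the published TabNet architecture (Arik & Pfister, 2019) is known for selecting features with learnable sparsemax masks rather than softmax, which is part of what makes its predictions explainable. As an illustrative sketch (not Google’s Vertex AI implementation), here is the sparsemax projection that drives that feature selection:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): like softmax it returns a
    probability distribution, but it can assign exactly zero weight to
    low-scoring features -- the basis of TabNet's sparse feature masks."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum      # features kept in the support
    k_z = k[support][-1]                     # size of the support set
    tau = (cumsum[support][-1] - 1) / k_z    # threshold
    return np.maximum(z - tau, 0.0)

# A dominant feature score pushes the others to exactly zero:
mask = sparsemax([2.0, 1.0, 0.1])            # -> [1.0, 0.0, 0.0]
```

Softmax on the same scores would spread nonzero weight across all three features; sparsemax zeroes out the weak ones, so each decision step can be read as "these columns mattered."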
TabNet is not merely a theoretical approach to improving AI predictions; it is already showing positive results in real-world use cases. Among the early implementers of TabNet is Uber.
Kai Wang, senior product manager at Uber, explained that a platform his company created called Michelangelo handles 100% of Uber’s ML use cases today. Those use cases include ride estimated time of arrival (ETA), UberEats estimated time to delivery (ETD), as well as rider and driver matching.
The basic idea behind Michelangelo is to provide Uber’s ML developers with infrastructure on which models can be deployed. Wang said that Uber is constantly evaluating and integrating third-party components, while selectively investing in key platform areas to build in-house. One of the foundational third-party tools that Uber relies on is Vertex AI, which helps support ML training.
Wang noted that Uber has been evaluating TabNet with Uber’s real-life use cases. One example use case is UberEat’s prep time model, which is used to estimate how long it takes a restaurant to prepare the food after an order is received. Wang emphasized that the prep time model is one of the most critical models in use at UberEats today.
“We compared the TabNet results with the baseline model and the TabNet model demonstrated a big lift in terms of the model performance,” Wang said.
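Wang didn’t share the metric or the numbers behind that lift, but a comparison like this typically boils down to scoring both models’ predictions against observed outcomes. A minimal sketch, using mean absolute error and entirely made-up prep-time values (minutes), shows the shape of such an evaluation:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average of |actual - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical observed prep times and the two models' predictions.
actual   = [12.0, 18.0, 25.0,  9.0]
baseline = [15.0, 14.0, 30.0, 12.0]
tabnet   = [13.0, 17.0, 26.0, 10.0]

base_err = mae(actual, baseline)               # 3.75 minutes
tab_err  = mae(actual, tabnet)                 # 1.0 minute
lift = (base_err - tab_err) / base_err         # relative improvement
```

The "lift" Wang describes would be the relative reduction in error versus the incumbent baseline model; the actual metric Uber used is not disclosed in the session.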
Just the FAX for Cohere
Cohere develops platforms that help organizations to benefit from the natural language processing (NLP) capabilities that are enabled by large language models (LLMs).
Cohere is also benefiting from Google’s AI innovations. Siddhartha Kamalakara, a machine learning engineer at Cohere, explained that his company has built its own proprietary ML training framework called FAX, which now makes heavy use of Google Cloud’s TPU v4 AI accelerator chips. He explained that FAX’s job is to consume billions of tokens and train models ranging from hundreds of millions to hundreds of billions of parameters.
“TPUv4 pods are some of the most powerful AI supercomputers in the world, and a full V4 pod has 4096 chips,” Kamalakara said. “TPUv4 enables us to train large language models very fast and bring those improvements to customers right away.”
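The session gives one concrete figure: a full TPU v4 pod has 4,096 chips. A hedged back-of-envelope calculation (the parameter count, precision and even sharding strategy here are assumptions, not disclosed by Cohere) shows why that scale matters for models at the upper end of the range Kamalakara describes:

```python
# Hypothetical sizing: bf16 weights for a 100B-parameter model,
# sharded evenly across a full TPU v4 pod.
PARAMS = 100_000_000_000   # assumed model size (upper end of Cohere's range)
BYTES_PER_PARAM = 2        # assumed bfloat16 precision
CHIPS = 4096               # full TPU v4 pod, per the session

total_gb = PARAMS * BYTES_PER_PARAM / 1e9             # 200.0 GB of weights
per_chip_mb = PARAMS * BYTES_PER_PARAM / CHIPS / 1e6  # ~48.8 MB per chip
```

Under these assumptions, weights alone total 200 GB, far beyond any single accelerator’s memory once optimizer state and activations are added, which is why frameworks like FAX shard training across an entire pod.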