Tesla's New Self-Labeling AI Just Brought It One Step Closer to Level-3 Automation
Tesla's head of AI has shared new footage from the automaker's auto-labeling tool, and according to his tweets, it could substantially accelerate progress on the Full Self-Driving Beta.
In case funky-colored videos don't mean much to you: Tesla's self-labeling AI brings Elon Musk's firm one step closer to Level 3 automation.
The 'holy grail' of labeling for Tesla's self-driving cars
Even with thousands of employees labeling video from its cars, Tesla has a vast backlog of data to work through: the automaker has put more than one million vehicles on the road to gather footage that can improve its neural-network systems. For self-driving cars, labeling means classifying the people, vehicles, road signs, lane lines, and other objects surrounding a vehicle to build a panoptic awareness of its environment, something that until recently only human drivers could do. The "holy grail" of labeling is a system that does this automatically, identifying objects across unspeakably vast quantities of footage.
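To make "panoptic awareness" concrete: panoptic segmentation assigns every pixel of a frame both a semantic class (road, car, pedestrian) and, for countable objects, an instance ID, so the system knows not just "car pixels" but "car #1 versus car #2." The following is a minimal illustrative sketch, not Tesla's actual system; the class list and encoding scheme are assumptions for demonstration.

```python
import numpy as np

# Hypothetical class list for illustration (not Tesla's taxonomy).
CLASSES = {0: "road", 1: "lane_line", 2: "car", 3: "pedestrian", 4: "road_sign"}
THING_CLASSES = {2, 3, 4}  # countable "things" get per-instance IDs; "stuff" like road does not

def encode_panoptic(semantic, instance, max_instances=1000):
    """Pack class and instance into one label map: label = class * max_instances + instance.
    Pixels belonging to uncountable 'stuff' classes keep instance 0."""
    is_thing = np.isin(semantic, list(THING_CLASSES))
    return semantic * max_instances + np.where(is_thing, instance, 0)

# A tiny 2x3 "frame": two cars (instances 1 and 2) and a pedestrian on a road.
semantic = np.array([[2, 2, 0],
                     [3, 0, 0]])
instance = np.array([[1, 2, 0],
                     [1, 0, 0]])
panoptic = encode_panoptic(semantic, instance)
print(panoptic)  # [[2001 2002 0], [3001 0 0]]
```

Each value in the output uniquely identifies both what a pixel is and which object it belongs to, which is exactly the information a labeling team, human or automated, has to produce for every frame.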
Tesla has said it's developing a tool like this, called the Dojo supercomputer, according to an Electrek report. And according to new tweets from the firm's Senior Director of AI, Andrej Karpathy, it has made substantial progress. The new series of tweets included images and video feed from Tesla's auto-labeling tool. "Some panoptic segmentation eye candy from a new project we are bringing up," wrote Karpathy in his tweet. "These are too raw to run in the car, but feed into auto labelers. Collaboration of data labeling a large (100K+), clean, diverse, multicam+video dataset and engineers who train the models." While Karpathy stressed that the tool is still at an early stage, sharing it publicly also appears to be an attempt to attract new recruits to his team at Tesla.
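Karpathy's note that the models are "too raw to run in the car, but feed into auto labelers" describes a common pattern: a large offline model that is too slow for real-time use generates candidate labels, and only the uncertain cases are routed to humans. The sketch below is a hypothetical illustration of that loop; the function names and confidence threshold are assumptions, not details of Tesla's pipeline.

```python
def auto_label(clips, offline_model, review_queue, threshold=0.9):
    """Label clips with a heavyweight offline model; route low-confidence
    ones to a human review queue. Illustrative only."""
    labeled = []
    for clip in clips:
        prediction = offline_model(clip)  # too slow to run in the car
        if prediction["confidence"] >= threshold:
            labeled.append((clip, prediction["labels"]))  # accept as training data
        else:
            review_queue.append(clip)  # humans correct the hard cases
    return labeled

# Toy usage with a stand-in model:
def fake_model(clip):
    return {"confidence": clip["conf"], "labels": ["car"]}

clips = [{"conf": 0.95}, {"conf": 0.50}]
queue = []
accepted = auto_label(clips, fake_model, queue)
print(len(accepted), len(queue))  # 1 1
```

The payoff is leverage: each hour of human review validates labels for many hours of footage, which is what makes a "100K+ clean, diverse" dataset feasible at fleet scale.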
Tesla's long march to Level 3 automation
"The multicam + video data, temporal continuity of a slowly moving viewpoint, close collaboration with data sourcing and labeling, and the infinity-sized dataset of unlabeled clips dramatically expands creative modeling opportunities on the neural net side," added Karpathy in a reply tweet. While the new progress is commendable to self-driving enthusiasts, it's also important to note that Tesla has admitted that Elon Musk exaggerates about "Full Self-Driving". A memo obtained and released by the legal transparency group PlainSite in May of this year. And it revealed a growing cognitive disparity between what Musk says to the world about Tesla's Autopilot, and the actual capabilities of the underlying software.
"Elon's tweet does not match engineering reality per CJ," said the memo from California's DMV, referring to its March 9 conference call with officials at Tesla. "Tesla is at Level 2 currently." Level 2 automation in a self-driving vehicle means it has semi-automated capabilities, but still needs human drivers to supervise and monitor the surrounding environment (that includes "labeling" miscellaneous objects). But with the new auto-labeling software making headway, you might not be wrong to expect projections for real-world Level-3 automation in Tesla vehicles in the coming decade.