Tesla's FSD Data Advantage is Overstated
Can Tesla’s highly anticipated robotaxi launch meet expectations?

Tesla's long-anticipated robotaxi service launch in Austin is fast approaching. Although the dramatic deterioration in Elon Musk’s relationship with Donald Trump could prove detrimental to Tesla — introducing a new, unpredictable factor that further increases volatility in Tesla's stock — the success or failure of the robotaxi service is a more measurable and crucial long-term value driver.
The significance of Elon Musk’s ability to meaningfully shape federal regulations around autonomous vehicles and robotaxis has likely been overstated. Regulatory favor alone is insufficient to drive industry growth or confer a lasting advantage. Ultimately, such influence is irrelevant unless Tesla—and its competitors—can first overcome technical challenges and successfully develop a scalable, safe, and commercially viable autonomous vehicle system.
As I’ve noted in my previous post, Autonomous Vehicles: A Watershed Moment, Tesla’s approach to autonomy still faces technical limitations. Ongoing disengagement data for Tesla’s Full Self-Driving (FSD) system underscores that the technology is likely not yet ready for large-scale, fully driverless robotaxi deployment. Moreover, some commonly cited advantages in the Tesla robotaxi narrative, such as access to a large volume of data from its vehicles, have been overstated and do not necessarily translate into a superior system. That said, Tesla may still succeed in orchestrating a small-scale launch that appears successful on the surface, drawing significant public attention and generating positive commentary from Tesla-aligned influencers.
Initial reports suggest the service will include just ~10 vehicles operating on well-defined, familiar roads, conditions that will help minimize disengagements and improve perceived performance. Tesla is also expected to lean heavily on teleoperators and limit early usage to “invite-only” participants, likely including influencers and Tesla-friendly voices who can amplify a positive narrative around service quality and the broader claim that Tesla is uniquely positioned to disrupt personal vehicle ownership.
Given the recent decline in Tesla’s share price, even a tightly managed but outwardly successful launch will likely fuel a rebound in the stock. However, once the initial excitement fades and EV sales continue to lag, investors will again confront the gap between Musk’s ambitious promises and the operational and technical reality. This sets the stage for renewed downside risk in the stock.
Tesla’s unsupervised Full Self-Driving (FSD) system, built around an end-to-end neural network architecture, continues to face inherent technical challenges in handling real-world variability and edge-case scenarios. These limitations raise questions about the system’s readiness for broad deployment. Compounding the issue is Tesla’s uncertain go-to-market strategy and, based on limited company disclosures, the potential near-term absence of the operational infrastructure required to support a scalable, multi-city robotaxi service. As these shortcomings become increasingly evident, especially when contrasted with the company’s promotional narrative, the risk of a more meaningful correction in the stock price grows.
The Misconception of a Data Volume Advantage
There is a common but flawed assumption that simply having more data and superior training compute power is sufficient to develop a successful autonomous vehicle (AV) system. This overlooks an important reality: once an AV system is trained, it is no longer continuously dependent on having access to large AI compute clusters for its everyday operation. At that point, system performance is defined by the efficiency and robustness of the vehicle-level tech stack, not access to the cloud or large data center AI compute capacity.
Tesla is often credited with a significant data advantage, and indeed, its access to real-world driving data is substantial. However, it is not alone. Mobileye, for instance, had harvested over 56 billion miles of driving data as of 2024 through its Road Experience Management (REM) platform. In parallel, technical advances in simulation capabilities are making synthetic data an increasingly valuable complement to real-world data in training autonomous systems.
Beyond sheer data volume, data diversity is important to advancing performance in autonomous vehicle systems—not just the breadth of scenarios, edge cases, and environmental conditions encountered, but also the variety of data sources contributing to the system’s perception. Diverse, high-quality inputs enhance signal richness, enabling better model generalization and more robust decision-making. This is where Tesla’s perceived data advantage begins to fall short.
Elon Musk’s decision to rely on a narrow, cameras-only sensor suite is based on the belief that, because humans drive using vision and a biological end-to-end neural network (the brain), replicating this model in autonomous vehicles is the most logical and efficient path forward. However, this assumption overlooks the unresolved challenges facing end-to-end systems, and it is inconsistent with the principle that an AI system’s performance depends not only on the volume of its data but also on the richness and diversity of the signal.
Unlike human drivers, camera-based systems do not perform reliably in all environments or driving conditions. Adverse weather, glare from the sun or the headlights of oncoming vehicles, and visual obstructions can degrade camera input and compromise perception quality. Although techniques exist to mitigate the impact of glare from bright light sources, it remains a challenge for a vision-only system. These persistent performance bottlenecks suggest that a richer, multimodal sensor suite, incorporating radar and lidar, can provide a more comprehensive and robust world model.
A more robust sensor suite helps overcome the limitations inherent to any single sensor type in an autonomous vehicle system. Lidar offers precise, high-resolution 3D spatial mapping and excels at capturing the shape and position of objects across complex environments; it performs well across varied lighting conditions but remains costly and can degrade in bad weather. Cameras, by contrast, are inexpensive and provide rich visual data, but they depend heavily on good lighting and offer limited depth perception, making them less reliable in poor visibility. Radar complements both by performing reliably across a wide range of environmental conditions and by directly measuring object distance and velocity.
Additional sensors not only introduce redundancy and enhance safety but also deliver more accurate and stable perception capabilities. They allow the system to extract crucial environmental data such as the position, velocity, and classification of road agents (vehicles, pedestrians, cyclists) with greater precision.
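To make this concrete, the sketch below shows one common fusion idea, inverse-variance weighting, applied to a hypothetical camera range estimate and radar range measurement. The numbers and the fuse function are my own illustration, not any vendor's implementation; production stacks use far more sophisticated probabilistic filters (e.g., Kalman filters) over many more states.

```python
# Minimal sketch of inverse-variance weighted fusion of two range
# measurements. All numbers are hypothetical; production AV stacks
# use Kalman or other Bayesian filters over many more states.

def fuse(measurements):
    """Fuse (value, variance) pairs; lower-variance sensors weigh more."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    estimate = sum(v * w for (v, _), w in zip(measurements, weights)) / total
    return estimate, 1.0 / total  # fused value and fused variance

# Camera depth estimates are noisy (high variance); radar range is tight.
camera = (42.0, 9.00)   # (range in meters, variance in m^2)
radar = (40.5, 0.25)

est, var = fuse([camera, radar])
print(f"fused range: {est:.2f} m, variance: {var:.3f}")
# -> fused range ~40.54 m: the radar dominates, but the camera still
#    contributes, and the fused variance is lower than either sensor alone.
```

The point is not the specific math but the mechanism: each added sensor tightens the estimate of exactly the quantities, position and velocity, that drive safe decision-making.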
This improved perception feeds into the broader autonomous vehicle system, enabling it to learn more effectively and to infer appropriate driving actions directly from higher-quality data. Richer sensor data reduces reliance on any single sensor by better contextualizing the driving environment, allowing the system to perform more reliably even in complex or novel scenarios. This approach, combined with the engineered elements of its compound AI system, is Waymo's current strategy for autonomy: it integrates high-definition maps, modular AI frameworks, rigorous safety layers, and extensive use of neural networks to deliver a more robust and deployable platform.
The Consequences of a Vision-Only Approach
In the context of his "vision-only" approach to autonomous driving, Elon Musk has repeatedly downplayed the value of sensor redundancy, viewing it not as a means of improving system safety and performance, but as a potential source of signal conflict between sensors. His argument is typically framed around a binary choice: that conflicting data from two different sensors could lead to worse outcomes than relying on a single source. However, the addition of a third, independent sensor can help resolve discrepancies and reinforce accuracy through sensor fusion—an established technique used in many leading AV systems.
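A minimal sketch of that point, with invented readings rather than any real system's data: with two conflicting sensors the system must guess which to trust, but a third independent measurement lets it identify and discard the outlier.

```python
# Toy illustration of conflict resolution with three independent
# sensors. Values are invented; real systems fuse probabilistically
# rather than voting on raw numbers.
from statistics import median

def trusted_readings(readings, tolerance_m=2.0):
    """Keep readings that agree with the median within a tolerance."""
    m = median(readings.values())
    return {name: v for name, v in readings.items() if abs(v - m) <= tolerance_m}

# Distance to an obstacle (meters); assume the camera is blinded by glare.
readings = {"camera": 18.0, "radar": 41.8, "lidar": 42.1}

print(trusted_readings(readings))
# -> {'radar': 41.8, 'lidar': 42.1}: the camera outlier is identified
#    and excluded. With camera + radar alone, the conflict is ambiguous.
```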
Instead of developing a more robust sensor fusion system that supports multiple sensor types, Tesla has chosen a vision-only architecture, risking over-reliance on a single type of sensor. In a camera-only system, if visual input is compromised by glare, rain, fog, low light, or visual occlusion, there is no complementary radar or lidar data to reconcile erroneous inputs or cross-check signals. This creates a vulnerability, especially in safety-critical edge cases.
An example of the consequences of this design decision emerged in 2021–2022, when Tesla transitioned vehicles built for the North American market to its camera-only Tesla Vision system, discontinuing the use of radar. Following this change, many drivers began reporting instances of phantom braking when FSD was engaged. While correlation is not causation, these incidents are plausibly linked to the absence of radar, an important sensor for detecting the position and velocity of objects ahead, particularly in conditions where vision alone may be unreliable. Removing radar not only stripped the system of an additional safety layer but also eliminated a key input that could help mitigate false positives generated by vision-based perception.
A real-life driving experience cited in my post AV Technology: Facts vs. Fiction also provides a simple example of the advantages of a multi-sensor system:
A recent drive in a Tesla Model Y highlighted a key technical challenge for its autonomous vehicle systems. The drive was through a familiar suburban environment, and overall, the FSD system performed well — except for two disengagements.
One disengagement occurred when the vehicle struggled to navigate out of a parking lot. The second disengagement, however, was more noteworthy and informative. While driving through a suburban village, the vehicle came to a complete stop in the middle of an intersection despite a green traffic light. The vehicle’s halt seemed to be triggered by FSD detecting a pedestrian standing at the corner, waiting to cross the street. However, the pedestrian was stationary, the traffic light was green, and the pedestrian signal showed a red "Don't Walk." Despite these cues, the vehicle stopped unnecessarily. This is a good example of an autonomous system identifying an object in its environment but failing to understand the broader context needed to make an optimal driving decision.
Tesla’s vision-only system correctly identified the pedestrian but lacked the contextual understanding to conclude that it was safe to proceed. A more robust sensor suite could have improved performance in this situation. Radar, for example, measures an object’s distance, speed, and relative motion — key data points that would have confirmed the pedestrian was stationary and not about to cross, prompting the vehicle to drive through the intersection safely.
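As a rough illustration of the kind of check radar velocity enables, here is a sketch with hypothetical field names and thresholds; it is not Tesla's or any production stack's logic, and real systems reason over tracked objects through time rather than single snapshots.

```python
# Sketch of a yield decision informed by radar-style velocity data.
# Field names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Track:
    object_class: str        # e.g., "pedestrian"
    range_m: float           # distance from the vehicle, meters
    radial_speed_mps: float  # closing speed, e.g., from radar Doppler
    lateral_speed_mps: float # estimated cross-path speed

def should_yield(track: Track) -> bool:
    """Yield only if a nearby pedestrian is actually moving into the path."""
    if track.object_class != "pedestrian" or track.range_m > 30.0:
        return False
    return abs(track.radial_speed_mps) > 0.2 or abs(track.lateral_speed_mps) > 0.2

# A pedestrian standing at the corner, waiting for the signal:
waiting = Track("pedestrian", range_m=12.0,
                radial_speed_mps=0.0, lateral_speed_mps=0.05)
print(should_yield(waiting))  # False -> proceed through the green light
```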
Conclusion
The debate and narrative surrounding Tesla’s autonomous vehicle system development have consistently been dictated by Elon Musk’s optimistic forecasts coupled with effusive commentary and praise from Tesla supporters. However, these sources are not objective, nor do they provide concrete data to support bullish conclusions about the current state and readiness for deployment of unsupervised Full Self-Driving (FSD).
Frequent comments from Tesla owners, claiming their vehicles drive flawlessly with no disengagements in their daily commute, are misleading. A daily commute is a fixed route where many current systems can demonstrate solid performance. The true test lies in assessing a system's capacity to perform reliably in unfamiliar environments and when encountering unforeseen edge cases.
Likewise, the true test of Tesla’s autonomous technology and robotaxi service won’t come in the early days of its limited launch, but rather in the weeks and months that follow—as the pace of scaling is likely to lag behind the optimistic timelines set by Elon Musk’s public forecasts.