*Ilya Lukichev/iStock*
Imagine that you are riding in a taxi and the driver suddenly brakes sharply to avoid an empty cardboard box. When you ask, "What's the matter?", he just points at the box in silence. You ask again and try to understand the logic, but he keeps silent. This is exactly the situation passengers and developers of self-driving cars find themselves in when artificial intelligence makes a decision that is inexplicable from a human point of view. The problem is that the AI has no "inner voice" that could comment on its actions. And this is not just a technical quirk: it is a fundamental barrier to trust and safety.
Black box on the road: Why doesn't AI take responsibility for its decisions?
Deep learning, the technology that underpins modern autonomous driving systems, is inherently a "black box". Information from the car's sensors is fed into a huge network of artificial neurons that process the data and issue commands to the steering, brakes, and other systems. The result often matches what a human driver would do, but the key difference is that even the engineers who created the system cannot trace the causal chain behind any individual action.
Nvidia showed an impressive example a few years ago: its experimental car learned to drive simply by observing human drivers. The algorithm did not follow instructions written in advance by programmers; it generated a behavior model on its own. However, when this vehicle did something unexpected, such as stopping abruptly in front of a harmless object, there was no way to ask the system: "Why?"
"Whether it's an investment decision, a medical decision, or perhaps a military decision, you don't want to just rely on a black — box approach," says Tommy Jaakkola, a professor at the Massachusetts Institute of Technology who works on machine learning applications.
The Competence Paradox: the system knows more than it understands
The Deep Patient project at Mount Sinai Hospital in New York is a vivid illustration of this problem. The system, trained on data from 700,000 patients, showed "amazing" abilities in predicting diseases, including liver cancer. The researchers were particularly puzzled that the AI was inexplicably good at anticipating the onset of schizophrenia, a disease that is extremely difficult to predict even for experienced doctors. Project lead Joel Dudley admitted: "We can build these models, but we don't know how they work."
In the context of driverless transport, this inexplicability ceases to be an academic problem. When Cruise (a subsidiary of General Motors) faced an incident in which its robotaxi ran over a pedestrian in San Francisco, a natural question arose: what factors led the system to that decision? The company was forced to recall 950 vehicles for a software update, an expensive operation that could have been far more targeted if engineers had been able to pinpoint the AI's "thought process".
Technology Detective: How to make AI talk?
The engineering community has recognized the severity of the problem, and in 2025 special attention is being paid to explainable and transparent AI (Explainable AI, or XAI). Developers are expected to provide interfaces that allow stakeholders to interpret and challenge AI decisions, especially in mission-critical sectors.
The European Union has already moved ahead of many regions by introducing a regulation in 2018 requiring companies to provide users with explanations of decisions made by automated systems. This creates a legal paradox: how can this requirement be met if it is technically impossible to look inside the neural network?
Solutions are being developed in several directions:

- **Visualization of AI attention**: highlighting the areas of the camera image that the system considers most important when making a decision (a sketch of this idea follows the list)
- **Creating a "verbal trajectory"**: the car not only drives but also keeps an internal monologue: "I'm slowing down because I detected movement in the peripheral zone"
- **Local interpretability**: analyzing individual decisions without trying to explain the entire neural network
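To make the first idea concrete, here is a minimal, hedged sketch of gradient-based attention (saliency) visualization in PyTorch. The model, layer sizes, and input shape are all illustrative assumptions, not any vendor's actual architecture; the point is only that the magnitude of the gradient of the output with respect to each pixel shows which parts of the frame most influenced the decision.

```python
# Minimal sketch: gradient-based saliency for a toy end-to-end driving net.
# Everything here (model, sizes, input shape) is illustrative, not a real system.
import torch
import torch.nn as nn

class TinyDrivingNet(nn.Module):
    """Toy stand-in for an end-to-end network: camera frame in, steering angle out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # predicted steering angle

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyDrivingNet().eval()
frame = torch.rand(1, 3, 66, 200, requires_grad=True)  # one fake camera frame

steering = model(frame).sum()   # .sum() turns the (1, 1) output into a scalar
steering.backward()             # gradient of the decision w.r.t. every input pixel

# Pixels with a large gradient magnitude influenced the decision the most;
# this heatmap is what "visualization of AI attention" would overlay on the image.
saliency = frame.grad.abs().max(dim=1).values   # shape (1, H, W)
print("Most influential pixel score:", saliency.max().item())
```

Production systems would likely rely on more refined methods of the same family (Grad-CAM, integrated gradients and the like), but the principle is identical: force the network to show which inputs its decision actually depended on.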
Industry on Standby: Between Breakthrough and Regulation
While researchers struggle with the problem of explainability, the autonomous transport industry is making impressive progress, although it faces natural limitations.
As noted by AI experts: "AI copes well with typical situations, but in real life there are too many unpredictable moments: a child suddenly jumps out on the road, an accident occurs, road works begin." It is in these non-standard situations that the lack of explainability becomes critical.
Table: Current status of autonomous transport in 2024
| Company / Project | Achievements | Limitations and Incidents |
| --- | --- | --- |
| Mercedes | Drive Pilot technology allows the driver not to monitor the road in daytime traffic jams on some US highways. | Speed limited to 40 mph (64 km/h); only on specific highways. |
| Waymo | Over 200 robotaxis operating in Phoenix, Los Angeles, Austin, and San Francisco. | Geographically restricted service areas. |
| Baidu | 500 robotaxis in Wuhan (China), with plans to deploy 1,000 more. | Limited testing scale compared to demand. |
| Cruise | Plans to resume operations after a software update. | Pedestrian incident in 2023; recall of 950 vehicles. |
Who will go to jail? The Legal Dead End of the Driverless Era
With the development of autonomous transport, a fundamental legal question arises: who is responsible for a fatal accident?
- The car manufacturer?
- The vehicle owner?
- The programmer who wrote the code?
- Artificial intelligence itself, as a subject of law?
This is not a theoretical dilemma. In the Cruise incident mentioned above, the question arose immediately: what factors did the system treat as a priority? Why didn't it identify the pedestrian as a critical obstacle? Without explainable AI, establishing guilt turns into endless expert argument.
Global regulatory tightening is expected by 2025: the OECD reports more than 700 regulatory initiatives under development in more than 60 countries. External audits of AI systems are becoming common practice, for example at the non-profit organization "For Humanity", which conducts independent risk audits of AI systems.
Digital Employee Management: the next challenge
As autonomous cars become commonplace, the question of managing this new "digital workforce" arises. If each driverless car is essentially an autonomous decision-making employee, how can its "qualifications" and "experience" be taken into account?
In this context, the approaches offered by jobtorob.com, the world's first robot-hiring ecosystem, become interesting. Its logic for managing digital profiles of autonomous systems could potentially be extended to account not only for the technical characteristics of driverless cars but also for the level of "explainability" of their decisions, a kind of "transparency coefficient" of the AI. A company operating a fleet of autonomous cars could use such a platform to select the vehicles whose algorithms provide the most understandable reports on their decisions, which matters especially in difficult urban environments, where the price of an inexplicable decision is particularly high.
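As a purely hypothetical illustration of that idea (there is no such public API; the fields, names, and threshold below are invented for the sketch), selecting vehicles by a "transparency coefficient" could be as simple as filtering and ranking fleet records:

```python
# Hypothetical sketch: ranking a fleet by an invented "transparency coefficient".
# The data model and threshold are assumptions for illustration, not a real platform API.
from dataclasses import dataclass

@dataclass
class VehicleProfile:
    vehicle_id: str
    accident_free_km: int
    explained_decisions_ratio: float  # share of decisions the AI can justify, 0.0-1.0

def select_for_urban_duty(fleet, min_transparency=0.9):
    """Keep only sufficiently 'explainable' vehicles, best-explained first."""
    eligible = [v for v in fleet if v.explained_decisions_ratio >= min_transparency]
    return sorted(eligible, key=lambda v: v.explained_decisions_ratio, reverse=True)

fleet = [
    VehicleProfile("AV-001", 1_000_000, 0.95),
    VehicleProfile("AV-002", 850_000, 0.78),
    VehicleProfile("AV-003", 620_000, 0.92),
]

for v in select_for_urban_duty(fleet):
    print(f"{v.vehicle_id}: {v.explained_decisions_ratio:.0%} of decisions explained")
```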
The future: when AI learns to talk about its actions
The development of explainable AI is not just a technical improvement but a necessary condition for the full integration of autonomous transport into our lives. By 2025, we should expect:
- **Standardization of explainability**: the emergence of common metrics for evaluating how understandable AI decisions are
- **Regulation of the level of transparency**: legal requirements for minimum explainability in critical systems
- **Development of "dialogue interfaces" for AI**: the system does not just make a decision but can also justify it in language a human can understand
As experts point out, we are moving toward the era of collaborative intelligence: AI systems that enhance human decision-making rather than replace it. The driverless car of the future is not just a vehicle but a partner that can discuss with the passenger, the engineer, and the regulator how and why it acts the way it does.
Perhaps the resume of a successful driverless car will soon list not only "1,000,000 km of accident-free driving" but also "able to explain 95% of its decisions in an accessible way". And trust in it will be measured not only in kilometers without accidents, but also in the transparency of its digital thinking. In the meantime, we are in a strange transitional phase: cars have already learned to drive, but have not yet learned to tell us how they do it. And in this silence lies the main brake on the driverless revolution.