The UAE recently approved a temporary license to test self-driving or autonomous vehicles (AVs), making it the first country in the Middle East to test the applications of vehicle autonomy.
Self-driving public transportation is not new to the UAE; the Dubai Metro is one of the largest driverless public transportation systems in the world, and by 2030 Dubai aims for 25% of all transportation trips in the city to be smart and driverless.
The first autonomous (Level 4) driving tests on public roads in the UAE are being conducted in Abu Dhabi by UAE-based geospatial, data analytics and artificial intelligence company Bayanat, under the brand name TXAI.
Bayanat developed the simulation system, operating system, big data platform, and monitoring system, with positioning accurate to less than one metre. The company began testing its first set of autonomous vehicles as ride-sharing services in November 2021, and the first phase of the tests saw five TXAI-branded vehicles operating on Yas Island, transporting passengers between nine stops that include hotels, restaurants, shopping malls and offices. The second phase will include more TXAI vehicles in multiple locations across Abu Dhabi. Although the vehicles are driverless, a safety officer will be present in the driver's seat during the trial phase.
Autonomous mobility is part of the UAE’s strategy to become a global leader in attracting innovation and advanced technology applications, and the UAE will be the first market outside North America to deploy several types of autonomous vehicles. For example, Dubai’s Roads and Transport Authority is working with General Motors to deploy a fleet of GM’s Cruise Origin autonomous vehicles from 2023, with the potential to scale up to 4,000 vehicles by 2030. The AV testing process in the UAE will be carried out through the Regulations Lab, an initiative by the General Secretariat of the Cabinet that provides a safe test environment for legislation that will govern the use and applications of future technologies.
A number of such fleet automation initiatives are taking off around the world. If fully implemented, this trend carries with it the promise of improved road safety, lower greenhouse gas emissions, mobility as a service, and transportation that is accessible, affordable and sustainable.
A 2020 survey on the readiness of new transportation technologies, jointly conducted by the World Road Association (PIARC) and the International Association of Transportation Regulators (IATR), revealed that AVs are seen as a long-term technology that will take more than ten years to reach adoption, while EVs are seen as a short- to medium-term technology that will be widely adopted in less than ten years. The survey also found that most organizations would like to be leaders or supporters when it comes to AVs and EVs.
The fast growth of autonomous mobility alongside electric mobility has to be matched with efforts to adapt road infrastructure, traffic management, and regulations, the complexity of which can only be handled through public-private partnerships.
“Governments cannot just be supportive. They must lead the effort in harmonization of regulations; otherwise, AVs will remain a long-term technology implementation,” points out Matthew Daus, founder and chair of the transportation practice group at US law firm Windels Marx Lane & Mittendorf.
“I often get automakers asking me about the US government’s plan for the transition to autonomous mobility, and I surprise them with the fact that there is no plan. The government usually follows the private sector. In the US, there are too many city, state and local agencies that are expected to formulate plans, but they rarely talk to each other. In most cases, people who don’t specialize in transportation are regulating transportation. Governments must get their act together like they do in the UAE, Singapore and London, where everything is under one roof. They must address socio-economic issues related to AV implementation, such as pedestrian safety, workforce equity, data privacy, accessibility for people with disabilities, government costs and liabilities, lawsuits, and services for underserved communities. With regard to equity, could governments make self-driving vehicles affordable, considering that the business model is likely to move towards uberising them to provide taxi services?” explains Matthew.
Who’s responsible when a driverless vehicle crashes?
The first pedestrian fatality involving a self-driving car was recorded in March 2018, when a self-driving Volvo SUV operated by Uber Technologies struck a female pedestrian in Arizona, USA. Uber had modified the vehicle with a proprietary developmental automated driving system (ADS) designed to operate in autonomous mode only on pre-mapped, designated routes. A female vehicle operator occupied the driver’s seat.
Although the ADS detected the pedestrian 5.6 seconds before impact and tracked the pedestrian until the crash, it failed to classify her as a pedestrian or predict her path. By the time the ADS determined that a collision was imminent, the situation exceeded the response specifications of the ADS braking system. The system design precluded activation of emergency braking for collision mitigation, relying instead on the operator’s intervention to avoid a collision or mitigate an impact.
Video from the SUV’s inward-facing camera showed that the operator was glancing away from the road for an extended period while the vehicle was approaching the pedestrian. According to her phone records, the operator was streaming a television show using an application on her phone. About 6 seconds before the crash, she redirected her gaze downward, where it remained until about 1 second before the crash.
An investigation by the National Transportation Safety Board (NTSB) concluded that the crash was predictable and avoidable. The probable cause of the crash was the failure of the vehicle operator to monitor the driving environment and the operation of the ADS because she was visually distracted throughout the trip by her personal cell phone. Contributing to the crash were the Uber Advanced Technologies Group’s (1) inadequate safety risk assessment procedures; (2) ineffective oversight of vehicle operators; and (3) lack of adequate mechanisms for addressing operators’ automation complacency—all a consequence of its inadequate safety culture.
In August 2021, the National Highway Traffic Safety Administration (NHTSA) launched a formal investigation into Tesla Autopilot, the company’s SAE Level 2 advanced driver assistance system (ADAS), following a series of crashes involving Tesla vehicles operating in either Autopilot or Traffic Aware Cruise Control between 2014 and 2021. The investigation will assess the technologies and methods used to monitor, assist, and enforce the driver’s engagement with the dynamic driving task during Autopilot operation. The NHTSA also issued a standing general order that requires manufacturers and operators of vehicles equipped with ADS or SAE Level 2 ADAS to report crashes to the agency. Because these new technologies present unique risks, NHTSA is evaluating whether the manufacturers of these vehicles (including manufacturers of prototype vehicles and equipment) are meeting their statutory obligations to ensure that their vehicles and equipment are free of defects that pose an unreasonable risk to motor vehicle safety.
A growing number of incidents such as these are making regulatory authorities worldwide formulate new regulations to ensure safety and accountability in dealing with self-driving technologies.
“There’s so much uncertainty about who is responsible for damages when a driverless vehicle crashes. Should we blame the manufacturer, or the driver, or the government, or all of them? How do we account for the role of the technology that replaces a human operator inside the vehicle? If the driverless vehicle is retrofitted with parts, is the component or technology supplier also liable? And how should victims be compensated? We need answers to a large number of questions. So, it’s important that regulatory authorities and road agencies evaluate the current testing environments thoroughly, determine the risks, and set rules with regard to liability. It’s obvious that we need a new legal and liability framework,” says Matthew.
Most recently, the Law Commission of England and Wales and the Scottish Law Commission (the Law Commissions) published a joint report that recommends introducing a new Automated Vehicles Act to regulate self-driving. The report highlights the need to draw a clear distinction between features which just assist drivers, such as adaptive cruise control, and those that are self-driving.
Under the Law Commissions’ proposals, when a car is authorised by a regulatory agency as having self-driving features and those features are in use, the person in the driving seat would no longer be responsible for how the car drives. Instead, the company or body that obtained the authorisation (an authorised self-driving entity) would face regulatory sanctions if anything goes wrong. The Law Commissions also recommend new safeguards to stop driver assistance features from being marketed as self-driving. This would help minimise the risk of collisions caused by members of the public thinking that they do not need to pay attention to the road while a driver assistance feature is in operation.
There are different legal theories under which companies can be held responsible. Most vehicle accidents are evaluated under either a negligence or a products liability framework. Typically, courts evaluate unintentional human errors under negligence, and unintentional manufacturing errors under products liability. Negligence is defined as failure to behave with the level of care that someone of ordinary prudence would have exercised under the same circumstances; conduct that falls below the standard established by law for the protection of others against unreasonable risk of harm. Products liability refers to the liability of any or all parties along the chain of manufacture of a product for damage caused by that product. This includes the manufacturer of component parts (at the top of the chain), an assembling manufacturer, the wholesaler, and the retail store owner (at the bottom of the chain).
“Automakers are most concerned about products liability. If legislatures do not impose new rules, the fault for AV collisions is likely to be assigned to AV manufacturers under a products liability framework. A legal or liability framework could have different levels of liability for the levels of automation, such as SAE Level 1: liability falls on the driver; SAE Levels 2–3: liability falls on the driver and the manufacturer; and SAE Levels 4–5: liability shifts to the manufacturer.
“As vehicles become increasingly autonomous (Levels 3 and 4), liability will likely shift toward parties in the connected and autonomous vehicle (CAV) supply chain, including vehicle manufacturers, tier 1 manufacturers, software companies and those responsible for the smart infrastructure. The focus will likely be on what went wrong with the navigation, electronics, and the connected/automated parts, rather than driver error. The law may look to manufacturers of the vehicles and/or the infrastructure to cover all or a portion of the liability, depending on the level of autonomy.
“Therefore, regulations should define the duty of CAV technology companies to ensure public safety; the liability of CAV technology companies and/or OEMs for the products they supply; provide clarity on what constitutes a design defect for an AV; and define the responsibilities of each party or entity involved,” says Matthew.
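The tiered scheme Matthew outlines maps each SAE automation level to a presumed set of liable parties. As a purely illustrative sketch (not an actual legal standard, and assuming the three-tier split described above), it could be encoded like this:

```python
# Hypothetical sketch of the tiered liability scheme described above.
# The mapping is illustrative only, not legal advice or an enacted rule.

def liable_parties(sae_level: int) -> set:
    """Return the parties presumed liable for a crash at a given SAE automation level."""
    if not 1 <= sae_level <= 5:
        raise ValueError("This sketch covers SAE Levels 1-5 only, got %r" % sae_level)
    if sae_level == 1:
        # Driver assistance: the human remains in control, so liability stays with the driver.
        return {"driver"}
    if sae_level in (2, 3):
        # Partial/conditional automation: responsibility is shared.
        return {"driver", "manufacturer"}
    # Levels 4-5: liability shifts to the manufacturer.
    return {"manufacturer"}
```

A real framework would of course be far more nuanced, apportioning fault among the CAV supply chain (tier 1 suppliers, software companies, infrastructure operators) rather than a single "manufacturer" bucket, but the sketch captures the basic idea of liability shifting with the level of autonomy.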
Insurance models, too, must evolve, as autonomous vehicles may not fit into current risk-pooling models. Currently, under the ‘user-liability’ model, vehicle operators are required to carry insurance. But this traditional approach may not work for AVs, and it may be necessary to extend insurance to the maker of the vehicle or of the computer or automated system controlling it.
“The insurance for AVs may need to include third-party liability insurance and product liability coverage. I worked as a plaintiff’s lawyer long enough to see the perspective of the plaintiff to sue anybody and everything that walks or affects a vehicle crash. So, if we want AVs to progress from the testing stages to full implementation, we need to solve the liability issues as soon as possible. With politics being what it is and the involvement of interest groups, a big crash will make headlines globally and stall the AV movement. I can’t stress this enough: governments must take the initiative and work with the private sector. Because this is too important, and we can’t mess it up,” says Matthew.