Self-driving cars are facing a momentary roadblock: questions are being raised even before they hit the streets. A survey released on 25 January by the global technology company Thales revealed that 57% of UK residents would not feel safe in a self-driving car. In certain US cities, there have been reports of people throwing stones at these vehicles. A few failed tests, some even resulting in fatalities, haven't helped.
So, why are people sceptical about self-driving, or driverless, cars? According to a World Health Organization fact sheet on road traffic injuries, approximately 1.35 million people die every year as a result of road traffic crashes and more than half of all road traffic deaths “are among vulnerable road users: pedestrians, cyclists, and motorcyclists”. Speeding, driving under the influence of alcohol, and distracted driving are directly related to an increase in road crashes.
Autonomous vehicles aim to minimize these risks, since they will run with the help of algorithms, sensors and other technology. According to the Thales report, one of the biggest public concerns about self-driving cars (cited by 49% of respondents) was a rise in the number of potentially fatal accidents.
As with every new technology, people need to be convinced that autonomous vehicles are viable and safe to use. This is where a four-course specialization on self-driving cars by Coursera, the online education platform, could prove useful. Designed in collaboration with the University of Toronto, this specialization will be taught by Steven Waslander and Jonathan Kelly, both experts in the field of autonomous robotics research.
“There is a huge thirst from the industry for new graduates that have exposure to, and an understanding of, the complexities of self-driving car automation,” says Waslander. “By the end, you would have basically gone through the entire architecture of self-driving autonomy software and seen the main tools that are used in each case,” adds Waslander, associate professor at the University of Toronto Institute for Aerospace Studies and founder of the Toronto Robotics and Artificial Intelligence Laboratory (TRAILab). The new course will roll out through 2019 and will be available for $79, or around ₹5,600, per month.
In an exclusive interview with Lounge, Waslander talks about the course and the blind spots in the development of self-driving cars. Edited excerpts:
What is this specialization course trying to address?
We built this specialization as a tool to help engineers who are graduating to get into this area. We are looking at either undergraduates who have completed their degrees or graduate students who are moving from fields of robotics, controls or computer vision and want to become specialists in self-driving. We assume a fair bit of background experience but we are bringing everything you need to know to understand software architecture, hardware configurations and all of the various components of perception, planning and control for a self-driving car. In the end, you are not going to be an expert in any one of those areas just from this four-course specialization. But you will have a broad picture of how you build a self-driving car, from where you can go deeper into the different areas.
Each of the four courses is built around that idea. The first course introduces you to self-driving cars, the hardware and software… By the end of the first course, you are driving a car around a (virtual) race track. The second course then goes into localization and mapping—how you track your own motion through the environment using on-board sensors like GPS, IMU, wheel odometry, and also laser scanners. The third course covers computer vision: object detection and semantic segmentation. This is our perception module. Then the final course is planning: all the way from the high-level mission plan, how you get from point A to point B on our road network, through behaviour planning.
Can this course be pursued by learners other than those with prior engineering experience?
It’s mostly self-contained. We do assume some exposure to the following: certainly experience with linear algebra and calculus, derivatives and integration, simple stuff like that. But also, we need a little bit of exposure to computer vision, Artificial Intelligence (AI) or deep learning and a little bit of exposure to control: so, dynamics modelling of the vehicles and basic linear controls (PID controls)… It is relatively self-contained but some learners may need a little additional background reading or an extra course to brush up on skills they might not have.
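The PID (proportional-integral-derivative) control Waslander lists as a prerequisite can be shown in a few lines. The gains and the toy first-order speed model below are illustrative assumptions, not values from the course:

```python
# Minimal PID controller sketch. Gains and the "plant" are assumptions
# chosen for illustration; a real vehicle controller is tuned to its dynamics.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}
    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt
        if state["prev_error"] is None:
            derivative = 0.0
        else:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        # Control = P term + I term + D term.
        return kp * error + ki * state["integral"] + kd * derivative
    return step

# Drive a toy speed model toward a 20 m/s setpoint.
pid = make_pid(kp=0.5, ki=0.1, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(200):
    speed += pid(20.0, speed) * 0.1  # crude first-order response
```

After 200 steps the toy model settles near the 20 m/s setpoint; the proportional term does most of the work, while the integral term removes steady-state error and the derivative term damps overshoot.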
What are the possible career options for learners once they are done with this course?
This is clearly a market that isn’t going to show up overnight and just be solved. It’s also a market that isn’t going to have just one solution and one winner. Somebody will have an advantage but there’s no reason to believe there will be a winner-takes-all situation here. There won’t be hundreds of players, but 5-10 large players that get to the level of capability that they can serve the public.
The other exciting thing about self-driving cars is that it’s turned into a massive engineering effort. Anyone who goes through this specialization and also has some depth in a particular area that’s within it—like computer vision and motion planning or even on the hardware side, sensor construction or design or embedded systems—those kinds of specialists are going to be needed throughout this industry. This transformation that the automotive industry is going to go through is going to require a whole new generation of engineers to design these vehicles. There’s going to be so much flexibility in the kinds of vehicles we have, so design is going to get really exciting.
Microsoft and MIT have recently developed a training model that could be used to improve the safety of AI systems, including driverless vehicles…
I think it’s a wonderful direction of research. There are two aspects to that. One is identifying situations that current learned systems can’t handle. These are negative cases or cases of hacking the visual input to the vehicle in such a way that it misclassifies or misinterprets the scene around it. The second is identifying scenarios where vehicles could be expected to fail, based on some understanding of their architecture. The current approach in industry, and certainly the one we talk about in the specialization, is one where the road tests that are being done are really scenario mining. They are looking for scenarios they haven’t yet uncovered to incorporate into their testing and validation for the overall system—what things have we not yet planned for? To do that in public on the roads with public lives at stake is a kind of strange way to go about it. But, of course, we have safety drivers and a lot of fallbacks… I think if we can accelerate that process, if you can find those scenarios automatically and better demonstrate complete coverage of the situations a vehicle will find itself in, then this will accelerate our ability to say a car is safe.
What are some of the blind spots remaining in the functioning of self-driving cars?
There are still some significant areas that nobody claims to do well in. One of those is a typically Canadian problem, which is all-weather driving: trying to handle adverse winter conditions or heavy rainfall that has a negative impact on a lot of the sensors, particularly vision and also laser scanners, LIDAR. You end up blind in the car, with the exception of RADAR, which makes it extremely hard to drive. Another one is chaotic traffic. This might be one that’s highly relevant for India. The rules of the road are greatly beneficial in terms of making predictions about what the other vehicles are going to do. I think the focus has been on North American- and European-style driving. The transition to soft-lane boundaries and intersections with dynamic traffic going in different directions simultaneously, roundabouts and multi-lane interactions—these are really hard problems just because prediction becomes such a challenge. I think the first cars that are being rolled out are in limited domains, residential-type roads with low speeds and limited traffic. Getting to a complex, dangerous and dynamic driving environment is still an open question.
They are already being tested in Ontario as part of a pilot project. What kind of infrastructure would a city need for self-driving cars?
In Ontario, we have had the ability to test on public roads for multiple years now. The big news in January was that they are also allowing driverless cars—no driver in the driver’s seat. From the beginning of the pilot programme, they allowed driving on every road in Ontario. There was no restriction. So, all of the safety impact lay with the tester: you had to have a certain amount of insurance and you had to confirm that you were following safe procedures. We’ve benefited from that. My lab built a self-driving car in two years and put it on public roads. We drove 100km last August in the Waterloo region, without any support from the government or any kind of modification to the infrastructure. The ultimate goal is to really drive like a human and to handle the driving environments the same way humans do. Right now, they are definitely catching up to humans but humans are still needed.
Essentially, a self-driving car from Waymo can go about 15,000km, roughly a year’s worth of driving, before it needs a disengagement of any type. That’s a pretty impressive standard. But a human driver will go 10 years without an accident. We still need more reliability out of the system.
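The reliability gap Waslander describes can be put in rough numbers. These are the interview's own approximations (about 15,000km of driving per year), not measured statistics:

```python
# Back-of-envelope comparison of the reliability gap described above,
# using the interview's approximate figures.
km_per_year = 15_000
waymo_km_per_disengagement = 15_000        # ~ one year of driving
human_km_per_accident = 10 * km_per_year   # ~ ten accident-free years

# How many times farther a typical human goes between incidents.
gap = human_km_per_accident / waymo_km_per_disengagement
```

By this crude measure a human driver covers roughly ten times more distance between incidents, which is the order-of-magnitude improvement he is pointing to.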
How important is it to have stronger guidelines?
In my mind, it’s absolutely essential, and we should all look to Germany as an example of how to move forward in this area. The key is not so much in the testing and fleet phase, where companies can take on all the responsibilities. If they start having incident rates that are above acceptable, then the government will step in and shut down their programmes. So, there is huge risk on the company side for anyone deploying an expensive fleet—they want to make sure their cars work and meet the safety standards. The problem is when it becomes more widespread and when we start adopting it, when these features become available in commercial or consumer vehicles. Then, the legislation needs to be crystal clear about what is and isn’t acceptable… If that does not get sorted in the next three to five years, there is a potential that your country will fall behind in terms of adoption.
How do you see self-driving cars coming to India?
Where the driving is chaotic and irregular, that’s going to take longer for sure… there’s going to be a need for home-grown adaptations of these systems. So there’s a huge possibility for Indians to build Indian self-driving cars. It’s a huge market that somebody is going to want to take on. But clearly, there is another level of complexity there (in India) that needs to be considered: even just dealing with the different traffic rules and regulations, different vehicle types, the different types of environment and objects that need to be understood and detected. New data sets are needed; training on those data sets has to be performed. There’s going to be a lot of customization but the core of what you are going to do to bring cars to India is going to be the same.
Source: Livemint