The Future of Autonomous Vehicles

What’s the future of autonomous vehicles? Imagine a world where you wake up, grab your cup of coffee, and hop in your car to drive to work. Except you’re not doing the driving. You have more time to sleep, read a book, or even get a physical. This is a world we all want to live in, and although we are not quite there yet, people all over the world are working on developing, testing, and planning for a future with autonomous vehicles. Because, really, who doesn’t want that extra hour of sleep?

The Future of Autonomous Vehicles

You may have seen self-driving cars on the news, splashed across the internet, or even testing around your city. But most of those cars still have a human in the driver’s seat, which means they are probably level 2 or 3 cars. That’s definitely more independent than the car you might drive, which is probably a level 0 or 1. But it’s still a far cry from our dream ride, which would be a level 4 or 5.

What are the different levels of autonomy?

Let me explain. SAE International divides driving automation into six levels, from level 0 (no automation) to level 5 (full automation). Level 1 is “driver assistance,” and level 2 is “partial automation,” both of which you can already find in cars we drive today. Here, the car can do some of the steering, braking, and accelerating, but it still needs a driver with hands on the wheel, because levels 1 and 2 are still just “driver support.” Think lane keeping, collision warnings, and even active interventions that swerve the vehicle if you are about to get into an accident.
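To keep the taxonomy straight, here is a minimal sketch of the SAE levels as a Python enum. The level names follow SAE J3016; the helper function is purely my own illustration:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # steering OR speed assistance
    PARTIAL_AUTOMATION = 2      # steering AND speed assistance, driver supervises
    CONDITIONAL_AUTOMATION = 3  # car drives; human must take over when asked
    HIGH_AUTOMATION = 4         # no human needed within a defined domain
    FULL_AUTOMATION = 5         # no human needed, anywhere

def driver_must_supervise(level: SAELevel) -> bool:
    # Levels 0-2 are "driver support": eyes on the road, hands near the wheel.
    return level <= SAELevel.PARTIAL_AUTOMATION
```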

Level three is “conditional automation,” which means that the car is pretty much in control, but requires human intervention in an emergency, or when prompted by the system. Remember level three, because this is where it can get sticky. But the ultimate self-driving car would be operating at level 4 or 5, where it can steer, brake, accelerate, monitor the road, respond to random events, choose to change lanes, turn, and of course use its blinker like any decent citizen. The yellow brick road toward self-driving technology has been a winding one.

So, how did it all start?

Dr. Dean Pomerleau has been navigating it for a long time. You could call him the grandfather, or at least the cool uncle, of autonomous vehicles. Back in 1995, Pomerleau and his graduate student made a pilgrimage across the country, “look ma, no hands”-style, after they tricked out a stylish minivan with cameras and computer vision algorithms. For about 98.2% of the trip, the system steered the vehicle all on its own. It was a proof of concept, basically, for some of the technologies we are finally seeing deployed today.

In the years that followed, research teams competed to develop that technology further. It wasn’t until 2005, after some catastrophic failures, that DARPA’s Grand Challenge to build a self-driving car finally awarded first place to a Stanford team led by Sebastian Thrun. Yeah, you might have seen him around. Fast forward to 2009, when he starts a little project at Google: the Google Self-Driving Car Project, which would later become Waymo.

In 2016, Waymo spun off from Google, and in a few short years the industry erupted, with established tech and car companies just as eager as start-ups to get in on the action. Waymo is probably the recognized leader; GM bought Cruise Automation; Argo AI, here in Pittsburgh, is another leading player; and BMW and Mercedes are working on self-driving projects of their own. It remains to be seen whether these are good investments. For them to pay off, driverless technology must be refined to the point where it’s both reliable and flexible enough to handle a complex journey. That means sophisticated sensors, robust computer hardware, and intelligent decision-making software.

The Importance of Maps

To start with, autonomous vehicles rely on something not all human drivers are equipped with: a sense of direction. The companies building these self-driving cars build their own maps. Much as Google has Street View cars that drive through neighborhoods collecting map data, these companies run fleets with many additional sensors that drive through a city and map it in great detail – recording static obstacles, like telephone poles or the curbs around the road, that the car should be aware of and avoid. But to be truly adaptive, the car also needs to gather real-time information about a dynamic, unpredictable environment.
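In caricature, that pre-built map boils down to a lookup table of surveyed obstacles the car can query by position. A toy sketch, with labels and coordinates invented for illustration:

```python
import math

# A toy "HD map": static obstacles surveyed in advance by a mapping
# fleet, stored as (x, y) positions in metres with a label.
STATIC_OBSTACLES = [
    ("telephone pole", (12.0, 3.5)),
    ("curb corner", (8.0, -1.2)),
]

def nearest_static_obstacle(x: float, y: float):
    """Return the closest pre-mapped obstacle and its distance in metres."""
    label, (ox, oy) = min(
        STATIC_OBSTACLES,
        key=lambda obs: math.hypot(obs[1][0] - x, obs[1][1] - y),
    )
    return label, math.hypot(ox - x, oy - y)

print(nearest_static_obstacle(10.0, 0.0))  # -> ('curb corner', 2.33...)
```

A real map layers in lane geometry, traffic signals, and much more, but the principle is the same: the static world is measured once, so the car’s sensors can focus on what moves.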

[Image source: sciencemag.org]

Elon Musk thinks we can accomplish this with cameras alone. But if you have ever taken a selfie in a club, you know that cameras probably aren’t going to cut it, because they still struggle with darkness, depth, and reflections. So self-driving-car companies are investigating many different sensors: millimeter-wave radar for long-range sensing, for example, and short-range, often ultrasonic, sensors that see things very close to the vehicle. LIDAR is probably the most common and most impressive technology currently being used.
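This is why redundancy matters: each sensor fails differently, and a fusion step can lean on whichever one is most trustworthy at the moment. One standard trick is inverse-variance weighting – trust each sensor in proportion to its precision. A toy sketch with made-up numbers:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of distance estimates.

    `estimates` maps a sensor name to (distance_m, variance);
    noisier sensors (larger variance) get smaller weights.
    """
    weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[name][0] for name, w in weights.items()) / total

# Hypothetical night-time readings for one obstacle: the camera is
# noisy in the dark, so radar and LIDAR dominate the fused answer.
readings = {
    "camera": (41.0, 9.0),   # (metres, variance)
    "radar":  (38.5, 1.0),
    "lidar":  (38.2, 0.25),
}
print(f"fused distance: {fuse_estimates(readings):.1f} m")  # ~38.3 m
```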

What is LIDAR?

LIDAR (Light Detection and Ranging) is a laser-based technology: it shoots a laser beam out into the environment, scans it very quickly, and measures the range to objects and other vehicles. LIDAR is both a great sensor and a weak link. The units are very expensive and break down fairly often, which has been a major roadblock to fully autonomous driving. LIDAR has huge potential, but it’s just too delicate at the moment, because it’s built from fragile, moving parts.
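The ranging itself is simple time-of-flight arithmetic: a light pulse travels to the target and back, so the range is half the round-trip distance. A minimal sketch (my own illustration, not any vendor’s code):

```python
C = 299_792_458.0  # speed of light, metres per second

def lidar_range(round_trip_seconds: float) -> float:
    """Range to a target from the round-trip time of a laser pulse.

    The pulse travels out and back, hence the division by two.
    """
    return C * round_trip_seconds / 2.0

# A pulse that returns after 400 nanoseconds puts the target ~60 m away.
print(f"{lidar_range(400e-9):.1f} m")  # -> 60.0 m
```

Sweeping that one measurement across millions of pulses per second is what produces the familiar 3D point cloud.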

But something called solid-state LIDAR, which scans the environment using no moving parts, could change all that. These sensors, while in their infancy, are in such demand that manufacturers literally can’t make them fast enough to supply companies like Ford and Baidu. Solid-state LIDAR is much more reliable and much cheaper to manufacture, which matters enormously if you are going to deploy it at the scale of thousands of vehicles.

[Image source: gelastic.com]

In effect, these companies are building a driving robot: an entity that can perceive its environment, judge, and act on the road based on a complex network of real-time data analysis. But in a way, such a robot is still only prepared to drive on a map. To navigate in the real world, and to share the road (and the steering wheel) with human drivers, developers need to study the human side of the equation. Some of the biggest safety concerns involve perceiving and predicting the behavior of drivers, pedestrians, and cyclists. Slushy roads covered with ice and snow are also very hard to cope with, and there has been very little progress in self-driving for these challenging environments. To make that progress, we have to study how human drivers actually respond – both to risky road conditions and to autonomy itself.
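That perceive-judge-act pipeline is often described as a sense-plan-act loop. Here is a skeletal sketch with every stage stubbed out – the structure is the point, not the stubs:

```python
import time

def sense():
    """Stand-in for the perception stack: return detected obstacles."""
    return []  # e.g., fused camera/radar/LIDAR detections

def plan(obstacles):
    """Stand-in for decision-making: choose a driving command."""
    if obstacles:
        return {"steer": 0.0, "throttle": 0.0, "brake": 1.0}
    return {"steer": 0.0, "throttle": 0.2, "brake": 0.0}

def act(command):
    """Stand-in for actuation: send the command to the vehicle."""
    print(command)

# The loop runs at a fixed rate; real stacks cycle far faster than this toy.
for _ in range(3):
    act(plan(sense()))
    time.sleep(0.1)  # ~10 Hz
```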

Stanford’s Automotive Innovation Lab

So, to find out more about the human in the whole equation, let’s step into Stanford’s Automotive Innovation Lab, where researchers are working to build a detailed understanding of the human driver as they design active safety systems and automated vehicles. In one setup, they fit a NIRS (near-infrared spectroscopy) cap on a human driver. It shines a little infrared light onto the motor cortex, so the researchers can see when the driver is turning left, turning right, or using the gas and brake pedals. All of it is recorded in their data streams.

The majority of accidents come down to human error in recognition, decision, or performance. So the goal is to get the system to the point where it does a better job at those three things than humans do, making our roads safer. The flexible steering of their X-1 experimental vehicle lets them set up all sorts of experiments; they can emulate driving over an unexpected change of friction, going from snow to ice, for example. Studies in the Dynamic Design Lab measure the inputs professional drivers make, to understand what they do differently to drive right at the limits of the vehicle. That knowledge feeds into the algorithms that control autonomous vehicles, so that, hopefully, the future autonomous vehicle will drive as well as the very best human driver.
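What “the limits of the vehicle” means can be made concrete with one classic formula: on a curve of radius r, tire friction caps the cornering speed at v = √(μgr). A quick illustration (using textbook-style friction coefficients, not lab measurements) of why the snow-to-ice transition is so treacherous:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_corner_speed(mu: float, radius_m: float) -> float:
    """Friction-limited cornering speed, v = sqrt(mu * g * r).

    Above this speed the tires cannot supply the lateral force
    the turn demands, and the car begins to slide.
    """
    return math.sqrt(mu * G * radius_m)

# The same 50 m bend on packed snow (mu ~ 0.3) versus ice (mu ~ 0.1).
for surface, mu in [("snow", 0.3), ("ice", 0.1)]:
    v = max_corner_speed(mu, 50.0)
    print(f"{surface}: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

A speed that is comfortably safe on snow is already well past the limit on ice – exactly the kind of sudden friction change the X-1 experiments emulate.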


One recent project investigated a scenario that might pop up in something like level 3 autonomy: the car has been rolling solo when suddenly it encounters a situation it can’t make sense of, and the human driver is asked to intervene. Their studies of brain and behavior show that people’s driving may be significantly different for a period after they take back control, especially after a long stretch out of the loop. Because the researchers can watch cognitive resources being deployed almost in real time, they can see that drivers may have less cognitive capacity to deal with an emergency under those conditions.

Handing control back and forth with the system is potentially quite dangerous. Though it may seem extreme for consumers to jump from cruising around in a level 1 car to hopping into a fully autonomous one, many researchers agree that partial autonomy should be reserved for testing purposes – and unfortunately, most of the accidents that have already occurred have proven them right. The next five years or so of autonomous vehicle design will likely focus on implementing full autonomy in smaller, more controlled environments, rather than going through the partial-autonomy stage to get there.

Role of NACTO

People are easily distracted, and that’s the underlying problem autonomous vehicles are setting out to solve. The cities of NACTO (the National Association of City Transportation Officials) believe that only full automation can deliver the safety benefits that are the major promise behind autonomous vehicles. NACTO represents 68 cities and 11 transit agencies across North America, and it recently convened to discuss how the world should prepare for fully autonomous cars to become a reality. When self-driving cars hit the road, they will need to travel at low speeds and make use of existing infrastructure, because over the past century cities have already made countless compromises to accommodate the shiny new technology of the time: the automobile.

NACTO’s Blueprint for Autonomous Urbanism came about because its members were seeing too many visions of driverless cars in a people-less city. The blueprint imagines how cities can structure their streets to prioritize walking, biking, transit, and public space – maximizing the benefits of living and being in a city, while using autonomous vehicles to help achieve those goals.

I think in the next year or two we will see companies like Waymo and GM’s Cruise deploying maybe a few hundred of these vehicles for the general public to ride in. Probably by the early 2020s, we will see cars without drivers giving rides and then driving empty to pick up the next passenger. But it will probably be at least a decade before you can walk into a showroom and buy, at an affordable price, a car with level 4 or 5 autonomy – one where you don’t have to do anything.

[Image source: quecollision.com]

The fact that Waymo’s CEO has said we are still quite a way off makes me think that’s probably true. But in the near term, there are applications, especially in transit, where autonomous technology can help achieve some of our goals. Access to affordable, convenient transportation is really important; we all know a grandparent, or a friend of a grandparent, who had to give up driving and lost a lot of their independence. Autonomous vehicles can change lives and save lives across the board, as long as we consider everyone across the spectrum as we, as a society, move forward with automated vehicles.

