When Humans Panic While Inside An AI Autonomous Car

If the human occupants of a self-driving car panic, maybe because a dog crosses into the car's path, the AI needs to be able to cope. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Don’t panic.

Wait, change that, go ahead and panic.

Are you panicked yet?

Sometimes people momentarily lose their minds and opt to panic.

This primal urge can be handy as it invokes the classic fight-or-flight instinctive reaction to a situation.

If you suddenly see a bear up ahead while in the woods, rather than carefully plotting out the myriad options of what to do, entering panic mode might get your feet moving, and you'll have run far from the bear before it has had a chance to do anything to you. On the other hand, your effort to run away might not be wise, and the bear might easily catch up with you, allowing the bear to win, with perhaps an untoward result for you.

Not many of us are likely to get into a circumstance of confronting a bear, so let's consider something with higher odds of happening to any of us.

Suppose you are in an airplane, the plane is on the ground, and it catches fire.

Presumably, with or without panic, you’d realize that you should get out of the burning airplane.

How can you get out of the burning airplane?

I’m sure you’ve all sat through the flight attendants telling you to figure out beforehand the nearest exit to your seat. I’d bet that most people don’t look to see where that exit is, and instead just kind of assume that when there’s an emergency they’ll figure out where the exit is. Or, they’ll simply follow everyone else, under the assumption that everyone else knows where the exit is and that they are heading toward it.

Understanding The Nature Of Panicking

Interestingly, recent studies seem to show that when people are in a burning airplane, at the very moment you would assume they would be heading toward the exit as fast as they could, they often try to grab their belongings from the overhead bin first.

This creates a significant delay. This creates heightened risk of getting caught inside the burning airplane. This creates the strong possibility of dying on-board the plane. Yet, people do this anyway, in spite of the seemingly obvious aspect that you should just get off the plane.

There was a flight out of Cancun with 169 passengers and 6 crew members; while on the ground, the plane started to become engulfed in flames. Some of the passengers opted to try to retrieve their bags before getting out of the plane. The evacuation took over three minutes. Tests with people getting out of a same-sized plane have indicated that it should take about ninety seconds. That doubling of the time in actual practice could have led to deaths (fortunately, no deaths occurred in this Cancun instance).

A more dramatic example would be the Air France A340 in Canada that ran off the runway; the plane split into two pieces and erupted into flames. Reports afterward indicated that about half of the passengers first retrieved their bags before getting off the plane. Remarkably, this occurred while the flight attendants were yelling at them to get out of the plane and not to first grab their bags. I guess keeping your toothbrush safe, along with your other personal items in the bag that's jammed into the overhead bin, is worth possibly losing your life over.

Let's also clarify that the act of grabbing your bag can harm more than just you.

It’s one thing if you do something ill-advised and it is only you that can get hurt from it, but in the case of an airplane, the act of grabbing your bag is undoubtedly going to create a delay for other humans trying to get past you to exit from that plane. So, it’s not just a personal choice with personal consequences, it’s a choice that involves deciding whether other people should also suffer a worse fate because of your decision.

That’s an important added twist to this discussion about panic, namely contagion.

Contagion And Panic

When a person panics while in a crowd, it can have spreading consequences like a kind of virus.

One person grabs their bag, and it slows down everyone else. The slowing down of others might cause them to panic even more. Their deeper panic might cause them to do something untoward, and the cycle keeps repeating, with everyone indirectly or directly harming others.

Indeed, it is believed that oftentimes the grabbing of the bag in the burning airplane is partly copycat behavior.

They see one person do it, and they opt to do the same. This could be a monkey-see, monkey-do kind of reaction. Or, it could be a follow-the-leader reaction, namely they assume the other person knows something they don't, such as that maybe it is prudent to grab your bag, and so they follow that leader. Or, it could be a competitive-juices kind of thing, wherein you think that if that person gets to keep their bag intact, you should be able to do so too.

Or, it could be that since the other person has now created a delay by getting their bag, others figure they might as well create a delay too, reasoning that they are just using the delay time the other person has already created. In other words, I see a person grab for their bag, and I calculate that the person has created a delay of some kind. During that momentary period of delay, I'll grab my bag too. Thus, I've not expanded the delay time; I've merely put the already-created delay time to good use, time that otherwise would have been spent watching the other person grab their bag.

How’s that for some impressive logic?

It turns out there are other adverse consequences beyond just the time delay of getting a bag.

People carrying a bag are typically going to take longer to get down the aisles and to the exit. Thus, they not only delayed others by grabbing their bag, but the act of carrying the bag adds more delay too.

Furthermore, there are documented instances whereby a person carrying their bag came to the exit, saw the inflated chute, and decided to toss their bag onto the chute before jumping in to slide down. In some cases, the tossed bag actually punctured the chute. In other cases, the tossed bag hit other people on the chute, or blocked the chute and made it harder for others to slide down. Similarly, the person that opts to keep their bag in their own clutches is likely to be a heavier and more awkward slider, often taking longer or hitting others on the chute.

In terms of the nature of panic, it is tempting to think that since on an airplane you are already vaguely aware that something can go amiss, and since the flight attendants at the start of the flight warn you about things that can go amiss, presumably there would not be much panic during an actual incident. People were forewarned that something can happen. If you are in the woods, maybe you didn't anticipate that a bear might appear in front of you. Maybe no one warned you beforehand that these particular woods have bears. On a plane, you would certainly be aware that the plane can catch fire and that you might need to exit quickly.

I realize that you might quibble with me about the "panic" aspects of the people on the plane that grabbed their bags. You might try to argue that they weren't panicked and instead mentally and carefully weighed the risks of deciding whether to get their bags or not. In a very rational way, they decided that they had time to get their bags and that it was worthwhile to do so. If you watch videos of some of these incidents, I would suggest you'll see more panic-like reaction than what seems to be a chess-match kind of consideration of what to do.

Ranges Of Panic Behavior

Overall, I’ll concede that there are ranges of panic.

You’ve got your everyday typical panic.

You've got the panic that is severe, where the person is really crazed and out of their head.

You’ve got the person that seems to be continually in a semi-panic mode, no matter what the situation.

And so on.

We’ll use these classifications for now:

  • No panic
  • Mild panic
  • Panic (everyday style)
  • Severe panic

These forms of panic can be one-time, intermittent, or persistent. Therefore, the frequency can be an added element to consider:

  • One-time panic (of any of the aforementioned kinds)
  • Intermittent panic
  • Persistent panic

We can also add another factor, which some would debate fervently about, namely deliberate panic versus happenstance panic.

Most of the time, for most people, when they get into a panic mode, it is happenstance panic. It happens, and they have little or no control over it. It is like an ocean wave that rises, reaches a crescendo, and then dissipates. There are some, though, who claim they are able to consciously use panic to their advantage. They wield it like a tool. As such, if the circumstance warrants, they force themselves to deliberately go into a panic mode. It is hoped or assumed that doing so might give them herculean strength or otherwise get their adrenaline going. It is somewhat debated whether you can truly harness panic and use it like a domesticated horse.

In any case, here are these factors:

  • Happenstance panic (most of the time)
  • Deliberate directed panic (rare)
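To make the discussion concrete, the classifications above could be sketched as a small data model. This is a minimal illustrative sketch; the type names and the "actionable" threshold are my own assumptions, not an established API or industry convention:

```python
from dataclasses import dataclass
from enum import Enum, auto

class PanicLevel(Enum):
    NONE = auto()
    MILD = auto()
    EVERYDAY = auto()   # everyday typical panic
    SEVERE = auto()

class PanicFrequency(Enum):
    ONE_TIME = auto()
    INTERMITTENT = auto()
    PERSISTENT = auto()

class PanicOrigin(Enum):
    HAPPENSTANCE = auto()   # most of the time
    DELIBERATE = auto()     # rare, consciously invoked

@dataclass
class PanicAssessment:
    level: PanicLevel
    frequency: PanicFrequency
    origin: PanicOrigin

    def is_actionable(self) -> bool:
        # Hypothetical threshold: anything beyond mild panic
        # merits attention from a monitoring system.
        return self.level in (PanicLevel.EVERYDAY, PanicLevel.SEVERE)

assessment = PanicAssessment(PanicLevel.SEVERE,
                             PanicFrequency.ONE_TIME,
                             PanicOrigin.HAPPENSTANCE)
print(assessment.is_actionable())  # True
```

Modeling the three axes separately means a monitoring system could, for instance, treat persistent mild panic differently from a one-time severe episode.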

Panic Related To Cars

Let's consider how panic can come into play when driving a car.

If you watch a teenage novice driver, you are likely to see moments of panic.

When they are first learning to drive, they are often quite fearful about the driving task and the dangers involved in driving a car (rightfully so!). As long as the driving task is coming along smoothly, they are able to generally keep their wits about them. This is why it is usually safest to start by having them drive in an empty parking lot. There's nothing to be distracted by, there are fewer things that can get hit, etc.

Suppose a teenage novice driver is driving in a neighborhood and a dog darts out from behind some bushes.

For more seasoned drivers, this is something that is likely predictable and that you’ve seen before. You might apply the brakes or take other evasive actions, and do so without much panic ensuing. In contrast, the novice driver might begin to feel their blood pumping through their body, their heart seems to pound incessantly, their hands grip the steering wheel with a death like grasp, their body tenses up, they lean forward trying to see every inch of the road, and so on.

Should I hit the brakes, they are thinking. Should I try to accelerate past the dog? Should I honk the horn? Should I swerve? What to do? Their mind can become muddled and overwhelmed. They might pick any of those driving options and do so out of pure panic and not due to having decided which approach was the most prudent in the situation. They probably wouldn’t have the presence of mind to look in their rear view mirror to see what is behind them, which would be handy to know, since if they do hit their brakes it could cause the car behind them to ram into their car.

Besides taking some kind of driving-related action, the novice driver might do other things, such as yelling at the dog, which might not be sensible if the windows of the car are rolled up and the dog couldn't hear anyway. Or, they might flail their arms, taking them off the steering wheel, as though trying to motion at the dog to take action, like getting out of the road. These motions might have little value and not be sensible in the circumstance, but panic often leads people to do seemingly senseless things (like grabbing their bag when exiting a burning airplane!).

Autonomous Cars And Human Panic While Inside The Vehicle

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One important aspect involves considering what humans might do while inside an AI self-driving car and how to cope with their potential panic.

For the case of the dog that darts out into the street, let’s change the scenario and assume that you are in an AI self-driving car. The AI is driving the car. You are not driving the car. Indeed, let’s go with the notion that this is a Level 5 self-driving car, which is considered the level at which the AI is the sole driver of the car.

There isn't any provision for you, as a human, to drive the car. There are no pedals and no steering wheel.

The driving is entirely up to the AI system.

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

You are an occupant in the car.

Maybe you were reading the newspaper and enjoying having the AI drive you around the neighborhood. Out of the corner of your eye, you see that a dog has suddenly darted into the street.

What do you do?

For those of us that have grown up in an era of cars that humans drive, I'd bet that you'd be very tempted to suddenly take control of the car.

You might instinctively reach for where the steering wheel used to be placed, or you might use your leg and jam downward instinctively as though you are slamming on the brakes. But, in this case, none of that is going to do any good. You are not driving the car.

As an aside, if we do ever become a society where only the AI is the driver, and you have people that have never driven a car themselves, I would guess that they won’t react as you do, in that they aren’t going to be tempted to “drive” the car, since they have always accepted the notion that it’s up to the AI to do so. Eerie, kind of.

Anyway, back to that poor dog that’s run into the street and is facing potential injury or death at the hands of the AI.

You can see that the dog is possibly going to get hit. You likely are hoping or assuming that the AI is going to detect the dog being there and will take some kind of evasive maneuver. But, in those few seconds between your realization of the situation and before the AI has overtly reacted, you aren't sure what the AI is going to do. You don't even know if the AI realizes that the dog is there.

I suppose if you were someone that doesn’t care about dogs or animals, you might just slump back into your seat in the car and shrug off the situation. You might think that if the car hits the dog, so be it. If the car misses the dog, so be it. Leave this up to the AI. You don’t have a dog in this fight (a great pun!).

Perhaps you have blind faith in the AI and so you again slump back in your seat. You are calm because you know that the AI will make "the right decision," which might be to avoid the dog, or might be to hit the dog since maybe that's the lesser of two evils (perhaps if the AI were to swerve the car, it might injure or kill you, and so it chooses instead to hit the dog).

For my article about the ethics of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For my article about the need for defensive driving tactics by AI self-driving cars, see: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

I’m betting that the odds are high that you’ll actually be very concerned about the welfare of the dog, and also be concerned too about what the AI is going to do as a driving aspect. If the AI makes a wild maneuver, maybe it goes off the road and runs into a tree, and you get injured. Perhaps the AI doesn’t recognize that there’s a dog ahead and isn’t going to do anything other than straight out hit the dog. This could harm or kill the dog, and likely damage the car, and you might get hurt too.

Well, in this situation, you might panic.

You could potentially wave frantically in hopes that the dog will see you, but the odds are low because the car has tinted windows and the windows are all rolled up. You might wave your arms anyway, similar to what the novice driver example earlier suggested might be done. You might yell or scream. You might start crying, doing so because you believe the dog is about to get harmed and your body is reacting in the moment. Your heart starts pounding; you are frantic because you can see what is about to happen but have little or no control to avert the situation.

What The AI Should Do

Here’s a question for you to ponder – what should the AI do?

Now, I’m not referring to whether the AI should hit the dog or avoid the dog, I’m instead asking what the AI should do about you, the human occupant of the self-driving car.

Few of the automakers and tech firms are considering that question right now.

They are so focused on getting an AI self-driving car to do the everyday driving task that they consider the aspects of the human occupants to be an "edge" problem, one that is not at the core of the matter, something you figure you'll get to when you get to it, secondary to whatever else is primary.

The AI in our scenario is presumably focusing on the dog and what to do about the driving. That’s suitable and sensible.

Should it though also consider the humans inside the self-driving car?

Should it be observing the humans to see how they are doing?

Should it be listening for the humans to possibly say something that maybe the AI needs to know?

Suppose you were driving a car and you had a passenger in the car with you. A dog runs out into the street. The passenger in your car says to you, hey, watch out, there’s a dog there. Maybe you, as the driver, were looking to the side of the road and had not noticed the dog. Thank goodness that the passenger noticed the dog and alerted you about it. You now see the dog and take evasive action. Dog saved. Humans saved.

If the AI of the self-driving car is only paying attention to the outside world, it might miss something that a passenger inside the AI self-driving car noticed. Could be that the passenger provides valuable and timely information, similar to my example about the dog running into the street.

As a human driver, you already know that sometimes a passenger in your car might panic. They see that dog, your passenger yells and screams about the dog and flails their arms, and you meanwhile are trying to keep a cool head. Yes, you see the dog. Yes, you are going to take appropriate driving action. The passenger doesn't necessarily know this. They are just in a panic mode. They are yelling and screaming, and maybe, even worse, they try to reach over and grab the wheel from you. That could be quite dangerous.

Would we want the AI to be like that calm driver that also is allowing the passenger(s) in the self-driving car to provide input, which might or might not be useful, which might or might not be timely, or do we want the AI to completely ignore the human occupants?

It is our belief that the AI should be observing the human occupants in a mode that involves gaining their input, but its response also needs to be tempered by the situation; the AI cannot just obediently do whatever the human might utter. There is already going to be a need for interaction between the AI and the human occupants, arising naturally in the course of traveling in the self-driving car, such as the human wanting to stop someplace to get food or go to the bathroom, or asking the AI to slow down so the person can see the scenery, etc.

We also believe that it will be important for the AI to at times explain what it is doing and why. If the AI had told the human occupants that there was a dog in the road and that the AI was going to swerve to avoid it, the human occupants would at least be reassured that the AI realized the dog was there and was going to take action. This interaction with the human occupants can be tricky; in the case of the dog in the road, there might not be sufficient time to forewarn the human occupants, and the tight time frame needed to react might preclude providing an explanation.
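The timing trade-off just described can be sketched as a simple check: announce a maneuver only when doing so won't delay the maneuver itself. This is a hypothetical illustration; the function name, the 1.5-second spoken-explanation estimate, and the half-second safety margin are all my own assumptions, not figures from any deployed system:

```python
def should_explain_before_acting(time_to_react_s: float,
                                 explanation_s: float = 1.5) -> bool:
    """Announce a maneuver only if the announcement leaves slack to react.

    time_to_react_s: estimated seconds before the maneuver must begin.
    explanation_s:   estimated seconds a brief spoken explanation takes.
    """
    # Hypothetical margin: keep at least half a second of slack
    # after the explanation finishes.
    return time_to_react_s - explanation_s >= 0.5

print(should_explain_before_acting(4.0))   # True  - time to say "dog ahead, swerving"
print(should_explain_before_acting(1.0))   # False - act first, explain afterward
```

The second case matches the dog scenario: when the time frame is too tight, the AI acts immediately and offers the explanation after the fact.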

For explanation based AI and self-driving cars, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/

For the importance of natural language processing and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For my framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Complexities Of Handling Human Panic

Just like you aren’t supposed to yell “Fire!” in a crowded theater (unless there is a fire, presumably), the AI cannot blindly do whatever the human might say.

Suppose the human tells the AI to slam on the brakes and come to an immediate halt, yet the self-driving car is going 80 miles per hour on a crowded freeway with a semi-truck right on its heels. Does hitting the brakes in that scenario make sense? Likely not.

So, the AI needs to realize that a human occupant's input to the driving task will need to be filtered and gauged based on the situation. Furthermore, if the human seems panicked, this could be a further indicator to be cautious about whatever the human has to say. If you were a human driver and the passenger next to you seemed utterly panicked, I dare say you would likely consider their advice dubious and not give it as much weight as if it seemed carefully reasoned.
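The filtering just described could be sketched as a small decision function. This is an illustrative sketch only; the command names, the context fields, and the speed threshold are my own assumptions, not part of any real autonomous-vehicle stack:

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    speed_mph: float
    tailgater_close: bool      # a vehicle is right on the car's heels
    occupant_panicked: bool    # inferred from cabin monitoring

def weigh_occupant_command(command: str, ctx: DrivingContext) -> str:
    """Return 'accept', 'defer', or 'reject' for an occupant's command."""
    if command == "emergency_brake":
        # Slamming the brakes at freeway speed with a truck on the
        # car's heels is more dangerous than continuing on.
        if ctx.speed_mph > 50 and ctx.tailgater_close:
            return "reject"
        return "accept"
    # A panicked occupant's advice gets less weight: act on it only
    # after the AI corroborates it with its own sensors.
    if ctx.occupant_panicked:
        return "defer"
    return "accept"

ctx = DrivingContext(speed_mph=80, tailgater_close=True, occupant_panicked=True)
print(weigh_occupant_command("emergency_brake", ctx))  # reject
```

The key design point is that the same utterance maps to different outcomes depending on context, which is precisely the filtering the paragraph above argues for.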

Let’s pursue further the overall notion of a human occupant and the nature of panic.

Suppose the AI is driving the Level 5 self-driving car and it’s a nice quiet and easy going drive.

The human occupant is reading the newspaper. They read a story about how the stock market is dropping. The person realizes their life savings is being drained away. They go into a panic mode. They start yelling for no apparent reason. They seem out of their head.

Most AI developers for self-driving cars would say that this is something that has nothing to do with the driving task, therefore, the AI has nothing to do with the situation. The AI should just keep driving the car. It makes no difference that the human is going nuts. If the matter doesn’t pertain to hitting a dog up ahead in the roadway, or some other matter directly linked to the driving, it has no relevance to the AI.

But, if a human was driving the car, and they had a passenger that started uncontrollably weeping or otherwise went into a panic mode, what would the human driver do? I’d bet that even if you were in an Uber or Lyft, and maybe even in a taxi, the human driver would say something to you.

Are you OK?

What’s wrong?

Besides the courtesy of asking those kinds of questions, it might be handy to ask because their panic could have to do with the driving of the car. You don't know for sure that it does not. It might be handy to ascertain whether there is a connection between their apparent panic and the driving task.

Whatever underlies the panic, it could be that the panic somehow becomes pertinent to the driving task. Suppose the human occupant needs to be taken to the hospital because they believe they are having a heart attack (maybe it’s just a panic attack that feels like a heart attack)? Or, maybe they are genuinely injured and need medical care. Or, suppose the human occupant desperately needs to meet with a friend and the friend lives up ahead a mile or two? In essence, the panic of the human occupant could lead to a needed change related to the driving task, whether it be to alter where the self-driving car is going, or even how the self-driving car is being driven (such as slow down, speed-up).

It is anticipated that most AI self-driving cars will have cameras pointed not only outward to detect the surroundings of the car, but also inward. These inward-facing cameras will be handy when your children are in the self-driving car without adult supervision and you want to see how they are doing. Or, if you are using the AI self-driving car as a ridesharing service, you'd likely want to see how people are behaving inside the car and whether they are wrecking it. All in all, there are more than likely going to be inward-facing cameras.

With the use of these inward-facing cameras, the AI has the possibility of being able to detect that someone is having a panic moment. Besides audio detection of the person's words or noises, the camera could be used in a facial-recognition type of mode. Today's facial recognition can generally ascertain whether someone seems happy or sad. It won't be long before facial recognition is coupled with body recognition, able then to more comprehensively detect someone's mood and demeanor.
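Combining those cabin signals could look something like the sketch below, which fuses normalized face, voice, and body readings into a coarse panic label. The weights and thresholds here are entirely made up for illustration; a real system would use a trained model rather than a hand-tuned formula:

```python
def estimate_panic_level(face_distress: float,
                         voice_stress: float,
                         body_agitation: float) -> str:
    """Fuse cabin-sensor signals (each normalized to 0..1) into a label."""
    # Simple weighted fusion with assumed weights; purely illustrative.
    score = 0.4 * face_distress + 0.35 * voice_stress + 0.25 * body_agitation
    if score < 0.2:
        return "none"
    if score < 0.5:
        return "mild"
    if score < 0.8:
        return "everyday"
    return "severe"

print(estimate_panic_level(0.9, 0.85, 0.7))  # severe
```

Note that the output labels deliberately mirror the panic classifications listed earlier in the article, so a downstream module could react differently to mild versus severe panic.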

Aiding Humans That Are Panicking

The AI could try to aid a person that’s in a panic mode.

For example, the AI system might seek to calm the person and reassure them. The AI system could offer to connect with a loved one or maybe even 911. Some believe that we'll eventually have AI systems that act in a mental-therapist manner, which would then be easy to include among the AI add-ons for the AI self-driving car.

Of course, this calming effort should not detract from the AI's operating of the self-driving car, and thus any use of processors or system memory for the calming effort would need to be undertaken judiciously. Other considerations include whether the AI should open the windows to let in fresh air, or whether it would be better to keep the windows closed (maybe the panicked human might try to jump out the window!). Should the AI come to a stop to let the human out, or is it safer for the human to stay inside the car? These are tough choices to be made.

Help, I’ve fallen and I can’t get up.

That’s the famous refrain from a commercial that gained great popularity.

Suppose instead we use “I’m panicking inside this AI self-driving car and I don’t know what to do” and the question arises as to what the AI of the self-driving car will do.

We assert that the AI needs to be aware of the human occupants and be attentive in case they panic. The panic might be directly related to some aspect of the AI driving task. Or, it might not be related, but the AI might end up having to alter the driving task due to the human's panic mode. Furthermore, the AI could potentially try to aid the human in a manner that a fellow human passenger might, or that perhaps even a human therapist might.

The odds are that people are going to become panicked in an AI self-driving car to the degree that they are unsure or feel uneasy about the AI driving the car. Many human occupants will become "back seat drivers," desirous of giving advice to the self-driving car and reacting to the AI's driving. Some AI developers have even suggested that the human occupants should not be allowed to look out the windows of the self-driving car, since all that will do is get those pesky humans into a panic mode whenever the AI needs to make a tricky maneuver.

Some believe that the AI does not have any obligation to placate or aid the human occupants. In this view, the AI drives the car. Period. It’s like a chauffeur that will only listen to you about whether to drive home or to the store, and nothing else.

Maybe the initial versions of the AI would be that simplistic, but it would seem unwise to stop there.

The AI needs to be fully able to contend with all aspects of the driving task, which means not just the pure mechanics of driving down a street and making turns. It means instead to be the captain of the ship, so to speak, and be able to aid the passengers, even when they go into a panic. Of course, we also need to make sure that the AI doesn’t itself go into a panic mode.

But, that’s a story for another day.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

This UrIoTNews article is syndicated from AITrends