Active Shooter Responsiveness for AI Autonomous Cars

Run, hide, fight: three words to remember in an active shooter situation.

By Lance Eliot, the AI Trends Insider

Sadly, the topic of active shooters has been in the news recently.

Upon reflecting on these recent horrific incidents, I remembered one from earlier this year, on March 27, 2019 in Seattle, an active shooter situation that involved a city bus, though fortunately the outcome was not as dire as the recent incidents.

I believe that once autonomous vehicles begin to fully appear on our public roadways, those self-driving driverless vehicles might find themselves immersed in an active shooter setting by happenstance, for which the AI of the autonomous vehicle should be prepared to respond accordingly.

Let’s explore what happened in the Seattle incident and then see if there are lessons that can be applied to the development of AI systems for autonomous cars and other driverless vehicles.

Seattle Active Shooter Incident In March 2019

Thank goodness for a heroic bus driver in Seattle.

Things went haywire on an otherwise normal day (this is a real story that happened on March 27, 2019).

The metro bus driver managed to save a bus of unsuspecting passengers from grave danger. Not danger from a wild car driver that might have veered into the bus, nor a large sinkhole that might have suddenly appeared in the middle of the street, but instead a life-or-death matter of an active shooter menacing the streets of North Seattle.

A crazed gunman was walking around on Metro Route 75 and was wantonly firing his pistol at anything and anyone that happened to be nearby. Characterized as a senseless and random shooting spree, the active shooter took shots at whatever happened to catch his attention. Unfortunately, the bus got into his shooting sphere. The bus was on its scheduled route and unluckily came upon the scene where the gunfire was erupting.

Unsure of what was going on, the bus driver at first opted to bring the bus to a halt. The shooter decided to run up to the bus and then, shockingly, shot pointedly at the bus driver. The bullet hit the bus driver in the chest. Miraculously, he was not killed. In spite of the injury and the intense bleeding, and with an amazingly incredible presence of mind and spirit, the bus driver took stock of the situation and decided that the right thing to do was to escape.

He could have perhaps tried to scramble out of the bus and run away, aiming to save himself and not put any thought towards the passengers on the bus. Instead, he put the bus into reverse and backed up, which is not an easy task with a bus, not when you are hemorrhaging from a bullet wound, and not when you have a gunman trying to kill you.

After having driven a short distance away, he then put the bus into forward drive and proceeded to get several blocks away. His effort to get the bus out of the dire situation of being in the vicinity of the meandering shooter was smart, having saved his passengers and himself from becoming clay-pigeon targets cooped up inside that bus.

Fortunately, he lived, and we can thank him for his heroics.

When asked how long it took him to decide what to do, he estimated that it all played out in maybe two seconds or so. He got shot, looked to see if he was still able to drive the bus, and figured that he could do so.

Interestingly, he had previously taken a two-day training course for bus drivers on how to deal with confrontations, though as you can imagine the formal course did not include dealing with a demented gunman who’s taking potshots at you and your bus. That’s not something covered in most bus operations classes or owner’s manuals (those are more so about unruly passengers).

What To Do In An Active Shooter Situation

Often, an active shooter goes into a building and wreaks havoc therein. Sometimes this occurs in a restaurant, or a nightclub, or a warehouse, or a grocery store, or an office environment. If you are caught up in such a situation, the difficulty often involves being trapped in a confined space. The gunman has the upper hand and can just start shooting in any direction, hoping to hit those within sight of the killing spree.

Perhaps you’ve taken a class on what to do about an active shooter. I’ve done so, which was offered for free by the building management that housed my office. The facilities team at the building decided that it might be helpful for the building occupants to know what to do should an active shooting arise. Though I doubted that I would ever be stuck in that kind of circumstance, I figured it would be wise to take the complimentary class anyway, always wanting to be prepared.

There is a mantra that they drummed into our heads, namely run-hide-fight, or some prefer the sequence of hide-run-fight.

Those three words need to be committed to memory. You want to recall those three words when the moment of needing them is at hand. The odds are that you’ll be in a state of shock when a shooting erupts, most likely feeling intense and overwhelming panic, and without memorizing the three words you might do either nothing at all or the wrong thing.

You can use variants of the three words, such as hide-flight-fight, cover-run-fight, and others, whichever is easiest for you to recall. Some even use the three words of avoid-deny-defend.

There is also some debate about the sequencing of the three words. Some believe that you should always try to hide first, and if that doesn’t seem viable then run, and if that doesn’t seem viable then fight. Thus, the three words are purposely sequenced in that manner.

Not everyone believes that you can always proceed in that sequence. It might be better in a given situation to run away and not consider hiding. In that case, it would be run-hide-fight as the three words to be used. Others would say that the trouble with running is that you probably will remain momentarily as a target while undertaking the escape, while if you are hiding you are presumably or hopefully unable to get shot.

The approach selected will generally be context-based. If there is no place to hide, you should not waste time deciding whether to hide or not. Time is often of the essence in these situations. Of course, those who argue for hiding as the first step would say that you should make your decision rapidly and, if the odds of hiding seem slim, resort to escape.

The third element, the fight part, almost always is listed as the last option.

Most would say that fighting your way out of an active shooter situation should be a last resort. Only if you happen to be trained in bona fide fighting methods, and only if fighting seems “better” than hiding or escaping, should you try to fight. Again, this is a contextual decision. The average person, if unarmed and faced with a gunman shooting a loaded weapon, probably does not have much chance of overtaking the shooter.

One valuable point in the class that I attended involves the notion that you can potentially get the shooter to become distracted or knocked off-balance by making use of the “fight” approach in even a modest way. For example, suppose you are in an office environment; you might pick up a stapler and throw it directly at the head of the shooter. Though the stapler is unlikely to knock out the gunman, the odds are that the shooter will flinch or duck, reflexively, generating a pause in the shooting, allowing either you or others to try to overpower the gunman, or providing a short burst of time to hide or run.

I’ll repeat that it all depends upon the situation. Standing up to toss a stapler might be a bad idea. It could make you into an obvious target. You will likely draw the attention of the shooter. Being in a standing position might make it easier to get shot. Nonetheless, there might be a circumstance whereby the stapler throwing or coffee cup throwing or throwing of any object could be a helpful act.

Active Shooter Situation That Is Outdoors

What would you do when you are outside, and an active shooter gets underway?

You can still use the handy three words of hide-run-fight. I’ll list them in that order of hide-run-fight, but please keep in mind that you might instead memorize it as run-hide-fight, whichever you prefer. I don’t want to get irate emails from readers upset that I’m somehow urging one sequence over another, so please memorize whichever you see fit.

Getting back to the matter at-hand, what would you do if you were outside and came upon an active shooter?

You would look for anything substantial that you might be able to hide behind. If there isn’t anything nearby as a hideaway or if the hiding seems to be nonsensical in the moment, you would consider then whether to run. Running in an outside situation might be dicey if the shooter has a clear shot at you while you are running. When running and confined inside a building, there might be walls, pillars, and other structures that make it harder for the shooter to aim and shoot directly at you, though of course it also makes it harder for you to make a quick getaway. Being outside might not offer protective obstructions, though it might provide you with an open path to run as fast as you can.

Let’s revisit the mindset of the heroic bus driver.

The bus driver wasn’t standing around. He wasn’t “outside” per se. He was inside a bus. At first, you might assume that being inside a bus is a pretty good form of protection. Not particularly, especially for the driver. The driver is sitting in the driver’s seat, buckled in. There are glass windows all around, so that the driver can see the roadway while driving. It’s kind of a sitting duck situation.

The passengers on the bus have a greater chance of dropping to the floor of the bus to hide than does the bus driver. The passengers are usually not buckled in. Plus, the design of most buses makes it hard to see into the passenger compartment area by someone standing outside that’s shooting at the bus. I’m not suggesting the passengers were safe, only pointing out that overall they were likely in a less risky place of getting shot than the driver was.

One thing the passengers presumably could not do was drive the bus, at least not in the instant that the active shooter started shooting directly at it. I suppose if the bus driver had been shot badly and could not drive the bus (or had died), the passengers might have tried to pull the bus driver away from the steering wheel and one of them could have tried to drive the bus. Besides the physical difficulty of getting into the driver’s seat, the question arises whether an average passenger would have known immediately how to drive the bus.

In any case, the heroic bus driver realized that he was still alive and could drive the bus. With that decision made, the next matter to ponder would have been which way to drive the bus.

Recall the three magical words, hide-run-fight.

If there was a nearby wall, maybe pull the bus behind that wall, attempting to hide the entire bus. It seems doubtful in this case that there was any nearby obstruction large enough to hide the bus behind. So, the hide approach probably wasn’t viable in this situation.

This meant that the next choice would be to consider running away. Apparently, if he had chosen to drive forward, he would have been going toward the gunman. I’ll assume that in the heat of the moment, the bus driver decided that going forward would make the bus a greater and easier target for the gunman. Perhaps the shooter could have raked the bus with gunfire if it proceeded to go further up the street. Or, the gunman might have had a better bead on the bus driver, possibly providing a killing shot and causing the bus to go awry.

We can also likely assume that trying to go left or right was not much of an option. The bus was probably on a normal street that would have sidewalks, houses or buildings on either side of the street, making it nearly impossible to simply make a radical left or right turn to escape. It was like being stuck inside a canyon. The sides are impassable.

Therefore, the bus driver decided to put the bus into reverse. Driving backwards is not a particularly safe action when in a bus. I’ll assume he was trying to drive backwards as fast as he could. In fact, when interviewed, the bus driver said he wasn’t quite sure what was behind him and hoped that there wasn’t anything that he might hit. Luck seemed to overcome the unlucky moment and permitted the bus driver to rapidly back-up the bus. For more details about the matter, see this article in the Seattle Times:

You might be wondering whether the third element of the three-word approach might have been used in this situation, namely, could the bus driver have chosen to fight?

I’ll dispense with the kind of fighting in which the bus driver jumps out of the bus and tries hand-to-hand combat with the shooter. The bus driver was already wounded and partially incapacitated. That’s enough right there to rule out this option. Even if the bus driver had not been shot, the idea of having him open the bus door, leap out, and run at the shooter, well, this seems like a very low chance of overcoming the gunman and a high chance of the bus driver getting killed.

Maybe he could have tried to run over the gunman, using the bus as a weapon.

That would have been a means to “fight” the shooter. This seems to happen in movies and TV shows. I’m betting, though, that trying to run down a gunman who is shooting at you would not be as straightforward as the staged stunts of a film. The situation seemed to be one in which, had the bus driver tried to drive at the shooter, the gunman would likely have shot the bus driver dead before the bus rammed into the shooter.

One also wonders how hard it might be to decide to run down someone. Yes, I realize that the gunman was on a rampage and so stopping the shooting by a means of force was well-justified. If the bus driver had run down the gunman, I think we’d all have expressed that the act was appropriate in the moment. In any case, I’m guessing that the mainstay of the choice was that trying to run over the gunman was a combination of low odds of success and also a heightened risk of getting shot at further.

I’d like to add that the bus driver emphasized afterward that he was especially concerned about the bus passengers. By backing up, this would seem like a means to try and ensure greater safety for the passengers too. Consider that the bus would have been facing the gunman, thus, as the bus drove in reverse, most of the bus would be hard for the gunman to shoot into. If the bus had gone forward, presumably the shooter would have had an easier time of riddling the entire bus with bullets. It could have gotten the passengers shot by random chance, even if the shooter couldn’t see into the bus directly.

Let’s hope that none of us ever find ourselves in such a situation. Imagine if you were the bus driver, how would you have handled things? If you were a passenger, what might you have done? These are nightmarish considerations.

Either way, I hope you will remember to hide-run-fight if you ever find yourself in such a bind.

Active Shooter And Reaction By AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars.

One rather unusual or extraordinary edge or corner case involves what the AI should do when driving a self-driving car that has gotten itself into an active shooter setting. It’s a hard problem to consider and deal with.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:  

For the levels of self-driving cars, see my article:  

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task: 

  • Sensor data collection and interpretation 
  • Sensor fusion 
  • Virtual world model updating 
  • AI action planning 
  • Car controls command issuance 
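Those five steps can be pictured as a simplified processing loop. The sketch below is purely illustrative; every function name, data field, and threshold is invented for this article and does not reflect any production autonomy stack:

```python
# Illustrative sketch of the five-step AI driving cycle listed above.
# All names and thresholds are hypothetical.

def collect_and_interpret(raw_sensors):
    """Step 1: turn raw readings into detections worth keeping."""
    return [det for det in raw_sensors if det.get("confidence", 0) > 0.5]

def fuse(detections):
    """Step 2: merge per-sensor detections into one coherent view,
    keeping the highest-confidence report for each object."""
    fused = {}
    for det in detections:
        key = det["object_id"]
        if key not in fused or det["confidence"] > fused[key]["confidence"]:
            fused[key] = det
    return list(fused.values())

def update_world_model(world, fused):
    """Step 3: refresh the virtual world model with fused detections."""
    for det in fused:
        world[det["object_id"]] = det
    return world

def plan_action(world):
    """Step 4: pick a maneuver; here, brake if anything is close."""
    if any(det["distance_m"] < 10 for det in world.values()):
        return "brake"
    return "cruise"

def issue_controls(action):
    """Step 5: translate the plan into car control commands."""
    return {"brake": {"throttle": 0.0, "brake": 0.8},
            "cruise": {"throttle": 0.3, "brake": 0.0}}[action]

def driving_cycle(raw_sensors, world):
    """One pass through all five steps."""
    detections = collect_and_interpret(raw_sensors)
    fused = fuse(detections)
    world = update_world_model(world, fused)
    return issue_controls(plan_action(world))
```

In a real system each step runs continuously and concurrently; the point here is only the data flow from sensors through the world model to the control commands.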

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:  

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of what the AI should do when encountering an active shooter, let’s consider the various possibilities involved.

I’ll readily concede that the odds of an AI self-driving car coming upon a scene that involves an active shooter is indeed an edge or corner case. An edge or corner case is considered a type of situation or part of a problem that can be dealt with later on when trying to solve an overarching problem. You focus on the core parts first, and then gradually aim to deal with the edges or corner cases. For AI self-driving cars, the core or primary focus right now is getting the AI to be able to safely drive a car down a normal street on a normal day. That’s a handful right there.

For the active shooter aspect, I am okay with saying it is an edge case. Hopefully there won’t be many of those instances.

For my article about edge problems in AI self-driving cars, see:

There are some AI developers that might say that not only is it an edge case, it is a far-off, unlikely edge case. They would suggest that there really isn’t much that can be done on the matter anyway. In their view, the odds are akin to the AI self-driving car getting struck by a falling meteor, and since the AI wouldn’t be able to do much about the situation, they’d toss the matter into the not-gonna-work-on-it bin.

For egocentric AI developers and their mindsets, see my article:

I’m not so willing to concede that there isn’t anything the AI can do about an active shooter situation.

For the moment, let’s set aside the low odds of it happening.

We’ll focus instead on what to do in the extraordinary case that the astronomically low odds befall an unlucky AI self-driving car and it comes upon an active shooter.

Also, I’m going to focus herein only on the true Level 5 AI self-driving car, one in which the AI is solely doing the driving and there isn’t any co-sharing with a human driver. If the AI is co-sharing the driving, I’m assuming that by-and-large the human would take over the driving controls and try to deal with the situation, rather than the AI having to do so on its own.

Begin by considering what the AI might do if it was not otherwise developed to cope with the situation. Thus, this is what might happen if we don’t give due attention to this edge case and allow the “normal” AI that’s been developed for the core aspects of driving to deal with the situation at-hand.

Detection And Response Aspects

First, the question arises about detection. Would the sensors of the self-driving car detect that there was an active shooter? Probably not, though let’s clarify that aspect.

The odds are that the sensors would indeed detect a “pedestrian” that was near or on the street. The AI system would be unlikely though to ascribe a hostile intent to the pedestrian, at least not more so than any instance of a pedestrian that might be advancing toward the self-driving car. The gunman won’t necessarily be running at the self-driving car as though he is desiring to ram into it. That’s something that the AI could detect, namely a pedestrian attempting to physically attack or run into the self-driving car.

I’d guess that the gunman is more likely to let his gun do the talking, rather than necessarily charging at the self-driving car on foot.

If the gunman is standing off to the side and shooting, the normally programmed AI for a self-driving car won’t grasp the concept that the person has a gun, and that the gun is aimed at the self-driving car, and that the gunman is shooting, and that there are lethal bullets flying, and that those bullets are hitting the self-driving car. None of that would be in the normal repertoire of the AI system for a self-driving car.

That kind of logical thinking is something that AI does not yet have per se. There isn’t any kind of everyday common-sense reasoning for AI as yet. Without common sense reasoning, the AI is not going to be driving a car in the same manner that a human driver would. A human driver would likely be able to make sense of the situation. They would discern what is happening. It might be surprising, it might be unnerving, but at least the human would comprehend the notion that an active shooter was on the attack.

For my article about the lack of common-sense reasoning in AI, see:

The AI then is not going to do anything special about there being an active shooter. In the bus driver scenario, it’s likely the AI would have just kept driving forward. Unless the shooter ran into the street and stood directly in front of the AI self-driving car, there would be no reason for the AI to stop the self-driving car or consider going into reverse. The shooter presumably could have just kept shooting into the self-driving car.

If there weren’t any occupants inside the AI self-driving car, the worst that would happen is that the shooter might disable the self-driving car. That’s not good, but at least no human would be injured. Though, if the bullets hit inside the self-driving car in just the wrong way, it is conceivable that the AI self-driving car might go wayward, perhaps inadvertently hitting someone that might be a bystander.

If there was an occupant or various passengers in the self-driving car, the situation might make them into sitting ducks. The AI self-driving car would not realize that something is amiss. It would be driving at the legal speed limit or less, trying to drive safely down the street. The passengers would need to either persuade the AI to drive differently, or they might need to hide inside the self-driving car and hope the bullets don’t hit them, or they might need to escape from the AI self-driving car.

For the escape from an AI self-driving car, the occupants might try to tell the AI to slow down or come to a stop, allowing them to leap out. If the AI won’t comply, or if it takes too long to do so, the occupants might opt to get out anyway, even while the self-driving car is in-motion. Of course, jumping out of a moving car is not usually a wise thing to do, but if it means that you can avoid possibly getting shot, it probably would be a worthy risk.

For my article about the dangers of leaping from an in-motion AI self-driving car, see:

For the Natural Language Processing (NLP) aspects of AI self-driving cars, see my article:

For the need to have socio-behavioral NLP in AI self-driving cars, see my article:

Suppose the occupants try to tell the AI what is happening and do so to persuade the AI to drive the self-driving car in a particular manner, differently than just cruising normally down the street. This is not going to be easy to have the AI “understand” and once again brings us into the realm of common sense reasoning (or the lack thereof).

You could try to make things “easy” for the AI by having the human occupants merely tell the AI to stop going forward and immediately go into reverse, proceeding to back-up as fast as possible. This seems at first glance like a simple way to solve the matter. But let’s think about this. Suppose there wasn’t an active shooter. Suppose someone that was in an AI self-driving car instructed the AI to suddenly go in reverse and back-up at a fast rate of speed.

Would you want the AI to comply?

Maybe yes, maybe no. It certainly is a risky driving maneuver. You could argue that the AI should comply, as long as it can do so without hitting anything or anybody. This raises the thorny topic of what kinds of commands or instructions we want to allow humans to utter to AI self-driving cars, and whether the AI should carry out those directives obediently and without hesitation.
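One hypothetical shape for such a compliance gate: the AI accepts a risky occupant directive like a fast reverse only when its own on-board checks corroborate the request. Everything in this sketch, the command names, the thresholds, the decision rules, is invented for illustration:

```python
# Hypothetical occupant-command gate: comply with a risky directive
# (e.g., "reverse at speed") only when on-board checks permit it.
# All command names and thresholds are invented for this sketch.

RISKY_COMMANDS = {"fast_reverse", "hard_stop", "swerve"}

def authorize(command, rear_path_clear, speed_kph, threat_suspected):
    """Return (approved, reason) for an occupant directive."""
    if command not in RISKY_COMMANDS:
        return True, "routine command"
    if command == "fast_reverse":
        if not rear_path_clear:
            return False, "rear path not verified clear"
        if speed_kph > 15:
            return False, "moving too fast to reverse safely"
        # A suspected threat lowers the bar for compliance.
        if threat_suspected:
            return True, "threat suspected; rear clear; complying"
        return False, "no corroborated threat; holding for confirmation"
    return False, "risky command requires corroboration"
```

Note the design choice embedded here: the same command gets different treatment depending on whether the AI has independent evidence of a threat, which is exactly the conundrum of obedience versus safety raised above.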

It’s a conundrum.

I’ll challenge you with an even tougher conundrum. We’ve discussed so far that there is the hide-run-fight approach as a means to respond to an active shooter. The bus driver opted to run in this case. We’ve noted that hiding did not seem a viable possibility. That leaves the “fight” option.

Controversial Use Of A Fight Option

For an AI self-driving car, suppose there are human occupants, and they are in the self-driving car when it encounters an active shooter setting. I’ve just mentioned the idea that the humans might instruct the AI to escape or run away from the situation.

Imagine instead if the human occupants told the AI to “fight” and proceed to run down the active shooter?

Similar to the discussion about the bus driver, I think we’d agree that trying to run over the active shooter would seem morally justified in this situation. Unfortunately, we are now into a very murky area about AI. If the AI has no common-sense reasoning, and it cannot discern that this is a situation of an active shooter, it would be doing whatever the human occupants tell it to do.

What if the human occupants tell the AI to run someone down, even though the person is not an active shooter? Maybe the person is someone the occupants don’t like. Maybe it is a completely innocent person and a randomly chosen stranger. Generally, I doubt we want the AI to be running people down.

You could invoke one of Isaac Asimov’s famous “three laws” of robotics (they’re not really laws, just coined as such), which state that robots aren’t supposed to harm humans. It’s an interesting idea. It’s an idealistic idea. The notion of robots not harming humans has its own ongoing debate, and I’m not going to address it further herein, other than to say that the jury is still out on the topic.

In any case, for the moment, I think we might rule-out the possibility that the AI would be instructed to run down somebody and that the AI would “mindlessly” comply. To clarify, someone might ask the AI to do so, but I’m saying that presumably the AI has been programmed to refuse to do so (at least for now).

Here’s where things are with the current approach to AI self-driving cars and an active shooter predicament:

  • The AI won’t particularly detect an active shooter situation. 
  • The AI correspondingly won’t react to an active shooter situation in the same manner that a human driver might. 
  • Furthermore, human occupants inside the AI self-driving car are likely to be at the mercy of the AI driving as though it is just a normal everyday driving situation. This would tend to narrow the options for the human occupants of what they might do to save themselves. 

And that’s why I argue that we do need to have the AI imbued with some capabilities that would be utilized in an active shooter setting. Let’s consider what that might consist of.

First, it is worth mentioning that some would argue that this is yet another reason to have a remote operator that can take over the controls of an AI self-driving car. The notion being that there is a “war room” operation someplace with humans that are trained to drive a self-driving car, and when needed they are ready and able to take over the controls, doing so remotely.

This is an approach that some believe has merit, while others question how viable the notion is. Concerns include that the remote driver is reliant on whatever the sensors can report, and that delays in electronic communication might make it impossible to truly drive the self-driving car safely in real-time.

For more about remote operators of AI self-driving cars, see my article:

For the moment, let’s assume there is no remote human operator in the situation, either because there is not a provision for this remote activity, or the capability is untenable. All we have then is the AI on-board the self-driving car. It alone has to be prepared for the matter.

How would the AI ascertain that an active shooting is underway in its midst?

The answer would seem to be found in examining how a human driver would ascertain the same matter. It is likely that the bus driver in Seattle was able to see that the gunman was in or near the street and was carrying a gun. The gunman might have appeared to be moving in an odd or suspicious manner, which might have been detected by knowing how pedestrians usually would be moving. There might have been other people nearby that were fleeing. In a manner of speaking, it is the Gestalt of the scene.

An AI system could use the cameras and other sensors to try to determine the same kinds of telltale aspects. Let’s be straightforward and agree that there could be somewhat everyday circumstances that have these same characteristics and, yet, are not an active shooting setting. This means that the AI needs to gauge the situation on the basis of probabilities and uncertainties. Not until actual gunshots are detected would a more definitive classification seem feasible.
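One hedged way to picture that gauging: combine the uncertain telltale cues into a single probability. The cue names and weights below are invented for illustration; a real system would have to learn such weightings from data rather than hand-pick them:

```python
# Hypothetical fusion of uncertain cues into an active-shooter likelihood.
# Cue names and weights are invented for this sketch, not from any real system.

import math

CUE_WEIGHTS = {
    "object_resembling_gun": 2.0,
    "erratic_pedestrian_motion": 0.8,
    "bystanders_fleeing": 1.5,
    "gunshot_audio": 3.0,
    "passenger_report": 1.0,  # taken with a grain of salt, per the text
}

def threat_probability(observed_cues, bias=-4.0):
    """Logistic combination of weighted cues into a probability.
    The negative bias encodes that an active shooter is a priori very unlikely."""
    score = bias + sum(CUE_WEIGHTS[c] for c in observed_cues)
    return 1.0 / (1.0 + math.exp(-score))
```

Notice how the structure matches the argument in the text: no single cue is decisive, a passenger report alone barely moves the needle, and detected gunshots carry the most weight toward a definitive classification.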

Once we have V2V (vehicle-to-vehicle) electronic communications as part of the fabric of AI self-driving cars, whichever cars or other vehicles first come upon such a scene would potentially be able to send out a broadcast warning to other nearby AI self-driving cars to be wary. If the bus driver had gotten a heads-up before driving onto that part of the Metro Route, he undoubtedly would have taken a different path and avoided the emerging dire situation.
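A minimal sketch of such a broadcast might look as follows. The message fields and the receiving car's reroute check are made-up assumptions for illustration; real V2V messaging would follow standards such as the SAE J2735 message set:

```python
# Illustrative sketch of a V2V hazard warning: build a broadcast payload
# and let a receiving car decide whether to reroute. The message format
# is an assumption, not a real V2V standard.
import json
import math

def make_hazard_alert(lat, lon, hazard="active_shooter", confidence=0.9):
    """Build a broadcast payload describing a detected hazard."""
    return json.dumps({
        "type": hazard,
        "lat": lat,
        "lon": lon,
        "confidence": confidence,
    })

def should_reroute(alert_json, own_lat, own_lon, radius_km=1.0):
    """Reroute if the reported hazard is within radius_km (flat-earth approx)."""
    alert = json.loads(alert_json)
    # ~111 km per degree of latitude; adequate for a short-range check.
    dy = (alert["lat"] - own_lat) * 111.0
    dx = (alert["lon"] - own_lon) * 111.0 * math.cos(math.radians(own_lat))
    return math.hypot(dx, dy) <= radius_km

alert = make_hazard_alert(47.70, -122.30)
nearby = should_reroute(alert, 47.703, -122.30)  # a few hundred meters away
far = should_reroute(alert, 47.80, -122.30)      # roughly 11 km away
```

A car a few hundred meters from the reported hazard would reroute, while one many kilometers away would carry on.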

Even with V2V, though, this doesn’t necessarily provide much relief for the AI self-driving cars that first happen upon the scene of the shooting. Absent a tip or heads-up via V2V, V2I (vehicle-to-infrastructure), or V2P (vehicle-to-pedestrian), those AI self-driving cars arriving at the place and time of an active shooting have to figure out on their own what is taking place.

I’ve already mentioned that the human passengers, if any, might be able to clue in the AI about the situation. Such an indication by the passengers would need to be taken with a grain of salt: the passengers might be mistaken, they might be drunk, they might be pranking the AI, or a slew of other reasons might explain why they could be faking or falsely stating what is taking place. The AI would presumably need its own means to double-check the passengers’ claims.
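One way to picture that double-checking is to treat the passenger report as a weak signal that must be corroborated by the car's own sensors before the AI escalates. The weights and threshold below are illustrative assumptions:

```python
# Hypothetical sketch: a passenger's "there's a shooter!" report counts
# for something, but the AI's own sensor evidence counts for more.
# All numbers are illustrative assumptions.
PASSENGER_REPORT_WEIGHT = 0.3   # passengers may be mistaken or pranking
SENSOR_WEIGHT = 0.7             # the AI's own evidence weighs more

def escalate(passenger_reported, sensor_confidence, threshold=0.5):
    """Escalate only when combined confidence crosses the threshold."""
    score = SENSOR_WEIGHT * sensor_confidence
    if passenger_reported:
        score += PASSENGER_REPORT_WEIGHT
    return score >= threshold

# A passenger report alone, with zero sensor corroboration, is not enough...
unverified = escalate(passenger_reported=True, sensor_confidence=0.0)
# ...but modest sensor evidence plus the report crosses the bar.
corroborated = escalate(passenger_reported=True, sensor_confidence=0.4)
```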

Another element would be the gunshots themselves. Humans would likely realize there is a gunman shooting due to the sounds of the gun going off, even if they did not see the gun, the muzzle flash, or any other visual sign that a shooting was underway.

I’ve previously written and spoken about the importance of AI self-driving cars having audio listening capabilities outside the vehicle, if for no other purpose than detecting the sirens of emergency vehicles. Those audio listening sensors could be another means of detecting an active shooting situation when a gun goes off. I suppose too that screaming and yelling by those nearby or immersed in the setting might be another indicator of something amiss.
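A gunshot shows up acoustically as a short, very loud impulse relative to the ambient street noise. A minimal sketch of impulse detection on an exterior-microphone stream is shown below; the thresholds and windowing are illustrative assumptions, and a production system would use a trained acoustic classifier rather than a simple amplitude rule:

```python
# Minimal sketch: flag samples whose amplitude dwarfs the recent
# running-average ambient level. Parameters are assumptions.
def detect_impulses(samples, ratio=5.0, window=4):
    """Return indices of samples far louder than the preceding window."""
    hits = []
    for i in range(window, len(samples)):
        ambient = sum(abs(s) for s in samples[i - window:i]) / window
        if ambient > 0 and abs(samples[i]) > ratio * ambient:
            hits.append(i)
    return hits

# Quiet street noise with one sharp spike at index 6.
stream = [0.02, 0.03, 0.02, 0.01, 0.02, 0.03, 0.95, 0.04, 0.02]
spikes = detect_impulses(stream)
```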

For my article about scenes analysis, see:

For aspects of probabilistic reasoning and AI self-driving cars, see my article:  

For the notion of omnipresence due to V2V, see my article:

For aspects of the audio listening features, see:  

For my article about pranking AI self-driving cars, see:

Detection of the active shooting is the key to then deciding what to do next.

If the AI has detected that there is an active shooting, which might be only partially substantiated and therefore just a suspicion, the AI action planning subsystem needs to be ready to plan out what to do accordingly. There’s not much point in being able to detect an active shooting without also making sure that the AI will alter its driving approach once the detection has been made.

The point being that each of the stages of the AI self-driving car driving tasks must be established or imbued with the active shooter responsiveness capabilities.

The sensors need to be able to detect the situation. The sensor fusion needs to put together multiple clues embodied in the multitude of sensory data being collected. The virtual world modeling subsystem has to model what the situation consists of. The AI action planner needs to interpret the situation and run what-if’s against the virtual world model, trying to figure out what to do next. The plan, once figured out, needs to be conveyed via commands to the self-driving car controls.
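The stage flow just described can be sketched as a simple chain. Every stage implementation below is a toy stand-in to show the hand-offs, not a real driving stack:

```python
# Sketch of the stage flow: sensors -> sensor fusion -> virtual world
# model -> action planner -> car controls. All stages are stand-ins.
def sense():
    return {"camera": ["person_with_gun"], "audio": ["loud_impulse"]}

def fuse(raw):
    # Corroborating cues across different sensors raise fused confidence.
    cues = [c for readings in raw.values() for c in readings]
    return {"cues": cues, "confidence": min(1.0, 0.4 * len(cues))}

def model_world(fused):
    return {"threat": "active_shooter" if fused["confidence"] >= 0.6 else None}

def plan(world):
    return "evade_and_alert" if world["threat"] else "continue_route"

def drive(action):
    return {"evade_and_alert": "steer_away", "continue_route": "hold_lane"}[action]

command = drive(plan(model_world(fuse(sense()))))
```

With corroborating camera and audio cues, the fused confidence crosses the threshold and the planner's evasive choice flows down to the controls.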

What kind of AI action plans might be considered and then undertaken?

Echoing the run-hide-fight guidance for humans, the AI might run, meaning speed the car away from the scene; it might hide, meaning maneuver the car behind cover or otherwise out of the shooter’s line of sight; or it might fight, meaning potentially use the car itself against the shooter.
That’s the hallmark of what the AI needs to review. Similar perhaps to the bus driver, each of the approaches would involve trying to gauge whether the chosen action will make for greater danger or lessen the danger. In this case, the danger would be primarily about potential injury or death to the passengers of the self-driving car, though that’s not the only concern. For example, suppose the AI self-driving car could make a fast getaway by driving up onto a nearby sidewalk, but in so doing it might be endangering pedestrians that are on the sidewalk and perhaps fleeing the scene on foot.
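The sidewalk-getaway dilemma can be pictured as a scoring problem: each candidate maneuver carries an estimated danger to the car's passengers and a danger it creates for bystanders. The maneuvers, danger estimates, and weighting below are illustrative assumptions only:

```python
# Hypothetical sketch of the what-if weighing: pick the maneuver that
# minimizes combined danger, counting danger to bystanders extra heavily.
candidates = {
    # (danger_to_passengers, danger_to_bystanders); 0 = safe, 1 = grave
    "speed_away_on_road":   (0.2, 0.1),
    "hide_behind_building": (0.3, 0.0),
    "drive_onto_sidewalk":  (0.1, 0.8),  # fast getaway, but endangers pedestrians
}

def safest_plan(options, bystander_weight=1.5):
    """Pick the maneuver minimizing weighted combined danger."""
    def cost(name):
        passengers, bystanders = options[name]
        return passengers + bystander_weight * bystanders
    return min(options, key=cost)

chosen = safest_plan(candidates)
```

Under these assumed numbers, the sidewalk escape is cheapest for the passengers alone, yet the weighting against endangering pedestrians pushes the planner toward taking cover instead.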

Should the AI self-driving car consider the “fight” possibilities?

As mentioned earlier, it’s a tough one to include. If a “fight” posture would imply that the AI could choose to try to run over the presumed active shooter, it opens the proverbial Pandora’s box of purposely imbuing the AI with the notion of injuring or killing someone. Some critics would say that it is a slippery slope upon which we should not get started. Once started, those critics worry, how far might the AI then proceed, whether the circumstance warranted it or not?

For my article about the global ethics of AI self-driving cars, see:

For why people are untrusting of AI self-driving cars, see my article:

For the potential use of ethics review boards and AI self-driving cars, see my article:

For my article about OTA, see:


I’m sure we all hope that we’ll never be drawn into an active shooter setting. Nonetheless, if a human driver came upon such a situation, it’s a reasonable bet that the driver would recognize what is happening and would try to figure out whether to hide, run, or fight, making use of their car, unless they thought that abandoning the car and going on foot was the better option.

AI self-driving cars are not yet being set up with anything to handle these particular and admittedly peculiar situations. That makes sense in that the focus right now is nailing down the core driving tasks. As an edge or corner case, dealing with an active shooter is a lot further down on the list of things to deal with.

In any case, ultimately there are ways to expand the AI’s capabilities to try to cope with an active shooting setting. Most of what I have described could be packaged as a kind of add-on to an existing AI self-driving car, providing an additional capability once the software for this specialty was established. It’s the kind of add-on feature that an OTA (Over-the-Air) update could be used to download into the on-board AI system at a later date.
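The add-on idea can be pictured as a capability module delivered by OTA update and registered into the on-board AI's set of handlers. The registry API below is a made-up assumption for illustration:

```python
# Sketch: an active-shooter response module installed into a capability
# registry, as an OTA update might do. The API is an assumption.
class CapabilityRegistry:
    def __init__(self):
        self._modules = {}

    def install(self, name, version, handler):
        """Install or upgrade a capability module (as an OTA update would)."""
        current = self._modules.get(name)
        if current is None or version > current["version"]:
            self._modules[name] = {"version": version, "handler": handler}

    def handle(self, event):
        """Let every installed module respond to an event."""
        return [m["handler"](event) for m in self._modules.values()]

registry = CapabilityRegistry()
registry.install("active_shooter_response", 1,
                 lambda e: "evade" if e == "shots_fired" else "monitor")
responses = registry.handle("shots_fired")
```

The version check means a later OTA download can transparently replace the module with an improved one.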

In theory, maybe we will be living in a Utopian society once AI self-driving cars are truly at a Level 5 and no one will ever be confronted by an active shooter. Regrettably, I doubt that society will have changed to the degree that there won’t be instances of active shooters. For that reason, it would be wise to have an AI self-driving car that is versed in how to contend with those kinds of life-and-death moments.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.
