Are driverless car searches constitutional?
Whether we like it or not, automated, driverless vehicles are quickly becoming a reality and a norm in our society. Along with all the benefits the technology and associated services provide, there are also detriments—for civilians and law enforcement alike.
The criminal defense implications of self-driving vehicles are becoming much more prevalent as the technology proliferates. For example, an attempted traffic stop video recently made the rounds on social media, ultimately garnering publicity from outlets such as CNN. The footage showed law enforcement attempting to effectuate the seizure of a driverless vehicle.
As you can see, the officers are quite confused. One would think they knew a completely driverless taxi system was operating in their backyard, but perhaps not. Either way, it’s easy to see how hesitation would follow from an individual’s initial interaction with this type of technology.
Maybe they are aware—maybe they’re stopping driverless vehicles in the hopes of searching them for contraband. After all, what better way to move controlled substances or other illegal items than in a car with no occupant who could consent to a search? And just to be clear, I am not in any way advocating for individuals to attempt that activity.
Regardless, this viral video got me thinking: How do driverless vehicles affect Fourth Amendment search and seizure law?
History of the autonomous automobile
Self-driving cars are nothing new. Experimentation began as early as the 1920s, and by 1977, the world saw its first semi-automated car. Research continued through the ’80s and ’90s, with various agencies—including the U.S. Army and Navy—pumping plenty of funds into furthering the technology.
In 1995, Carnegie Mellon University introduced a self-driving car that could travel almost 3,000 miles through 15 U.S. states at 98% autonomy. That stood as the benchmark until 2015, when the world was presented with the first vehicle and system capable of traveling over 3,000 miles while remaining in self-driving mode up to 99% of the time.
The technology continued to advance into 2017 when Waymo began testing driverless cars that did not utilize a safety driver inside the vehicle. Waymo doubled down on this technology in 2018 when it launched the first fully autonomous taxi service in the U.S. Fast-forward to 2022, and automated driving is slowly becoming more pervasive.
Liability issues
As with any technology, it’s probably best to view the system from two perspectives: the system itself, as developed and instituted by its creators, and the users employing it. When it comes to self-driving vehicles, there are various degrees of automation to keep in mind.
The National Highway Traffic Safety Administration adopted a six-level classification system (initially developed by SAE International) in 2016. According to that model, automobile automation can be broken down as follows:
• Level 0: No Driving Automation
• Level 1: Driver Assistance
• Level 2: Partial Driving Automation
• Level 3: Conditional Driving Automation
• Level 4: High Driving Automation
• Level 5: Full Driving Automation
While most of us have likely never experienced levels four and five, we’ve been accustomed to levels zero through two for quite some time, and level three is more affordable these days as well. Level one includes things such as your average cruise control. Level two is a step up to systems like Super Cruise. Level three incorporates things such as Automated Lane Keeping Systems. All of these rely on technology while still requiring at least some level of user interaction.
When we get to level four, though, we start seeing the user become less and less necessary. Level four examples include vehicles that are geofenced and require operation within a specific predetermined area for success. Meanwhile, level five involves vehicles where a human occupant is entirely optional at any time or place. As you can tell, the lower levels merely assist the driver while the higher levels automate the entire driving experience.
With that in mind, it becomes much easier to distinguish certain degrees of liability in relation to the degrees of automation. When the driver is more involved and required, liability will likely flow more to the driver. When the opposite is true, the liability will flow more to the system operating the vehicle in the driver’s absence. But it’s not that simple.
The middle levels, two and three, present a mix of driver responsibility and vehicle automation that can create a confusing cooperative arrangement. Depending on the amount of automation provided, different degrees of liability may be more or less applicable. It’s not crazy to consider negligence more appropriate when the operator is responsible for more of the driving, and perhaps recklessness when the driver relies on the technology to handle the vast majority of the drive.
But what about those situations where there is no driver at all? What do we do with level four and five scenarios?
Law enforcement issues
To be fair, I don’t have the knowledge to decipher that conundrum. I’m not a personal injury attorney, and I haven’t studied much tort law since I took the bar exam. Those of you who follow this column are well aware, but for my new readers, I’m a one-trick pony: I’m a criminal defense attorney by trade, and that’s all I do outside of pontificating in these installments about various other aspects of the law.
But anyone who practices in criminal defense knows that law enforcement regularly—and arguably intentionally—employs and relies on the most subjectively interpreted reasons for a traffic stop available. They do this because it creates a very challenging environment for a defendant and his counsel to challenge the constitutionality of the stop. Think about it: How do you prove that you weren’t lane straddling? How do you prove that you activated your signal when you switched lanes? How do you prove that you made a complete stop at an intersection?
You can’t. And that’s precisely how law enforcement wants it. But what if these cars can?
As a preliminary issue, anyone unfamiliar with criminal law needs to understand that once a defendant challenges the constitutional basis for a traffic stop, the burden is on the prosecution to prove the stop was valid under the Fourth Amendment. Recently, at least in Oklahoma, prosecutors have been relying more and more on the “good faith” exception developed through Fourth Amendment jurisprudence. Under this exception to the exclusionary rule, many issues that might otherwise sink a stop will be excused if law enforcement was operating in good faith when the foul occurred.
In 2014’s Heien v. North Carolina, the United States Supreme Court held that an officer’s mistaken understanding of the law—he pulled over a vehicle for a brake-light malfunction that wasn’t actually a violation under North Carolina’s traffic code—was enough to salvage an otherwise unconstitutional seizure. The court held it was objectively reasonable for law enforcement to believe a malfunctioning brake light violated the law; thus, the officer pulled the vehicle over in good faith.
As driverless vehicles begin to proliferate on our roads and highways, I could see situations where law enforcement engages in stops based on nonexistent, but objectively reasonable, violations of the applicable traffic code. While that might lead to more hilarious viral videos like the one I’ve been referencing, it could also easily lead to more violations of our constitutional rights.
See also:
ABA Journal: “The dangers of digital things: Self-driving cars steer proposed laws on robotics and automation”
ABA Journal: “Who’s to blame when self-driving cars crash?”
Adam R. Banner is the founder and lead attorney of the Oklahoma Legal Group, a criminal defense law firm in Oklahoma City. His practice focuses solely on state and federal criminal defense. He represents the accused against allegations of sex crimes, violent crimes, drug crimes and white-collar crimes.
The study of law isn’t for everyone, yet its practice and procedure seem to permeate pop culture at an increasing rate. This column is about the intersection of law and pop culture in an attempt to separate the real from the ridiculous.
This column reflects the opinions of the author and not necessarily the views of the ABA Journal—or the American Bar Association.