The following story was written and reported by Eric Peterson of The Utah Investigative Journalism Project in partnership with the Deseret News.
SALT LAKE CITY — On June 28, 2018, a Tesla Model S was cruising through the desert west of Salt Lake City as part of an unusual, and potentially dangerous, experiment.
Heading west past Tooele on Interstate 80, where the views of the Great Salt Lake disappear and the billboards start to alternate between pro-life messages and advertisements for loose Nevada slots, the driver began to nod off.
Sitting in the back seat, a researcher with the University of Utah took notes. The objective of the study was to gauge drivers’ alertness while operating this new technology now on the market. As futuristic as the technology is, it is not fully autonomous. Previous studies have shown that the self-driving features can disengage abruptly, forcing drivers to retake control at a moment’s notice to avoid a crash.
On this day in June, the researcher had been asked to track how the driver behind the wheel dealt with fatigue. The freeway was their laboratory. Commuters, long-haul truckers, revelers returning from West Wendover, Nevada, had no way of knowing they were part of the experiment.
For purposes of the study, the driver behind the wheel had been labeled No. 47. He started his run at 9:15 a.m., and driving past Utah Department of Transportation signs that warn “Drowsy Drivers Cause Crashes,” No. 47 was losing focus as he passed Grantsville, where the freeway curves away from the dramatic Oquirrh range and the landscape gives way to a vista of seemingly endless sage and scrub of the west desert.
“(Participant) is basically sleeping this entire drive,” the research associate’s notes read. “This is actually terrifying.”
Participant No. 47, who was being paid $20 an hour, stopped at the ghost town of Aragonite after an hour of “driving,” then began the return leg of the journey. On the way back, the research associate started timing his “sleeps/microsleeps,” clocking him asleep for 10 minutes and eight seconds before deciding to take over and drive the rest of the way back.
While the Tesla Model S is commercially available, it isn’t designed for drivers to fall asleep. The technology currently available for the Model S, classified as L2, offers what the company calls semi-autonomous capabilities, meaning the vehicle can take the wheel, automatically follow shifting lanes and even apply the gas and brakes as needed.
Since 2017, researchers at the U. had been documenting incidents in which their research participants nodded off at the wheel of multiple L2 cars. These vehicles were being “driven” by participants who were given limited instruction in their use and were told that, as part of the study, they would simply monitor the cars while driving long, desolate stretches of I-80 west of Salt Lake City and back.
Two research associates, Jess Esplin and Elise Tanner, complained to the Institutional Review Board, the body at the U. tasked with overseeing campus research studies to make sure they are conducted safely and ethically. The pair raised concerns about the lack of safety protocols for test drives and the inadequate vetting of participants — one participant was allowed to get behind the wheel after acknowledging that he had slept less than five hours and had as many as seven alcoholic drinks before going to bed, according to an internal audit.
The pair documented multiple occasions of participants falling asleep behind the wheel of the semi-autonomous cars, even as they and their fellow research associates lacked clear policies and direction from the lead researchers on what to do in these dangerous situations.
“If the system disengages when you’re not expecting it or you had fallen asleep when that happens, you could go off the road you could hit another car — there’s a lot of things that are very scary that could happen,” Esplin said.
The U.’s IRB audit found the research study in “serious and continuing non-compliance.” Among its findings:
— There was no protocol to exclude subjects from participating who had consumed alcohol 24 hours before driving.
— Lead researchers had unclear safety policies, and this confusion extended to the research associates, who “were aware that sleeping was an issue but were unsure if speaking up was the appropriate course of action.”
— There were also no documented “stopping” guidelines clearly spelling out situations requiring the halting of a test run for safety reasons.
The result of this audit?
The Institutional Review Board had the researchers halt the study, but then allowed them to file a new research application and begin a new automated driving study, with assurances made that their procedures would be much safer going forward.
Tamra Johnson, a corporate spokesperson for AAA, said in a written statement that AAA had been informed of the review board audit in October 2018, but that the audited research was not officially contracted by the AAA Foundation for Traffic Safety, though the new research study approved in late 2018 is.
“AAA is confident that the University of Utah’s Institutional Review Board has thoroughly vetted and approved the research project to ensure the rights and welfare of human subjects,” the statement says.
According to records provided by the University of Utah, the university’s Center for Distracted Driving received $3.8 million in AAA Foundation funding in 2016 — though some of those funds were used for other research studies. Esplin and Tanner say part of that funding went toward the dangerous automated driving studies conducted in 2018. Regardless, the records show that AAA has also funded the newest automated driving study begun in January 2019 to the tune of $977,583.
For Esplin, the university’s actions were not enough given the dangers to research staff, participants and unsuspecting motorists.
“Other people aren’t expecting an experiment to be happening on the road,” Esplin said.
Mind wandering to sleepy driving
Automated driving technology seems so futuristic that most people likely have only heard of the Google car, rolled out in 2009 and now known as Waymo. The car has already driven more than 8 million miles on the straight, flat roads of Chandler, Arizona.
Waymo has a YouTube video with nearly 4 million views showing the car in action, set to the kind of poppy, uplifting soundtrack one usually hears playing in the aisles of a grocery store. The video shows the vehicle cruising through city streets with awe-struck passengers unsure what to do with their hands and feet in a car that has no brake pedal and no steering wheel. The clip even features passengers fast asleep, while a lullaby plays and text reads: “Looks like feeling safe puts my riders to sleep. No problem. I got this.”
But this car is still in the test phase and is not available for regular purchase. Waymo is trying to leapfrog to what automation experts call L4, an essentially fully autonomous driving experience, as opposed to L0, cars with no automated driving components, or L2, the most advanced currently on the market. L2 cars can follow a lane of traffic and slow or even stop in an emergency situation — but they still require drivers to be ready to take the wheel at a moment’s notice.
Even though it’s commercially available, much is still to be learned about how drivers interact with L2 technology. The technology is cutting edge, but is drivers’ ability to use it keeping pace? Can the average motorist use L2 as casually as they would flip on the blinker or fiddle with the stereo? Are the advanced safety features actually safe yet? To answer these questions, AAA enlisted researchers at the University of Utah.
The first research project was conducted from October 2017 to April 2018, with the research administered by Drs. David Strayer and Joel Cooper.
Strayer is a minor celebrity in the academic world for his work on distracted driving. In 2010 he even went on “The Oprah Winfrey Show” to explain how talking on a cellphone while driving impairs a driver the same way alcohol does.
The research approved in 2017 was for a “mind wandering” study of drivers of vehicles with semi-autonomous driving features.
Researchers hooked participants up to devices that monitored their heart rate and brain waves while they drove the automated vehicles, measuring how well they focused on certain driving tasks.
In June 2018, the research shifted away from “mind wandering” to focus on “fatigue and arousal,” with participants now clicking a button every time they felt a vibration coming from a device attached to their person, to measure their responsiveness.
Despite repeated attempts over the course of weeks by email, phone and through the U.’s media relations department, Cooper and Strayer did not respond to requests for comment for this story. Documents show, however, that the men have maintained from the outset that participants could stop the study at any time, as could research associates who felt a run was unsafe. Yet run notes from the study show confusion among the RAs about what to do when confronted with sleepy participants.
Participant No. 47 was the clear outlier, the one that was documented as asleep or semi-asleep for over 10 minutes. But there were other troubling rides.
“OMG so sleepy,” reads the notes on participant No. 245. Later, when No. 245 was receiving no buzzes to keep them alert, the RA noted: “There was a shovel in the road and (participant) didn’t make any indication of noticing. I was terrified.”
“He’s a sleeper,” one RA noted of participant No. 233, who fell asleep on two legs of his trip. “Based on observation of his eye movements, he was drowsy and falling asleep during the drive.”
Around this time, Esplin and Tanner began to question the premise of the study compared to other “naturalistic” studies in the field, in which research participants drive their own L2 vehicles, outfitted with multiple cameras, so researchers can monitor the way they naturally drive day to day. Tanner says the driving in the U.’s study was not “natural.”
“They can’t keep themselves awake with the radio, they can’t talk to anybody. But then they’re also not even driving, they’re just watching a machine drive,” Tanner says. “They also typically don’t feel like they can pull over if they feel tired because they are in a research study and that would end the whole study. So, there’s definitely a pressure to continue the study.”
On Aug. 20, 2018, according to Esplin and Tanner, they and the other RAs met with Cooper to share concerns about the safety of the study.
Esplin says a week after the meeting, Cooper took her on a two-hour drive in one of the study vehicles up Parley’s Canyon using the automated feature to show her how safe the technology was.
Tanner says a few days later, Cooper also met with her, and she complained that the tight study deadline made it hard for the other RAs to feel like they could actually stop the studies. She said Cooper dismissed her as “opinionated,” telling her that people were more likely to get injured in normal cars than in the L2 cars.
“The implication for both of us was to stop talking about it,” Esplin said.
Instead they filed an official complaint with the university.
Expert opinion
“Don’t brake. Don’t brake. Don’t brake.” That was the advice of Mark Williams, a Ken Garff Volvo car salesman riding in the passenger seat while I took a test drive of the Volvo XC90. It was the exact opposite of the words of caution my father offered me 20 years ago, the first time he took me out in his Galaxie 500 to learn to drive.

Here, Williams was asking me to trust in the vehicle’s automated driving feature as we approached stopped traffic eastbound on I-80 just outside of Salt Lake City. Even as I listened to his words, my foot flexed uncomfortably over the brake pedal. Amazingly, the vehicle came to a full and gentle stop, comfortably short of the truck in front of me.

“It’s a trip, I know,” Williams said.
Earlier in the trip we had been cruising westbound on I-80 when Williams casually pushed a button on the steering wheel and activated the automated mode. Immediately I felt the ghostly tug of the steering wheel moving on its own and instinctively took my hands off the wheel in fright. It reminded me of the moment in “Fight Club” when Brad Pitt’s character tells Edward Norton to just let go and let the car drive itself into oncoming traffic. But in my case the car effortlessly followed the road, even navigating the bend of newly painted lanes diverting around road construction.
Amazing as it was, Williams gave me the same disclaimer he gives all customers. “This is a semi-autonomous assist vehicle,” he said. “And I emphasize the word ‘assist.’”
Williams is proud of Volvo’s role as one of the first automakers to develop front sensor technology that will bring the vehicle to a full stop if it senses a distracted driver failing to notice the rapid approach of another car’s rear bumper, a pedestrian or a cyclist.
But he stresses to customers that they can’t turn their brains off while driving. Some people “feel like they don’t have to pay attention and the (automated driving) will do everything for them,” he says. “Bad, bad, bad approach. You should always pay attention at all times even though the car is doing the driving for you.”
It’s a point echoed by Katlynn Downey of General Motors.
“All customers purchasing CT6s with Super Cruise are educated how to use the system at their Cadillac dealership,” Downey says. There, they are taught to pay attention at all times.
She also explains that the Cadillac CT6 is designed to give drivers numerous warning alerts if they appear to have stopped paying attention while using the automated features, even going as far as bringing the vehicle to a controlled stop with hazard lights activated and sending an OnStar alert in case the driver is having a medical emergency.
The L2 paradox is that it can make driving safer while also lulling drivers into a false and dangerous sense of security. This was also evidenced in documents from the U.’s research.
As part of the study participants made their own comments about the experience. One participant stated, “I honestly forgot what I was doing. I paid way less attention to driving.”
In 2017, Mica Endsley, a researcher with SA Technologies, monitored her own use of a Tesla Model S over a six-month period. She was motivated by the dilemma L2 technology presents to drivers, including the 2016 death of a driver whose Tesla did not sense an 18-wheeler pulling in front of the car.
She drove 2,864 miles, using some of the car’s automated driving features on 84 percent of her trips. She was disturbed by how often the car transitioned suddenly out of automated driving, requiring her to take over, and by how the automated features decreased her vigilance, even leading her to text while driving, something she normally wouldn’t do.
“The highly reliable automation was quite seductive, serving as an enabler of bad behavior,” Endsley wrote in the September 2017 issue of the Journal of Cognitive Engineering and Decision Making.
In a phone interview, Endsley explained that her understanding of how the vehicle handled in automated mode broke down because the vehicle’s software was updating while she owned it, changing the driving experience from one day to the next.
It’s a point echoed by Dr. John Lee, an automated driving researcher at the University of Wisconsin-Madison.
“Cars are becoming more like cellphones in the sense that you get updates on the apps on your phone that you’re not used to and so it can be confusing as a result to use it,” Lee says.
This ever-shifting evolution of vehicles already on the market is why both Lee and Endsley believe on-the-road testing like the kind being done by the University of Utah is vitally needed — as long as the studies are conducted safely.
Lee says that if there are serious safety concerns, researchers could add a second steering wheel and brake pedal on the passenger side so that research associates can take over in an emergency (such a feature was not part of the U.’s studies). For Endsley, it’s a bad sign if RAs feel unsafe while testing.
“If safety backup drivers are saying they’re not safe then you may need to have a change in your protocol,” she said.
The audit
The U.’s Institutional Review Board blasted the study for numerous safety concerns, ranging from failing to get signatures to having no real safety protocols or logs for “adverse events.” It found that Cooper and Strayer did not report the participant who was asleep at the wheel for over 10 minutes because they didn’t consider it an “adverse event.”
In an official response, Cooper and Strayer argued the technology was already on the road and they had stressed repeatedly the ability for anyone to halt a test run they felt was unsafe. “Following the initial observation of fatigue, we instructed our research associates to terminate the run if this sort of behavior was observed,” the men wrote in the response. They also explained why the science they were conducting could not be replicated on a track or a simulator.
“Driving simulators lack the sophistication and fidelity for meaningful assessment of these real-world systems,” the men wrote.
The auditors determined the original consent forms, which warned participants that the study was of “minimal risk,” were inaccurate. Yet when Strayer and Cooper had their new study approved in December 2018, the new consent forms, while more specific about the tasks involved, offered no categorization of risk level at all. Rather than saying “greater than minimal risk,” the forms simply present no risk level to prospective participants.
Review board director Ann Johnson said it was decided that “greater than minimal” was a kind of jargon that average participants wouldn’t understand anyway. She also stresses that the review board process is designed to make consent forms accessible to the lay person.
Currently, she says, the U. has more than 118 review board panelists who meet in different groups to approve five to 10 new research studies every week, covering every field from medicine to social work. The panelists are largely professors with research experience, though they also include community members from outside academia, and all are independent, receiving no pay from the university for their IRB panel work.
While the panels leverage a lot of multidisciplinary brainpower, Johnson stresses that they are not simply a scientific review committee: “We are an ethics committee. But it’s not possible to separate the science from the ethics.”
She says that, in working with the panel, the researchers have much improved the new study. There are now clearly articulated “stopping guidelines” for ending test runs, and researchers ask participants about sleep and alcohol consumption and run more extensive driving record checks. They also limit participants to those 25 and older, and even check wind conditions before test drives, since wind can affect the functioning of L2 features.
“It’s really valuable for us to know how safe (L2 vehicles) are and ‘Am I more likely to fall asleep behind the wheel of a self-driving car versus a normal car?’” Johnson said. “These are good research questions to be asking, and the answers to those questions will help keep people as safe as possible.”
Johnson says the Institutional Review Board is in “constant communication” with the researchers now and the study will be reevaluated again in the fall. She says they decided against seeking advice from an automated driving expert outside the university because one of the review board panelists had experience with driving studies.
Esplin and Tanner are disappointed with Johnson’s findings. While they challenge the safety of the study, they also argue that they’re not trying to hold back technology. They too want to see the research into L2 automation build and grow — just not at the expense of safety.
“If anyone were harmed or died as a result of this driving study, the public outcry would be enormous, and it would likely set back this kind of research for decades,” Esplin said. “So it needs to be done right. And the University of Utah isn’t doing that.”