Resolving the desire path in a connected car

01/09/2015

[Image: pedestrian roadworks signs]

How many times have you walked past a sign like this, only to find that yes, the pavement is closed ahead, and yes, you probably should have crossed when the sign told you to?

I think it’s a learned behaviour we all have in the UK – as pedestrians we’ve come to expect out-of-date or out-of-place signs, roadworks that aren’t that bad, or to be able to cross when and where we want to. We have the freedom to create our own desire paths.

I feel similarly as a cyclist, although less so, as I'm more bound by the rules of the road. And less so again as a driver, although it's still sort of up to me whether to chance that gap in oncoming traffic at the junction.

All of this works as a system when we can assume the other participants all have the same learned behaviour, and a shared understanding of which gaps we can get through in time at junctions. But try driving overseas, or cycling in Cardiff, and you begin to understand the dangers in this assumption.

Add in the prospect of driverless cars, where that shared understanding is less clear, and there's an interesting new hesitation.

I was interested to read a report this summer about accidents involving driverless cars. The shared characteristic was that human-operated vehicles crashed into them.

What was going on? The article suggested that drivers were just dumbstruck at the sight of something so experimental on the road, and didn't notice it stopping. But maybe the driverless car really did do something unexpected, or, more accurately, un-human. Maybe it didn't jump that junction, or go 'right on red', or it just went when it was allowed to rather than when the other driver thought they had priority.

I saw somewhere that when Google's first driverless prototypes went into testing, the humans kept taking over the wheel when they didn't need to. So the new prototypes don't have steering wheels. But other companies are moving towards driverless as well, adding incremental technologies such as cruise control and parking assistants that can take over some of the driving in controlled situations.

So what will happen in the near future, as these technologies evolve, when my car changes lanes for me because it knows of an accident ahead, or a temporary road closure?

I think right now it's logical to expect that, as a human, I'll steer right back into the original lane and continue on my doomed desire path. It's a fascinating challenge for product designers: not just notifications and dashboards, but also expectation management, audit history, and network monitoring. And not just for cars…

How can I be sure that any of my smart things have the information they need to make the right decisions on my behalf? And how many times will they need to get it right before I trust them? And what is 'right'?
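To make that a little more concrete, here's a minimal sketch of what an auditable decision record and a crude trust tally might look like. It's purely illustrative: every name and field here (DecisionRecord, humanOverridden, trustScore) is hypothetical, not anything a carmaker actually exposes.

```typescript
// A minimal, hypothetical sketch of the kind of auditable decision record a
// connected car might expose, so a human could later see what information
// the car acted on and judge whether it got it "right".

interface DecisionRecord {
  timestamp: Date;           // when the car acted
  action: string;            // e.g. "lane change"
  reason: string;            // e.g. "reported accident ahead"
  dataSources: string[];     // where the information came from
  dataAgeSeconds: number;    // how stale that information was
  humanOverridden: boolean;  // did the driver steer straight back?
}

// Toy trust tally: the proportion of decisions the driver let stand.
function trustScore(history: DecisionRecord[]): number {
  if (history.length === 0) return 0;
  const accepted = history.filter(d => !d.humanOverridden).length;
  return accepted / history.length;
}

// Example: one lane change the driver accepted, one they overrode.
const history: DecisionRecord[] = [
  {
    timestamp: new Date("2015-09-01T08:15:00Z"),
    action: "lane change",
    reason: "reported accident ahead",
    dataSources: ["traffic feed"],
    dataAgeSeconds: 90,
    humanOverridden: false,
  },
  {
    timestamp: new Date("2015-09-01T08:40:00Z"),
    action: "lane change",
    reason: "temporary road closure",
    dataSources: ["roadworks bulletin"],
    dataAgeSeconds: 3600, // an hour old – maybe the closure is already gone
    humanOverridden: true,
  },
];

console.log(`Trust score: ${trustScore(history)}`); // 0.5
```

The score itself isn't the point; the point is that being able to see what information the car acted on, and how stale it was, is probably a precondition for trusting it at all.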

Posted in Connected Car, Connected Experience, Human-Computer-Trust, Ideas, The Future