Author: Glenn Leung
Good evening, parents and teachers. As you all know, I was the engineer in charge of investigating the accident.
I’ll begin by recapping what was on the news. Eighteen-year-old Samantha Chen was on her phone and did not see the STOP signal for the pedestrian crossing. A self-driving car was approaching, and instead of slamming on the brakes while maintaining course, it swerved and hit the group of pedestrians waiting by the side of the road. Two people died, one of them a teacher of this school. Here’s where the news gets a little murky.
I have written programs for similar models, so I know that the car did something it was not supposed to do. In my view, autonomous vehicles do not need distractions like the trolley problem. It is simple: the person who is putting their life in the care of the car must be protected. Hence, the sensible thing to do in the event of a sudden slow-moving obstacle is to slam on the brakes rather than swerve, lest the car lose control.
When I checked the vehicle’s programming, I found a few additional lines of code that had been added after production. Through further investigation, I learned the owner has a son, a smart kid, the type who learns multivariable calculus at age five. He was given the ‘Smartbrain’ software for his birthday, the one that allows children to build their very own AI. It was made to be educational and simple, but it was also controversial because it made unnecessarily powerful capabilities available to kids.
Yeah, I see some discomfort in my fellow Millennials. I threw my fair share of sheep back in the day.
Anyway, the kid got really into it and somehow made a terrifyingly competent AI that could crack our encryption. He decided to test it out on his Dad’s car, just to probe around. That was how he accessed our code and came across the segment labeled ‘Hazard response’, which housed the procedure I had described earlier.
He thought it was a mistake! He had heard so much about the ‘trolley problem’ when reading up on autonomous vehicles in school that he thought each car should come with its own ‘trolley protocol’. He then proceeded to do what he thought was a public service; he wrote one himself with some help from Smartbrain.
In the milliseconds before the accident, the AI did a cursory internet search and found a lot about Samantha. She is all over social media and a very popular influencer. Through her, corporations have made millions marketing to young people. She is the poster child of trendy, and there’s a good chance your kids know her.
Contrast this with the older people standing by the road, people like you and me. We have less time for social media, don’t know how to ‘full screen’ a hologram, and still think Instagram is relevant. According to that kid’s algorithm, based entirely on digital footprints, the combined worth of the law-abiding adults is less than that of a social media influencer.
Don’t get me wrong, I’m happy that young Samantha is alright, and I’m sorry for the loss of Mr. Ross. The message I want to convey today is: please, talk to your kids. Have conversations with them about the consequences of their actions. Smartbrain has since been recalled, but with all these regulation rollbacks, there will be more irresponsible developers. Intelligence is not wisdom; your kids may be smart, but they still need you.
That’s all I have. Please, enjoy the buffet.
Oh dear, per that new code I expect to be run over any (and every) moment …!!! An interesting look at the question that really doesn’t have an answer.
Same here, Simon, same here.
Interestingly, on the day this was published, an automated food delivery service was launched in the city I currently live in. Hopefully on my trip home later, I won’t be run over by 50kg of pizza.
Do they guarantee delivery in less than 30 minutes and fewer than 3 deaths? 😉
No, but they do guarantee being faster and cheaper than a human driver (mostly because they make fewer stops; they do travel at half the speed limit).
An article on the Trolley Problem by a Catholic theologian (not the answer to my comment):
https://ronconte.com/2017/09/29/the-trolley-problem-and-the-three-fonts-of-morality/
Thanks for sharing, Tom. The inspiration for this story is partly taken from a conversation I had with an engineer working on self-driving cars, and what he said was quite similar to the reply to your comment (hence the speaker’s reasoning in the 3rd paragraph).
The other part of the inspiration was taken from a Business Insider article on dark technology scenarios.
Nice take on the precocious brat problem.
Thanks! That’s my other favourite thought experiment!
Nice take on the Trolley problem!
Thank you! It is my favourite thought experiment.