Author: Glenn Leung

Good evening, parents and teachers. As you all know, I was the engineer in charge of investigating the accident.

I’ll begin by recapping what was on the news. Eighteen-year-old Samantha Chen was on her phone and did not see the STOP signal for the pedestrian crossing. A self-driving car was approaching, and instead of slamming on the brakes while maintaining course, it swerved and hit the group of pedestrians waiting by the side of the road. Two people died, one of them a teacher of this school. Here’s where the news gets a little murky.

I have written programs for similar models, so I know the car did something it was not supposed to do. In my view, autonomous vehicles have no need for distractions like the trolley problem. It is simple: the person who has put their life in the care of the car must be protected. Hence, the sensible thing to do when a slow-moving obstacle suddenly appears is to slam on the brakes and hold course, not swerve, lest you lose control.

When I checked the vehicle’s programming, I found a few additional lines of code that had been added post-production. Through further investigation, I learned that the owner has a son, a smart kid; the type who learns multivariable calculus at age five. He was given the ‘Smartbrain’ software for his birthday; the one that allows children to build their very own AI. It was made to be educational and simple, but it was also controversial because it put unnecessarily powerful capabilities in the hands of kids.

Yeah, I see some discomfort in my fellow Millennials. I threw my fair share of sheep back in the day.

Anyway, the kid got really into it and somehow made a terrifyingly competent AI that could crack our encryption. He decided to test it out on his Dad’s car, just to probe around. That was how he accessed our code and came across the segment labeled ‘Hazard response’, which housed the procedure I described earlier.

He thought it was a mistake! He had heard so much about the ‘trolley problem’ when reading up on autonomous vehicles in school that he thought each car should come with its own ‘trolley protocol’. He then proceeded to do what he thought was a public service; he wrote one himself with some help from Smartbrain.

In the milliseconds before the accident, the AI did a cursory internet search and found a lot about Samantha. She is all over social media and a very popular influencer. Through her, corporations have made millions marketing to young people. She is the poster child of trendy, and there’s a good chance your kids know her.

Contrast this with the older people standing by the road, people like you and me. We have less time for social media, don’t know how to ‘full screen’ a hologram, and still think Instagram is relevant. According to that kid’s algorithm, based entirely on digital footprints, the combined worth of those law-abiding adults was less than that of one social media influencer.

Don’t get me wrong, I’m happy that young Samantha is alright, and I’m sorry for the loss of Mr. Ross. The message I want to convey today is: please, talk to your kids. Have conversations with them about the consequences of their actions. Smartbrain has since been recalled, but with all these regulation rollbacks, there will be more irresponsible developers. Intelligence is not wisdom; your kids may be smart, but they still need you.

That’s all I have. Please, enjoy the buffet.