Piers Dillon-Scott
SENIOR UX DESIGNER
Our lives are increasingly being controlled by algorithms (more than we know). So do programmers now need to worry about their code's ethics?
Imagine this: a tram is hurtling down a track towards a junction. Just beyond the junction, five people are standing on the lines – oblivious to the approaching danger. On the track to their left is a solitary man. You don’t have time to warn them, but there is something you can do to save them. Beside you there’s a lever. If you pull it you’ll switch the junction tracks and divert the tram away from the five people, but into the solitary man.
Do you pull the lever?
If you’ve ever taken an ethics course you’ll be familiar with this problem: it’s called the Trolley Dilemma. Tweed-jacketed professors have been posing this conundrum to psych students since the 1960s, and it asks a basic question: are you less culpable if you allow a harmful situation to take place, or is it better to commit a 'wrong' act for the greater good?
The arrival of automated cars, military and corporate drones, and social media has taken this question out of the psych class and placed it into programming textbooks.
Take this example. You’re in an automated car travelling down a country lane. In front of you are two cyclists, one on either side of the road, both travelling in the same direction as you. The cyclist on the left is wearing a helmet and other protective gear; the cyclist on the right isn’t. Your car decides to pass both, and speeds up to do so, but as it’s completing this manoeuvre a person walks into the road. What should your car do?
If the car swerves to the right it will hit the unprotected cyclist; if it swerves to the left it will hit the protected cyclist. If the car brakes gently it will hit the pedestrian; if it brakes hard it will injure you and your passengers. The computer in the car can make this decision in less than a second.
Should you programme the car to injure the single pedestrian rather than you and your passengers? Should your algorithm instruct the car to hit the protected cyclist (and thus punish him for doing the responsible thing), or should it take a right and hit the cyclist who took no precautions (and who knew the risks he was taking)? Perhaps you should programme the car to choose randomly?
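To make concrete what 'programming' such a choice would actually involve, here is a minimal and entirely hypothetical sketch in Python. The option names and harm scores are invented, and no real autonomous-vehicle system is anywhere near this simple; the point is that whoever assigns those numbers has already made the ethical decision.

```python
# Hypothetical sketch only: the names and weightings below are invented
# for illustration, not taken from any real self-driving system.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    estimated_harm: float  # made-up 0-to-1 score for expected severity of injury

def choose_manoeuvre(options):
    # The 'ethics' live entirely in how estimated_harm was assigned upstream:
    # whoever wrote that scoring step has already decided whose safety counts most.
    return min(options, key=lambda m: m.estimated_harm)

options = [
    Manoeuvre("swerve_left_hit_protected_cyclist", 0.4),
    Manoeuvre("swerve_right_hit_unprotected_cyclist", 0.7),
    Manoeuvre("brake_gently_hit_pedestrian", 0.8),
    Manoeuvre("brake_hard_injure_occupants", 0.3),
]

print(choose_manoeuvre(options).name)  # -> brake_hard_injure_occupants
```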
And after the incident, who’s responsible? Is it the owner of the car; the person in the driver’s seat, who isn’t actually driving; the manufacturer of the car; or the programmer who wrote the code?
Engineers at the Bristol Robotics Laboratory have been researching robotic ethical problems like this, and their latest experiment shows that there are many challenges ahead. They created an 'ethical robot' and set it a task: to prevent two other robots from coming to harm. According to the roboticist behind the experiment, Alan Winfield, the ethical robot saved at least one other robot about 60% of the time. The remaining 40% of the time it froze, unable to decide which to save.
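You can see the shape of the problem in a few lines of code. The sketch below is purely illustrative, not the Bristol team's control software; the probabilities stand in for whatever internal simulation the robot runs to predict each outcome.

```python
# Toy sketch (not the Bristol team's actual code) of the choice the
# 'ethical robot' faces on every control cycle: predict the outcome of
# heading for each endangered robot, then commit to one rescue.

def choose_rescue(p_save_a: float, p_save_b: float, margin: float = 0.05) -> str:
    """p_save_a and p_save_b stand in for the output of an internal
    simulation ('if I head left, does robot A survive?')."""
    if abs(p_save_a - p_save_b) < margin:
        return "undecided"   # near-tie: no basis for choosing, so the robot dithers
    return "rescue A" if p_save_a > p_save_b else "rescue B"

print(choose_rescue(0.9, 0.2))    # clear-cut: rescue A
print(choose_rescue(0.55, 0.53))  # near-tie: undecided
```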
Scale this experiment up and you can see how things can get difficult for self-driving cars. While automated cars are allowed on some roads, the technology is still a long way from making these decisions on its own. According to data obtained by IEEE Spectrum under a Freedom of Information request, Google's automated cars are less advanced than we imagine; during their initial tests in the US in 2012, it was Google that “chose the test route and set limits on the road and weather conditions that the vehicle could encounter.” A Google engineer had to take control of the car twice during the test drive.
Ethical problems like these are challenging programmers in other areas too.
We've already seen how some hospitals in the US are using advanced algorithms to match the right organ with the right transplant patient. Programmers just need to define the meaning of ‘right’ (should their code penalise older patients, smokers, or heavy drinkers?).
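What 'defining right' means in practice looks something like the toy scoring function below. It is purely illustrative and bears no relation to any real allocation policy; the uncomfortable part is that every coefficient is an answer to exactly those questions.

```python
# Hypothetical matching score, not any real organ-allocation policy:
# every weight below encodes an ethical choice, not a technical one.

def match_score(tissue_compatibility: float, years_on_waitlist: float,
                age: int, smoker: bool, heavy_drinker: bool) -> float:
    score = 10.0 * tissue_compatibility + 0.5 * years_on_waitlist
    score -= 0.05 * age        # penalise older patients? by how much?
    if smoker:
        score -= 2.0           # should lifestyle factors count at all?
    if heavy_drinker:
        score -= 2.0
    return score

# Rank two made-up candidates for the same organ.
print(match_score(0.9, 4.0, 35, smoker=True, heavy_drinker=False))
print(match_score(0.8, 6.0, 62, smoker=False, heavy_drinker=False))
```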
Algorithms are increasingly being used to make automated trades on the stock market. They can make thousands of trades per minute, but they're not always very good. On May 6, 2010 the Dow Jones stock index suffered its largest single-day crash – the index fell by 600 points in 6 minutes and then recovered. Automated, lightning-fast algorithmic trading was to blame, and these algorithms have been responsible for hundreds of similar micro-crashes since. It's possible that a number of bad trades could put businesses, livelihoods, and even economies at risk.
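How interacting sell rules can feed on themselves is easy to sketch. The toy model below is a caricature built on invented assumptions (a single 'sell when the price is falling' rule and a made-up price-impact figure), not a description of how real markets or trading systems behave.

```python
# Deliberately crude toy model of a self-reinforcing sell-off; the trader
# behaviour and price-impact numbers are invented for illustration.

def simulate_micro_crash(price: float, traders: int, steps: int) -> list:
    history = [price]
    for _ in range(steps):
        falling = len(history) > 1 and history[-1] < history[-2]
        sellers = traders if falling else traders // 10
        price *= 1 - 0.0001 * sellers   # assumed impact: 0.01% per seller
        history.append(price)
    return history

prices = simulate_micro_crash(price=10000.0, traders=500, steps=6)
print([round(p) for p in prices])   # each wave of selling triggers the next
```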
And speaking of large-scale consequences – during the 2010 US mid-term elections Facebook, working with the University of California, San Diego, made a small change to its code that resulted in an estimated 340,000 additional votes being cast. On the day of the election Facebook placed non-partisan messages on users' Facebook pages encouraging them to vote. Some 60 million users were shown the message, but not all of them saw the same one. Facebook customised the messages shown to 600,000 users, and didn't show any messages to a further 600,000 (as a control group). By analysing publicly available election data the university's academics estimated that the customised messages resulted in an additional one third of a million votes being cast. According to the university's blog,
"The massive-scale experiment confirms that peer pressure helps get out the vote – and demonstrates that online social networks can affect important real-world behavior."
It's not hard to imagine how a slight alteration to this process could change the course of an election. The Facebook users in this experiment were chosen randomly, but what if the customised 'get out the vote' message was only shown on liberal users' Facebook pages?
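To see how thin that line is, here is a hypothetical sketch of the assignment step, with invented function names and a made-up 'inferred_leaning' attribute. The first version mirrors the randomised experiment described above; the second is the kind of one-line change the question is really about.

```python
# Hypothetical sketch only: nothing here resembles Facebook's real systems.
import random

def assign_group(user_id: int) -> str:
    """Randomly assign a user to the control group, a customised banner,
    or the standard 'go vote' banner (proportions roughly as in the article)."""
    rng = random.Random(user_id)   # deterministic per user
    roll = rng.random()
    if roll < 0.01:
        return "control: no banner"
    if roll < 0.02:
        return "customised banner"
    return "standard 'go vote' banner"

def assign_group_partisan(user_id: int, inferred_leaning: str) -> str:
    # The worrying variant: gate the banner on an inferred political
    # leaning (a made-up attribute) instead of on chance.
    if inferred_leaning == "liberal":
        return "standard 'go vote' banner"
    return "control: no banner"
```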
Programmers have become the new gatekeepers; the choices they make can affect millions of people. We need to be able to trust that the algorithms they create, which govern the media we consume, the services we depend upon, and the technologies we use, are coded ethically and fairly.
Also this week: Android's plan for world domination, Big Brother helps London commuters, and more evidence of banks falling behind the times.
Android One smartphone to rule them all
Google launched its Android One smartphone, which targets developing nations, this week. The company's plan is to stop Windows and Firefox OS from gaining traction in the fastest-growing, and potentially largest, smartphone markets
The Bank of England: Bitcoin, a lot of hype and some innovation
The UK's central bank released two major reports on Bitcoin this week. They argue that the currency's decentralised digital ledger is more interesting than the currency itself (which represents "a first attempt at an 'internet of finance'")
Facebook could swing Scotland's referendum (if they wanted)
In 2010, Facebook and the University of California, San Diego manipulated US mid-term voters' Facebook pages. They claim direct responsibility for 60,000 additional votes being cast, and indirect responsibility for a further 280,000. Enough to change the direction of a tight race
The road to hell...
Taboola and Outbrain had a plan to help content producers earn more from their work, and help readers find more interesting content. But they ended up creating a new genre of spam
Who knew Big Brother could be so helpful?
Google's interactive digital billboards in London predict what commuters want, and show it to them
"No one has any idea what they're doing. We're all just figuring it out"
Open's Creative Director explains why he works on projects he doesn't know how to do
"A robot may not injure a human being or, through inaction, allow a human being to come to harm"
Programming drones and autonomous cars to think ethically is even harder (and more important) than you'd think
Banks can't bank on the future
"The most interesting things happening in financial services are not happening in financial services"
The biggest game of Pong you'll see today
See how Vodafone helped Icelanders play Pong on a massive scale
Illustration: Patrick Cusack