- Hello World, by Hannah Fry
blurb: "A look inside the algorithms that are shaping our lives and the dilemmas they bring with them."
Knowing that his preferred clientele would travel to the beach in their private cars, while people from poor black neighbourhoods would get there by bus, he deliberately tried to limit access by building hundreds of low-lying bridges along the highway. Too low for the 12-foot buses to pass under.
GPS was invented to launch nuclear missiles and now helps deliver pizzas.
It’s about asking if an algorithm is having a net benefit on society.
the power of an algorithm isn’t limited to what is contained within its lines of code. Understanding our own flaws and weaknesses – as well as those of the machine – is the key to remaining in control.
For the time being, worrying about evil AI is a bit like worrying about overcrowding on Mars.
After only a few minutes of looking at the search engine’s biased results, when asked who they would vote for, participants were a staggering 12 per cent more likely to pick the candidate Kadoodle had favoured.
All around us, algorithms provide a kind of convenient source of authority. An easy way to delegate responsibility; a short cut that we take without thinking.
about our human willingness to take algorithms at face value without wondering what’s going on behind the scenes.
Stanislav Petrov was a Russian military officer in charge of monitoring the nuclear early warning system protecting Soviet airspace. His job was to alert his superiors immediately if the computer indicated any sign of an American attack.
having a person with the power of veto in a position to review the suggestions of an algorithm before a decision is made is the only sensible way to avoid mistakes.
There’s just one issue with that logic: we’re not always aware of the longer-term implications of that trade. It’s rarely obvious what our data can do, or, when fed into a clever algorithm, just how valuable it can be. Nor, in turn, how cheaply we were bought.
Palantir is just one example of a new breed of companies whose business is our data. And alongside the analysts, there are also the data brokers: companies who buy and collect people’s personal information and then resell it or share it for profit. Acxiom, Corelogic, Datalogix, eBureau – a swathe of huge companies you’ve probably never directly interacted with, that are none the less continually monitoring and analysing your behaviour.
This digital shadow of a pregnancy continued to circulate alone, without the mother or the baby. ‘Nobody who built that system thought of that consequence,’ she explained.
Their approach was to identify small groups of people who they believed to be persuadable and target them directly, rather than send out blanket advertising.
The experimenters suppressed any friends’ posts that contained positive words, and then did the same with those containing negative words, and watched to see how the unsuspecting subjects would react in each case. Users who saw less negative content in their feeds went on to post more positive stuff themselves. Meanwhile, those who had positive posts hidden from their timeline went on to use more negative words themselves.
Sesame Credit, a citizen scoring system used by the Chinese government.
Nicholas Robinson was sentenced to six months in prison.
Johnson escaped jail entirely.
On the basis of identical evidence in identical cases, a defendant could expect to walk away scot-free or be sent straight to jail, depending entirely on which judge they were lucky (or unlucky) enough to be assigned.
whenever judges have the freedom to assess cases for themselves, there will be massive inconsistencies.
the best-performing contemporary algorithms use a technique known as random forests,
Random forests have proved themselves to be incredibly useful in a whole host of real-world applications. They’re used by Netflix to help predict what you’d like to watch based on past preferences.
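As a rough illustration of the idea behind random forests (many simple models, each trained on its own bootstrap resample of the data, voting together), here's a toy Python sketch. Everything in it is invented for illustration – the "trees" are just one-threshold stumps on a made-up 1-D dataset, nothing from the book:

```python
import random

random.seed(0)

# Toy 1-D dataset (invented): the label is 1 exactly when x > 5.
data = [(x, int(x > 5)) for x in range(11)]

def train_stump(sample):
    """Find the threshold t minimising errors of the rule 'predict 1 if x > t'."""
    best_t, best_err = 0, float("inf")
    for t in range(11):
        err = sum((x > t) != bool(y) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def train_forest(data, n_trees=25):
    """Each 'tree' (here just a stump) sees its own bootstrap resample."""
    return [train_stump([random.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(forest, x):
    """Majority vote across the ensemble."""
    votes = sum(x > t for t in forest)
    return int(votes * 2 > len(forest))

forest = train_forest(data)
print(predict(forest, 8), predict(forest, 2))  # classifies both sides correctly
```

Real random forests use full decision trees and also pick a random subset of features at each split, but the bootstrap-plus-voting skeleton is the same.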
sparked a heated debate, and not without cause: it’s one thing calculating whether to let someone out early, quite another to calculate how long they should be locked away in the first place.
Unfortunately for Zilly, Wisconsin judges were using a proprietary risk-assessment algorithm called COMPAS
The algorithm’s false positives were disproportionately black.
Chapters on power, data, justice, medicine, cars, crime, art.
Symbiosis seems best: e.g. extra safety mechanisms rather than fully driverless cars. AI can detect tumours better than a human (faster, at least) but is bad as a GP. Algorithms can augment a police investigation or make it more efficient, but human intuition is still needed.
I read Hello World - How to Be Human in the Age of the Machine by Hannah Fry. It's about the increasing pervasiveness of algorithmic decision-making in everyday life, and how much we should rely on these algorithms.
It's a really good book - very engagingly written and easy to read, on what could potentially be a pretty dense topic. It's full of real-world stories to ground the more abstract questions, and it also weaves into that a nice basic overview of what algorithms are, and how the latest crop of machine-learning algorithms work.
So briefly: very broadly, an algorithm is just a set of step-by-step logical instructions that show, from start to finish, how to do something. Generally, though, the word algorithm is used a bit more specifically: still in some sense a set of step-by-step instructions, but a more mathematical and well-defined series of steps, and usually run by a computer.
And when people talk about whether algorithms are good or bad, they pretty much always mean decision-making algorithms: something that makes a decision that affects a human in some way. So for example long division is an algorithm, but it's not really having any decision-making effect on society. We're talking more about things like putting things in a category, making an ordered list, finding links between things, and filtering stuff out. These might be 'rule-based' expert systems, where the creator programs in a set of rules that the system then executes, or, more recently, machine learning algorithms, where you train an algorithm on a dataset by reinforcing 'good' or 'bad' behaviour. With these we can't always be sure how the algorithm has come to its conclusion.
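As a toy illustration of the rule-based flavour (the rules and messages below are invented for illustration, not from the book), a 'filtering' decision can be nothing more than a few hand-written rules that a human programmed in explicitly:

```python
# Rule-based filtering: a human wrote the rules, so we can always
# see exactly why a given decision was made (unlike many ML models).
SPAM_WORDS = {"prize", "winner", "free"}  # invented rule set

def classify(message):
    """Return 'spam' if the message contains any flagged word, else 'ok'."""
    words = set(message.lower().split())
    return "spam" if words & SPAM_WORDS else "ok"

print(classify("Claim your FREE prize now"))  # spam
print(classify("Lunch at noon?"))             # ok
```

The transparency is the point of the contrast: with a trained model you'd get a probability out, but no human-readable rule to inspect.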
So what the book is really focused on is the effect our increased use of decision-making algorithms like these is having on things like power, advertising, medicine, crime, justice, cars and transport, basically stuff that makes up the fabric of society, and where we're starting to outsource these decisions to algorithms.
The book does a really good job of explaining some of the problems in outsourcing those decisions.
One big problem is that we have a tendency to trust the decision made by a computer. But we have to really be aware of the biases in these systems. Part of this bias is part of the bigger problem endemic in the tech industry: it's overrepresented by white men who have a very limited world view and a particular set of biases. The system is often going to be made in the image of its creator, right?
But aside from that, ML can also be biased in that if the data that goes into it is biased, so will the outcomes be. Garbage in, garbage out. And there's a lot of bias and garbage statistics in the world. So say if policing disproportionately targets a particular group in arrests, and justice treats them differently in sentencing, then that group is more likely to be targeted by an algorithm based on existing policing and crime stats. You have to really challenge existing biases, not build them into the system.
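The feedback loop here can be sketched with a toy simulation (all numbers and rules below are invented for illustration, not from the book): an algorithm flags the area with the most recorded arrests as the 'hot spot', extra patrols go there, and the extra patrols generate extra recorded arrests, so the initial skew compounds even though both areas have the same underlying crime rate:

```python
# Toy predictive-policing feedback loop (invented numbers, purely illustrative).
# Two areas with the SAME true crime rate, but area A starts with more
# recorded arrests because it was historically patrolled more heavily.
arrests = {"A": 60, "B": 40}

for year in range(5):
    hot = max(arrests, key=arrests.get)    # algorithm flags the "hot spot"
    for area in arrests:
        patrols = 2 if area == hot else 1  # hot spot gets double the patrols
        arrests[area] += 10 * patrols      # more patrols -> more recorded arrests

print(arrests)  # the initial 60/40 skew widens every single year
```

The algorithm never sees the true crime rate, only the arrest record it helped produce, which is exactly why training on existing policing stats can entrench the bias rather than measure anything.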
The book is very even-handed, and isn't a polemic against machine learning by any means. There are plenty of positives, like image classification of tumours, where ML can flag at great speed the cases that a pathologist should look at in more detail.
I really liked the conclusion that we should not see machine learning decision-making as an either/or: either we hand everything over to machine learning, or we keep everything human. The book gives the great example of 'centaur chess', where a human plays alongside an artificial intelligence against another human-AI pair. Interestingly, this is something being championed by Garry Kasparov, who was famously beaten by IBM's Deep Blue at chess a couple of decades back. It opens up new possibilities where AI is complementary and not a replacement.
I think my criticism of the book would be that it doesn't really challenge the framing of the debate around ML. It's letting the current arbiters of ML set the agenda to some degree, so the criticism is in the details and not at the higher level. There's a whole chapter on whether we should have driverless cars or not, but no mention of whether we should rather be endeavouring to take cars off the road completely. And with regards to things like predictive policing, there is no questioning of the idea of policing as an institution in the first place, just a question of how we should use algorithms within it. And there isn't a single mention of climate change, which I found pretty amazing.
But still, it does a great job of outlining the positives and pitfalls of decision-making algorithms. I'd recommend it; I'd just like the follow-up book to be about how we can use them for more liberatory purposes!