Alexa is the friend who knows everything. Always there for you, ready to help. Always happy and cheerful. But Alexa’s master is Amazon. And her primary function is to produce profit for the company. She’s not really there for your benefit – she’s a marketing tool.
So your friend is also collecting data on you. Spilling all your secrets, desires and fears in order to produce demographic and psychographic profiles for Amazon’s marketing department.
The fact that the command to turn the device on is itself a voice command shows that it is literally always listening. It may not be collecting and sending back conversations when the device is ‘off’. But tech companies seem to have a habit of saying one thing while doing another. And generally, if something can be done, and is profitable, it tends to happen.
Sales is really only about solving problems. That can range from identifying a need and offering a solution, to creating awareness of problems and needs, up to literally creating the problems themselves (body shaming and body image issues are a creation of marketing). So personalised marketing can actually cause personal problems.
The way these marketing algorithms work is quite simple. They collect data on demographics etc., as well as on behaviour – e.g. searches, adverts viewed, links, interests. This is then compared to purchasing decisions to find common patterns, in order to work out what adverts will work for each person. These algorithms have become so effective that they’re discovering patterns and categorising people in ways that even the psychologists don’t understand.
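To make that loop concrete, here is a toy sketch of the core idea – correlate observed behaviours with purchases, then score which advert is most likely to work on a given person. All names and data here are invented for illustration; no real ad platform works from code this simple.

```python
from collections import defaultdict

# Each record: (behaviours observed, advert shown, did they buy?)
# Entirely made-up example data.
HISTORY = [
    ({"searched:flights", "clicked:hotel_ad"}, "vegas_trip", True),
    ({"searched:flights", "viewed:casino_review"}, "vegas_trip", True),
    ({"searched:prams", "joined:parenting_group"}, "vegas_trip", False),
    ({"searched:prams", "joined:parenting_group"}, "baby_gear", True),
    ({"searched:flights"}, "baby_gear", False),
]

def learn(history):
    """Count, per advert, how often each behaviour co-occurred with a purchase."""
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # ad -> behaviour -> [buys, shows]
    for behaviours, ad, bought in history:
        for b in behaviours:
            stats[ad][b][1] += 1
            if bought:
                stats[ad][b][0] += 1
    return stats

def score(stats, ad, behaviours):
    """Average purchase rate of this advert across the user's behaviours."""
    rates = [buys / shows for b, (buys, shows) in stats[ad].items() if b in behaviours]
    return sum(rates) / len(rates) if rates else 0.0

stats = learn(HISTORY)
user = {"searched:flights", "viewed:casino_review"}
best = max(stats, key=lambda ad: score(stats, ad, user))
print(best)
```

Nobody has to tell the system *why* flight searches predict Vegas bookings; the pattern falls out of the counting, which is exactly why the categories it finds can be opaque even to psychologists.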
A real-life example was broadcast on Ted.com. Some psychiatrists noticed that a lot of their bipolar treatment programs were failing: at a particular point in treatment, patients were running off to Las Vegas and gambling away everything they had. This resulted in a downward spiral in their condition.
Eventually it was discovered that, at this particular point in their treatment, they were all being fed adverts for Las Vegas from Facebook.
There wasn’t some sinister intention behind this. Computers don’t have any evil intention, or any motive at all. They just run programs. But because there is no emotion or sense of morality, they’ll do so ruthlessly, without mercy and without compassion. So if they’re told to maximise profit, that’s exactly what they’ll do – in a manner more ruthless than the coldest businessperson. Also, not a single human being is even aware what decisions are being made.
We all like to believe that adverts don’t affect us. And most adverts don’t. But we all have our weaknesses and temptations. Most people can point to an advert that changed their mood or point of view, or led them into temptation. And everyone has, at some point, ended up buying a product only after seeing it advertised.
Back to your good friend Alexa
If a clever salesperson happened to overhear a conversation on, for example, the decision to have children (potentially very profitable), he or she could tip the decision in their favour.
Learning algorithms can do the same thing. If, for example, the sticking point is financial worries, the algorithms could identify other similar couples with the same issues, look at what adverts and info they received, then feed in the same adverts and info, hopefully for the same results. The scary thing is that the system would then learn from the results (hence ‘learning algorithms’), becoming more and more effective each time. It could, for example, push adverts putting a person in a more adventurous, anxious, chilled-out/carefree or nostalgic mood that could sway the decision of having children in its favour. As well as working out the optimum timing (not difficult for the system).
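The ‘similar couples’ step above can be sketched in a few lines: measure how much a target profile overlaps with known profiles, then reuse the advert that worked on the closest match. This is a hypothetical illustration, not any real system – the attributes, profiles and advert names are all invented.

```python
def jaccard(a, b):
    """Overlap between two attribute sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Known cases: (profile attributes, advert that tipped their decision).
# Invented example data.
KNOWN = [
    ({"age:30s", "worry:finances", "mood:anxious"}, "family_budget_planner"),
    ({"age:30s", "worry:career", "mood:carefree"}, "adventure_holiday"),
    ({"age:40s", "worry:finances", "mood:nostalgic"}, "family_photo_book"),
]

def pick_advert(target):
    """Copy the advert shown to the most similar known profile."""
    _, ad = max(KNOWN, key=lambda case: jaccard(case[0], target))
    return ad

target = {"age:30s", "worry:finances", "mood:relaxed"}
print(pick_advert(target))
```

A real system would feed the outcome (did the advert work?) back into its data, so each decision sharpens the next one – the feedback loop that makes these algorithms ‘learning’ algorithms.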
Although these learning algorithms have already evolved to the point where it’s no longer possible to understand how these decisions are being made. They don’t just learn, but learn how to learn as well.
Do we need Sarah Connor?
The fear over the coming world of AI isn’t some apocalyptic nightmare where the robots take over the world. It’s simply that, if we hand decision making over to AI, we may find it difficult to take control back again if we don’t like the results. We could become dependent to the point that we can’t live without it. Security systems controlled by AI would be more advanced than we could understand. Predictive policing algorithms (similar to advertising), facial recognition, weaponized AI systems and so on could prevent interference in the system (to computers, we are part of the system). Computers could simply become too clever for us, and will carry out instructions ruthlessly. We could try to give them some sort of moral instructions. But they would probably, again, apply them without mercy.
Desperation makes fools of everyone
There are plenty of techies who understand the issues involved in AI. The trouble is that companies in trouble, like some social media companies, see AI as a magic solution that will sort everything out for them. Zuckerberg can be a bit of an idiot at the best of times. But desperation, denial and power can be a dangerous combination. And he insists on pushing through with his version of AI despite warnings from other techies, as well as from other companies like Amazon.
Is it really worth it?
We’ve become so used to progress that we no longer ask the question – do we actually want this? New technology is invented, then marketed and sold with the assumption that it will improve our lives. But we’re seeing that this isn’t necessarily true. So is this tech really worth it? Is it really worth giving up control over our world and our lives?