Every step you take… Part 2 – Surveillance Capitalism
“Maybe you don’t want to ask a question. Maybe you just want to have it answered for you before you ask it. That would be better.” Larry Page, 2014
“We expect that advertising funded search engines will be inherently biased towards the advertisers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Sergey Brin and Larry Page, 1998
A couple of years ago I decided to quit Facebook, prompted by my concerns over the price we pay for our social media and for ad-supported apps and platforms such as Google’s. Last month I finished reading what I believe is an essential framing and examination of this new form of capitalism – surveillance capitalism – and of the power the companies at its forefront have over us; a power quite capable of reshaping humanity in ways the capitalists and even the dictators of the twentieth century could scarcely dream of.
Shoshana Zuboff’s The Age Of Surveillance Capitalism is an in-depth study of how our engagement with the internet came to be dominated by a handful of companies, and how the model of our engagement was discovered by engineers at Google and turned against us.
It begins with search. When we use a search engine we “produce a wake of collateral data such as the number and pattern of search terms, how a query is phrased, spelling, punctuation, dwell times, click patterns and location.”
These are behavioural byproducts, or ‘behavioural surplus’, and all the search engines of the time, including Yahoo, then a giant, had this data stored haphazardly. It was a Google engineer, Amit Patel, who had the insight that this collateral data was hugely revealing of our thoughts, feelings and interests. And it could be used: these flows of data could be fed recursively back into search to make it more efficient.
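To make ‘collateral data’ concrete, here’s a toy sketch in Python of the kind of record a search engine might log alongside the answer it gives you. Every field name here is invented by me for illustration; the point is only that the surplus (the misspelling, the dwell times, the location) sits alongside the query itself.

```python
from dataclasses import dataclass, field

# Hypothetical per-query log record: none of these field names come
# from any real system. They simply mirror the kinds of 'collateral
# data' Zuboff lists: phrasing, spelling, dwell times, clicks, location.
@dataclass
class QueryLogRecord:
    raw_query: str        # exactly as typed, misspellings included
    corrected_query: str  # what the engine actually searched for
    timestamp: float      # when the query was issued
    location: tuple       # coarse (lat, lon) inferred from IP or GPS
    dwell_times: dict = field(default_factory=dict)  # result URL -> seconds spent
    clicks: list = field(default_factory=list)       # result URLs clicked, in order

    def behavioural_surplus(self) -> dict:
        """Everything beyond what was needed to answer the query itself."""
        return {
            "misspelled": self.raw_query != self.corrected_query,
            "location": self.location,
            "dwell_times": self.dwell_times,
            "click_order": self.clicks,
        }
```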
The breakthrough that sent Google to the top of the pile was, Zuboff argues, driven by its need, during the dotcom bust that ripped through Silicon Valley, to stay afloat – to find a way to make significant money and retain its investors. During this early period the data Google gleaned from our online behaviour was used solely to improve search, i.e. it was for our benefit (as well as Google’s, given its desire to become and remain the number one search engine).
Then, with the AdWords team tasked with making more money in the gloom of the bust, Google realised it could leverage this behavioural data to match adverts to queries. Ads would be targeted at individuals based on the behavioural data Google held on them. It worked, beyond their wildest dreams. It’s important to note that Google never asked if it could use the data this way; beg forgiveness has always been the model. It’s obvious now how little chance anyone had of anticipating that this behavioural byproduct of our searches could be leveraged for profit without our consent. So Google went ahead, as any good capitalist would, and exploited that freedom. Now the data would not just be used for our benefit; it would benefit the advertisers. The benefits we get from Google’s services are the price it pays us for retaining and increasing its surveillance of us. The quality of those services comes at the cost of our privacy and our identities. Facebook followed, then Microsoft, then Amazon. Our behaviour is the raw material they mine for their gold. We are not the product; we supply the raw materials for the product.
It seems like a leap to say that Google was somehow all of a sudden all about surveillance. But it’s the logical conclusion of the business model, and it has been pursued, as Zuboff demonstrates, persistently and aggressively in all the years since behavioural surplus began to be weaponised.
In a nutshell, Google – and everyone else relying on ads for revenue – makes more money if its advertisers see higher ‘click-throughs’ on their adverts. That only works if Google can predict when it is best to serve us an advert, and it gets better at this the more data it has about us. After all, if someone knows me perfectly, they will know exactly when I am in need of a product and will be able to surface it the moment I need it. This is advertising’s wet dream. Every sinew since the discovery of behavioural surplus has been strained in the pursuit of this complete knowledge of everyone. As they’ve learned more about us and improved their algorithms, so they’ve increased their profits, and they’ve ploughed that back into more research, more products, more partnerships, and more acquisitions of other platforms – Youtube most famously – to access their stores of our ‘behavioural surplus’ and increase further their knowledge of us.
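The core mechanic is easy to caricature. Below is a deliberately toy sketch – in no way Google’s actual system; the features, weights and threshold are all invented – of the idea that more behavioural data means a sharper estimate of whether we’ll click, and the ad is served only when that estimate clears a threshold:

```python
import math

# Toy click-through prediction: a hypothetical illustration only.
# Every feature name and weight below is invented for the sketch.
def predict_ctr(features: dict, weights: dict, bias: float = -3.0) -> float:
    """Logistic model: more (and better) behavioural features sharpen the estimate."""
    score = bias + sum(weights.get(name, 0.0) * value
                       for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

weights = {
    "searched_for_product_recently": 2.5,
    "visited_advertiser_site": 1.5,
    "time_of_day_match": 0.5,
}

user = {"searched_for_product_recently": 1.0, "visited_advertiser_site": 1.0}

p = predict_ctr(user, weights)
if p > 0.2:  # serve the ad only when a click looks likely enough
    print(f"serve ad (predicted CTR {p:.0%})")
```

The knock-on point is that every extra feature the platform can observe about you shrinks its uncertainty, which is exactly why the hunger for surplus never ends.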
But it goes further, as Zuboff amply demonstrates. Better still for advertisers and corporate partners if the knowledge these internet giants have of us is so detailed that they can use it to ‘nudge’ us, i.e. influence us to be even more likely to engage with their corporate partners, as was shown with Pokemon Go: gyms and rare Pokemon would usually be found near the entrances of the shops and restaurants that paid Niantic, the Google spin-off behind the game, to drive players to them.
More chillingly, leaving aside the whole clusterfuck that is Cambridge Analytica, Facebook boasted to potential advertisers that it knew enough about the general sentiment of a particular cohort of teenagers using its site that it could surface in their news feeds (no longer merely chronological) the activity from friends most likely to create certain emotional states (envy, joy). This manipulation, it boasted, could be used by its ad partners to target specific products at those moods – moods Facebook could choose when to induce by managing the feed’s most prominent posts.
Example after example in her book shows that it isn’t just about serving ads anymore, and hasn’t been for years. The acquisitions of Youtube, Linkedin, Whatsapp and Instagram, all for staggering sums, are small change in terms of the datasets they give access to. All these acquisitions are in fact bargains, presumably paying off their considerable cost with the data they add to existing stores, data that helps these companies nudge and target us better. The real goal is all-encompassing. It is, though I hesitate to use the phrase for its sensationalist tone, about herding users into behaviours that support the providers of goods and services affiliated with these platforms, to maximise profit.
As alluded to in my previous post on this, the internet of things is going to complete the surveillance, and the work is underway. There are already cars that can shut down if you haven’t paid your insurance. There will be cars that monitor how you drive, and tweak your insurance payments accordingly. You’ll no longer be free to consider putting your foot down on that quiet country road and just enjoy the thrill because you’ll be informed that it’s added a surcharge to your car insurance. Driving for pleasure will involve a conscious weighing up of the financial penalties involved for increasing ‘risk’. Your autonomy will increasingly be moderated by cost/benefit decisions based on the advanced surveillance capabilities of the goods and services you use, from fridges to cars to gyms to school fees to healthcare, entertainment etc. in ways that benefit them.
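A toy sketch of how that country-road surcharge might be computed, with every threshold and rate invented for illustration – no real insurer’s scheme is being described here:

```python
# Hypothetical usage-based insurance sketch. The speed limit, the
# per-sample rate and the cap are all invented figures.
def monthly_surcharge(speed_samples, speed_limit=60, base_premium=80.0):
    """Count telemetry samples over the limit and price each 'risky' minute."""
    risky = sum(1 for s in speed_samples if s > speed_limit)
    rate_per_risky_sample = 0.50  # invented figure
    # Surcharge capped at one full base premium, so the bill can at most double.
    return min(risky * rate_per_risky_sample, base_premium)

# One speed sample per minute on that quiet country road:
samples = [55, 58, 72, 75, 70, 59]
print(f"surcharge this month: £{monthly_surcharge(samples):.2f}")
```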
All companies want to do is limit risk by increasing certainty in their decision making, thus guaranteeing outcomes and profit. Behaviour in the real world is, and will increasingly be, capable of modification by these interconnected services. It’s not just about what we say anymore, the ‘text’. With Alexa, and with surveillance cameras running on Google’s platforms or funded by these internet giants partnering with cash-starved and grateful local authorities and governments, they can, via all these devices within and without our homes, in our offices and in our streets, monitor how we behave, flag our patterns and the breaks in them, and pass it all on so their partners can modify their services as they see fit. They can watch our body language and facial expressions, listen to the tones of our voices, analyse our grammar and what we misspell or emphasise, and infer increasingly well what it all means for the services we might have signed up for, first for convenience and then because, as with mass social media now, there’s little else to choose. Our very selves are being harvested and analysed. Zuboff spells out the many efforts their researchers are putting into these areas.
I could talk about Google’s Street View cars being equipped to gather information about wifi networks, without consent, on every street they visited, leading to legal action in Germany when it was discovered that it wasn’t just cameras recording but active harvesting of people’s data. Zuboff is strong on all the ways these companies have exploited the ignorance and glacial pace of understanding and response of governments, while lobbying hard, with billions of dollars, to protect their freedoms.
This is the present day. This is also recent history, though to anyone thirty years ago it would look like science fiction.
How do you fight it? Get off Youtube, stop using Instagram, Android, Facebook, Twitter, Whatsapp, Amazon, Microsoft… And there’s the problem we now face. You might think you can opt out, but can you really? You can request your data, but they will carry on tracking you regardless. They say it’s anonymised, but it is easy to de-anonymise. It’s as if the question ‘What gives you the right to take this in the first place?’ was forgotten, and we’re now in a universe built on the assumption that it was ok. The horse has bolted. The options for someone like me are pretty limited: adblockers, different search engines, stop using Gmail, stop using Facebook, stop using Amazon – and I can’t even bring myself to do the last one properly.
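‘Easy to de-anonymise’ isn’t rhetoric. Here’s a toy sketch of the classic linkage attack, with data entirely invented by me: an ‘anonymised’ dataset still carries quasi-identifiers (postcode prefix, birth year, sex) that also appear in some public register, and a simple join puts a name back on the record:

```python
# Toy linkage attack: all data invented. The 'anonymised' records keep
# quasi-identifiers that also appear in a public register, so a simple
# join re-identifies the individual behind the stripped-out name.
anonymised = [
    {"postcode": "SW1A", "birth_year": 1980, "sex": "M",
     "search_history": ["..."]},
]
public_register = [
    {"name": "John Smith", "postcode": "SW1A", "birth_year": 1980, "sex": "M"},
]

keys = ("postcode", "birth_year", "sex")
for record in anonymised:
    matches = [p for p in public_register
               if all(p[k] == record[k] for k in keys)]
    if len(matches) == 1:  # unique on the quasi-identifiers: re-identified
        print(f"{matches[0]['name']} -> {record['search_history']}")
```

With a handful of such attributes, a surprisingly large fraction of people are unique in a population, which is why ‘we only share anonymised data’ is such thin comfort.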
Am I, as a budding author, committing publicity suicide by ignoring these platforms? Of course I am. It feels like I’ll become invisible, no longer part of the wonderful and rich conversation that’s ongoing between authors I know and love, the bloggers, the readers, the people I meet at cons, the influencers, the great and the good. I will lose potential readers.
I kept reading.
The latest book in this oeuvre, out in the last few weeks as I type this, is Behind The Screen, by Sarah Roberts. It looks at the labour involved in what she calls ‘commercial content moderation’. And the story Zuboff and others are telling kind of gets worse. All of the platforms where people socialise or search en masse, where we’re able to express ourselves, function successfully almost entirely as the result of an invisible army of generally contracted-out, badly paid moderators who spend years filtering out and deleting the toxic contributions our fellow humans make to these platforms. For as little as one cent per upload, these moderators, often in the ‘Global South’, are asked to screen some of the most wretched content you could imagine.

She interviews those who delete content on the likes of Youtube and Facebook: all the obvious culprits like child porn and the torture and mutilation of people and animals. But she also discovered that different standards are applied to different content – for example, to the filmed and uploaded atrocities of the Arab Spring versus similar footage of Mexican drug cartel killings made to inspire fear. The full-time moderators she interviewed had policy guidance, kept secret, that necessarily reflected certain political values and affiliations, sometimes in response to political events and requests from ‘authorities’ through to commercial partners. So, for example, atrocities in Saudi Arabia might get filtered more effectively than those of other political actors. Just recently Twitter admitted that it had run numerous ad campaigns on behalf of the Chinese government blatantly lying about the incarceration of Muslims and about civil unrest. As with the recent mass murder livestreamed on Facebook, it becomes clear reading this book that without the work of these moderators, all these mass social media platforms would be absolutely rammed with obscene and dreadful content.

The emotional and psychological toll on these workers hasn’t had the benefit of systematic and extensive study because of the NDAs the platforms use to forbid them from talking about their work, what they moderate and why. Only covertly could Roberts interview these moderators over the eight years of her research, and they all reported varying levels of stress, unhappiness and an inability to forget what they saw. Many have no outlet to share their experience, citing a desire not to pollute their relationships with it. Because the moderators are contractors, the platforms are sufficiently removed from having to be accountable for supporting or even training them. No union exists to support them, and no formal counselling or funds are offered by the contracting companies that win the work, nor by the platforms to those they employ directly; the latter at least are colocated and can comfort each other.

It becomes apparent by the end of the book how the recent exposure of this labour market by researchers like her, along with the high-profile content that gets through, like the streaming of murders, has forced the platforms to be seen to instigate some kind of code to abide by. As with all voluntary self-regulation, it is lip service designed to minimise reputational and litigation damage. Zuboff highlights many cases of such activity in her book too. Again, it is rational, where capital must always pursue profit and evade regulation.
These platforms do not and cannot pretend to aspire to free speech; the moderators laugh at the idea that people believe they can speak freely on them. The entire business model of our giant social media platforms walks a tightrope: the content must not be so toxic that people leave the platform, unwilling to be exposed to it, yet the greatest revenues come from the controversy created by more polarised and/or extreme content.
The monetisation of these platforms we use to express ourselves tailors and shapes how we communicate, and it is shaped, obviously, to maximise profit. We are drawn to controversy. All drama is conflict. These essential facts of human nature are exploited ruthlessly. These platforms aren’t guided by moral imperatives, nor do they promote a set of values. There is only a profit imperative, and any values we perceive are driven by the need to maintain maximal engagement. If I create a Youtube channel devoted to longform critical appraisals of current affairs, offering a balanced view in compound sentences, with arguments thoroughly qualified and caveated, I will simply get less engagement than if I tailor the channel’s content towards more extreme statements of opinion in more emotive language, ideally more hostile and critical. If I give people something to react to, for or against – basically clickbait – I will get more engagement and more views, and the platform will reward me with more money. Why would I do anything else, if revenue is what I’m after? The nature of these platforms promotes toxicity by design. And we’re all plugged in; any struggle to cut them out feels almost futile.
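The platform doesn’t need to ‘want’ any of this; it only needs to optimise for engagement. A toy feed ranker, with weights I’ve invented for illustration, shows how the inflammatory post wins without any line of code mentioning controversy:

```python
# Toy feed ranking: invented weights, purely illustrative. A ranker
# that optimises engagement surfaces emotive, reaction-baiting posts
# even though nothing in it refers to 'controversy' or 'toxicity'.
def engagement_score(post: dict) -> float:
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]  # arguments generate many comments
            + 2.0 * post["shares"])

posts = [
    {"title": "Balanced, caveated longform analysis",
     "likes": 40, "comments": 5, "shares": 2},
    {"title": "Inflammatory hot take",
     "likes": 25, "comments": 60, "shares": 30},
]

# The hot take scores 265 against the analysis's 59, so it leads the feed.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):6.0f}  {p['title']}")
```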
Consider also that the engagement enjoyed by almost any entity of note (celebrity, brand) on any of these mass platforms is stage-managed, another area that Roberts covers. Contributions to these fora can be seeded and/or steered invisibly by brand managers employing fake users. I see a picture emerging of the entirety of mass social media as a brilliantly efficient engine for the rendition of our behavioural surplus, to be used without our consent (because nobody can wade through the intentionally convoluted small print) for the profit of the platforms and their partners, by both design and the machinations of the fake content providers and numerous ‘botfarms’. All of it thrives, too, on the back of the low-paid and psychologically damaging work of the moderators.
It feels like something’s wrong with mass social media, and indeed any site with a comments thread that allows anonymous users to contribute.
I think then about the smaller platforms, like the Reddit subs I visit, or even the Whatsapp groups I’m part of. These are smaller groups of identifiable individuals voluntarily following stronger codes of conduct, both formal and informal. They are small communities, and as such they function with almost none of the problems of the mass platforms full of effectively anonymous users, both real and fake. As one of the moderators of a weather information site pointed out, its comments were regularly bombarded with anti-gay messages. This was a weather information website. I’ve also wondered for a while at how even quite sophisticated content on Youtube, such as the output of philosophers, economists etc., attracts comments, arguments and toxicity from participants who don’t demonstrate, by and large, any sense of having learned something. It never feels like a discussion of the content is taking place, that people have engaged and acknowledged shifts in viewpoint. What if the comments were disabled? It would kill the revenue coming from people’s repeat visits to sustain their arguments, but how much would we lose if we lost that? The raw content would simply exist in a context significantly less profitable for the likes of Youtube. The internet is still a great place to put it, and freed of the need (and the ‘freedom’) to go kicking each other to pieces over views we each find ‘retarded’, we could leave the content there and cogitate on it more deeply, taking our views to places more conducive to constructive engagement. But we scroll the comments like we scroll our feeds; we stop on the offensive or funny or weird comments, stop for the comments we profoundly disagree with, make our patterns of inference and build our counters to their bullshit views, convinced they need to be countered, yet vanishingly rarely do we succeed. I don’t think I have ever changed anyone’s opinion on a mass forum, and I’ve been at it for fifteen years.
There are plenty of studies finding that, for teenagers in particular, but also for me these days, one closes a session on social media feeling worse than when one opened it. It consistently makes people less happy. I trust so little of what’s written, usually only posts by people I’m sure exist because I’ve met them! I might as well have them in a private chatgroup and do away with all the other shit that comes with these platforms. I’ll state here that I’m no longer going to actively engage on Twitter, either. More and more I get halfway through a response to some post and think ‘this person doesn’t give a shit what I think.’ Whose opinion am I going to change, as opposed to reinforce, either way? I signal-boost other authors and creative works I admire, along with things I find funny or interesting. Occasionally I find something nice about my work to retweet. Then I read these books and see the price people are paying for my attention: the data we give up for another’s profit, the support of a platform that offers no support itself to the moderators paid a pittance to sustain the very model these companies rely upon. The whole thing stinks and we’re drowning in it, and it’s going to take over, device by device, camera by camera, phone by phone, every aspect of our lives, and connect them to the corporations that provide our services. It ‘others’ us, appropriates not only our labour but our identities, in order, gradually, to shape them and herd us into more predictable patterns for profit – walk more with your fitbit to reduce your health insurance, watch this content to reduce your school fees, etc.
I want to stay in touch with the people I love and respect. I’m going to have to learn how to do it a bit differently, and I hope they won’t mind the inconvenience that might cause if I ask them to join me somewhere a little cosier.