Tech Dealers Now Trying to Save the Tech “Addicts” They’ve Created
A group of former Silicon Valley insiders recently issued a stark warning to the app- and update-obsessed public: Social media, the Web, mobile apps and other technologies developed to “monetize our attention” are “now eroding the pillars of our society,” according to the Center for Humane Technology. The new organization’s goal is to push tech companies to create products that benefit society, rather than simply manipulate people’s behavior for profit.
Led by a former Google “design ethicist” as well as former managers at Mozilla (maker of the Firefox Web browser) and chipmaker Nvidia, the center faces several challenges. Foremost among them is the irony that it will have to rely on social media and the Web to reach a large enough audience to have any impact. The group also acknowledges the futility of trying to change the business practices of YouTube, Facebook, Snapchat, Twitter and other companies that make lots of money by grabbing and holding people’s attention.
Instead, the organization—which includes a number of entrepreneurs who have already made fortunes helping peddle the possibly addictive technologies in question—will encourage tech companies to design new products that are less distracting and actually reduce the amount of time people spend staring at their screens. Another important aspect of the center’s charter is to pressure governments to create greater consumer protections.
Is tech addiction a real condition? If so, how do you know if you’re addicted? And is the Center for Humane Technology a potential solution or just another distraction? Scientific American posed these and other questions to Simon McCarthy-Jones, an associate professor in the Department of Psychiatry at Trinity College Dublin in Ireland. McCarthy-Jones’s most recent book is Can’t You Hear Them? The Science and Significance of Hearing Voices; he has also written about the need to weigh the benefits and costs of social networking. [An edited transcript of the conversation follows.]
At what point does incessant use of technology become an addiction, as opposed to a better way to work or occupy idle time?
One way to consider whether tech use crosses over into addiction is to look at the six criteria that Mark Griffiths [a professor of behavioral addiction at Nottingham Trent University in England] has argued need to be met for a behavior to be deemed an addiction. Using social networking sites as an example, are these the overriding focus of your life, dominating your thoughts, feelings and behaviors? Are you using them to cause a specific change in your mood—for example, to induce euphoria or numbing? Do you need to increase your use of them to gain the same level of benefits? If you can’t access the site, does this have negative effects on you, such as irritability or headaches? Does your use of the site cause conflict with important people in your life such as your partner, children or employer? And do you continually fall back into your problematic social network use after periods of abstinence or being in control?
If you meet all these criteria, you could be said to have a social networking site addiction, although this is not—yet—a recognized psychiatric disorder. The same approach can be applied to other online activities such as gaming. Indeed, internet gaming disorder is identified in the DSM–5 [Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition], psychiatry’s bible, as a condition recommended for further study. Gaming disorder has likewise made it into the beta draft of the World Health Organization’s forthcoming updated International Classification of Diseases.
Why do so many people obsessively check their devices and social media accounts for updates?
We can become addicted to certain tech products because our use of them is reinforced. They can help us fulfill a range of psychological needs such as belonging, status and competence. Using them can be intrinsically motivating, resulting in feelings of competence, excitement or of being “in the zone,” each of which has its own reward. Tech can also be extrinsically motivating; its use yields rewards from others, such as approval. Tech products can also help us avoid painful thoughts, feelings and realities. In a society where many people—particularly men—have shown in research experiments that they would rather give themselves electric shocks than sit in silence with their thoughts, the urge to get away from ourselves should not be underestimated. Also feeding the addictive behavior: Many tech products serve up rewards at unpredictable moments. For example, you never know when a status notification is going to pop up on Facebook. This encourages compulsive use.
What makes someone prone to tech addiction?
There are many factors, but let’s take three as examples. First, your personality influences how prone you might be to tech addiction. A Belgian study found people who used social network sites compulsively were less emotionally stable, less agreeable and less conscientious. Those people also had lower levels of perceived control and self-esteem. And compulsive social network users were more lonely and depressed than people who did not use these sites in this way. Second, there is evidence that possessing genetic variants associated with lower levels of dopamine function makes you more likely to use internet video games excessively. And, third, why you are using tech products influences addiction proneness. For example, internet gaming addiction is associated with using such games to try to cope with negative emotions, to connect to people and to [master some skill].
To point this out is not to blame the user. People are exposed to a powerful product designed to draw them in while living in [an actual] culture that at times seems callous and liable to push them away. Any answer to problematic tech use involves product, person and society.
Is there evidence that gadget and social media companies have engineered their technology to be addictive?
If someone were trying to design a tech product that was addictive, it would look a lot like many of the products on the market today. Indeed, market forces would appear to encourage the development of addictive tech products. Some companies advertise how their knowledge of mechanisms involved in addiction—such as dopamine release and learning theory—can be used to help other businesses increase their customers’ usage and loyalty, albeit potentially for health and other benefits to the customer. So it could look from the outside like some tech products were designed to be addictive. That said, some have argued this may not have been the case.
Can the center’s digital campaign address the problem, particularly if the group is trying to reach people through the same technology it condemns?
The center seems to be aware of the irony. On their Web site they mention having a Facebook page by writing, “Sigh…, we do have a Facebook group.” The site also recommends using something called Newsfeed Eradicator [a Google Chrome browser extension that allows Facebook users to access their messages, notifications and groups, but not their newsfeed]. Nevertheless, I think I might see the issue slightly differently from the way they do. I would raise the question: Are some tech products violating our right to freedom of thought?
How does freedom of thought tie into the center’s message?
A great article by the international human rights barrister Susie Alegre recently explored the relation between modern technology and the right to freedom of thought. As she notes, if freedom of thought involves the freedom to keep our thoughts private, how will this square with, for example, Facebook’s planned brain–computer interface? Alegre also observes that freedom of thought involves freedom from indoctrination or influence through manipulation. If we think of the now infamous 2014 Facebook newsfeed experiment [in which the company purposely manipulated information posted on 689,000 users’ home pages] or the claim by Center for Humane Technology co-founder Tristan Harris that tech companies are “shaping the thoughts and feelings and actions of people” and are “programming people,” we may wonder if this right is currently being observed.
Given that we have an absolute right to freedom of thought, the Center for Humane Technology could consider this in their work. A human rights–based approach, focusing on potential tech violations of the absolute right to freedom of thought, could be a useful additional tactic for them. The center’s argument that tech’s manipulation of us is unethical suggests the tech should change. In contrast, if it were deemed that tech’s manipulation of us violates our absolute right to freedom of thought, then the tech must change.