Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0
China has just brought into effect the first law of its kind in history.
Algorithms define our realities. They manipulate our attention and waste our time. We think we can't live without them, yet we end up looking for ways to fight the urge to keep scrolling on our phones. Tech companies hold absolute power over them and don't respond to the increasingly loud voices demanding a way out of these inescapable digital prisons.
Algorithms should be subject to the same rules that govern other new technologies, but Big Tech refuses to be held accountable or to have its all-important algorithms externally audited.
Now, China has decided to turn the tide. On March 1st, it brought into force a law that allows users to turn off algorithmic recommendations entirely, among other unprecedented measures that give people more power over tech. Imagine a world in which Facebook couldn't show you what it wanted (content optimized to keep you engaged) but what you wanted or needed.
That world could soon be our world.
China is leading the way towards a better relationship with algorithms
The new legislation, entitled "Regulations on the Administration of Algorithm Recommendations for Internet Information Services," was jointly designed by the Cyberspace Administration of China (CAC) and four other government departments. The law, which the CAC published in January, aims to "regulate the algorithm recommendation activities … protect the legitimate rights and interests of citizens … and promote the healthy development of Internet information services."
Relevant here is another article I published recently, "In Search of an Algorithm for Well-Being." There, I explain how algorithms, recommendation systems in particular, are designed to keep users engaged with the apps. The reason is that companies like Facebook and Google enjoy a highly profitable business model dependent on ad revenue. The more time you spend on the app, the better for them, but not for you.
What I argue in that piece is that we shouldn't accept this mechanism as an indisputable reality. Instead of fighting algorithms with individual tools, like building the habit of spending time off social media, we should question the premise itself: algorithms needn't be optimized for engagement. They can be optimized for well-being.
I argued that internal ethics teams, although an important initiative, aren't sufficient to compel tech companies to adopt practices that benefit users. Regulation, whether national or international, is paramount. That's what China is doing with the new legislation, which is already affecting giants like Alibaba and Tencent.
Unprecedented legislation for well-being
Here's what the regulation says and why it's so relevant (as translated from Chinese to English with Google Translate). Each article starts with the phrase "Algorithmic recommendation service providers shall," which I omit for clarity.
Art 6: “… actively disseminate positive energy, and promote the application of algorithms to be good.”
Nowadays we accept algorithms as a necessary evil. We embrace the false idea that it's in the nature of the algorithm to give us something (the recommendation) in exchange for our attention, our time, or our mental health. We then try to fight back by spending time off the internet or uninstalling apps for a few days.
I say we should reframe this mindset. As I said in the article I mentioned above, “with very little effort, algorithms could be modified to protect our sensitive psychology instead of exploiting it. They could be trained to optimize well-being instead of engagement.”
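To make the reframing concrete, here's a minimal, purely illustrative sketch of how the same ranking loop can serve either goal. None of this is any real platform's code: the `Item` fields, the scoring functions, and the 0.5 penalty weight are all hypothetical assumptions for demonstration.

```python
# Hypothetical sketch: a recommender's objective is a design choice.
# All names and weights here are illustrative, not any real platform's API.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    predicted_watch_minutes: float  # proxy for engagement
    informativeness: float          # 0..1, assumed proxy for user benefit
    addictiveness: float            # 0..1, how compulsively re-watchable


def engagement_score(item: Item) -> float:
    # Classic objective: maximize expected time on the platform.
    return item.predicted_watch_minutes


def wellbeing_score(item: Item) -> float:
    # Alternative objective: reward useful content, penalize addictive pull.
    # The 0.5 weight is an arbitrary assumption for the example.
    return item.informativeness - 0.5 * item.addictiveness


def rank(feed: list[Item], objective) -> list[Item]:
    # The ranking machinery is identical; only the objective changes.
    return sorted(feed, key=objective, reverse=True)


feed = [
    Item("Endless prank compilation", 42.0, 0.1, 0.9),
    Item("How vaccines work", 8.0, 0.9, 0.1),
]

print(rank(feed, engagement_score)[0].title)  # the prank video wins
print(rank(feed, wellbeing_score)[0].title)   # the explainer wins
```

The point of the sketch is that nothing in the sorting machinery forces engagement as the goal; swapping one function changes what the feed rewards.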
That's exactly what this new Chinese regulation is about. It will force companies like ByteDance (TikTok and Douyin) to make user well-being the main goal of their algorithms.
Art 8: “… not set up algorithm models that induce users to indulge in addiction, excessive consumption, etc. …”
The YouTube algorithm was initially designed to maximize time spent on the platform and the likelihood that a user would click on a video. Now, around 70 percent of the time we spend on YouTube is spent watching recommended videos. When Facebook was starting out, its main objective was to consume as much of our time and conscious attention as possible.
Algorithms can be extremely addictive and engage in what's called "content intoxication." For instance, TikTok's For You feed is designed to keep users engaged with a constant flow of customized content that never exhausts our shortened attention spans.
Art 13: “… not generate synthetic false news information or disseminate it. …”
Fake news is the information problem of our times. With AIs capable of writing like humans and algorithms optimized to keep people arguing online, it's only to be expected that fake news proliferates.
Given that people like to read what they already believe — forming what’s called an echo chamber — algorithms will certainly promote news articles (real or fake) that reinforce those beliefs as they’ll receive considerable amounts of likes, attention, and debate.
Fake news articles that reflect a manufactured reality can be targeted at vulnerable groups. Firm believers will further entrench their opinions instead of finding new perspectives that might give them a three-dimensional view of reality. And doubters could see their decisions made for them by the overwhelming presence of a particular narrative.
If Facebook were a Chinese company, it'd have to take seriously the pervasive problem of fake news and misinformation that floods the platform, which is believed to have had a decisive impact on the 2016 US election.
Art 14: “… not use algorithms to falsely register accounts, illegal transaction accounts, manipulate user accounts, or falsely like, comment, or forward …”
Another critical problem on opinion-driven social media like Twitter is the existence of fake accounts, popularly known as bots. These accounts can be created by the thousands to promote specific messages and spread them across the platform. People who fall victim to them can end up building their beliefs, and their view of reality, on falsehoods.
It's also a problem for companies like Amazon or Alibaba. Bots created by sellers can fill review pages with high ratings, creating the impression that a product is excellent when in reality it may not be.
Often, bots and fake news go hand in hand, forming a dangerous mix that blurs and distorts public perception while providing a barely traceable source of revenue for the unscrupulous minds behind the deception.
We're witnessing the gravity of this issue in the current Russia-Ukraine war. It's difficult to assess the veracity of any source because fake accounts and fake news flood the channels of information.
Art 16: “… notify users in a conspicuous manner of their provision of algorithmic recommendation services, and publicize the basic principles, purposes, and main operating mechanisms …”
Users can now be aware of the mechanisms behind the algorithms and the purposes for which they're employed. This means people can make informed decisions. If Facebook explicitly stated why each article is promoted to users' feeds, they might think twice about their habits and behaviors, or about whether they want to keep their Facebook account at all.
Articles 18 and 19 are similar but address the particularities of minors and the elderly, demographics with very specific needs and requirements (e.g., minors shouldn't have easy access to unsafe content, and the elderly have specific medical needs).
Art 17: “… provide users with options that are not tailored to their personal characteristics, or provide users with a convenient option to turn off the algorithmic recommendation service. …”
This is arguably the main article of the new regulation. It forces companies to provide users with the option to turn off recommendations and select or remove user tags. This legislation could, by itself, radically change the internet tech business landscape in the West. It’d allow users to impact the main sources of revenue for Facebook, Instagram, YouTube, Twitter, Reddit, and many other platforms that heavily rely on ad money to exist.
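As a rough illustration of what Article 17's opt-out implies for a service, here's a hypothetical sketch: when a user disables recommendations, the feed ignores their profile tags and falls back to a non-personalized ordering. The function names, the tag-overlap heuristic, and the reverse-chronological fallback are all my assumptions, not anything prescribed by the regulation.

```python
# Hypothetical sketch of an Article 17-style opt-out: personalized ranking
# when recommendations are on, a profile-blind chronological feed when off.
from datetime import datetime


def build_feed(posts, user_tags, recommendations_enabled: bool):
    if recommendations_enabled:
        # Personalized: rank by overlap with the user's inferred interest tags.
        return sorted(posts, key=lambda p: len(user_tags & p["tags"]), reverse=True)
    # Opted out: the user's profile is ignored entirely; newest posts first.
    return sorted(posts, key=lambda p: p["posted_at"], reverse=True)


posts = [
    {"id": 1, "tags": {"cats", "memes"}, "posted_at": datetime(2022, 3, 1)},
    {"id": 2, "tags": {"news"}, "posted_at": datetime(2022, 3, 2)},
]
user_tags = {"cats"}

print([p["id"] for p in build_feed(posts, user_tags, True)])   # [1, 2]
print([p["id"] for p in build_feed(posts, user_tags, False)])  # [2, 1]
```

The design point is that the opt-out isn't just hiding a widget: the personalization signal (`user_tags`) must play no role at all in the fallback path.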
Internet tech companies in China that provide recommendation services guided by algorithms will have to adapt their platforms and apps to the new legislation. They could lose a lot of money. But what is that money worth compared to the well-being of the people?
Allowing people to live in a world that's not artificially customized to them is liberating and unprecedented (people who choose to keep recommendations on can always do so). This measure is intended to reduce companies' power and their incentives to earn money through targeted ads that use not only our personal information but also our behaviors and tendencies online: what we like, what we think about a topic, or what we're more likely to do or buy.
China is, to some surprisingly, getting ahead in terms of AI ethics and algorithm auditing (although I predict it'll soon be followed by legislation in Europe and the US).
This marks a new age for the internet companies that have been lining their pockets with our time, attention, and psychological vulnerabilities in exchange for giving us what we supposedly want. But we don't want to be constantly glued to our screens. If we lived in a world optimized for well-being, I'm sure our phones would stay out of sight most of the time, which isn't the best for those companies, but it is for us.
If you’ve read this far, consider subscribing to my free biweekly newsletter Minds of Tomorrow! News, research, and insights on AI and Technology every two weeks!
You can also support my work directly and get unlimited access by becoming a Medium member using my referral link here! :)