Back in the hot seat: Facebook’s problem is more moral than technological
Disclaimer: This is a translation of an article originally published at TAB UOL.
Last week, Facebook faced an outage of its services and was also caught off guard by the testimony of its former product manager Frances Haugen. Initially, all the information was shared anonymously in an article for The Wall Street Journal, but later, in an interview for the TV show 60 Minutes, it was revealed that the whistleblower was Haugen, a 37-year-old data engineer who worked at Facebook after stints at Pinterest, Google, and Yelp. Haugen also testified before the US Senate on Wednesday (6).
Haugen managed to copy thousands of pages of confidential documents from inside Facebook, on which she based her serious accusations against the platform. She claims that Zuckerberg is aware of the risks his platforms pose to democracy and that he has decision-making power over all of them. The Netflix documentaries “The Great Hack” and “The Social Dilemma” had already exposed several of Facebook’s problems, but now Haugen has internal documents showing that the company was aware of everything and that its executives took no action.
In her interview for 60 Minutes, the former employee argued that, despite Facebook’s campaign against disinformation during the last American presidential election, the system was abandoned right after the vote. The company soon switched back to the previous algorithm, which, in turn, prioritizes content likely to provoke violent reactions (and therefore more engagement) and to intensify symptoms of depression, eating disorders and suicidal tendencies among teenagers and women, especially on Instagram, which is also owned by Facebook.
In response to Haugen’s accusations, Facebook said that it has signed contracts with 80 fact-checking agencies covering 60 languages, and that it knows a violent environment is bad not only for users but also for sponsors and for business in general, which is the opposite of what Haugen claims. Other people, however, are not very surprised that a platform created to rank the “hottest” girls at Harvard has turned into a place for extremist groups to organize and spread hate speech.
Still, Facebook’s outage had its positive and negative sides. Brazilian entrepreneurs who run their businesses on WhatsApp, for instance, reported losses and trouble during the event. On Twitter, users celebrated the outage of the company’s ecosystem as an excuse to take longer to reply to messages or to escape the endless timeline scrolling that such an addictive mechanism demands. Among the commentators of the TV channel GloboNews, Demétrio Magnoli was one of the few who found the incident amusing: since he doesn’t use WhatsApp the way his colleagues do, he wasn’t affected by the outage. But isn’t it a privilege to be off social media these days?
We are already tired of hearing how harmful social media can be. But for entrepreneurs and freelancers who use social media to build an online presence and attract clients, deleting your profiles is not exactly helpful. Sure, almost everything can be outsourced these days, and there are indeed companies and professionals that offer management and content-creation services for social media profiles, allowing the entrepreneur to dedicate more time to their own business… provided there is enough budget to pay for it. There is also the option of keeping your professional profile on a different phone, so the habit doesn’t turn into an addiction.
Still, it is easy to go to extremes: either using no technology at all or falling prey to it. Back in 2018, when the Cambridge Analytica scandal became public, entrepreneurs like Elon Musk joined the #deleteFacebook campaign. However, as the culture and media historian Siva Vaidhyanathan argued at the time, deactivating Facebook could mean quite the opposite of protesting and acting against the company’s mechanisms:
“But even if tens of thousands of Americans quit Facebook tomorrow, the company would barely feel it. Facebook has more than 2.1 billion users worldwide. Its growth has plateaued in the United States, but the service is gaining millions of new users outside North America every week. Like most global companies, Facebook focuses its attention on markets like India, Egypt, Indonesia, the Philippines, Brazil and Mexico. At current rates of growth, it could reach three billion users by 2020.”
The debate now is less about the end of the company and more about the regulation of the platform, especially by public institutions. After all, as long as it remains possible and legal to use the current algorithm that generates “astronomical profits” for the company, as Haugen argues, Facebook will simply carry on. For Vaidhyanathan, the long-term agenda should be “to bolster institutions that foster democratic deliberation and the rational pursuit of knowledge. These include scientific organizations, universities, libraries, museums, newspapers and civic organizations. They have all been enfeebled over recent years as our resources and attention have shifted to the tiny addictive devices in our hands.”
To that we can add the still persistent belief that, in the absence of trustworthy public institutions, it is companies that are responsible for acting on behalf of people and their welfare. During the pandemic in Brazil, big companies even built new factories to produce masks, hand sanitizer and oxygen while the government only made things worse, as we now see in the investigations of the Covid CPI (the Brazilian Senate’s parliamentary inquiry). And this is where the trouble lies, as stressed by Soledad Barruti and João Peres in an article for the website Bocado, which addresses the way brands have been influencing people and becoming ever more “charismatic” to us all:
“At the beginning of this romance, during the first half of the last century, brands were already synonymous with modernity, with the entrance into a brilliant future whose passport was consumerism. They simply needed to exist and be on the shelves to be desired. Soon this “natural” tone was absorbed into their strategy: some brands, with the help of marketing specialists, joined the triumphal spirit of the wars and became official sponsors of the army in the United States. Coca-Cola made sure that every soldier received its products in Vietnam, and Hershey’s sold chocolates to the marines. (…)
From the 1980s on, brands started to occupy spaces neglected by governments. They built shelters, schools and water wells, and, sure enough, they also donated some of their leftover food to people in poverty who couldn’t afford it. It is never about solving the structural problems; after all, unlike politicians, nobody would ask that much of them. Brands concentrate on patches and on advertising, so it looks like they are doing something much more meaningful. Advertisements on the streets, videos, photos that provoke emotional reactions, commitment and gratitude. That is how they seal this pact with society, people and consumers.”
Facebook, however, is a company that sells services, so there is a difference in this case. This is why the historian Siva Vaidhyanathan suggests that, rather than deleting our social media profiles, another strategy is to use these same platforms to think and act collectively, reporting and exposing their mechanisms through the gaps in the algorithm, just as happened in 2020, when the influencer Sá Ollebar exposed racism in Instagram’s algorithm.
Even in the face of such concrete examples, it is never easy to quit habits that are fed in increasingly sophisticated ways. Frances Haugen mentioned in her 60 Minutes interview that teenagers who use Instagram increasingly report hating their own bodies. The same goes for the influencers exposed by pages such as Dica Anti-Coach or Física e Afins, who, even so, still have millions of followers and make money from their empty courses and mentorships.
Strategies such as #unfollowterapeutico (therapeutic unfollow) are an attempt to help people break this vicious circle of social media, but sometimes it only takes a click on a post, or a “relapse” into consuming that kind of triggering content, for the algorithm to start recommending similar material again; test that on Instagram, for example, and you will see your Reels being molded by it. On social media, it is much easier to fall prey to filter bubbles in which people and their content build a parallel reality where having a submissive wife or a certain face shape means you are superior.
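To make that feedback loop a little more concrete, here is a deliberately simplified, hypothetical sketch in Python. It is nothing like Instagram’s real recommender, whose details are not public; the topics, weights and function names are all invented. The point is only to show how a single click can tilt what gets recommended next.

```python
# Illustrative toy sketch only: NOT Instagram's or Facebook's actual system,
# just a minimal engagement-feedback loop like the one described above.
from collections import Counter

# Hypothetical catalogue of posts, each tagged with a topic.
POSTS = [
    {"id": 1, "topic": "fitness"},
    {"id": 2, "topic": "diet"},
    {"id": 3, "topic": "travel"},
    {"id": 4, "topic": "diet"},
    {"id": 5, "topic": "music"},
]

def recommend(interest: Counter, k: int = 3) -> list[dict]:
    """Rank posts by how often the user has engaged with their topic."""
    return sorted(POSTS, key=lambda p: interest[p["topic"]], reverse=True)[:k]

def click(interest: Counter, post: dict) -> None:
    """A single click (or 'relapse') reinforces that topic's weight."""
    interest[post["topic"]] += 1

interest = Counter()          # starts neutral: every topic weighs zero
click(interest, POSTS[1])     # one click on a "diet" post...
print([p["topic"] for p in recommend(interest)])
# ...and "diet" content now dominates the next batch of recommendations.
```

In this toy version there is nothing pushing the weights back down, which is exactly why one “relapse” is enough for similar content to keep coming back.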
In the case of YouTube, the Google-owned platform has started to demonetize denialist content. In other cases, such as Twitter’s, blocking profiles or keywords may make it easier to avoid such content, but, as suggested by Sendhil Mullainathan, professor of computation and behavioral science at the University of Chicago, biased algorithms are easier to fix than biased people.
Since those who write the algorithms are human beings, the problem is much more cultural than technological, and therefore much more complex than we may think. It would be great if the solution were simply to build the most powerful and accurate artificial intelligence, capable of flagging and removing false information or hate speech, but the core of the problem lies in people’s very tendency and desire to produce and consume this kind of content, whatever the reason behind it.
Did you like the post? How about you Buy Me a Coffee? :)