Social media use among young adults should be regulated

The role of social media and online speech in civil society has come under heightened scrutiny. The deadly riot at the U.S. Capitol on January 6 is just one example of violence that national security experts say was fomented in large part on social media platforms. Elsewhere in the world, social media has contributed to religious and ethnic violence, including against Muslims in India and the Rohingya in Myanmar. Harmful misinformation, including about the COVID-19 pandemic, has also spread with ease and speed.


Platforms such as Facebook and Twitter have become the de facto public squares in many countries, and governments are adopting varying approaches to regulating them.

How do the major platforms regulate content?


The most popular platforms, most of which are run by U.S. companies, have similar content moderation policies. They bar posts that glorify or encourage violence; posts that are sexually explicit; and posts that contain hate speech, which they define as attacking a person for their race, gender, or sexual orientation, among other characteristics. The major platforms have also taken steps to limit disinformation, including by fact-checking posts, labeling the accounts of state-run media, and banning political ads.  

These platforms generally comply with the laws of the countries where they operate, which can restrict speech even further. In addition to using moderation software powered by artificial intelligence, Facebook, Twitter, and YouTube (which is owned by Google) employ thousands of people to screen posts for violations. 

What are some of the controversies? 

Critics say these platforms do not enforce their rules consistently. For example, both Twitter and Facebook have allowed accounts they say serve the public interest—most notably those of politicians such as former U.S. President Donald J. Trump—to post abusive or misleading content that might have been removed if it were posted by an ordinary user. 


Facebook employs some 15,000 moderators to screen content on its services, according to the NYU Stern Center for Business and Human Rights.

In Trump’s case, the companies instead appended fact checks to some of his posts, which some experts who track social media and misinformation criticized as insufficient. The two platforms eventually banned Trump following the U.S. Capitol riots, but both have faced criticism for not taking similar actions abroad. YouTube has also come under fire for allegedly treating its star users, who bring in more revenue, more leniently. It has likewise been criticized for being slow to remove videos containing false claims of U.S. election fraud and other misinformation.

Critics say the companies are not incentivized to regulate hateful or violent speech because their ad-driven business models rely on keeping users engaged. At the same time, politicians in some countries, including the United States, argue that social media companies have gone too far with moderation, at the expense of free speech. 



For their part, social media companies have argued that their policies are difficult to enforce. It can be tricky at times to distinguish hate speech from satire or commentary, for example. Some companies say the onus should not be on them to write the rules for the internet and have called for government regulation.

How are governments around the world approaching the issue?

In the United States, social media platforms have largely been left to make and enforce their own policies, though Washington is weighing new laws and regulations. Other countries have implemented or proposed legislation to force social media companies to do more to police online discourse. Authoritarian governments generally have more restrictive censorship regimes, but even some Western democracies, such as Australia and Germany, have taken tougher approaches to online speech.

Should social media be restricted to a certain age?

Given the various dangers and effects of social media, parents should restrict their children from using social media until they are at least 13 years old. At that age, parents can introduce the apps gradually, making use easier to monitor.

Why does media need to be regulated?

Global regulation of new media technologies aims to ensure cultural diversity in media content and to provide free public access to a range of opinions and ideas without censorship.

What are the dangers of social media for adults?

Multiple studies have found a strong link between heavy social media use and an increased risk of depression, anxiety, loneliness, self-harm, and even suicidal thoughts. Social media may also promote negative experiences, such as feelings of inadequacy about your life or appearance.

What are the positive and negative effects of social media on youth?

Social media use may expose teens to peer pressure, cyberbullying, and increased mental health risks. But social media can also connect isolated teens and help them find supportive networks. Parents can set limits and communicate openly with teens about healthy social media use.