
How does YouTube address misinformation?

With billions of people visiting us every day, whether they’re looking to be informed, to catch up on the latest news, or to learn more about the topics they care about, we have a responsibility to connect people to high-quality content. So the most important thing we can do is increase the good and decrease the bad. That’s why we address misinformation on our platform based on our “4 Rs” principles: we remove content that violates our policies, reduce recommendations of borderline content, raise up authoritative sources for news and information, and reward trusted creators. Learn more about how we treat misinformation on YouTube.

Fighting misinformation

What type of misinformation does YouTube remove?

As detailed in our Community Guidelines, YouTube does not allow misleading or deceptive content that poses a serious risk of egregious harm. When it comes to misinformation, we need a clear set of facts to base our policies on. For example, for COVID-19 medical misinformation policies, we rely on expert consensus from both international health organizations and local health authorities.

Our policies are developed in partnership with a wide range of external experts as well as YouTube Creators. We enforce our policies consistently using a combination of content reviewers and machine learning to remove content that violates our policies as quickly as possible.

What types of misinformation are not allowed on YouTube?

Several policies in our Community Guidelines are directly applicable to misinformation, for example:

  • Misinformation policies

These misinformation policies apply to certain types of misinformation that can cause egregious real-world harm, such as promoting harmful remedies or treatments, certain types of technically manipulated content, or content interfering with democratic processes such as census participation.

  • Elections misinformation policies

Our elections misinformation policies do not allow misleading or deceptive content with a serious risk of egregious real-world harm, such as content containing hacked information that may interfere with democratic processes, false claims that could materially discourage voting, or false claims related to candidate eligibility.

  • COVID-19 medical misinformation policy

The COVID-19 medical misinformation policy doesn't allow content that spreads medical misinformation contradicting local and global health authorities’ medical information about COVID-19. For example, we don’t allow content that denies the existence of COVID-19 or promotes unapproved treatment or prevention methods.

  • Vaccine misinformation policy

The vaccine misinformation policy doesn't allow content that poses a serious risk of egregious harm by spreading medical misinformation about currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and by the World Health Organization (WHO). This is limited to content that contradicts local health authorities’ or the WHO’s guidance on vaccine safety, efficacy, and ingredients.

How does YouTube limit the spread of borderline content and potentially harmful misinformation?

Sometimes, we see content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines. We call this borderline content. Globally, consumption of borderline content or potentially harmful misinformation that comes from our recommendations is significantly below 1% of all consumption of content from recommendations. That said, even a fraction of a percent is too much. So, we do not proactively recommend such content on YouTube, thereby limiting its spread.

We have careful systems in place to help us determine what is borderline content and potentially harmful misinformation across the wide variety of videos on YouTube. As part of this, we ask external evaluators and experts to provide critical input on the quality of a video. These evaluators use public guidelines to guide their work. Based on the consensus input from the evaluators, we use well-tested machine learning systems to build models. These models help review hundreds of thousands of hours of videos every day in order to find and limit the spread of borderline content and potentially harmful misinformation. And over time, the accuracy of these systems will continue to improve.
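The workflow described above — evaluator ratings aggregated into a consensus signal that is then used to keep borderline videos out of recommendations — can be sketched as a toy pipeline. Everything here is an illustrative assumption (the ratings, the threshold, the function names), not YouTube's actual system:

```python
from statistics import mean

# Hypothetical evaluator ratings per video (1.0 = clearly borderline,
# 0.0 = clearly fine). In practice these would come from trained raters
# following public guidelines, and would feed a learned model rather
# than a simple average.
evaluator_ratings = {
    "video_a": [0.9, 0.8, 1.0],   # strong consensus: borderline
    "video_b": [0.1, 0.0, 0.2],   # strong consensus: fine
    "video_c": [0.6, 0.4, 0.5],   # mixed signal
}

BORDERLINE_THRESHOLD = 0.7  # invented cutoff for this sketch

def consensus_score(ratings):
    """Aggregate individual evaluator ratings into one consensus score."""
    return mean(ratings)

def is_borderline(video_id):
    return consensus_score(evaluator_ratings[video_id]) >= BORDERLINE_THRESHOLD

def filter_recommendations(candidates):
    """Drop borderline videos from a candidate recommendation list."""
    return [v for v in candidates if not is_borderline(v)]

print(filter_recommendations(["video_a", "video_b", "video_c"]))
# → ['video_b', 'video_c']
```

The key design point the text emphasizes is that borderline content is not removed, only excluded from proactive recommendation — which is why the sketch filters a candidate list rather than deleting anything.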

How does YouTube raise authoritative content?

For topics such as news, politics, and medical and scientific information, the quality of information is key. That’s why we have continued to invest in efforts to connect viewers with quality information, and have introduced a suite of features to elevate quality information from authoritative sources and provide context to help you make informed decisions.

How does YouTube elevate quality information for viewers?

For content where accuracy is key, including news, politics, medical, and scientific information, we use machine learning systems that prioritize information from authoritative sources in search results and recommendations.
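One way to picture the prioritization described above is a ranking score that weights source authority more heavily for sensitive topics. This is a rough sketch under invented assumptions (the weights, fields, and function are all hypothetical, not YouTube's ranking system):

```python
# Illustrative only: for news/medical/scientific queries, authority
# dominates relevance; for other queries, relevance alone drives order.
def rank_results(results, topic_is_sensitive):
    authority_weight = 0.7 if topic_is_sensitive else 0.0  # invented weight
    def score(r):
        return ((1 - authority_weight) * r["relevance"]
                + authority_weight * r["authority"])
    return sorted(results, key=score, reverse=True)

results = [
    {"id": "viral_clip", "relevance": 0.95, "authority": 0.2},
    {"id": "news_org",   "relevance": 0.80, "authority": 0.9},
]
print([r["id"] for r in rank_results(results, topic_is_sensitive=True)])
# → ['news_org', 'viral_clip']
```

With the sensitive-topic weighting, the highly authoritative source outranks the merely popular clip; with `topic_is_sensitive=False`, relevance alone would put the clip first.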

To help you stay connected with the latest news, we highlight authoritative sources in news shelves that appear on the YouTube homepage during breaking news moments, as well as above YouTube search results to show top news when you are looking for news-related topics. And to make it easier for you to find content from authoritative health sources, you may see a health content shelf when you search for certain health topics, such as diabetes or breast cancer.


News and health content shelves

How does YouTube determine what is an authoritative source?

We use a number of signals to determine authoritativeness. External raters and experts are trained using public guidelines to provide critical input and guidance on the authoritativeness of videos.

Additionally, we use inputs from Google Search and Google News such as the relevance and freshness of the content, as well as the expertise of the source, to determine the content you see in our officially-labeled news surfaces.

And to identify the authoritative health sources you see across our health product features, we use principles and definitions developed by an expert panel convened by the National Academy of Medicine (NAM). These principles include that sources should be science-based, objective, transparent, and accountable.

How does YouTube provide more context to viewers to help them evaluate information?

We highlight text-based information from authoritative third-party sources using information panels. As you navigate YouTube, you might see a variety of different information panels providing additional context, each of which is designed to help you make your own decisions about the content you find.

For example, in developing news situations, when high-quality video may not be immediately available, we display links to text-based news articles from authoritative sources in YouTube search results.

Developing news information panel


We display information panels above certain search results to highlight relevant fact checks from third-party fact-checking experts.

Fact check information panel


For well-established historical, scientific, and health topics that are often subject to misinformation, such as “Apollo 11” or “COVID-19 vaccine”, you may see information panels alongside related search results and videos linking to independent third-party sources, including the World Health Organization and locally relevant health officials.

Topical information panel

Since knowledge of funding sources can provide context when assessing an organization's background and help you become a more informed viewer, we also show government or public funding for news publishers via information panels alongside their videos.

Publisher funding information panel


Information panels alongside health videos provide health source context and can help you better evaluate if a source is an accredited organization or government health source.

Information panel that provides health source context


How does YouTube encourage trustworthy creators?

YouTube’s unique business model only works when our community believes that we are living up to our responsibility as a business. Not only does controversial content not perform well on YouTube, it also erodes trust with viewers, advertisers, and trusted creators themselves.

All channels on YouTube must comply with our Community Guidelines. We set an even higher bar for creators to be eligible to make money on our platform via the YouTube Partner Program (YPP). In order to monetize, channels must also comply with the YouTube channel monetization policies, which include our Advertiser-friendly content guidelines; these do not allow ads on content promoting or advocating harmful health or medical claims, or on content advocating for groups that promote harmful misinformation. Violating our YouTube channel monetization policies may result in monetization being suspended. Creators can re-apply to join YPP after a certain time period.

Putting users in control

While YouTube addresses misinformation on our platform with policies and products based on the "4 Rs" principles, we also empower the YouTube community by giving users controls to flag misinformation and by investing in media literacy efforts.

How can the broader community help flag misinformation on YouTube?

YouTube removes content that violates our Community Guidelines; however, creators and viewers may still come across content that might need to be deleted or blocked. Anyone who is signed in can use our flagging features to submit content, such as a video, comment, or playlist, for review if they think it is inappropriate and in violation of our Community Guidelines. We also have tools and filters that allow creators to review or remove comments that they find offensive to themselves or their community.

What are YouTube and Google doing to help people build media literacy skills?

While YouTube tackles misinformation on the platform by applying the “4 Rs” principles, we also want to support users in thinking critically about the content they see on YouTube and across the online world so that they can make their own informed decisions.

We do this in three ways: helping users build media literacy skills; enabling the work of organizations focused on media literacy initiatives; and investing in thought leadership to understand the broader context of misinformation.

Helping users

YouTube launched a media literacy program to help adults and kids better assess the accuracy of information so that they can confidently explore YouTube, and beyond. The program features practical media literacy tips for adults as well as kids to help them spot misleading information.

This program is live in select countries and we are working to expand to more countries.

As parents play an important role in helping kids learn the rules of the road, YouTube has also developed a family guide in partnership with National PTA and Parent Zone, covering media literacy tips and tools for parents to share with kids.

These efforts build on Google’s continued commitment to supporting digital media literacy. In 2017, Google partnered with online safety and media literacy experts to create the “Be Internet Awesome” program to help educators and parents teach kids the fundamentals of digital safety and citizenship. As part of that curriculum, Google also launched a Media Literacy resource for teachers to help kids understand persuasion and credibility in the content they see online.

Enabling organizations

In 2021, Google contributed €25 million to help launch the European Media and Information Fund. The five-year commitment will support the work of the European University Institute, the Calouste Gulbenkian Foundation, and the European Digital Media Observatory to fund organizations seeking to help adults and young people strengthen their media literacy skills.

This five-year commitment is a continuation of Google’s history of supporting and scaling the critical work of organizations focused on media literacy. In 2018, Google.org invested in supporting MediaWise, an initiative designed to help millions of teens in the U.S. discern fact from fiction online. MediaWise is composed of industry leaders: the Poynter Institute, Stanford University, the Local Media Association, and the National Association for Media Literacy Education.

Investing in thought leadership

As the nature of misinformation rapidly evolves, it is critical that people understand the broader context of misinformation on the internet. Jigsaw, a unit within Google, has developed research, technology, and thought leadership in collaboration with academics and journalists to explore how misinformation campaigns work and spread in today’s open societies.