Google's EEAT: what is it?

Known to SEO professionals since 2018, the EEAT criteria have reshaped Google's algorithm, and with it organic search (SEO) and how content is ranked. What are the EEAT criteria? How do they manifest on a website? And how can you use them wisely to improve your website's organic rankings?

What are the EEAT criteria?

The EEAT criteria are used by Google to evaluate content on the web, with particular attention to so-called YMYL (Your Money Your Life) content: pages that can directly impact users' lives, such as money management or health advice, and which Google defines as bearing "on the future happiness, health, financial stability or safety of users".

EEAT is itself an acronym, standing for Experience, Expertise, Authoritativeness and Trustworthiness.

Faced with the many websites spreading false information, whether deliberately or not, Google decided to tackle the problem head-on and introduced the EEAT criteria. They are intended to limit the spread of fake news and, more generally, to keep users from being misled. They evaluate:


Experience

With Google's latest update, Experience was added to the original three criteria to enrich the relevance of results. Depending on the search, a certain degree of first-hand experience is now expected. For example, if you are looking for weight-loss tips, you expect to read content created by a nutritionist, with real experience and knowledge of the subject.


Expertise

For Google, creating a site dedicated to a topic implies that you are an expert in that topic, which is logical enough reasoning. The site's content must therefore demonstrate its expertise and specialization to the search engine. This means all information must be accurate, and the site must not take part in any conflict of interest.

To demonstrate its expertise, the site must also regularly publish news and articles on the subject in which it claims to specialize. It must likewise have a substantial track record, that is, enough published content for Google to judge its expertise. Finally, it is easier to be seen as an expert by Google when you build visibility through social networks, content marketing and PR (off-page SEO techniques in general).


Authoritativeness

Well known to web-marketing teams, this criterion concerns the reputation and experience of a website. Closely related to expertise, it is measured in particular by the site's relationships with other sites, via the number of referring domains (the famous backlinks). A website also asserts its authority through highly informative content designed to be shared, such as white papers or infographics. Finally, a site that participates regularly in specialized forums is also valued by Google.


Trustworthiness

Last of the EEAT criteria, trustworthiness rests mainly on the site's security: GDPR compliance, protection of personal and banking data, reassurance elements, and verified customer reviews.

How does Google identify content that meets the EEAT criteria?

Google identifies this content in two different ways.

1. Crawlers

First come the indexing robots, which crawl pages. They visit vast numbers of web pages, analyze them, and feed the Google algorithm, which then produces a ranking of websites for a given query.

These robots rely on the usual SEO criteria (on-page and off-page), and mainly on the following:

  • an optimized main keyword (one that fully matches the content of the article),
  • striking titles (but not clickbait), preferably phrased as questions,
  • a hierarchy of Hn headings and subheadings (in a logical order),
  • relevant content that makes reasonable use of LSI keywords (the main keyword, related keywords and the corresponding long-tail queries),
  • appealing visuals (not too heavy) with correctly optimized alt tags.
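As an illustration, these on-page criteria might translate into markup along these lines (a hypothetical sketch; the title, headings, and image file are invented examples):

```html
<!-- Hypothetical article skeleton illustrating the on-page criteria above -->
<head>
  <!-- Title matching the main keyword, phrased as a question -->
  <title>How do Google's EEAT criteria work?</title>
  <meta name="description" content="What the EEAT criteria are and how they affect SEO.">
</head>
<body>
  <h1>How do Google's EEAT criteria work?</h1>  <!-- single H1, aligned with the query -->
  <h2>What are the EEAT criteria?</h2>          <!-- Hn headings in logical order -->
  <h3>Experience</h3>
  <!-- Lightweight visual with an optimized alt attribute -->
  <img src="eeat-diagram.webp" alt="Diagram of the four EEAT criteria"
       width="800" height="450" loading="lazy">
</body>
```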

Once these criteria are met, crawlers focus on the relevance and completeness of the content. It must imperatively offer added value, and ideally high added value.

How are simple robots able to detect the added value of content?

Google's crawlers work thanks to machine learning, a form of artificial intelligence applied directly to computer algorithms and robots. One of the great advances of the 21st century, machine learning learns from human behavior: trained on historical data, it builds its own models of processes originally designed by humans and automates them accordingly. Machine-learning systems can also predict future events by drawing on the experience observed so far. Google is one of the pioneers of machine learning and one of the largest developers of such systems in the world.

How to produce content with high added value when there is already a lot of it on the net?

Producing high-added-value content means offering differentiating content with quality information: a video tutorial for a cooking recipe, an interview for a news item, a podcast for regular briefs, an infographic to illustrate data, a slideshow to showcase an image gallery, and so on. The information must be verified, and sometimes re-synthesized, to offer a fluid, pleasant reading experience accessible to all, illustrated where helpful and backed by sourced data where appropriate. Content has added value when it stands out by its relevance (for information) or by its presentation (for media content). It is always possible to improve on content that already exists, and thus offer the user a higher-quality experience.

2. Quality Raters

But there are also Quality Raters. When we think of Google, we automatically think of the crawler robots responsible for analyzing and indexing pages, but humans also intervene in the equation. Quality Raters are humans, hired by Google, who review web pages and rate the quality of their content. Rather like inspectors, Quality Raters monitor websites and evaluate content against a grid provided beforehand: the 172-page Quality Raters Guidelines. This grid contains all the instructions Google gives to allow humans to evaluate content according to the logic of the algorithm. These are the people behind the famous manual actions.

[Screenshot: manual action error report in Google Search Console]

Google first asked Quality Raters to certify legitimate, relevant content against the early black-hat techniques that the algorithm was then unable to detect. Today, Quality Raters also enrich Google's machine learning: if the robots are gradually learning to distinguish content written purely for SEO from content written for humans (with real added value), it is thanks to the Quality Raters, who arbitrate between these types of content and highlight those that are most useful to human readers.

Who are the Quality Raters?

There are approximately 10,000 Quality Raters scattered around the world. They are reportedly employed not by Google directly, but by intermediary companies. Experts in the chosen theme and fluent in the language of the sites they evaluate, Quality Raters must also master the rules of SEO and know perfectly how search engines work, knowledge acquired through their impressively long Quality Raters Guidelines.

What about EEAT criteria on YMYL pages?

YMYL pages are among the most sensitive pages found on search engines. By their nature, they have implications for users' finances and health, which are essential aspects of everyone's life. This is why Google pays particular attention to them: it cannot afford to promote a page giving bad advice on taking medication, managing a bank account, or making a financial investment.

The specific areas affecting the YMYL pages are as follows:

  • Finance: taxes, banking, insurance, stock-market and other investments,
  • E-commerce: everything related to online shopping,
  • Government and rights: citizenship issues, official documents, marriage, divorce, birth, voting, etc.,
  • Health: any medical advice, including nutrition,
  • Community-related subjects: religion, sexual identity, etc.

This makes the importance of the EEAT criteria on YMYL pages easier to understand: these pages are a real information resource for many users, and some even serve transactional purposes (e-commerce, for example).

5 tips for meeting the EEAT criteria on your YMYL pages

1. Reveal the purpose of the page from the start of the content

What do you want to say, and what conclusion do you want to reach? Roundabout ways of saying or obtaining something are strictly prohibited on YMYL pages. In plain terms, deceiving the user in any way whatsoever is forbidden. A concrete example is a website posing as an official institution and charging money to facilitate administrative procedures for obtaining official documents. Another is a website providing information on abortion (voluntary termination of pregnancy) while defending a position contrary to French legislation.

2. Respond clearly to questions posed by users

On the one hand, robots favor the ranking of sites that offer a direct answer to a question (without detours or overly long explanations), which helps them appear among the People Also Ask results. On the other hand, users themselves are looking for an informative, concise answer to a given problem, which encourages you to create an airy, clear, straight-to-the-point page.


Example of People Also Ask

3. Link only to quality external sites

When presenting information, it is customary to refer to another website: an institution, an online directory, a networking platform, etc. Each link you propose must lead to a quality site that also respects the EEAT criteria and offers informational content with high added value. Placing an external link in content is like opening a door to another place, or dropping the user off somewhere in the street: that place must be safe and offer a worthwhile, complementary experience.
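In HTML terms, a quality outbound link might look like the sketch below (the URLs and anchor texts are invented examples; the rel attributes let you qualify links you do not fully endorse):

```html
<!-- Descriptive anchor text pointing to an authoritative, HTTPS source -->
<p>According to this
  <a href="https://example.org/nutrition-study">nutrition study</a>,
  verified data supports the claim.</p>

<!-- Paid or user-generated links should declare the relationship -->
<a href="https://example.com/partner" rel="sponsored">partner offer</a>
<a href="https://example.com/forum-post" rel="ugc nofollow">user comment link</a>
```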

4. Obtain backlinks from sites in your field

Similarly, to confirm its authority on a given subject, a site customarily receives backlinks, that is, referral links from other sites. A backlink has the value of a recommendation from one site to another. A backlink should come from a website dealing with the same theme, or one adjacent to those the site covers. (It is not relevant for a site specializing in baby food to obtain a backlink from a DIY site.)

5. Respect the main principles of SEO when creating your content

Finally, your content should of course respect the main principles of SEO, namely: alternating headings and subheadings, using keywords, building an optimal internal linking structure, and so on. All of this falls under on-page SEO.
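A minimal sketch of internal linking, with invented URLs and titles, might look like this: each article points to related articles on the same theme, helping both users and crawlers navigate the site.

```html
<!-- Internal links weaving related articles together on the same theme -->
<article>
  <h2>What are the EEAT criteria?</h2>
  <p>Trustworthiness also depends on
    <a href="/blog/gdpr-compliance">GDPR compliance</a>
    and on earning
    <a href="/blog/quality-backlinks">quality backlinks</a>
    from sites in your field.</p>
</article>
```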