{"id":192828,"date":"2023-06-15T09:22:44","date_gmt":"2023-06-15T16:22:44","guid":{"rendered":"https:\/\/www.bruceclay.com\/?p=192828"},"modified":"2023-11-13T22:33:55","modified_gmt":"2023-11-14T06:33:55","slug":"complete-guide-basics-googles-e-e-a-t","status":"publish","type":"post","link":"https:\/\/www.bruceclay.com\/blog\/complete-guide-basics-googles-e-e-a-t\/","title":{"rendered":"The Complete Guide to the Basics of Google\u2019s E-E-A-T"},"content":{"rendered":"
In the world of Google Search, there are few opportunities to peek inside the inner workings. The Search Quality Rater Guidelines is one such opportunity.
In it, we get a better understanding of what Google considers a quality website. From there, we can piece together how that view might factor into Google's algorithms.
After the dust settles on algorithm updates, we can see much more clearly that Google's actions speak louder than its words. Many of you know that we study first and speak second, so while we are not the first to the party, what we say when we arrive is worth a listen.
In this article:

- How Does Google's Search Quality Rater Guidelines Work?
- E-E-A-T and Rankings
The concept of E-E-A-T, which stands for experience, expertise, authoritativeness, and trust, originated in Google's Search Quality Rater Guidelines (SQRG).
We first found out about search quality teams in 2004 (the people who evaluate the quality of the search results), and we learned more later when the internal SQRG document was leaked from Google.

In 2015, Google made the full version of the Search Quality Rater Guidelines available to the public. Since then, it has gone through several iterations, with the latest version dated December 2022. (This is a good summary of the big changes since the last iteration of the SQRG.)

The concept debuted in the SQRG in 2014 as E-A-T, giving us clues into what Google believes is quality. The added "E" for experience arrived in 2022.

E-E-A-T can apply to individual pages or to whole sites, and how important E-E-A-T is also depends on the type of topic. I'll touch more on that later.

How Does Google's Search Quality Rater Guidelines Work?

The SQRG allows Google to better understand whether the changes it is making to its Search algorithms are producing quality results.

Human evaluators (thousands of them) use the guide to evaluate the search results for certain queries and then report back what they find. This acts as a feedback loop that Google engineers use to make further tweaks to the algorithms.

Here are some snippets from Google explaining how search quality raters work.

In a help file, Google explains:

"We constantly experiment with ideas to improve the results you see. One of the ways we evaluate those experiments is by getting feedback from third-party Search Quality Raters. Quality Raters are spread out all over the world and are highly trained using our extensive guidelines. Their feedback helps us understand which changes make Search more useful.

Raters also help us categorize information to improve our systems. For example, we might ask what language a page is written in or what's important on a page.

We use responses from Raters to evaluate changes, but they don't directly impact how our search results are ranked."

Another explanation comes from Google's "how search works" page:

"We work with external Search Quality Raters to measure the quality of Search results on an ongoing basis. Raters assess how well content fulfills a search request and evaluate the quality of results based on the expertise, authoritativeness, and trustworthiness of the content. These ratings do not directly impact ranking, but they do help us benchmark the quality of our results and make sure these meet a high bar all around the world."

There is also a 2012 video of former Googler Matt Cutts (remember him?) discussing how raters work.

E-E-A-T and Rankings

E-E-A-T does not directly impact rankings the way an algorithm signal would. Instead, Google uses a variety of signals in its algorithms that align with the concept of E-E-A-T.

For example, I believe the "Panda" update was about expertise, the "Penguin" update about authority, the "Medic" update about trust, and the "Product Review" update about experience.

In my opinion, experience is just a different form of expertise: not relevant for all topics, but important for some.

Those who watch Google know how to read between the lines. When the Medic update hit, we saw both a blog post from Google and a tweet from Googler Danny Sullivan about the SQRG:
"Want to do better with a broad change? Have great content. Yeah, the same boring answer. But if you want a better idea of what we consider great content, read our raters guidelines. That's like almost 200 pages of things to consider: https://t.co/pO3AHxFVrV"

— Danny Sullivan (@dannysullivan) August 1, 2018

In that blog post, Google said:

"Another resource for advice on great content is to review our search quality rater guidelines. Raters are people who give us insights on if our algorithms seem to be providing good results, a way to help confirm our changes are working well.

It's important to understand that search raters have no control over how pages rank. Rater data is not used directly in our ranking algorithms. Rather, we use them as a restaurant might get feedback cards from diners. The feedback helps us know if our systems seem to be working.

If you understand how raters learn to assess good content, that might help you improve your own content. In turn, you might perhaps do better in Search.

In particular, raters are trained to understand if content has what we call strong E-A-T. That stands for Expertise, Authoritativeness and Trustworthiness. Reading the guidelines may help you assess how your content is doing from an E-A-T perspective and improvements to consider."

Back in 2019, in a tweet that appears to have since been deleted, Sullivan had this to say about how E-E-A-T factors into search:

"It's almost like we look for signals that align with expertise, authoritativeness and trustworthiness. We should give that an acronym like E-A-T and maybe suggest people aim for this. Oh wait, we did: https://t.co/1fs2oIS54L pic.twitter.com/xNL424dDdq"