{"id":39680,"date":"2016-03-03T15:30:14","date_gmt":"2016-03-03T23:30:14","guid":{"rendered":"http:\/\/www.bruceclay.com\/blog\/?p=39680"},"modified":"2019-07-23T12:49:35","modified_gmt":"2019-07-23T19:49:35","slug":"how-google-works-a-google-ranking-engineers-story-smx","status":"publish","type":"post","link":"https:\/\/www.bruceclay.com\/blog\/how-google-works-a-google-ranking-engineers-story-smx\/","title":{"rendered":"How Google Works: A Google Ranking Engineer\u2019s Story #SMX"},"content":{"rendered":"
<p>Google Software Engineer Paul Haahr has been at Google for more than 14 years. For two of them, he shared an office with Matt Cutts. He’s taking the SMX West 2016 stage to share how Google works from a Google engineer’s perspective \u2013 or, at least, to share as much as he can in 30 minutes. Afterward, Webmaster Trends Analyst Gary Illyes will join him onstage and the two will field questions from the SMX audience, with Search Engine Land Editor Danny Sullivan moderating (jump to the Q&A portion!).<\/p>\n<p>Haahr opens by telling us what Google engineers do. Their job includes:<\/p>\n<p><strong>Two parts of a search engine:<\/strong><\/p>\n<p><strong>Before the Query<\/strong><\/p>\n<p><strong>The Index<\/strong><\/p>\n<p><strong>Query Processing<\/strong><\/p>\n<p><strong>Scoring Signals<\/strong><\/p>\n<p>A signal is:<\/p>\n<p><strong>Metrics<\/strong><\/p>\n<p><em>“If you cannot measure it, you cannot improve it” \u2013 Lord Kelvin<\/em><\/p>\n<p><strong>Google measures itself with live experiments:<\/strong><\/p>\n<p>At one time, Google tested 41 different shades of blue to see which was best.<\/p>\n<p><strong>Google also does human rater experiments:<\/strong><\/p>\n<p><strong>Google judges pages on two main factors:<\/strong><\/p>\n<p><strong>Needs Met grades:<\/strong><\/p>\n<p><strong>Page quality concepts:<\/strong><\/p>\n<p><strong>Google engineer development process:<\/strong><\/p>\n<p>There are two kinds of problems:<\/p>\n<p>Here’s an example of a bad rating. Someone searches for [Texas farm fertilizer] and the search result provides a map to the manufacturer’s headquarters \u2013 it’s very unlikely that that’s what the searcher wants. Live experiments can surface a mismatch like this; but if a rater sees that map and rates it as “Highly Meets” the searcher’s needs, then the failure happens at the point of rating.<\/p>\n<p>Or, what if the metrics are missing? From 2009 to 2011 there were lots of complaints about low-quality content, yet relevance metrics kept going up: content farms were producing content that was relevant, just not high quality. <strong>Conclusion:<\/strong> Google wasn’t measuring the metrics it needed to be. 
Thus, the quality metric was developed <em>apart<\/em> from relevance.<\/p>\n<p><strong>Here’s Paul Haahr’s slide deck, which is worth a look:<\/strong><\/p>
<p>Query understanding includes checks such as: does the query name any known entities? And the index itself is divided into shards.<\/p>
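The sharded index described above lends itself to a small sketch. What follows is my own minimal illustration, not Google's actual design: an inverted index mapping each term to the set of documents containing it, with documents spread across shards and a query fanned out to every shard and the results merged.

```python
from collections import defaultdict

# Hypothetical sketch (not Google's code): documents are routed to
# shards, and each shard keeps an inverted index mapping a term to
# the set of local documents that contain it.
class Shard:
    def __init__(self):
        self.docs = {}                 # doc_id -> text
        self.index = defaultdict(set)  # term -> {doc_id, ...}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def search(self, query):
        # Return local doc_ids containing every query term.
        results = set(self.docs)
        for term in query.lower().split():
            results &= self.index[term]
        return results

class MiniSearchEngine:
    def __init__(self, num_shards=3):
        self.shards = [Shard() for _ in range(num_shards)]

    def add(self, doc_id, text):
        # Route each document to one shard (here: by hash of its id).
        self.shards[hash(doc_id) % len(self.shards)].add(doc_id, text)

    def search(self, query):
        # Fan the query out to every shard and merge the hits.
        hits = set()
        for shard in self.shards:
            hits |= shard.search(query)
        return hits
```

A real engine would rank the merged hits with scoring signals rather than return an unordered set; this only shows the retrieval plumbing.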
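The live experiments Haahr describes are, at their core, A/B tests: a deterministic slice of users sees a variant (such as one of the 41 shades of blue) and the metrics for each arm are compared. A toy sketch of deterministic traffic bucketing, assuming nothing about Google's actual implementation:

```python
import hashlib

# Toy illustration (my own sketch, not Google's implementation):
# each user is hashed into one of 1,000 buckets, and a fixed
# fraction of buckets receives the experimental treatment.
NUM_BUCKETS = 1000

def bucket(user_id: str, experiment: str) -> int:
    # Hash user and experiment together so that different
    # experiments get independent slices of traffic.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def in_experiment(user_id: str, experiment: str, traffic_pct: float) -> bool:
    # Users whose bucket falls below the cutoff see the variant.
    return bucket(user_id, experiment) < NUM_BUCKETS * traffic_pct
```

Because the assignment is a pure function of the user and experiment names, a given user always lands in the same arm for the lifetime of the experiment, which keeps the measured metrics consistent.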
\n<p><em><strong>Update 7\/19:<\/strong> Presentation has now been marked private by the author.<\/em><\/p>