Google changes how Search works to deal with fake news
Fake news is a serious and perennially topical problem. Its reach has grown to the point where fake stories have even been accused of being a deciding factor in the last US election. The problem has pushed the big technology companies to form a common front. However, Google knows that this is not enough.
That is why the company has announced changes to Search to tackle fake news and problematic content. The big G knows that the quality of its search results has been in question since last November, and the fact that hate speech sometimes appears in authoritative positions does not help either. That is what it wants to change.
To do so, it has opened three fronts for improving search: changes to Search rankings, more tools for users to report offensive content, and greater transparency with users.
Changes to Search rankings
First, there will be changes to Search rankings. Although algorithms help the search engine identify reliable sources, a percentage of queries (0.25%, according to Google) return deceptive or offensive content.
To make reliable sources easier to identify and keep hate speech out of the top results, Google has announced several updates:
- Changes to the search quality evaluation guidelines. In addition to its algorithms, Google has a team of real people who evaluate the quality of search results. This change targets those evaluators: the guidelines now include more detailed examples of low-quality pages, so the team can better identify fake news and offensive content.
- Changes to ranking signals. Google says it combines "hundreds of signals" to determine the results shown for a given query. This change adjusts those signals to "show more reliable pages" and to keep articles containing clear hate speech out of positions of authority.
Google wants to receive your feedback more directly
Another important change involves the Autocomplete feature. As you know, when we start typing something into the search engine, Google suggests a few completions beneath the search field. The feature was originally designed to save time.
These suggestions are derived from the most popular searches that match the first letters the user has typed into the search box. Sometimes they reflect extremist ideas (such as Holocaust denial), or they can divert the user from their original search.
Google has struggled with this for years, and it has finally begun to address it. From now on, a link will appear for reporting inappropriate predictions. Clicking it opens a form like the one shown above, where an inappropriate prediction can be reported under one of several categories.
Similar reports can also be filed for Featured Snippets. Google wants to use these two features to let users help improve its algorithms, and in this way it expects to receive feedback and corrections on search suggestions more quickly. It has also published new policies explaining why certain suggestions are removed.
More transparency to increase user confidence
As we said at the beginning of the article, recent months have brought a wave of criticism over the quality of Google's search results. The company itself acknowledges as much:
Over the last few months, we have faced a great deal of criticism over the Autocomplete feature making offensive or scandalous predictions. In response, we evaluated how to improve our content policies and updated them accordingly; they can now be consulted in the Help Center. Anyone can now learn more about how Autocomplete works and about our approach to content removal.
It’s not going to be perfect, but Google is going to try. As Danny Sullivan points out on SearchEngineLand, for most users much of this is more a public-relations problem than anything else. Putting it in perspective: Google processes some 6 billion searches per day, of which 0.25% (roughly 15 million) are problematic.
For Sullivan, Google should focus on making search as accurate as possible, not just on appeasing public opinion. A transparent way to report suggestions, together with the newly updated policies, can help. At the very least, users now have a way to tell Google what is wrong.