
Google Panda is a filtering algorithm from Google. Its task is to monitor sites and demote those with low-quality content.


The Google search engine considers its main job to be providing users with relevant, interesting and useful information. That is why it fights hard against black-hat promotion methods, where a site reaches the top of the results not by creating high-quality, in-demand content but by manipulating the search engine.

Panda's story

The Google Panda algorithm is automatic. The first step towards its creation was an assessment of a large number of websites against Google's content requirements. It was carried out manually by a group of testers and made it possible to formalize the main factors that affect the quality of site content.

Google Panda was first rolled out in February 2011, but it became fully operational in April. During 2011 alone the algorithm was updated 57 times, and then 5 more times in 2012. 2013 brought the last update with its own number, Panda 25, which received minor changes in June and July of the same year. The latest numbered version, Panda 4.0, released in May 2014, shook up news aggregators and affected giants like eBay.

Today the algorithm is improved almost continuously and is updated every month in a rollout that lasts about 10 days. The company does not publish update dates in order to make it as difficult as possible to manipulate site positions in the search results.

Google Panda site requirements

Google Panda is designed primarily to combat low-quality content, so its main requirements for sites are based on the quality of the published information.

  1. The site should mainly contain unique content; it must not contain duplicate texts and images.
  2. The use of automatically generated texts is not allowed.
  3. Search engine robots and users should see the same thing; you cannot use pages that are visible only to search engines and exist solely to promote the site.
  4. The keywords of each page of the site must correspond to its content; keyword spamming is unacceptable.
  5. Links and advertisements on a page must correspond to the subject of its texts and other content.
  6. It is forbidden to use doorways, hidden links or hidden text whose purpose is to deceive the search engine.

The Google Panda algorithm will punish you if the pages of the site contain content copied from other resources without a reference to the source, template articles with an identical structure, or duplicate pages. Sites whose texts are one continuous wall of text, without illustrations, videos or infographics, and with the same meta tags on different pages, can also fall under the Google Panda filter (a simple check for repeated meta tags is sketched below).
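As a rough illustration of the repeated-meta-tag problem mentioned above, here is a minimal sketch under assumed conditions: the URL list is hypothetical, and the script relies on the third-party requests and beautifulsoup4 packages.

```python
# Minimal sketch: find pages that share the same <title> or meta description.
# The URL list is hypothetical; install "requests" and "beautifulsoup4" first.
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

urls = [
    "https://example.com/page-1",
    "https://example.com/page-2",
    "https://example.com/page-3",
]

titles = defaultdict(list)
descriptions = defaultdict(list)

for url in urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "").strip() if meta else ""
    titles[title].append(url)
    descriptions[description].append(url)

for tag_name, groups in (("title", titles), ("description", descriptions)):
    for value, pages in groups.items():
        if value and len(pages) > 1:
            print(f"Duplicate {tag_name} '{value[:60]}' on: {', '.join(pages)}")
```

Any group of pages printed here is a candidate for rewriting its title and description to something unique.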

It is believed that this algorithm does not pay much attention to links, but it does require that links in articles match their topic. That is, texts about plastic windows should not contain links to tea sales, and so on.

In addition to everything described, Google Panda pays a lot of attention to behavioral factors. If your site has a high bounce rate, and users leave after the first page and never return, you will certainly come under the algorithm's scrutiny.

How to save yourself from Google Panda sanctions

How to identify Panda's action

Before you take measures to get out from under the Google Panda filter, make sure that it really was its paws that struck you. How do you determine whether Google Panda is the culprit of your problems, or something else?

Pay attention to whether the drop in traffic to the site coincides with an algorithm update. Google Panda updates roll out monthly and last about 10 days; you can find their dates on Moz.com. If the dates coincide, action must be taken.

The second way to catch the Panda is to use the Barracuda service. One of its tools, the Panguin Tool, gets access to your Google Analytics data, overlays the traffic chart pulled from it with the dates of the algorithm updates, and gives a verdict. This method has two drawbacks: it is of little use if your Google Analytics counter was installed only recently, and it requires access to an account that, in turn, can be tied to money (for example, AdSense).

The third method is quite simple, but it takes time and patience. You will have to check every page of the site. You need to do it like this:

  1. Copy a few sentences from the text on the page into the Google search box. It is enough to take 100-200 characters.
  2. See if your site appears at the top of the SERPs.
  3. Enclose the same passage in quotation marks in the search bar and again check whether the site appears in the results or not.

If the site appears in the results only in the second case, then the culprit of your troubles is Google Panda. Remember that you will have to check every page this way (part of the check can be automated; see the sketch below).
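If there are many pages, part of this check can be scripted. The sketch below is only an illustration under assumptions: it uses the Google Custom Search JSON API (which requires your own API key and search engine ID, shown here as placeholders) instead of scraping the regular results page, and the snippet and domain are made up.

```python
# Minimal sketch: check whether your domain shows up for a text fragment,
# first as a plain query and then as an exact-phrase (quoted) query.
# API key, search engine ID and domain below are placeholders.
import requests

API_KEY = "YOUR_API_KEY"          # placeholder
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # placeholder
MY_DOMAIN = "example.com"         # placeholder

def domain_found(query: str) -> bool:
    """Return True if MY_DOMAIN appears among the top results for the query."""
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": SEARCH_ENGINE_ID, "q": query, "num": 10},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json().get("items", [])
    return any(MY_DOMAIN in item.get("link", "") for item in items)

snippet = "a 100-200 character passage copied from the page being checked"

plain = domain_found(snippet)
quoted = domain_found(f'"{snippet}"')

if not plain and quoted:
    print("The page surfaces only for the exact-phrase query: a possible Panda symptom.")
else:
    print("No obvious Panda symptom for this snippet.")
```

The logic mirrors the manual check: a page that only ranks for its own text in quotes is being held back for the normal query.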

How to get out of the filter

To get out from under the Panda filter, you will have to do the following:

  1. Make a complete revision of the site's content: replace most of the texts with interesting and useful ones, and rework the rest so that they become unique and relevant.
  2. Remove excess keywords from headings of all levels and from meta tags. Rewrite the headings so that they are relevant and attractive, catch the visitor's attention and make them want to read the text.
  3. Clean the site of irrelevant and aggressive advertising, which will noticeably improve behavioral factors.
  4. Remove all duplicates and broken links, and remove the bold highlighting of keywords (a broken-link check is sketched just after this list).
  5. Check that the links on each page match its content, and replace or remove irrelevant ones.
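As promised in point 4, here is a minimal sketch of a broken-link check. It assumes you already have a flat list of internal URLs (for example, from your sitemap); the list below is hypothetical, and the script only needs the requests package.

```python
# Minimal sketch: report URLs that respond with an error status or not at all.
# The URL list is hypothetical; in practice it would come from your sitemap or CMS.
import requests

urls = [
    "https://example.com/",
    "https://example.com/old-article",
    "https://example.com/missing-page",
]

for url in urls:
    try:
        # HEAD is usually enough to learn the status code without downloading the body.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            print(f"BROKEN ({response.status_code}): {url}")
    except requests.RequestException as error:
        print(f"UNREACHABLE: {url} ({error})")
```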

These steps will help you get out from under Panda's filters over time. But keep in mind: Google Panda is an automatic algorithm, and no amount of appeals will free you from its sanctions quickly. Fill the site with interesting, unique content, do not get carried away with advertising on it, and you will not end up between the teeth of Google's clever panda beast.

Long ago, in a galaxy far, far away, the Google search engine created an algorithm called Panda. Only two people developed this algorithm: Amit Singhal together with Matt Cutts. This algorithm differs from its counterparts in that, first of all, it is built around the human factor. The algorithm also takes other factors into account when ranking a site.

In this material we have tried to collect everything related to the Google Panda algorithm: how to detect it and how to avoid falling under it in order to promote a site successfully.

What does the Google Panda algorithm take into account

Is the content on the site unique?

The Panda algorithm first of all pays attention to whether the same content is found on other sites. Moreover, it looks not only for similar texts on sites or pages; fragments of texts are also taken into account. In other words, the uniqueness of a text is expressed as a percentage: if an article was copied in full, its uniqueness is zero; if only part of the text was copied, its uniqueness may be, say, 80%.

The algorithm calculates uniqueness both for the site as a whole and for a separate page, including in relation to the other pages of the same site. In addition, the Panda algorithm is armed with such a tool as templates: it can identify similar pages on different sites even if they are promoted with different keywords. (A rough sketch of how such a uniqueness percentage might be estimated is given below.)
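For illustration only, here is how the "uniqueness as a percentage" idea can be approximated in a few lines. This is not Google's actual method (which is not public); the two texts are made up.

```python
# Minimal sketch: estimate what share of one text overlaps with another.
# Illustration of the percentage idea, not Google's real algorithm.
from difflib import SequenceMatcher

original = "Plastic windows keep the heat in and the street noise out."
suspect = "Plastic windows keep the heat in and the noise of the street out of your home."

overlap = SequenceMatcher(None, original.lower(), suspect.lower()).ratio()
uniqueness = (1 - overlap) * 100

print(f"Overlap: {overlap:.0%}, estimated uniqueness: {uniqueness:.0f}%")
```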

Moreover, the algorithm pays attention to ads too. If a site that provides legal services carries advertisements from competitors, the algorithm notes this and takes it into account when ranking the site.

The task of search engines is to give the user an answer to his question. The task of the Panda algorithm is to bring the owners of sites created purely to make money back down to earth a little. In general, the purpose of this algorithm is strikingly simple: only unique, relevant and useful sites should appear in the search results. Here is an example of how to create good educational content.

Number and quality of links (link profile)

The algorithm also takes links into account, both incoming and outgoing. There is essentially one requirement: both sites must share the same theme. Of course, the algorithm tracks the number of incoming and outgoing links, but this is not the most important ranking factor. The first thing that interests it is the human factor: it carefully watches the user on the site, what he does, how much time he spends, whether he reads the material, whether he fills out forms, and so on. Of course, the developers did a great job of creating and testing this algorithm. In fact, this algorithm needed polishing, and other algorithms besides, so that the search engine could return only high-quality sites.

With the introduction of this algorithm into ranking, only genuinely useful and high-quality sites can reach the top of the search results. It was this algorithm that made Google such a powerful search tool. This is confirmed by the fact that Google's audience is growing exponentially, and the company's budget is growing by leaps and bounds. After all, it is still possible to deceive the user with so-called black-hat promotion methods, various tricks and aggressive advertising. But an investor cannot be deceived, and investors understand perfectly well that Google is a profitable business.

Of course, such growth of the company is not the merit of one algorithm alone. But now, back to the topic of the article.

Behavioral factors

This indicator includes a number of criteria, such as:

  • bounce rate;
  • how much time a person spends on the site in one session;
  • how many pages they visit in one session;
  • whether they return to the site and how often;
  • how often the site is shown in the search results and how many people click through to it (CTR).

The algorithm tracks all of this, and it does so for every user: it follows each visitor and carefully watches what they do, whether they click on links, fill out forms, scroll down the page, and so on. In this way the algorithm determines whether the site, or a separate page, responds to users' requests and whether the user finds the answer to the query entered in the search bar. (A small sketch of how such metrics are computed from raw visit data follows below.)
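For illustration only, here is how the basic behavioral metrics listed above are usually computed from raw visit data; the numbers and the structure of the records are made up.

```python
# Minimal sketch: bounce rate, average session time, pages per visit and CTR
# computed from made-up visit records (in reality they come from your analytics system).
visits = [
    {"pages": 1, "seconds": 12},   # a bounce: one page, left quickly
    {"pages": 4, "seconds": 310},
    {"pages": 2, "seconds": 95},
]
impressions = 1200   # how many times the site was shown in the results (made up)
clicks = 48          # how many people clicked through (made up)

bounce_rate = sum(v["pages"] == 1 for v in visits) / len(visits)
avg_time = sum(v["seconds"] for v in visits) / len(visits)
pages_per_visit = sum(v["pages"] for v in visits) / len(visits)
ctr = clicks / impressions

print(f"Bounce rate: {bounce_rate:.0%}")
print(f"Average session time: {avg_time:.0f} s")
print(f"Pages per visit: {pages_per_visit:.1f}")
print(f"CTR: {ctr:.1%}")
```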

How to understand that the site has fallen under the Panda filter and what to do?

  1. A sharp drop in traffic. If you find that the number of clicks from Google has dropped dramatically, one of the possible causes may be Panda, or undetected technical problems on the site:
    — slow server response;
    — unavailable pages, and so on.
    To check this, carefully analyze the data in Google Analytics and Search Console (a small availability check is sketched just after this list).
  2. Be careful with links. Although links remain a key ranking factor for Google, you should not focus on them too much, because the search engine has long since learned to distinguish natural links from purchased ones and to detect unnatural growth of link mass.
  3. Bad texts. Texts written in unnatural language, full of spelling errors, short, non-unique and laid out as one solid newspaper-style sheet will kill any site, because they are a signal for Panda to lower it in the search results.
    What to do? Write high-quality, unique content that is useful to users, and carefully check and edit it.
  4. No video or photo. If you do not dilute the text of articles and pages with pictures, infographics and videos, the reader's eye simply has nothing to catch on, and what is bad for the user is something Panda does not like.
    Add more optimized photos and videos, filling in their alt and title attributes.
  5. Few internal links. Once on the site, the user should be able to navigate it easily and without strain, landing exactly on the pages they expected to see. Competent internal linking that leads to such pages is very important here. It is also a great idea to place widgets with the latest blog posts, the most popular articles and the most interesting sections.
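As mentioned in point 1, a quick way to rule out trivial technical problems is to measure response times and status codes for key pages yourself. A minimal sketch, with a hypothetical URL list and an arbitrary 1-second "slow" threshold:

```python
# Minimal sketch: flag slow or unavailable pages.
# URLs and the 1.0 s "slow" threshold are arbitrary examples.
import requests

urls = ["https://example.com/", "https://example.com/blog/", "https://example.com/contact/"]
SLOW_SECONDS = 1.0

for url in urls:
    try:
        response = requests.get(url, timeout=10)
        took = response.elapsed.total_seconds()
        status = response.status_code
        note = "SLOW" if took > SLOW_SECONDS else "OK"
        if status >= 400:
            note = "ERROR"
        print(f"{note}: {url} -> {status} in {took:.2f} s")
    except requests.RequestException as error:
        print(f"UNAVAILABLE: {url} ({error})")
```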

And remember: "Better not to anger the Panda!"

Google Panda is an algorithm developed and launched by Google in 2011 that is used for quality analysis of sites in the search results.

It is able to find and exclude from the ranking (or lower the positions of documents) those pages that have a large amount of non-unique content, do not contain any useful information and are intended solely for making money.

First of all, Google Panda takes into account how user-oriented a particular resource is; however, to judge the quality of a site the algorithm uses not only this criterion but a number of other, equally important criteria that must be taken into account in search engine promotion.

Site evaluation criteria that the Google panda algorithm takes into account

  • Behavioral factors. The Panda algorithm takes into account the time users spend on the site, the percentage of returns to the site, the bounce rate, the number of return visits relative to the total number of visits, the number of clicks within the site, and some other factors. To give the site good positions in the search results, the webmaster must create a resource that is attractive to users, fill it with unique articles and high-quality internal links, implement clear navigation, provide a quick search for the necessary information, and think about what else will make the user stay on the site as long as possible.
  • Content quality and uniqueness. Even when different methods of website promotion are used at the same time, special attention should be paid to the quality of the content posted on the site's pages. The Panda algorithm judges the uniqueness of the information on the site by the ratio of content borrowed from other resources to the total amount of content, both for the site as a whole and for each page separately. You need to understand that low-quality information posted on just one page can worsen the position in the search results not only of that page but of the entire resource. A similarly negative effect is produced by an excessive number of texts of the same type written for different key queries (high template similarity), as well as by texts with a high density of keywords and phrases (high "nausea", i.e. keyword density; a simple density calculation is sketched just after this list).
  • Link mass and advertising. The Google Panda algorithm determines how closely the subject matter of the ads posted on the resource matches the subject matter of the resource itself.
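For illustration, the keyword density ("nausea") mentioned above can be estimated with a few lines of code; the text, the keyword and the 5% warning threshold are all made up, and there is no single official limit.

```python
# Minimal sketch: keyword density ("nausea") of a phrase in a text.
# Text, keyword and the 5% warning threshold are illustrative only.
import re

text = """Plastic windows are popular. Our plastic windows are cheap,
because plastic windows from us are the best plastic windows."""
keyword = "plastic windows"

words = re.findall(r"\w+", text.lower())
occurrences = len(re.findall(re.escape(keyword), text.lower()))
keyword_words = occurrences * len(keyword.split())
density = keyword_words / len(words) * 100

print(f"Keyword density: {density:.1f}%")
if density > 5:
    print("Warning: this looks like keyword stuffing.")
```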

Many SEO specialists have long been aware of Google algorithms such as Panda, Penguin and Hummingbird. Some have suffered from them themselves; others have managed to avoid their effects.

But most beginners who seek to promote their sites in search engines completely misunderstand what exactly these algorithms target.

A considerable number of webmasters do not take them into account at all when building sites. And yet, although the destructive action of Panda, Penguin and Hummingbird mows down many sites every day, getting around them is not that difficult if you understand everything correctly and do everything right.

In fact, the principles by which these algorithms operate are not that new; their appearance simply allowed Google to streamline its fight against low-quality (in its opinion) sites and to clean up the search results more efficiently.

So let's get to the review.

Google Search Algorithm - Panda

The very first of this trinity (2011) and, in the opinion of newcomers, the most terrible. In fact, the Panda algorithm is terrible not for the beginners themselves, but for their careless attitude towards the sites they create.

Of course, every webmaster is the full owner of his website; he creates it the way he wants, and Google gives him no orders. However, we should not forget that Google is the complete owner of its own search results: the sites it wants, it allows into those results, and the ones it doesn't want, it doesn't. That is what Google's search algorithms are used for.

Panda was the very first instrument of the search engines' protest against the mistakes and negligence not only of beginners, but also against the sloppy attitude towards search engines (and towards their own projects) on the part of all sorts of "authorities". The fact is that many large sites, mostly commercial ones - shops, services, directories and the like - allow the creation of pages with similar content.

Basically, this is a description of goods or services that are similar in basic parameters, but differ only in some small details, for example:

  • size;
  • color;
  • price, etc.

The main content of such pages is therefore the same, and the similarity often reaches 90 percent or more. Over time, a very large number of clone pages accumulate on the site. Of course, this does not bother visitors in any way, but the search results become literally clogged with identical web documents.

Before the Panda algorithm was introduced, Google simply "glued together" such duplicates: it allowed only one of them to appear in the search results and placed the rest in the "additional results" category. But, as they say, everything is fine only for a while, and new Google search algorithms appeared. The moment came when even "additional results" could no longer save the picture.

The Panda algorithm is aimed at finding and identifying such web pages, determining their acceptable number, and taking action against projects where the number of such pages is excessive.

As soon as the Panda algorithm began to operate at full force, a great many authoritative sites tumbled from their "tasty" positions at the top of the results and, accordingly, "dipped" heavily in traffic.

Of course, the owners of these sites, having figured out the causes of such a catastrophe (by studying the new Google algorithms), tried to correct the situation as quickly as possible, but not all of them returned to their "bread-and-butter places". Those places were taken by more successful competitors who reacted to Google's search algorithms in time, or who had not violated the new rules in the first place.

Google Algorithm: Penguin

The next algorithm, Penguin (2012), is aimed at completely different areas; it does not affect sites directly but hits them indirectly, through their link mass. Until recently, many webmasters and even quite experienced SEO specialists were very careless about the links they acquired to their sites from other people's web resources. And when it was announced that the new algorithm (Penguin) would deal with link ranking, the bulk of the owners of "authoritative" sites attached practically no importance to it.

Of course, long before the Penguin appeared it was known that search engines take a very dim view of purchased links and fight violators in every way they can. The most sensible webmasters stayed away from link exchanges and acquired links, so to speak, privately, believing that since the links could not be tied to exchanges, no punishment would follow.

However, they did not take into account that many donor sites were themselves buyers of links, and most of those sites got burned when the Penguin started acting in earnest. Naturally, all links leading from those sites disappeared, and the "third-party" sites simply lost a significant part of their link mass.

And when a site loses part of its link mass, then, of course, it sags in the search results and loses a significant part of its traffic.

Some owners who did not understand the situation took this as punishment for non-existent sins. In the end, however, it turned out that no punishment was involved at all. The sites that were punished were those whose involvement in buying links was proven. The sites that, in turn, received links from them simply "were in the wrong place at the wrong time."

Thus, we see that the Penguin algorithm "covered" a huge segment of the Google SERP related to the purchase of links. That is why it turned out to be far more effective than "pinpoint" strikes: now almost every site that gets its traffic (and the income that comes with it) from Google's results, that is, depends on this search engine, will carefully monitor its link mass and try to avoid getting links from sites about which there is even the slightest suspicion that they buy their links on exchanges.

Some people, of course, curse Google, accusing it of playing dirty, but we should not forget that, firstly, Google never plays dirty (neither American morality nor American law allows it to), and secondly, no one has yet invented anything more effective against the epidemic of purchased links.

Google Algorithm: Hummingbird

The third algorithm, which hit a great many sites very hard, is Hummingbird, which appeared in 2013. What is it aimed at? First of all, at doorways and at sites that use so-called keyword spam to promote their pages.

A lot of Google users, when searching for information, type "simplified" or "incorrect" queries into the search bar, for example "buy panties where", "rest thailand" or "restaurant novosibirsk", and some make plenty of spelling mistakes in their queries.

Of course, Google matches the most relevant pages to such queries: pages on which, naturally, the queries do not appear in such a "wrong" form, but which logically answer them most fully.

What could possibly be wrong with that? The fact is that all the queries entered into the search bar are stored by Google in its database, and this database is constantly mined by doorway builders and makers of cloaking sites. As a rule, using automated tools, they "sharpen" the pages of their "works" for exactly such "wrong" queries and thereby give themselves away completely.

Indexing robots, while analyzing the content of site pages, constantly check it against Google's query database, and if there are too many such "wrong" phrases on the pages, the entire site falls under suspicion.

Some even overlook the fact that Google pays attention to whether proper names and place names begin with a capital or a lowercase letter. If someone writes a text and slips up on a name or title a couple of times, that is not a problem. But if city names or people's names are written in lowercase across the entire site, that is already a signal of a not-quite (or not at all) high-quality site.

The result of the analysis of Google algorithms

So we have looked at the three most important Google algorithms, which together cover very important areas of creating and promoting web resources. All of them stand guard over the quality of web content. Of course, there are plenty of craftsmen who very successfully bypass all these algorithms and keep filling Google's results with low-quality sites, but no longer on the scale seen before the advent of Panda, Penguin and Hummingbird.

It can be expected that these algorithms will improve over time and that more powerful modifications will appear. So it is best to aim for honest work from the start and avoid annoying oversights that could, in the end, seriously harm the web project with which you intend to conquer the top of Google!

Hello, dear readers of this blog. In this post I want to think out loud about what has been spinning in my head lately. Basically, all my thoughts are occupied with the filter that I managed to fall under over the past year and a half. At the link provided you will find an article where I describe the current situation, including my correspondence with the technical support of the "mirror of Runet" (Yandex).

Over the four and a half months since that publication, I have, one by one (starting from the first posts in 2009), proofread and edited 360 of my articles out of the 411 that exist at the moment: I removed possible keyword spam and curbed internal linking to sane limits. Several dozen of them were completely rewritten, screenshots were updated in many dozens more, and corrections, additions and so on were made.

In general, I did everything I could, tirelessly. During this time a monstrous productivity showed itself (I ploughed away for 12 hours a day and managed to work through 2 to 6 articles in that time). The first positive results appeared in the form of improved positions and increased traffic from Yandex and Google.

But I still have doubts about which Google filter I have been sitting under for a year and a half: Panda or Penguin. This raises the question of whether I need to bother with selecting spammy backlinks and adding them to Disavow links, as everyone who has suffered from the Penguin is advised to do. And how do you do that with 30,000 backlinks listed in Google Webmaster?

The Yandex and Google filters are being relaxed

As I already mentioned, my productivity right now is simply phenomenal, and it would be great to apply it to something else useful and promising, because there are not many articles left to rework, and you can no longer find spam in them, except maybe small things to correct here and there.

Guys, I apologize in advance for continuing to use the not-quite-proper word "traffic" instead of "the number of visitors from such-and-such a search engine", because it really is simpler and more accessible (fewer words, more meaning). Nothing personal, just business. Thank you.

On September 25th I noticed that I was getting two to three times more traffic from Yandex than had been usual lately.

In the overall traffic picture it looked something like this:

Firstly, it was somewhat unexpected, because by that time I had completely resigned myself to the current deplorable situation and simply kept ploughing on without any particular expectations. Moreover, back in the summer I was sometimes ready to tear my hair out from the powerlessness to change anything for the better, but eventually I stopped twitching altogether.

Secondly, having no experience of getting out from under filters, I do not know what to think about future prospects. Not all of the Yandex traffic has returned, and I console myself with the thought that it did not fall all at once either, but in several steps.

By the way, you can be driven under a filter not only by a ton of links from junk sites (at the beginning of this spring I had a monstrous surge of several thousand backlinks, and I had nothing to do with any of them). They may also try to play on behavioral factors, i.e. worsen the metrics of user behavior both in the search results and on the site itself.

For example, in that same spring I got thousands of visits from some sites with jokes, gags and other nonsense that could hardly be called anything but junk sites. What it was is still unclear. For instance, when clicking through to read the next joke, the user landed on my 404 page and did not know what to do. I even had to adapt that page for such visitors in order to at least slightly reduce their bounce rate.

Penguin still follows the principle of a general update of the filtering algorithm, and you can clearly track all of its updates (if you want to bother with that). Right now, by the way, its next update to version 2.1 seems to be rolling out, which is confirmed by the corresponding threads on SEO forums.

Panda has recently become, in effect, part of the general algorithm that Google uses for ranking, and there will most likely be no more pronounced updates for all sites at once. Nevertheless, if we go back a little in history, then from the date the traffic began to fall we can conclude which filter a given resource fell under.

How to determine which Google filter a site is under: Panda or Penguin

By the way, this is exactly the method offered by the Western online service Barracuda, namely its product called Panguin Tool.

The catch is that you have to give this service access to the statistics of your Google Analytics counter; it then superimposes the exact dates of Panda and Penguin updates on the chart pulled from it, so that you can understand which member of this duo crippled your resource.
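If you prefer not to hand over access, the same overlay can be done by hand on a daily traffic export from your counter. A minimal sketch with pandas and matplotlib: the "traffic.csv" file, its columns and the update dates listed below are placeholders you would replace with real data (for example, from a maintained algorithm-change history such as Moz's).

```python
# Minimal sketch: overlay algorithm update dates on a daily traffic chart.
# "traffic.csv" with columns "date" and "sessions", and the dates below, are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

traffic = pd.read_csv("traffic.csv", parse_dates=["date"])

# Illustrative dates only; take the real ones from a maintained update history.
updates = {
    "2013-05-22": "Penguin 2.0",
    "2013-10-04": "Penguin 2.1",
}

plt.plot(traffic["date"], traffic["sessions"], label="Google organic sessions")
for day, name in updates.items():
    plt.axvline(pd.Timestamp(day), linestyle="--", color="red")
    plt.text(pd.Timestamp(day), traffic["sessions"].max(), name, rotation=90, va="top")

plt.legend()
plt.title("Traffic vs. algorithm updates")
plt.tight_layout()
plt.show()
```

If a drop starts right at one of the vertical lines, that is the same kind of hint the Panguin Tool gives you.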

At first I thought that I had at least two reasons not to do this:

  1. Firstly, my Analytics counter has not even been running for a year, because before that I had been quite satisfied without it, and the LiveInternet counter is also informative enough.
  2. Secondly, AdSense hangs on that same account, and that is money that should not be entrusted to anyone. But a little later I remembered that Analytics, like Metrica, lets you create guest access, which is what I tried to use.

In general, I went into the admin section of my main Analytics account and delegated the rights to view and analyze statistics to my other Google account, which has nothing of the kind attached to it.

After that, in the browser where I was already logged in under this less important account, I followed the "Log-in to Analytics" link from the Panguin Tool page and was asked to confirm that I wanted to give this application access to my Analytics statistics.

I didn't mind. After that, I was asked to select a specific counter.

Well, in the end I got the desired chart with the Panda and Penguin release dates, but, as mentioned a little earlier, I have not had this counter for long, and it did not capture the beginning of 2012.

So I failed to determine this way which filter Google had applied to my site. On top of that, in late spring and early summer of this year I removed the Analytics code altogether, for reasons I can no longer explain. The drop in Google traffic that happened that spring does not seem to correlate with either Panda or Penguin.

But you may have better luck. On the graph and in the tables below, you can also see statistics on landing pages and on keywords from which the traffic came.

The second way to determine which animal has bitten you is to look at indirect signs (teeth marks). For example, it is believed that when you are hit by Penguin, your positions sag only for individual queries whose landing pages are spammed with bad incoming links.

With Panda, most likely, there will be a general subsidence of traffic across the whole site, although there is debate about how appropriate that term even is. My situation looks more like this second option.

However, the spam has now been eliminated, yet the changes for the better in Google began only after things moved in Yandex. Because of this I am still at a loss about my further actions and whether they make sense.

Disavow links: how to report bad links to your site to Google

This is done on the "Search Traffic" - "Links to Your Site" tab. Above the list there are three buttons that let you download all the backlinks Google has found. I like the last option most, because it sorts the backlinks by the date they appeared.

However, information is only given for the 1,000 domains that link to your site the most. Judging by various backlink analysis services, my blog has links from three thousand domains, and Google sees more than thirty thousand of them in total.

But that is not the point. Let's imagine that I have gone through these URLs and, with a high degree of probability, identified the pages or even entire domains that do not belong in my link profile. Now it only remains to put all this wealth into one file with a .txt extension and any name convenient for you.

When compiling a file for Disavow links, you need to stick to a few simple rules (a tiny generator sketch follows the list):

  1. The URL of each page or domain is written on a separate line: http://plohoysite.ru/plohaystranichka.html
  2. A domain is specified according to the following pattern: domain:plohoysite.com
  3. You can add comments, but each of them must start with a hash sign (#).
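Here is a tiny sketch that writes such a file following the three rules above; the bad URLs and domains are, of course, made-up examples.

```python
# Minimal sketch: build a disavow.txt following the format rules above.
# The URLs and domains are made-up examples.
bad_urls = ["http://plohoysite.ru/plohaystranichka.html"]
bad_domains = ["plohoysite.com", "spamcatalog.net"]

lines = ["# Links I ask Google to ignore (generated manually)"]
lines += bad_urls                                   # one URL per line
lines += [f"domain:{d}" for d in bad_domains]       # whole domains with the domain: prefix

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")

print(f"Wrote {len(lines) - 1} entries to disavow.txt")
```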

Once the list is ready, go to the Disavow links page (I did not find a direct path to it from the Google Webmaster interface; apparently they are afraid people will use this tool without thinking):

Choose the site you need from the list and click the "Disavow Links" button. You will be warned about the possible negative consequences of this action, although I do not seem to have seen such reviews on the forums yet; then again, it may be that those who added their lists here had no Google traffic left anyway, and, as you know, you cannot go lower than zero.

By the way, this warning personally worries me, because I would not want to aggravate the situation with rash actions instead of improving it. Well, if you do decide to experiment, on the next page you will be offered to upload the prepared file with Disavow links from your computer and send it to Google for consideration.

Listing the selected URLs or domains in a column is the easy part. The hard part is filtering out those spammy and bad backlinks in the first place, so that you can then ask Google not to take them into account.

The situation is further aggravated by the fact that there is no unequivocal opinion about the benefits of this action, even if you know for sure that your site suffered from the Penguin or that manual sanctions were imposed on it. Expert opinions are divided. For example, Devaka, for the sake of the experiment, even added every single backlink to his blog to Disavow links, but saw no result, although, oddly enough, he is the one who leans towards this tool being useful.

By the way, with manual sanctions the situation of getting out from under the filter can be even simpler, or rather faster. Manually applied filters can now be seen in Google Webmaster on the "Search Traffic" tab, under "Manual Actions".

In that case, just a couple of weeks after uploading the list to Disavow links and submitting a reconsideration request (as I understand it, this can now be done on the same tab rather than on a separate hard-to-find page, as it used to be), the filter can be lifted and the positions will return to their original values.

How to select bad backlinks to add to Disavow links

In the case of Penguin, even after successfully removing spam links you will have to wait quite a long time for the result: until the next update of this algorithm. That may take months, or it may take a year. Still want to try Disavow links? Well, then I will share the points I gleaned while looking for a way to build this list.

I did not manage to do this manually, not because of the huge amount of monotonous work, but because I do not have the intuition to judge, from a page's appearance and the scarce statistics that can be collected, whether a given page or a whole domain should go onto the untrustworthy list.

What is clearly needed here is automation that separates the wheat from the chaff and under no circumstances "throws the baby out with the bathwater". Obviously, you need services that, besides collecting the entire link mass leading to your site, can also give a quality assessment for each of the donors.

Such services exist, and many webmasters, let alone SEO specialists, know them. The catch is that analyzing a large number of backlinks in these services costs money, and not a small amount. Moreover, if you use such resources professionally every day, the money pays off handsomely, but for our one-off task it would be wasteful, it seems to me.

There was a publication on this site where a representative of one of the major SEO firms shared his experience of selecting candidates for the Disavow links list. He suggested using three link services: the Western Open Site Explorer from Moz, as well as the domestic Solomono (Linkpad) and Ahrefs.

As I understand it, I would have to go without food and drink for a month to pay for checking my several tens of thousands of backlinks and to download all the data on them for further analysis and processing. Of course, a large SEO agency already has all these paid plans, because they fully pay for themselves with constant work on dozens or even hundreds of client projects.

But his method is quite convincing. He suggests downloading donor quality indicators from Open Site Explorer, Solomono and Ahrefs; they are called Domain Authority, iGood and Ahrefs Domain Rank, respectively.

Then you bring it all into one table and select candidates for Disavow links on the principle of the lowest combined donor quality weight across all three services. This reduces the chance of error.
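A minimal sketch of that merge with pandas. The file names and column names are assumptions about what the exports from Open Site Explorer, Solomono (Linkpad) and Ahrefs might look like; adjust them to the real exports.

```python
# Minimal sketch: merge three donor-quality exports and rank domains by the
# lowest combined weight. File and column names are assumptions, not real export formats.
import pandas as pd

moz = pd.read_csv("moz.csv")            # assumed columns: domain, domain_authority
solomono = pd.read_csv("solomono.csv")  # assumed columns: domain, igood
ahrefs = pd.read_csv("ahrefs.csv")      # assumed columns: domain, domain_rank

table = moz.merge(solomono, on="domain").merge(ahrefs, on="domain")

# Normalize each metric to 0..1 so they can be summed on an equal footing.
for column in ["domain_authority", "igood", "domain_rank"]:
    table[column] = table[column] / table[column].max()

table["combined"] = table[["domain_authority", "igood", "domain_rank"]].sum(axis=1)

# The weakest donors are the candidates for the Disavow links list.
candidates = table.sort_values("combined").head(50)
print(candidates[["domain", "combined"]])
```

Requiring a domain to look weak in all three services at once is exactly what lowers the chance of disavowing a good donor by mistake.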

Unfortunately, not many people can afford such a luxury, although if someone offered this as a service, I would not refuse to use it for a moderate fee. At the same time I would want to receive not just the list of candidates for Disavow links, but also all those exports from Open Site Explorer, Solomono and Ahrefs, to make sure the conclusions are valid and that no one is leading me around by the nose.

The second option I came across for finding spammy backlinks in my link profile was to use the paid program FastTrust from Alaich (a well-known blogger). It was actually on the promo site of this program that I came across a description of how to prepare a list of candidates for Disavow links.

Everything seems great. The price is 3,000 rubles, which, although a little expensive, is tolerable, especially since it can also be bought with Partner Profit bonuses, of which I happen to have just the right amount.

But! I took a closer look at the requirements and desirable conditions under which this program will work correctly. It turned out that not everything is so simple:

In general, this cooled my ardor, and I realized that buying the program is only half the battle: you still need to find proxies and invest money in Ahrefs to get the most relevant result. And collecting the data, I think, will take quite a lot of time.

Therefore, I decided to take a break and, before it all flew out of my head, write down my current thoughts about Penguin and Disavow links in this post, so that I can come back to this later.

Well, that is roughly where things stand. What do you think about the filter Google has applied to ktonanovenkogo? Is it Panda or Penguin? Is Disavow links worth bothering with? And if it is, how do you isolate the junk backlinks?

Good luck to you! See you soon on the pages of this blog.
