Forum Moderators: Robert Charlton & goodroi
- Tedster #:3699468 [webmasterworld.com...] - The most perplexing new SERP observations are those that report cycling, sine waves, yo-yo, rollercoaster, or pick your favorite synonym. Sometimes these cycles happen down in the deep results pages after a url has dropped from page 1 - an apparent penalty. And sometimes the cycling appears on page one - from 3 to 10 to 3 to 10, day after day or week after week. I don't have a site under my auspices that is showing this effect, but I've been asked to look at a few that are - and so far, I can say that the phenomenon is real, but I am mystified by it. I felt this way when the -950 first appeared back in 2006 or so, and slowly some understanding of that emerged. I sure hope we can get some understanding of the yo-yo phenomenon, too.
Are we seeing something new in how it's applied?
Is it a Google glitch or intentional?
Does it affect only sites in penalty situations?
Does it form part of new penalty-handling procedures?
Any more questions or suggestions?
Tedster #:3708527 This is something that quite a few sites are reporting - and it often (always?) involves position #4 during the periods when the url is on the first page of the SERPs. This seems like it must be some kind of statistical testing to me. But if that's the case, how does a url get picked to be tested - and even more, how can it "pass" the test? Some urls have been on this Google yo-yo for weeks and weeks.
The yo-yo has afflicted sites that were regular fixtures on page #1. Maybe it is unusual fluctuations in backlinks that trigger the test - that's worth watching!
I'm watching a site that was penalised on May 31 and has been flying around on a key term from position 39 down to anywhere on page 7. None of the site's URLs for any previously ranked term appear above position 41.
Tedster had a theory about "let's see" and "test", but I'm not sure that I understand what you think they may be testing.
My sense is that Google would prefer to adjust the algo to address websites that are at the 'threshold'...
That's what Google has always said - and it makes a lot of sense for scalability over this huge data set.
Also, even the thresholds themselves could be reset dynamically based on future measures of the web, or input from Google's internal QA metrics. So a site could be over the threshold at one time and then the THRESHOLD moves. So rankings improve even though the site made no changes.
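The moving-threshold idea can be sketched as a toy model. Everything here is hypothetical - the score, the threshold values, and the function name are made up purely to illustrate the mechanism of a site flipping in and out of the rankings with no change on the site's side:

```python
# Toy model of a moving quality threshold (all numbers hypothetical).
# The site's score never changes, yet its visibility flips whenever
# the threshold itself is re-tuned.

def is_filtered(site_score: float, threshold: float) -> bool:
    """A site scoring below the current threshold gets filtered/demoted."""
    return site_score < threshold

site_score = 0.72  # fixed: the site made no changes

for threshold in (0.70, 0.75, 0.70):
    status = "filtered" if is_filtered(site_score, threshold) else "ranking"
    print(f"threshold={threshold}: {status}")
```

Under this sketch, the same unchanged site ranks, then is filtered, then ranks again - which is exactly the "rankings improve even though the site made no changes" scenario described above.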
Seems to me that no matter how high the trust-o-meter is set, you can still get bonked.
That's my "Ghost Dataset", which seems to be incorporated into the SERPs instantly and includes the "beyond trusted authority" sites.
(Note: someone remarked in that thread that even .gov sites were "missing")
That's also the group of sites I mentioned earlier in this thread that avoids getting into the "yo-yo" situation (I didn't have a name for it yet).
This matches up well with what I was seeing at periodic points in the test. Particular websites in the top 5 of previous SERPs were zeroed - literally past the 1000 mark - and then tested within the context of the Ghost set.
It was almost as if Google was also testing the validity of its own 'seed' websites, as well as the child seeds, applying a '2nd and 3rd level', and then reapplying this back across the entire index. I could see this quite clearly, as obvious authority websites were added before the additional websites that had previous rankings but would not be considered authorities.
At the same time, we saw snapshots of various elements of the overall algo as they were reapplied against the new set. This might explain why many documents ended up in the index that didn't make sense - perhaps those websites were within x degrees of the seed (or 2nd/3rd level child sets) but had particular attributes that were re-filtered and are now slowly moving out of the top rankings.
Nice! Yes! That's exactly what they are now that you put it that way.
I couldn't put my finger on some of the 2nd and 3rd "seed" pages exactly.
Very powerful data there.
Great analysis.
A threshold ... yes, I also thought about this interesting idea, and in fact the yo-yo does affect high-ranking/high-traffic keywords.
But in Google's patents (both filed and issued), which I have read in full, there's no evidence (or anything that could be intended as evidence) of the yo-yo effect.
What I see at the moment is that the yo-yoing often initially seems to follow a model like the Poisson distribution.
Poisson distribution [childrensmercy.org]
What is a Poisson distribution? The Poisson distribution arises when you count a number of events across time or over an area. You should think about the Poisson distribution for any situation that involves counting events. Some examples are:
* the number of Emergency Department visits by an infant during the first year of life,
* the number of pollen spores that impact on a slide in a pollen counting machine,
* the number of incidents of apnea and bradycardia in a pre-term infant,
* the number of white blood cells found in a cubic centimeter of blood.
Sometimes, you will see the count represented as a rate, such as the number of deaths per year due to horse kicks, or the number of defects per square yard.
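For anyone who wants to play with the numbers, the Poisson probabilities described above take only a few lines of Python to compute. The event rate used here is an arbitrary example for illustration, not anything measured from Google's SERPs:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson distribution with mean event rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# e.g. events arriving at an average rate of 2 per time interval
lam = 2.0
for k in range(5):
    print(k, round(poisson_pmf(k, lam), 4))
# The probabilities over all k >= 0 sum to 1.
```

The distinctive Poisson shape - a peak near the mean rate with a long thin tail - is what you would look for in a plot of, say, the number of yo-yo events counted per day or per week.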
I mentioned that the rankings might change by a set time period or a set amount of traffic. I recently thought of a third possibility - a set number of impressions, whether they're clicked on or not. A couple of my connections in the SEO world have mentioned that this wouldn't make sense. Well, that may be true, but something certainly is happening. The -950 didn't "make sense" to us either, but it happens.
[BTW, Inktomi's database containing whitelisted sites (back when they had dual databases) was referred to as "Best of the Web."]
It wouldn't surprise me that something like a Poisson distribution could be seen in a data set this large. The ingredients are there - a large number of potential candidates for the event with a very small number of actual examples.
Still, I'm not sure what that can tell us that we can act on. It certainly could be the case that the yo-yo effect is an artifact of some kind of probabilistic testing, some way of giving more urls some "time at the top" of a crowded SERP. But still, some factor must make a url-keyword pair a candidate, right?
Either way, wouldn't it still go back to the sites themselves (aside from snippet generation and display), with regard to bounce rates, relevancy, and user satisfaction?
But still, some factor must make a url-keyword pair a candidate, right?
One factor could be loss of trust in the amount of x where the url previously held a position of trust over a period of y.
When a website loses a position or two in high-ranking SERPs, the margin of quality being measured is very small, and there is usually a very small difference between the sum of all factors for sites 1, 2 and 3.
However, when excessive trust is lost in a short period of time, this could trigger the yo-yo. As opposed to dropping the page off the planet out to page 2 and beyond, the page could yo-yo in and out for a period of time before finally falling off the map.
The only problem with this theory is that other webmasters have reported newer, less-trusted websites yo-yo'ing, in which case it could be that the phenomenon is now built into Google as a preventative tool?
Another theory could be some kind of fluid dynamic trust scale based on user metrics as Marcia described. If it was measured and applied minutely enough, small changes in landing pages could theoretically trigger the effect in the SERPs.
Or maybe no changes in landing pages could be a possibility, n'est-ce pas? After drinking and digesting enough Google Kool-Aid, might not a conclusion be drawn that whatever they're measuring, whether it's site factors or SERP statistics, what they're trying to gauge is user metrics, trying to test and accommodate user preferences?
misterjinx, can you clear up for us in what way you see a Poisson distribution with the Yo-Yo effect? Is it in play with the way the phenomenon first appeared and then grew? The number of urls affected over time? The number of query terms affected over time? Or perhaps in the number of url-keyword pairs that are tagged for yo-yo treatment over time?
Please also note that the fluctuations, or yo-yo, call them what you want, are more frequent AFTER the PhraseRank introduction and, as already noted, very frequent starting from 2008.
But it's possible that this kind of phenomenon was active BEFORE these dates and only now are we discovering the effects.
Finally, another observation from studying two different sites about the same subject, related only by an internal link.
Well, from a certain date the first site started fluctuating and losing traffic, while the second one recovered its ranking and traffic.
[edited by: tedster at 10:53 pm (utc) on Nov. 3, 2008]
It was an internal anchor text issue that I fought long and hard to have changed; and when it finally was, it took around a week (give or take a little, not 100% certain since I didn't check daily) to rebound.
Clarification:
Excessive anchor text had been added preceding the drop - and it was an interior page, not the homepage, that was affected.
Excessive internal anchor text started to appear to be an issue around the time of the Florida update, and IMHO it's one of the primary over-optimization factors to look at; I'm still convinced of it, and it's not at all uncommon among DIY SEO'ing, from what I've seen.
[edited by: Marcia at 2:39 pm (utc) on Dec. 1, 2008]
Marcia wrote:
It's possibly from excessive internal anchor alt text
whitenight wrote:
Marcia, if you're familiar with the page, then this would coincide with my theory. (and something that's easy to test)
Are we still on C# chord or are we just seeing the same C# from a new level of awareness?
Yes, seoit, I'm seeing the same. It always works this way:
if you drop for bluewidgets and bluewidgets is NOT a competitive term, the drop will also affect ALL of your URLs; all of them will drop to similar positions (some sort of -40). There is a discussion about exactly this problem on the Google webmaster help group (September), but no one from Google commented. [webmasterworld.com...]
Some more interesting discussion exists on this thread.
Is this a core behavioural symptom that everyone is seeing?
Irrespective of the causes, I'm wondering if this is the "new way" [aka the approx. late May / early June way] of filtering "offending" sites.
When you get past all the filters and manual adjustments made to your site without your knowledge, it all boils down to the spread of internal pagerank and the anchor points of incoming pagerank.
Site dynamics dictate that you can fully control how high or how low specific pages rank but you can't make such changes without directly and proportionally affecting other pages on your site.
My advice: if you have an extremely popular page that is entrenched in spot #1 of search engines, begin funneling that pagerank to other pages you want to boost by linking to them. When you start to see the donor page waver from its position... that's enough. You ONLY want enough "pagerank - authority - whatever you want to call it" on a page to bring it to the top spot; more is a waste.
It's not really that simple - every change alters the overall dynamics of the entire site - but such micro changes will help you understand your site better. You planned your entire site layout before you built it, right? Don't go overboard with changes.
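The funneling advice above can be illustrated with a toy PageRank calculation. The three-page graph and the damping factor below are standard textbook choices, not anything specific to Google or to any real site; the point is only that adding a link from a strong page measurably lifts the page it links to, while the donor's own score dips:

```python
# Minimal PageRank power iteration on a tiny 3-page site,
# illustrating the "funneling" idea. Graph and damping factor
# are textbook examples, not Google's actual parameters.

def pagerank(links, d=0.85, iters=100):
    """links: {page: [pages it links to]}. Returns PageRank scores."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        pr = {
            p: (1 - d) / n
               + d * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return pr

# Before funneling: the popular page A links only to the homepage.
before = pagerank({"A": ["home"], "B": ["home"], "home": ["A", "B"]})
# After funneling: A also links to B, donating some of its rank.
after = pagerank({"A": ["home", "B"], "B": ["home"], "home": ["A", "B"]})

print(round(before["B"], 3), "->", round(after["B"], 3))  # B's rank rises
print(round(before["A"], 3), "->", round(after["A"], 3))  # A's rank dips
```

In this sketch B gains at A's expense, which matches the observation that you can't boost one page without proportionally affecting the others.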
...the yo-yo effect seems to kick in mostly for brand new optimized pages
That's what I've seen too, even for new pages (and new query terms) for well-established brands. It looks like "the algo says this url ranks well for this term - but something about it seems worth testing, because it appeared out of the blue."
That's been my operating premise from the earliest examples I saw - but I still don't get how a url passes or fails the test period. I was astonished to see one url yo-yo between position 15 and 4, only to settle in after a couple of months at #16. This particular site belongs to a company that is nearly synonymous with the query term offline, but they had lousy SEO for the phrase on their website. When they woke up and clarified things online, they got the yo-yo and failed the test. All I can think of is that their new backlinks with the specific anchor text were suspect.
When they woke up and clarified things online, they got the yo-yo and failed the test. All I can think of is that their new backlinks with the specific anchor text were suspect
Would you say the alteration was considered too aggressive?
I wonder what G wants for the site to pass the test and re-appear.
@tedster The yo-yo effect affects not only brands.
In Italy, on 6-7 Dec 2008, I was at an SEO conference, and according to the latest studies by the SEO I already mentioned in other posts (dechigno), the yo-yo seems not to be related to a single factor.
Dechigno reports 7 interesting cases of different websites.
Different in size, in content, in topics.
According to his conclusions, it seems the yo-yo effect could be determined by ...
* a problem of content: you have a trusted domain for one topic and you try to use it to give ranking to another site on a different topic;
* something depending on a specific crawler scanning the website;
* a problem of near-duplicates;
* a bug or issue in Google ...
The SEO reported different cases, and not all were caused by trusted domains.
The difference between the filter and the yo-yo is that with the yo-yo your pages are in the SERPs (not filtered into Google's supplemental results), but at lower rankings, and sometimes they are at the top.
In some cases your page completely disappears.