Fetch as Google tool for simple AJAX site not working

I have a problem with the Fetch as Google tool and my AJAX website. My site is a slightly old AJAX website written using jQuery. The developers who built it didn't use hash fragments; instead they defined static routes, and AJAX calls are used only within the views (to load the page content). Now I want to make one specific page Google friendly, and I've already implemented what Google asks here.

Since my site is not a full single-page app, I skipped straight to the third step. In my route file, if I see a ?_escaped_fragment_= parameter, I return a custom template that contains server-generated content. (So it should be crawlable, right?)
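Here is a minimal sketch of that route check, assuming an Express-style Node.js backend. The framework, route path, and template names are illustrative assumptions on my part, since the question doesn't say which server stack the site actually uses:

    // Serve pre-rendered content to the crawler, the normal AJAX shell to users.
    var express = require('express');
    var app = express();
    app.set('view engine', 'ejs'); // assumption: any server-side template engine works

    app.get('/topic/:category/:subject', function (req, res) {
      var params = { category: req.params.category, subject: req.params.subject };
      if ('_escaped_fragment_' in req.query) {
        // Crawler request (?_escaped_fragment_= is present): return the template
        // whose content is generated on the server, so it is visible without JS.
        res.render('topic-static', params);
      } else {
        // Normal visitors get the shell page; jQuery fills the view via AJAX.
        res.render('topic', params);
      }
    });

    app.listen(3000);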

Here is an example: http://example.com/topic/Health/Conditions_and_Diseases

This page uses an AJAX call to get details from the server and update the view. I included the meta name="fragment" content="!" tag in this page, so the Google crawler should go to:

http://example.com/topic/Health/Conditions_and_Diseases?_escaped_fragment_=
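For reference, the opt-in tag from Google's AJAX crawling scheme goes in the head of the page served at the plain URL, exactly as mentioned above:

    <head>
      <!-- Tells the crawler to re-request this URL with ?_escaped_fragment_=
           appended; the server answers that request with the HTML snapshot. -->
      <meta name="fragment" content="!">
    </head>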


This page now generates the content on the server side, with no AJAX calls.

Is this the correct setup? When I try to fetch this page in the Webmaster tool, it doesn't load anything: the fetch stays pending and eventually ends with an error (it takes a long time to show as Error, and nothing is said about what the error is). I've confirmed that both versions work by visiting each URL manually. Before I implemented this, the Fetch tool actually showed an image of the page without content, so now I was expecting to see it with content. I have no idea why it takes so long and then errors out.

Can somebody please point out where I've gone wrong? Is my understanding of ?_escaped_fragment_= correct?

Thank you in advance; looking forward to tips from you all.

Answers


I was worried that no one here could answer this question, so I had to find the answer myself. According to this Google forum answer by a Google employee, the Fetch tool doesn't parse the meta tag; it just renders the page as it receives it.

The snapshot URL will be fetched only by the crawler later, when it actually crawls the page. So apparently this is the correct answer as of now. I hope this helps somebody else in the future.

Hi Todd, it's good to see more sites using the AJAX crawling proposal :-)!

Looking at your blog's homepage, one thing to keep in mind is that the Fetch as Googlebot feature does not parse the content that it fetches. So when you submit http://toddmoyer.net/blog/, it fetches that URL. After fetching the URL, it doesn't parse it to check for the "fragment" meta tag; it just returns it to you. However, if you fetch http://toddmoyer.net/blog/#!, then it should rewrite the URL and fetch http://toddmoyer.net/blog/?_escaped_fragment_= instead.

When we crawl and index your pages, we'll notice the meta tag and act accordingly. It's just the Fetch as Googlebot feature that doesn't check for meta tags and instead just returns the raw content.

I hope that makes it a bit clearer!

Cheers, John
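In other words, the rewrite John describes is a purely mechanical URL transformation. Here is a small JavaScript sketch of that mapping (the function name is mine, not part of any Google API):

    // Maps a hash-bang URL to its _escaped_fragment_ form, e.g.
    // "http://toddmoyer.net/blog/#!" -> "http://toddmoyer.net/blog/?_escaped_fragment_="
    function toEscapedFragmentUrl(url) {
      var i = url.indexOf('#!');
      if (i === -1) return url; // no hash-bang fragment, nothing to rewrite
      var base = url.slice(0, i);
      var fragment = url.slice(i + 2);
      var sep = base.indexOf('?') === -1 ? '?' : '&';
      return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
    }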

