about the trackback thing

The question about the trends script with trackbacks was whether a few hundred backlinks were worth the trouble, and they weren’t. So I wrote a second routine that grabs the most common significant words from the excerpts and runs a second search to get better results and up to five trackbacks per page.
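The word-picking part boils down to a frequency count with a stopword filter. A minimal sketch of that idea (the function name and stopword list here are mine, not the script’s actual code):

```php
<?php
//pick the most common significant words from an excerpt
//(hypothetical helper, the real routine may filter differently)
function significant_words($excerpt, $max = 5) {
	$stopwords = array('the','a','an','and','or','of','to','in','is','it','on','for','that','with');
	//split on anything that is not a letter
	$words = preg_split('/[^a-z]+/', strtolower($excerpt), -1, PREG_SPLIT_NO_EMPTY);
	$freq = array();
	foreach ($words as $word) {
		//skip short words and stopwords, count the rest
		if (strlen($word) > 2 && !in_array($word, $stopwords)) {
			$freq[$word] = isset($freq[$word]) ? $freq[$word] + 1 : 1;
		}
	}
	arsort($freq); //most frequent first
	return array_slice(array_keys($freq), 0, $max);
}
```

Feed the returned words back into the blog search and you get results that actually match the trend.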

So I put that online; it grabbed 4000 backlinks in an hour and overloaded the host server.

Baidu, Radian6 and Google had stepped up indexing after I added sitewide tags, and that didn’t show up in analytics. On top of that the site got the trackback validations and crawlers, and the server went haywire. It’s a shared host; the resources are too limited to run that kind of site on. I’ve put it on hold till I find a solution for the hosting.

Google of course penalised the site with PR0 and dropped the domain from the SERPs for its main keywords, but in Yahoo it ranks around 20th out of 360 million result pages, and in MSN it ranks no. 1. I’m thinking about adding a translator plugin to see if I can get some traffic from Baidu.



Trackbacks are brilliant stuff. I programmed a trackback module into the trends script yesterday just to see what it yields. As long as you don’t use it to spam and stick to common standards, it’s the fastest deep link building method available. I noticed another trends script is also using trackbacks.
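Those common standards are simple: a trackback ping is nothing more than a form-encoded POST with four fields, and the receiving blog answers with a small XML response. A sketch (function names are mine):

```php
<?php
//build the trackback payload per the trackback spec:
//a form-encoded POST with title, excerpt, url and blog_name
function trackback_payload($title, $excerpt, $url, $blog_name) {
	return http_build_query(array(
		'title'     => $title,
		'excerpt'   => substr($excerpt, 0, 252), //keep the excerpt short
		'url'       => $url,
		'blog_name' => $blog_name,
	));
}

//post the ping and check the xml reply; <error>0</error> means accepted
function send_trackback($ping_url, $payload) {
	$context = stream_context_create(array('http' => array(
		'method'  => 'POST',
		'header'  => 'Content-Type: application/x-www-form-urlencoded',
		'content' => $payload,
	)));
	$response = @file_get_contents($ping_url, false, $context);
	return $response !== false && strpos($response, '<error>0</error>') !== false;
}
```

That’s the whole protocol; the rest is finding the trackback URL in the target post’s RDF block.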

GTrends lists an average of 600 different searches per day; that makes roughly 200K pages a year. Put five blog excerpts with a link on each page and you have about 1000K automated backlink opportunities a year, if you use trackbacks.
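The arithmetic behind those round numbers, spelled out:

```php
<?php
//back of the envelope: searches per day -> link opportunities per year
$searches_per_day = 600;
$links_per_page   = 5;
$pages_per_year   = $searches_per_day * 365;           //219000, roughly 200K
$links_per_year   = $pages_per_year * $links_per_page; //1095000, roughly 1000K
```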

I got a 50% success rate in the first tests, so I put it on a cron job, and it seems to level out at 30% successful links. That seemed a bit high, so I checked the PingCrawl plugin Eli (BlueHatSEO) and joshteam put together for WordPress. They claim an 80% success rate using Eli’s result scraper, so I guess 30% isn’t aberrant.

For trends, I can’t narrow my search down too much: I need the most recent blogs for the trends buzz, and too narrow a search might exclude the recent news and make the script lose its usability. Besides, I figure 10% trackbacks would already be more than enough; a few hundred lines of code plus a CSS template for 100K backlinks a year ain’t bad.

I don’t actually have anything to blog about today, so that’s it.

[added 3-3] ****ing brilliant: 65% of trackbacks are accepted, traffic is increasing, the bots come crawling, finally something that works. Now to add proxies.

[added 3-3] bozo style “the script got 4 uniques yesterday!”

Can I be honest? The dude over at seounderworld gave me a vote of confidence on the trends script and I felt embarrassed, as the demo looked like shit and didn’t do anything. As a scraper-basics demo it was fine, but it lacked SEO potential.

So I added some CSS, validated the source, and added caching, gzip, an RSS feed, a sitemap, and the trackback module. It got 300 uniques yesterday and 400 uniques this morning on its first day out, so it performs better now and I don’t feel so embarrassed anymore.
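The caching doesn’t need to be fancy; a flat-file cache of the scraped pages is enough to stop hammering Google on every hit. A sketch of that sort of thing (paths and function names are mine, not the script’s):

```php
<?php
//minimal file cache: serve a stored copy while it is fresh enough
function cache_get($key, $ttl, $dir = '/tmp/trendscache') {
	$file = $dir . '/' . md5($key) . '.html';
	if (is_file($file) && time() - filemtime($file) < $ttl) {
		return file_get_contents($file);
	}
	return false; //missing or stale
}

function cache_put($key, $html, $dir = '/tmp/trendscache') {
	if (!is_dir($dir)) mkdir($dir, 0755, true);
	file_put_contents($dir . '/' . md5($key) . '.html', $html);
}
```

Usage: try `cache_get($term, 900)` first, and only scrape and `cache_put` on a miss. Gzip is even less work: `ob_start('ob_gzhandler');` at the top of the page.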

(nice impression of the trends audience by the way)

I’ll add some proxies to prevent bans and some other stuff, once that’s done I’ll refresh the download.
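The proxy part is just rotating the scrape requests over a list of addresses so no single IP draws a ban. A sketch with curl (the proxy addresses and function names are placeholders of mine):

```php
<?php
//round robin: move the first proxy to the back of the list and return it
function pick_proxy(&$proxies) {
	$proxy = array_shift($proxies);
	$proxies[] = $proxy;
	return $proxy;
}

//fetch a url through the next proxy in the rotation
function fetch_via_proxy($url, &$proxies) {
	$ch = curl_init($url);
	curl_setopt($ch, CURLOPT_PROXY, pick_proxy($proxies));
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
	curl_setopt($ch, CURLOPT_TIMEOUT, 10);
	$html = curl_exec($ch);
	curl_close($ch);
	return $html;
}
```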

google trends III

How to get the urls and snippets from the Google Trends details page. The news articles on the details page are listed with an Ajax call; they are not in the html source sent to the browser, so there is no easy way to scrape them.

The blog articles are pretty straightforward. First the ugly fast way:

$mytitle='manuel benitez';
$mydate=''; //2008-12-24

//cut everything between the first result div and the block after the results
//(marker strings are examples, adjust them to the actual page source)
$start = strpos($html, '<div class="gs-title"');
$end = strpos($html, '<div class="gsc-cursor');
$content = substr($html, $start, $end-$start);
echo $content;

That returns the blog snippets, ugly. The other way is regular pattern matching: you can grab the divs that wrap each content item, marked with

  • div class="gs-title"
  • div class="gs-relativePublishedDate"
  • div class="gs-snippet"
  • div class="gs-visibleUrl"

from the html source and organize them in a “Content” array, after which you can list the content items with your own markup or store them in a database.

//I assume $mytitle is taken from the $_GET array.

//class 'Content' with its members
class Content {
	var $id;
	var $title;
	var $pubdate;
	var $snippet;
	var $url;
	public function __construct($id) {
		$this->id = $id;
	}
}

//grab the source from the google page

//cut out the part I want
//(marker strings are examples, adjust them to the actual page source)
$start = strpos($html, '<div class="gs-title"');
$end = strpos($html, '<div class="gsc-cursor');
$content = substr($html, $start, $end-$start);

//grab the divs that contain title, publish date, snippet and url
//with regular pattern match
preg_match_all('!<div class="gs-title".*?<\/div>!si', $html, $titles);
preg_match_all('!<div class="gs-relativePublishedDate".*?<\/div>!si', $html, $pubDates);
preg_match_all('!<div class="gs-snippet".*?<\/div>!si', $html, $snippets);
preg_match_all('!<div class="gs-visibleUrl".*?<\/div>!si', $html, $urls);

$Contents = array();

//organize them under Content
$count=0;
foreach($titles[0] as $title) {
	//make a new instance of Content
	$Contents[] = new Content($count);
	//add title
	$Contents[$count]->title=$title;
	$count++;
}
$count=0;
foreach($pubDates[0] as $pubDate) {
	//add publishing date (contains some linebreaks, remove them with strip_tags)
	$Contents[$count]->pubdate=strip_tags($pubDate);
	$count++;
}
$count=0;
foreach($snippets[0] as $snippet) {
	//add snippet
	$Contents[$count]->snippet=$snippet;
	$count++;
}
$count=0;
foreach($urls[0] as $url) {
	//add display url
	$Contents[$count]->url=$url;
	$count++;
}

//leave $count as is, the number of content-items with a 0-based array
//add rel=nofollow to links to prevent pagerank assignment to blogs
for($ct=0;$ct<$count;$ct++) {
	$Contents[$ct]->url = preg_replace('/ target/', ' rel="nofollow" target', $Contents[$ct]->url);
	$Contents[$ct]->title = preg_replace('/ target/', ' rel="nofollow" target', $Contents[$ct]->title);
}

//it's complete, list all content-items with some markup
//(the wrapper divs are examples, use your own markup)
for($ct=0;$ct<$count;$ct++) {
	echo '<div class="title">'.$Contents[$ct]->title.'</div>';
	echo '<div class="snippet">'.$Contents[$ct]->snippet.'</div>';
	echo '<div class="url">'.$Contents[$ct]->url.'</div>';
}

It ain’t perfect, but it works. The highlighter I use gets a bit confused by the preg_match_all statements containing unclosed divs, so copying the code from the blog may not work.