google suggest scraper (php & simplexml)

Today’s goal is a basic php Google Suggest scraper because I wanted traffic data and keywords for free.

Before we start :

google scraping is bad !

Good people use the Google AdWords API : 25 cents per 1000 units, and a keyword suggestion request costs 15+ units, so 1000 suggestions come to 15,000-odd units, roughly 4 or 5 dollars (if they can find a good programmer, which also costs a few dollars). Or they opt for SemRush (also my preference), KeywordSpy, SpyFu, or services like 7Search PPC programs to get keyword, traffic and competitor data, but those charge about 80 dollars per month for a limited account, up to a few hundred per month for SEO companies. Good people pay plenty.

We tiny grey webmice of marketing, however, just want a few estimates at low or, better, no cost : like this :

data                             num queries
google suggest                   57800000
google suggestion box            5390000
google suggest api               5030000
google suggestion tool           3670000
google suggest a site            72700000
google suggested users           57000000
google suggestions funny         37400000
google suggest scraper           62800
google suggestions not working   87100000
google suggested user list       254000000

Suggestion autocomplete is AJAX, it outputs XML :

<?xml version="1.0"?>
   <toplevel>
     <CompleteSuggestion>
       <suggestion data="senior quotes"/>
       <num_queries int="30000000"/>
     </CompleteSuggestion>
     <CompleteSuggestion>
       <suggestion data="senior skip day lyrics"/>
       <num_queries int="441000"/>
     </CompleteSuggestion>
   </toplevel>

Using SimpleXML, the PHP routine is as simple as querying g00gle.c0m/complete/search?, grabbing the autocomplete xml, and extracting the attribute data :

if ($_SERVER['QUERY_STRING']=='') die('enter a query like http://host/filename.php?query');
$kw = $_SERVER['QUERY_STRING'];

$contentstring = @file_get_contents("http://g00gle.c0m/complete/search?output=toolbar&q=".urlencode($kw));
$content = simplexml_load_string($contentstring);

foreach($content->CompleteSuggestion as $c) {
 $term = (string) $c->suggestion->attributes()->data;
 //note : traffic data is sometimes missing
 $traffic = (string) $c->num_queries->attributes()->int;
 echo $term." ".$traffic."\n";
}

I made a quick php script that outputs the terms as a list of new queries so you can walk through the suggestions :
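The downloadable script itself is not reproduced here, but the idea (a hypothetical fragment, not the actual source) is to echo each term inside the foreach loop above as a link back to the same file, so clicking a suggestion fires a new query :

//hypothetical fragment for inside the foreach loop :
//echo each suggestion as a link back to this script,
//so a click walks one level deeper into the suggestions
$self = basename($_SERVER['PHP_SELF']);   //e.g. suggestit.php
echo '<a href="'.$self.'?'.urlencode($term).'">'.htmlspecialchars($term).'</a> '.$traffic."<br/>\n";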

The source is up for download as a text file over here (rename it to suggestit.php and it should run on any server with PHP 5.* and SimpleXML).

proxies !

I got a site banned at Google, so I got pissed and took a script from the blackbox @ digerati marketing to scrape proxy addresses, wired a database and cURL into it, and now it scrapes proxies, picks one at random, prunes dead proxies and returns the data.

It is basic and only uses anonymous (level 2) proxies, but it works.

/* (mysql table)
CREATE TABLE IF NOT EXISTS `serp_proxies` (
  `id` int(11) NOT NULL auto_increment,
  `ip` text NOT NULL,
  `port` text NOT NULL,
  PRIMARY KEY  (`id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
*/

//initialize database class, replace with own code
include('init.php');

//main class
$p = new MyProxies;

//do I have proxies in the database ?
//if not, get some and store them
if($p->GetCount() < 1) {
 $p->GetSomeAir(1);
 $p->store2database();
}

//pick one
$p->RandomProxy();

//get the page
$p->ThisProxy->DoRequest('http://www.domain.com/robots.txt');

//error handling
if($p->ThisProxy->ProxyError > 0) {
//7    no connect
//28   timed out
//52   empty reply
//if it is dead or doesn't allow connections : prune it
 if($p->ThisProxy->ProxyError==7)  $p->DeleteProxy($p->ThisProxy->proxy_ip);
 if($p->ThisProxy->ProxyError==52) $p->DeleteProxy($p->ThisProxy->proxy_ip);
}
//you could loop back until you get a 0-error proxy, but that ain't the point

//give me the content
echo $p->ThisProxy->Content;


Class MyProxies {

 var $Proxies = array();
 var $ThisProxy;
 var $MyCount;

//picks a random proxy from the database
 function RandomProxy() {
  global $serpdb;
  $offset_result = $serpdb->query("SELECT FLOOR(RAND() * COUNT(*)) AS `offset` FROM `serp_proxies`");
  $offset_row = mysql_fetch_object($offset_result);
  $offset = $offset_row->offset;
  $result = $serpdb->query("SELECT * FROM `serp_proxies` LIMIT $offset, 1");
  while($row = mysql_fetch_assoc($result)) {
//make an instance of Proxy, with proxy_host ip and port
   $this->ThisProxy = new Proxy($row['ip'].':'.$row['port']);
   $this->ThisProxy->proxy_ip = $row['ip'];
   $this->ThisProxy->proxy_port = $row['port'];
   break;
  }
 }

//visit the famous russian site
 function GetSomeAir($pages) {
   for($index=0; $index < $pages; $index++)
   {
    $pageno = sprintf("%02d", $index+1);
    $page_url = "http://www.samair.ru/proxy/proxy-" . $pageno . ".htm";
    $page_html = @file_get_contents($page_url);

//get rid of the crap and extract the proxies
    preg_match("/<tr><td>(.*)<\/td><\/tr>/", $page_html, $matches);
    $txt = $matches[1];
    $main = split('</td><tr><td>', $txt);
    for($x=0; $x < count($main); $x++) {
     $arr = split('</td><td>', $main[$x]);
     $this->Proxies[] = split(':', $arr[0]);
    }
   }
 }

//store the retrieved proxies (stored in this->Proxies) in the database
 function store2database() {
  global $serpdb;
  foreach($this->Proxies as $p) {
   $result = $serpdb->query("SELECT * FROM serp_proxies WHERE ip='".$p[0]."'");
   if(mysql_num_rows($result) < 1) $serpdb->query("INSERT INTO serp_proxies (`ip`, `port`) VALUES ('".$p[0]."', '".$p[1]."')");
  }
  $serpdb->query("DELETE FROM serp_proxies WHERE `ip`=''");
 }

 function DeleteProxy($ip) {
  global $serpdb;
  $serpdb->query("DELETE FROM serp_proxies WHERE `ip`='".$ip."'");
 }

 function GetCount() {
//use this to check how many proxies there are in the database
  global $serpdb;
  $this->MyCount = mysql_num_rows($serpdb->query("SELECT * FROM `serp_proxies`"));
  return $this->MyCount;
 }

}

Class Proxy {

 var $proxy_ip;
 var $proxy_port;

 var $proxy_host;
 var $proxy_auth;
 var $ch;
 var $Content;
 var $USERAGENT = "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)";
 var $ProxyError = 0;
 var $ProxyErrorMsg = '';
 var $TimeOut = 3;
 var $IncludeHeaders = 0;

 function Proxy($host, $username='', $pwd='') {
//initialize class, set host
  $this->proxy_host = $host;
  if (strlen($username) > 0 || strlen($pwd) > 0) {
   $this->proxy_auth = $username.":".$pwd;
  }
 }

 function CURL_PROXY($cc) {
  if (strlen($this->proxy_host) > 0) {
   curl_setopt($cc, CURLOPT_PROXY, $this->proxy_host);
   if (strlen($this->proxy_auth) > 0)
    curl_setopt($cc, CURLOPT_PROXYUSERPWD, $this->proxy_auth);
  }
 }

 function DoRequest($url) {
  $this->ch = curl_init();
  curl_setopt($this->ch, CURLOPT_URL, $url);
  $this->CURL_PROXY($this->ch);
  curl_setopt($this->ch, CURLOPT_HEADER, $this->IncludeHeaders); //include response headers in the output ?

  curl_setopt($this->ch, CURLOPT_USERAGENT, $this->USERAGENT);
  curl_setopt($this->ch, CURLOPT_RETURNTRANSFER, 1);
  curl_setopt($this->ch, CURLOPT_TIMEOUT, $this->TimeOut);
  $this->Content = curl_exec($this->ch);

//if an error occurs, store the number and message
  if (curl_errno($this->ch))
   {
    $this->ProxyError = curl_errno($this->ch);
    $this->ProxyErrorMsg = curl_error($this->ch);
   }
 }

}

There is not much to say about it, it is just a rough outline. I would prefer elite (level 1) proxies, but for now this will have to do.
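The listing does gloss over one thing : it expects a small database wrapper, $serpdb, to come out of init.php. That file is not part of this post, so here is a minimal sketch (my assumption, replace with your own code) of the kind of class the calls like $serpdb->query() plus mysql_fetch_assoc() imply :

//minimal sketch of the $serpdb wrapper the proxy classes expect (assumption,
//the real init.php is not shown) : connect once, hand back result resources
Class SerpDB {

 var $conn;

 function SerpDB($host, $user, $pwd, $db) {
  $this->conn = mysql_connect($host, $user, $pwd);
  mysql_select_db($db, $this->conn);
 }

 function query($sql) {
//callers run mysql_fetch_assoc / mysql_fetch_object / mysql_num_rows on this
  return mysql_query($sql, $this->conn);
 }
}

$serpdb = new SerpDB('localhost', 'user', 'password', 'serp');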

RedHat Seo : scraper auto-blogging

Just give us your endpoint and we’ll take it from there, sparky!

I was going to make one of these tools that scrape Google and conjure a full blog out of nowhere, as a Christmas special : RedHat Seo. The rough sketch has arrived, far from perfect, but it does produce a blog and it doesn't even look too shabby. I scraped a small batch of posts off other blogs, keeping the links intact and adding a tribute link. I hope they will pardon me for it.

structure

I use three main classes,

BlogMaker : the application
Target : the blogs you aim for
WPContent : the scraped goodies

…and two support classes

SerpResult : scraped urls
Custom_RPC : a simple rpc-poster

Target blogs have three text files :

file              contents                         maintenance
blog categories   category you post under          manual
blog tags         tags you list on the blog        manual
blog urls         urls already used for the blog   system

routine

The BlogMaker class grabs a result list (up to 1000 urls per phrase) from Google, extracts the urls and stores them in SerpResult objects, scrapes those urls and extracts the entry divs, stores the div contents in the WPContent class (which has some basic functions to sanitize the text), and uses the BlogTarget definitions to post them to the blogs with XML-RPC.
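The GoogleResults() internals are not in this post either; as a rough sketch (an assumption, the helper name scrape_serp_urls and the crude href filter are mine) the result-list grabbing boils down to fetching result pages and pulling the links out :

//rough sketch (assumption) of grabbing a google result list :
//fetch pages of 100 results and keep the external links
function scrape_serp_urls($keyword, $pages=1) {
 $urls = array();
 for($start=0; $start < $pages*100; $start += 100) {
  $html = @file_get_contents('http://www.google.com/search?num=100&start='.$start.'&q='.urlencode($keyword));
  preg_match_all('/href="(http[^"]+)"/i', $html, $m);
  foreach($m[1] as $u) {
//very naive filter : drop google's own links
   if(strpos($u, 'google.') === false) $urls[] = $u;
  }
  sleep(5); //be gentle between result pages
 }
 return array_unique($urls);
}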

usage

//make main instance
$Blog = new BlogMaker("keyword");

//define a target blog, you can define multiple blogs and refer with code
//then add rpc-url, password and user
//and for every target blog three text-files

$T = $Blog->AddTarget(
 'blogcode',
 'http://my.blog.com/xmlrpc.php',
 'password',
 'user',
 'keyword.categories.txt',
 'keyword.tags.txt',
 'keyword.urls.txt'
 );

//read the tags, cats and url text files stored on the server
//all retrieved urls are tested, if the target blog already has that
//scraped url, it is discarded.
$T->CSV_GetTags();
$T->List_GetCats();
$T->ReadURL();

//grab the google result list
//use params (pages, keywords) to specify search
$Blog->GoogleResults();

$a = 0;
foreach($Blog->Results as $BlogUrl) {
  $a++;
  echo $BlogUrl->url;
//see if the url isn't used yet

 if($T->checkURL(trim($BlogUrl->url)) != true) {
   echo '…checking ';
   flush();
//if not used, get the source
   $BlogUrl->scrape();
//check for divs marked "entry", if they aren't there, check "post"
//some blogs use other indications for the content
//but entry and post cover 40%

   $entries = $BlogUrl->get_entries();
   if(count($entries) < 1) {
    echo 'no entries…';
    flush();
    $entries = $BlogUrl->get_posts();
     if(count($entries) < 1) {
      echo 'no posts either…';
//if no entry/post div, mark url as done

      $T->RegisterURL($BlogUrl->url);
     }
   }

   $ct = 0;
   foreach($BlogUrl->WpContentPieces as $WpContent) {
//in the get_entries/get_posts function the fragments are stored
//as wpcontent
    $ct++;

    if($WpContent->judge(2000, 200, 5)) {
     $WpContent->tribute();  //add tribute link
     $T->settags($WpContent->divcontent); //add tags
     $T->postCustomRPC($WpContent->title, $WpContent->divcontent, 1); //1=publish, 0=draft
     $T->RegisterURL($WpContent->url);  //register use of url
     usleep(20000000);  //20 seconds break, for sitemapping
    }
   }
  }
 }
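The get_entries() / get_posts() source is not shown in this post; as a rough idea of what they do (an assumption based on the comments above, not the actual code), here is a minimal sketch that collects the inner html of every div whose class contains "entry" :

//naive sketch (assumption) : collect the inner html of every <div>
//whose class attribute contains "entry" in the scraped source
function extract_entry_divs($html) {
 $pieces = array();
 $dom = new DOMDocument();
 @$dom->loadHTML($html);              //suppress warnings on sloppy blog markup
 $xpath = new DOMXPath($dom);
 foreach($xpath->query('//div[contains(@class,"entry")]') as $div) {
  $inner = '';
  foreach($div->childNodes as $child) {
   $inner .= $dom->saveXML($child);   //keeps the links intact
  }
  $pieces[] = $inner;
 }
 return $pieces;
}

Swap "entry" for "post" and you have the fallback; the real classes presumably wrap these fragments in WPContent objects rather than returning plain strings.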

notes

  • xml-rpc needs to be activated explicitly on the WordPress dashboard under Settings/Writing (a sketch of the RPC call itself follows after this list).
  • categories must be present in the blog
  • url file must be writable by the server (777)
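For reference, a minimal sketch of what a simple rpc-poster like Custom_RPC / postCustomRPC() presumably boils down to : a metaWeblog.newPost call against the blog's xmlrpc.php. This assumes PHP's xmlrpc extension for building the request; the function name rpc_new_post is just for illustration, the real class may well assemble the XML by hand :

//sketch (assumption) of a bare-bones metaWeblog.newPost call,
//requires the php xmlrpc extension for xmlrpc_encode_request()
function rpc_new_post($rpcurl, $user, $pwd, $title, $body, $publish=1) {
 $post = array('title' => $title, 'description' => $body);
 $request = xmlrpc_encode_request('metaWeblog.newPost', array(1, $user, $pwd, $post, (bool)$publish));
 $ch = curl_init($rpcurl);
 curl_setopt($ch, CURLOPT_POST, 1);
 curl_setopt($ch, CURLOPT_POSTFIELDS, $request);
 curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
 curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
 $response = curl_exec($ch);
 curl_close($ch);
 return $response;  //post id on success, an xml fault otherwise
}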

It seems WordPress rebuilds the sitemap as a background process; the standard Google XML Sitemaps plugin will attempt to rebuild it in the cache (which takes anywhere between 2 and 10 seconds), and apart from building a sitemap the posts also get pinged around. Giving the install 10 to 20 seconds between posts allows all the hooked-in functions to complete.

period

That’s about all,
consider it GPL; I added some comments in the source but I will not develop this any further. A MySQL-backed blogfarm tool (euphemistically called a ‘publishing tool’) is more interesting, and besides, I am off to the wharves to do some painting.

if you use it, send some feedback,
merry christmas dogheads