Problems of Social Bookmarking Today – Part One

Can you imagine the on-line world without reddit, digg, dzone and the other Web2.0 social bookmarking sites? Sure, you can – they were not always around, and nobody missed them before they appeared. However, since their debut, I guess no serious geek can exist without them anymore. The functionality and information richness these sites offer is unquestionable – however, more and more flaws and problems are popping up as people learn to use, monetize, abuse, trick and tweak them. I would like to present my current compilation of woes and worries, sprinkled with a few suggestions on how to handle them.

DISCLAIMER: this is my subjective view on these matters – I am not claiming the things presented here are objectively true – this is just my personal perception.

General Problems

I read a nice quote recently – unfortunately I cannot find it right now. It goes something like this: “Time is nature’s way of preventing everything from happening all at once. It does not seem to work lately…”

Though the notion of a social bookmarking site did not even exist when this quote was thought up, it captures the essential problem of these sites very well: too many things are happening all at once, and it is therefore impossible to process the amount of information pouring in from everywhere…

  • Information overload – I think this fact is not really a jaw-dropping, mind-boggling discovery – but since it is the root of all evil (not just in the context of Web2.0 or social sites, but in general for the whole web today) it deserves to be presented as the first problem in this list. Today it is almost certain that the thing you are looking for is on the Web (whether legally or illegally) – it is a much bigger problem to actually find it! This applies to the social sites as well. A site like digg gets about 5000 article submissions every day – and even if you restrict yourself to the front page stories, it is virtually impossible to keep up with them unless you spend a given (not so short) amount of time every day just browsing the site. O.K., this is not a Web2.0 or social site problem per se, but a quite hard one to solve nevertheless.

    Proposed solution: I don’t have the foggiest idea 🙂 Basically an amalgam of the solutions presented in the next points…

  • Articles get pushed down quickly – which is inevitable and not even a terrible problem in itself, since this is how it should work – the worse part is that the good stuff sinks just as fast as the crap – i.e. every new article hitting the front page makes all the others sink by 1 place.

    Proposed solution: The articles could be weighted (+ points for more votes, more reads, more comments etc.; – points for thumbs down, spam reports, complaints etc.) and the articles should sink relative to each other at any given moment – i.e. the weight should be recalculated dynamically all the time, and the hottest article should be the most sticky, while the least-voted-for should exchange its place with the upcoming, more interesting ones.
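
The dynamic weighting idea above could be sketched in Ruby roughly like this (all the coefficients are made up for illustration – real weights would have to be tuned against actual site data):

```ruby
# Toy scoring function for dynamic front-page ranking: positive signals
# add points, negative signals subtract them, and an age-based decay
# makes stale articles sink relative to fresh, equally-voted ones.
def article_weight(votes:, reads:, comments:, thumbs_down:, spam_reports:, age_in_hours:)
  positive = votes * 10 + reads * 1 + comments * 5
  negative = thumbs_down * 8 + spam_reports * 50
  decay    = 1.0 / (1 + age_in_hours / 24.0)  # roughly halves per day
  (positive - negative) * decay
end

hot   = article_weight(votes: 120, reads: 900, comments: 30,
                       thumbs_down: 5, spam_reports: 0, age_in_hours: 6)
stale = article_weight(votes: 120, reads: 900, comments: 30,
                       thumbs_down: 5, spam_reports: 0, age_in_hours: 72)
# the same article weighs less as it ages, so fresher articles with
# equal votes can overtake it on the front page
```

Recomputing such a weight periodically (rather than only at submission time) is what would make the hottest article the most sticky.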

  • Good place, wrong time – if you submitted a very interesting article and the right guys did not see it at the right time, it will inevitably sink and never make it to the front page. It is possible that had you submitted it half a day later, it would have been noticed by the critical mass needed to make it to the front page – the worst thing is that you never even know whether this is so.

    Proposed solution: Place a digg/dzone button after or before the article – this way, people will have the possibility to vote on your article after reading it, no matter how they got to your site and when. The article will stay on your site forever – whereas on digg it will occupy a relevant place for just a few hours.

  • URL structure problems – sometimes the same document is reachable through various URLs, which confuses most of the systems. The most frequent manifestations of this problem are: URLs with and without www, changes of the URL style (from /?p=4 to /2002/4/5/stuff.html) or redirects, among other things.

    Proposed solution: Decide on a URL scheme and use it forever (generally, /?p=4 is not a recommended style – /2002/4/5/post.html and other semantically meaningful URLs are preferred (see Cool URIs never change)), and set your web server to redirect the www variant of your domain to the non-www one (or the other way around). The sites could also remedy the situation by checking not just the URL but also the content of the document (like digg does just before submission).
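
The normalization part of this advice could be sketched in Ruby with the standard uri library (a naive canonicalizer that only handles case, the www prefix and trailing slashes – a real site would need more rules):

```ruby
require 'uri'

# Map every variant of a URL (www/no-www, trailing slash, mixed case)
# onto one canonical form before storing or comparing bookmarks.
def canonical_url(raw)
  uri   = URI.parse(raw.strip)
  host  = uri.host.to_s.downcase.sub(/\Awww\./, '')
  path  = uri.path.empty? ? '/' : uri.path
  path  = path.chomp('/') unless path == '/'
  query = uri.query ? "?#{uri.query}" : ''
  "#{uri.scheme.downcase}://#{host}#{path}#{query}"
end

canonical_url('HTTP://www.Example.com/2002/4/5/post.html/')
# => "http://example.com/2002/4/5/post.html"
```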


Tagging Problems

Tagging is a great way of describing the meaning of an item (in our case, a document) in a concise and easy to understand way – from a good set of tags you should immediately know what the article is about just by reading them. The idea is not really brand new – scientific papers have been using this technique for ages (much like PageRank: long before PageRank was implemented by the google guys, it was an accepted and commonly used technique to rank scientific papers by the number of times they are cited in other relevant works).

Some sites have a predefined, finite set of tags (like dzone) while some allow custom ones – usually with suggestions based on the tags of others or on keywords extracted from the article. The problem with a predefined tag set is that you are restricted to the tags offered by the site – though this is sometimes good, because it gives you some guidelines about what is accepted on the site. There are much more interesting problems with sites that allow custom tags:

  • No commonly accepted, uniform tagging conventions – some of these sites accept space separated tags, some quoted ones, and some do not require or recommend any specific format. This is again a source of confusion, even inside the same system. Consider these examples:

    ruby on rails
    "ruby on rails"
    ruby rails

    and I could come up with tons of other ones. The problem is that all these tags are trying to convey the same information – namely that the article is about ruby on rails. Of course this is absolutely clear to any human being – however, much less so for a machine.

    Proposed solution: It would be beneficial to agree on one accepted tagging convention (even if you cannot really force people to use it). The sites could use (even more) heuristics to turn tags with the same meaning into one. For example, if the user has a lot of ruby and rails bookmarks, and tags something with ‘rails’, it is very likely that the meaning of the tag is ‘ruby on rails’ etc.
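
A trivial sketch of such a heuristic in Ruby (the synonym table, the threshold and the canonical form are all invented for illustration):

```ruby
# Known variants are mapped onto one canonical tag; a short, ambiguous
# tag like 'rails' is resolved using the user's existing tags.
CANONICAL_TAGS = {
  'ruby on rails'   => 'ruby-on-rails',
  '"ruby on rails"' => 'ruby-on-rails',
  'ruby rails'      => 'ruby-on-rails',
  'rubyonrails'     => 'ruby-on-rails'
}.freeze

def normalize_tag(tag, user_tags = [])
  key = tag.downcase.strip
  return CANONICAL_TAGS[key] if CANONICAL_TAGS.key?(key)
  # if the user already has many ruby-related tags, a lone 'rails'
  # most likely means ruby-on-rails
  return 'ruby-on-rails' if key == 'rails' &&
                            user_tags.count { |t| t.include?('ruby') } > 5
  key
end
```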

  • Too many tags and no relations between them – I think everybody has, or at least has seen, a large bookmark farm. The problem with the tags at this point is that there are a lot of them, and they are presented in a flat structure, without any relations between them. (O.K., there is the tag cloud, but it is more of an eye candy in this sense.) With a really large number of tags (say hundreds of them) the whole thing can become really cumbersome.

    Proposed solution: Visualization could help a lot here. Check out this image:

    Clustered Tag Graph

    Example of a Clustered Tag Graph

    I think such a representation would make the whole thing easier, mainly if it were interactive (i.e. if you clicked the tag ‘ActiveRecord’, the graph would change to show the tags related to ‘ActiveRecord’). The idea is that all of your tags should be clustered (where related ones belong to one cluster – the above image is an example of a toread-ruby cluster) and the big graph should consist of the clusters, with each cluster’s main element highlighted for easy navigation. If you clicked a cluster, it would zoom in etc.

  • Granularity of tagging – this is a minor issue compared to the others, but I would like to see it nevertheless: it should be possible to mark and tag paragraphs or other smaller portions of the document, not just the whole document itself. Imagine a long tutorial primarily about Ruby metaprogramming. Say there is an exceptionally good paragraph on unit testing, which is about 0.1% of the whole text. Therefore it might be wrong to tag it with ‘unit testing’ since it is not about unit testing – however, I would like to be able to capture the outstanding paragraph.

    Proposed solution: Again, a visual representation could help very much here. I would present a thumbnail of the page, big enough to make distinguishing objects (paragraphs, images, tables) possible, but small enough not to be clumsy. Then the user would have the possibility to visually mark the relevant paragraph (with a pen tool), and tag just that.
    This should result in a bookmark tagged like this:

    Granular tagging

    Example of More Granular Tagging

    On lookup, you will see the relevant lines marked and will be able to orient yourself faster.
    To some people this may look like overkill – however, nobody forces you to use it! If you would like to stick with the good old tag-one-document method, it’s up to you – however, if you choose to also tag some documents like this, you have the possibility.

  • Tagging a lot of things with the same tag is the same as tagging with none – consider that you have 500 items tagged with ‘Ruby’. True, you no longer have to search the whole Web, which is much bigger than 500 documents – but it is still a real PITA to find something in 500 documents.

    Proposed solution: the clustered tag graph could help with navigation – usually you are not looking just for ‘Ruby’ things but for ‘Ruby and testing and web scraping’, for example. Advanced search (coming in vol. 2), where you can specify which tags should be looked up and also what the document should contain, could remedy the problem, too.

  • Common ontologies, synonyms, typo corrections – O.K., these might seem to be rocket science compared to the other, simpler missing features – however, I think their correct implementation would mean a great leap for the usability of these systems. Take for example web scraping, my present area of interest. People are tagging documents dealing with web scraping with the following tags: web scraping, screen scraping, web mining, web extraction, data extraction, web data extraction, html extraction, html mining, html scraping, scraping, scrape, extract, html data mining – and that is just off the top of my head; in fact there are many more.
    It would solve much confusion if all these terms were represented by a common expression – say ‘web scraping’.

    Proposed solution: this is a really hard nut to crack, stemming from the fact that e.g. screen scraping can mean different things to different people. However, a heuristic could look up all the articles which are tagged with e.g. web scraping – and find the synonyms by going through all those articles. It is not really hard to find out that ‘web scraping’ and ‘ruby’ or ‘subversion’ are not synonyms – however, after scanning enough documents, the link between ‘web scraping’ and ‘html scraping’ or ‘web data mining’ should be found by the system. The synonyms could also be exploited via the clustered tag graph.
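
The co-occurrence idea could be sketched like this (Jaccard overlap of the sets of documents carrying each tag – a deliberately simple measure, chosen purely for illustration):

```ruby
# Two tags that label largely the same documents are synonym candidates;
# tags that never co-occur (like 'web scraping' and 'subversion') are not.
def tag_similarity(docs_by_tag, tag_a, tag_b)
  a = docs_by_tag[tag_a] || []
  b = docs_by_tag[tag_b] || []
  return 0.0 if a.empty? || b.empty?
  (a & b).size.to_f / (a | b).size
end

docs_by_tag = {
  'web scraping'  => [1, 2, 3, 4],
  'html scraping' => [2, 3, 4, 5],
  'subversion'    => [6, 7]
}
tag_similarity(docs_by_tag, 'web scraping', 'html scraping')  # => 0.6
tag_similarity(docs_by_tag, 'web scraping', 'subversion')     # => 0.0
```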


Voting Problems

The idea of voting for articles as a means of getting them onto the front page (as opposed to editor-moderated, closed systems) seemed revolutionary, and definitely the right way to rank the articles in a people-centered way from the beginning – after all, it is really simple: people vote on stuff that they like and find interesting, which means the most interesting articles get to the front page. Or do they? Let’s examine this a bit…

  • Back to the good old web 1.0 – when Tim O’Reilly coined the term Web2.0 in 2005, he presented a few examples of typical web1.0 vs web2.0 solutions, for example: Britannica Online vs Wikipedia, mp3.com vs napster etc. I wonder why he did not come up with slashdot (content filtered by editors) vs digg (content voted up by people). At that time everybody was so euphoric about Web2.0 that no one would question this claim (neither did I at the time).

    However, it seems to me that after these sites evolved a bit, there is basically not that much difference between the two: according to this article, Top 100 Digg Users Control 56% of Digg’s HomePage Content. So instead of 10-or-so professionals, 100-or-so amateurs decide about the content of digg. So where is that enormous difference after all? Wisdom of crowds? Maybe wisdom of a few hundred people. Because of the algorithms used, if you do not have that much time to submit, digg, comment and look for articles all the time (read: a few hours a day) like these top diggers do, your vote won’t count for much anyway. Digg (and, as I read, reddit too – and possibly sooner or later this fate awaits more sites) became a place where “Everyone is equal, but some are more equal than others…”.

    Proposed solution: None. I guess I will be attacked by a horde of web2.0-IloveDigg fanatics claiming that this is absolutely untrue, and since I have no real proof of this point (and don’t have the time/tools to produce one) I am not going to argue here.

  • Too easy or too hard to get to the front page – The consequence of some of the above points (information overload; good place, wrong time; back to the good old web 1.0) is that if the limit to get to the front page is too high, it is virtually impossible to reach (unless you are part of a digg cartel, or you have a page which gets a lot of traffic anyway plus a digg button). However, if the count is too low (hence it is too easy to get to the front page), people might be tempted to trick the system (by creating more accounts and voting for themselves, for example) just to get there – which will result in a lot of low quality sites making it to the front page. Though I don’t own a social bookmarking site, I bet that finding the right height of the bar is extremely hard – and it even has to change from time to time in response to more and more submissions, SEO tricks etc.

    Proposed solution: A well-balanced mixture of silicon and carbon. Machines can do most of the job by analysing logs, the activities of the user on the page, thumbs up/down received from the user, articles submitted/voted/commented on, and other types of usage mining. However, machines alone are definitely not enough (since they don’t have the foggiest idea about what’s in an article) – a lot of input is needed from humans, too: on the one side from the users (voting, burying, peer review etc.), and from the editors as well. However, I think all of this is done already – and the result is still not unquestionably perfect, I guess mainly because of the information overload – 5000 submissions a day (or 150,000 a month) is very hard to deal with…

  • Votes of experts should count more – In my opinion, it is not right that if a 12 year old script kiddie votes down an article and an expert with 20 years of experience votes it up, their votes are taken into account with equal weight. OK, I know there is peer review, and if the 12 year old makes a lot of stupid moves, he will be modded down – so he will open a new account and begin the whole thing again from scratch. On the other hand, the expert maybe does not have time to hang around on digg and similar sites (because he is hacking up the next big thing instead of browsing) and therefore might not get a lot of recognition from his peers on the given social site – which shows that he is an infrequent digg/dzone/whatever user, but tells nothing about his tech abilities.

    Proposed solution: I think it is too late for this with the existing sites, but I would like to see a community with real tech people – developers, entrepreneurs and hackers of all sorts. How could this be done? Well, people should show what they have done so far – their blog, released open source software, mailing list contributions, sites they designed, or any other proof that they are also doing something and not just criticizing others (it seems to me that the most abrasive people on-line are always those who do not have a blog, did not hack up something relevant and did not prove their abilities in any relevant way). This would also ensure that only one account belongs to one physical person. I know that this may sound like too much work (on both the site maintainer’s and the users’ side) but it could lay the foundation for a real tech-focused (or xyz-focused) social site. Of course this would not lock out people without any tangible proof of their skills – however, their votes would count less.
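
To make the idea concrete, here is a back-of-the-envelope version of reputation-weighted voting in Ruby (every weight and cap below is invented purely for the sake of the example):

```ruby
# Each vote is weighted by a reputation score derived from verifiable
# work: released projects, blog posts, mailing list activity. Caps keep
# one prolific blogger from outweighing everybody else forever.
def vote_weight(open_source_projects: 0, blog_posts: 0, list_messages: 0)
  1.0 + open_source_projects * 0.5 +
        [blog_posts, 50].min * 0.02 +
        [list_messages, 500].min * 0.002
end

expert = vote_weight(open_source_projects: 3, blog_posts: 40, list_messages: 200)
newbie = vote_weight
# an article's score would then be the sum of the weights of its
# upvotes minus the sum of the weights of its downvotes
```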

  • Everything can be hot only once – Most of the articles posted to the social bookmarking sites are ‘seasonal’ (i.e. they are interesting just for a given time period, or in conjunction with something hot at the moment) or news (like announcements, which are interesting for just a few days). On the other hand, there are also articles which stay relevant much longer – maybe months, years or even decades. However, because of the nature of these sites, they are out of luck – they can have their few days of fame only once.
    One could argue that this is how it should be – however, I am not so sure. Take for example my popular article on Screen scraping in Ruby/Rails: I am getting a few thousand visitors from google and Wikipedia every month (which proves that the article is still quite relevant) and close to zero from all the social sites, despite the fact that it was quite hot upon its arrival. Moreover, I have updated it since its first appearance with current information, so it is not even the same article anymore, but a newer, more relevant one.

    Proposed solution: Let me demonstrate this on an example, where a certain amount of recent bookmarks is needed to get to the ‘popular’ section (something similar to the notion of the front page on digg-style sites). In my opinion, this count should also depend on the number of bookmarks already received. Let’s see an example: suppose a brand new article needs 50 recent bookmarks to get to the popular page. After it gets there and a great stir is created around it, it gets bookmarked 300 times. Then, for the next 50 days it does not receive that much attention – it gets 1 bookmark a day on average, so it has 350 votes altogether. However, after these 50 days, for some reason (e.g. some related topic gets hot) 30 people bookmark it in a few hours. In my opinion, it should become popular again – and moreover, with these 30 (and not 50) bookmarks – because it was already popular once. This metric should then be adjusted after it gets popular once again – if this happens and people don’t really bookmark it anymore despite it being featured on /popular, it should again require 50 (or more) votes.
    On digg style pages I would create a ‘sticky’ section for articles that are informative and interesting over a longer timespan. I would add another counter to the article (‘stickiness’) which would be voted up by both editors and users in a similar way as ‘hotness’ is now. Of course it is very subjective what should be sticky – it is easy to tell that news items are not sticky, but harder to decide in the case of other material.
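
The adaptive threshold could be sketched like this (the numbers mirror the example above and are illustrative only):

```ruby
# An article that has already been popular once needs fewer fresh
# bookmarks to return to the popular page; if it then flops there,
# the bar is raised back to the original level.
def bookmarks_needed(base: 50, times_popular: 0, flopped_after_repeat: false)
  return base if times_popular.zero?
  return base if flopped_after_repeat  # raise the bar back up
  (base * 0.6).round                   # easier the second time around
end

bookmarks_needed                                                # => 50
bookmarks_needed(times_popular: 1)                              # => 30
bookmarks_needed(times_popular: 2, flopped_after_repeat: true)  # => 50
```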

Since I never had the chance to try these ideas in practice, I can’t tell how many of them (and to what extent) would work in real life. I guess there is no better way to find out than to actually implement these features… and the other ones coming in vol. 2!

In the next part I would like to take a look at the remaining problems, connected with searching and navigation, comments and discussion, the human factor, and miscellaneous problems which did not fit into the other categories. Suggestions are warmly welcome, so if there are some interesting ideas, I will try to incorporate them into the next (or this) installment!

Data Extraction for Web 2.0: Screen Scraping in Ruby/Rails, Episode 1

This article is a follow-up to the quite popular first part on web scraping – well, sort of. The relation is closer to that between Star Wars I and IV – i.e., in chronological order, the 4th came first. To continue the analogy, I am probably in the same shoes as George Lucas was after creating the original trilogy: the series became immensely popular and there was demand for more – in both quantity and depth.

After I realized – not exclusively, but partly – through the success of the first article that there was a need for this sort of stuff, I began to work on the second part. As stated at the end of the previous installment, I wanted to create a demo web scraping application to show some advanced concepts. However, I left a major coefficient out of my future-plan equation: the power of Ruby.

Basically this web scraping code was my first serious Ruby program: I had come to know Ruby just a few weeks earlier, and I decided to try it out on some real-life problem. After hacking on this app for a few weeks, suddenly a reusable web scraping toolkit – scRUBYt! – began to materialize, which caused a total change of plan: instead of writing a follow-up, I decided to finish the toolkit, sketch a big picture of the topic, place scRUBYt! inside this frame and illustrate the theoretical things described here with it.

The Big Picture: Web Information Acquisition

The whole art of systematically getting information from the Web is called ‘Web information acquisition’ in the literature. The process consists of 4 parts (see the illustration), which are executed in this order: Information Retrieval (IR), Information Extraction (IE), Information Integration (II) and Information Delivery (ID).

Information Retrieval

Navigate to and download the input documents which are the subject of the next steps. This is probably the most
intuitive step to make – clearly, the information acquisition system has to be pointed to the document which contains the data first, before it can perform the actual extraction.

The absolute majority of the information on the Web resides in the so-called deep web – backend databases and different legacy data stores whose content is not contained in static web documents. This data is accessible via interaction with web pages (which serve as a frontend to these databases) – by filling in and submitting forms, clicking links, stepping through wizards etc. A typical example could be an airport web page: an airport has all the schedules of the flights it offers in its databases, yet you can access this information only on the fly, by submitting a form containing your concrete request.

The opposite of the deep web is the surface web – static pages with a ‘constant’ URL, like the very page you are reading. In such a case, the information retrieval step consists of just downloading the URL. Not a really tough task.

However, as I said two paragraphs earlier, most of the information is stored in the deep web – different actions, like filling in input fields, setting checkboxes and radio buttons, clicking links etc., are needed to get to the actual page of interest, which can then be downloaded as the result of the navigation.

Besides the fact that this is not trivial to do automatically from a programming language (just because of the nature of the task), there are a lot of pitfalls along the way, stemming from the fact that the HTTP protocol is stateless: the information provided with one request is lost when making the next one. To remedy this problem, sessions, cookies, authorizations, navigation histories and other mechanisms were introduced – so a decent information retrieval module has to take care of these as well.

Fortunately, in Ruby there are packages offering exactly this functionality. Probably the most well-known is WWW::Mechanize, which is able to automatically navigate through Web pages as a result of interaction (filling in forms etc.) while keeping cookies, automatically following redirects and simulating everything else that a real user (or the browser in response to them) would do. Mechanize is awesome – from my perspective it has one major flaw: you cannot interact with JavaScript websites. Hopefully this feature will be added soon.

Until that happy day, if someone wants to navigate through JS powered pages, there is a solution: (Fire)Watir. Watir is capable of doing similar things to Mechanize (I never did a head-to-head comparison, though it would be interesting) with the added benefit of JavaScript handling.

scRUBYt! comes with a navigation module, which is built upon Mechanize. In future releases I am planning to add FireWatir, too (precisely because of the JavaScript issue). scRUBYt! is basically a DSL for web scraping with a lot of heavy lifting behind the scenes. Though the real power lies in the extraction module, there are some goodies in the navigation module, too. Let’s see an example!

Goal: Go to the site’s home page and type ‘Ruby’ into the search text field. To narrow down the results, click ‘Books’, then, for further narrowing, ‘Computers & Internet’ in the left sidebar.


  fetch           ''
  fill_textfield  'field-keywords', 'ruby'
  click_link      'Books'
  click_link      'Computers & Internet'

Result: This document.

As you can see, scRUBYt’s DSL hides all the implementation details, making the description of the navigation as easy as possible. The result of the above few lines is a document – which is automatically fed into the scraping module, but this is already the topic of the next section.

Information Extraction

I think there is no need to write about why one needs to extract information from the Web today – the ‘how’ is a much more interesting question.

Why is Web extraction such a tedious task? Because the data of interest is stored in HTML documents (after navigating to them, that is), mixed with other stuff like formatting elements, scripts or comments. Because the data lacks any semantic description, a machine has no idea what a web shop record is or what a news article might look like – it just perceives the whole document as a soup of tags and text.

Querying objects in systems which are formally defined, and thus understandable for a machine, is easy: for instance, if you want to get the first element of an array in Ruby, you can do it easily like this:

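Either of these is idiomatic:

```ruby
# Ruby knows exactly what an Array is, so asking for its first
# element is a single, unambiguous method call.
arr = ['first', 'second', 'third']
arr.first  # => "first"
arr[0]     # => "first"
```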

Another example for a machine-queryable structure could be an SQL table: to pull out the elements matching the given criteria, all that needs to be done is to execute an SQL query like this:

SELECT name FROM students WHERE age > 25

Now, try to do similar queries for a Web page. For example, suppose that you already navigated to an ebay page by searching for the term ‘Notebook’. Say you would like to execute the following query: ‘give me all the records with price lower than $400’ (and get the results into a data structure of course – not rendered inside your browser, since that works naturally without any problems).

The query was definitely an easy one, yet without implementing a custom script that extracts the needed information and saves it into a data structure (or using something like scRUBYt! – which does exactly this for you) you have no chance of getting this information out of the source code.

There are ongoing efforts to change this situation – most notably the semantic Web, common ontologies, different Web2.0 technologies like taxonomies, folksonomies, microformats or tagging. The goal of these techniques is to make the documents understandable for machines to eliminate the problems stated above. While there are some promising results in this area already, there is a long way to go until the whole Web will be such a friendly place – my guess is that this will happen around Web88.0 in the optimistic case.

However, at the moment we are only at version 2.0 (at most), so if we would like to scrape a web page for whatever reason *today*, we need to cope with the difficulties we are facing. I wrote an overview on how to do this with the tools available in Ruby (update: there is a new kid on the block – HPricot – which is not mentioned there).

The rough idea behind those packages is to parse the Web page source into some meaningful structure (usually a tree), then provide a querying mechanism (like XPath, CSS selectors or some other tree navigation model). You could think now: ‘A-ha! So actually a web page *can* be turned into something meaningful for machines, and there *is* a formal model to query this structure – so where is the problem described in the previous paragraphs? You just write queries like you would in the case of a database, evaluate them against the tree or whatever, and you are done’.

The problem is that the machine’s understanding of the page and the human way of thinking about querying this information are entirely different, and there is no formal model (yet) to eliminate this discrepancy. Humans want to scrape ‘web shop records with Canon cameras with a maximal price of $1000’, while the machine sees this as ‘the third <td> tag inside the eighth <tr> tag inside the fifth <table> … (lots of other tags) inside the <body> tag inside the <html> tag, where the text of the seventh <td> tag contains the string ‘Canon’ and the text of the ninth <td> is not bigger than 1000’ – and to even get the value 1000 you have to use a regular expression or something to get rid of the most probably present currency symbol and other possible additional information.
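
Just to illustrate the regular-expression part of that query, here is a naive price cleaner in Ruby (the format assumptions – a currency symbol plus comma-separated thousands – are mine):

```ruby
# Strip the currency symbol and thousands separators from a scraped
# price cell so its value can be compared numerically (e.g. < 1000).
def parse_price(raw)
  digits = raw[/[\d.,]+/] or return nil  # first run of digits/commas/dots
  digits.delete(',').to_f
end

parse_price('US $1,299.99')   # => 1299.99
parse_price('$399.00')        # => 399.0
parse_price('free shipping')  # => nil
```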

So why is this so easy with a database? Because the data stored there has a formal model (specified by the CREATE TABLE statement). Both you and the computer know *exactly* what a Student or a Camera looks like, and both of you speak the same language (most probably an SQL dialect).

This is totally different in the case of a Web page. A web shop record, a camera detail page or a news item can look like just about anything, and your only chance to find it on the concrete Web page of interest is to exploit that page’s structure. This is a very tedious task on its own (as I said earlier, a Web page is a mess of real data, formatting, scripts, stylesheet information…). Moreover, there are further problems: for example, web shop records need not be uniform even inside the same page – certain records can miss some cells which others have, or may contain the information on a detail page while others do not, and vice versa – so in some cases, identifying a data model is impossible or very complicated – and I have not even talked about scraping the records yet!

So what could be the solution?

Intuitively, there is a need for an interpreter which understands the human query and translates it to XPath (or whatever querying mechanism a machine understands). This is more or less what scRUBYt! does. Let me explain how – it will be easiest through a concrete example.

Suppose you would like to monitor stock information on Yahoo Finance. This is how I would do it with scRUBYt!:

#Navigate to the page
fetch ''

#Grab the data!
stockinfo do
  symbol 'Dow'
  value  '31.16'
end

Result – an excerpt of the output XML (the extractor finds every record similar to the example, not just ‘Dow’ itself):

      <symbol>Dow</symbol>
      <symbol>S&P 500</symbol>
      <symbol>10-Yr Bond</symbol>

Explanation: I think the navigation step does not require any further explanation – we fetched the page of interest and fed it into the scraping module.

The scraping part is more interesting at the moment. Two things happened here: we have defined a hierarchical structure of the output data (like we would define an object – we are scraping StockInfos which have Symbol and Value fields, or children), and showed scRUBYt! what to look for on the page in order to fill the defined structure with relevant data.

How did I know I had to specify ‘Dow’ and ‘31.16’ to get these nice results? Well, by manually pointing my browser to ‘’, observing an example of the stuff I wanted to scrape – and leaving the rest to scRUBYt!. What actually happens under the hood is that scRUBYt! finds the XPath of these examples, figures out how to extract the similar ones and arranges the data nicely into a result XML (well, there is much more going on, but this is the rough idea). If anyone is interested, I can explain this in a further post.
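What ‘finds the XPath of these examples’ means can be sketched in plain Ruby (standard-library REXML; this is only an illustration of the rough idea, not scRUBYt!’s actual implementation – the markup and the second value are invented):

```ruby
require 'rexml/document'

# A toy page in the spirit of the stock table.
doc = REXML::Document.new(<<-PAGE)
<table><tbody>
  <tr><td><a>Dow</a></td><td><b>31.16</b></td></tr>
  <tr><td><a>S&amp;P 500</a></td><td><b>0.41</b></td></tr>
</tbody></table>
PAGE

# Step 1: locate the element that holds the example text.
example = nil
doc.each_recursive do |el|
  example = el if el.text.to_s.strip == 'Dow'
end

# Step 2: build its absolute XPath by walking up the tree.
steps = []
el = example
while el.parent
  idx = 1
  sib = el.previous_sibling
  while sib
    idx += 1 if sib.is_a?(REXML::Element) && sib.name == el.name
    sib = sib.previous_sibling
  end
  steps.unshift("#{el.name}[#{idx}]")
  el = el.parent
end
path = '/' + steps.join('/')

# Step 3: generalize by dropping the indices, so the same
# pattern matches every similar record on the page.
general = path.gsub(/\[\d+\]/, '')
symbols = REXML::XPath.match(doc, general).map(&:text)
```

Here `path` comes out as `/table[1]/tbody[1]/tr[1]/td[1]/a[1]` and the generalized pattern `/table/tbody/tr/td/a` matches both rows – the same shape of XPath you will see in the exported extractor below.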

You could think now: ‘O.K., this is very nice and all, but you have been talking about *monitoring* and I don’t really see how – the value 31.16 will change sooner or later, and then you have to go back to the page and re-specify the example – I would not call this monitoring’.

Great observation. It’s true that scRUBYt! would not be of much use if changing examples were not handled (unless you wanted to get the data only once, that is) – fortunately, this situation is dealt with in a powerful way!

Once you run the extractor and you are satisfied that the data it scrapes is correct, you can export it. Let’s see what the exported extractor looks like:

#Navigate to the page
fetch ''

#Construct the wrapper
stockinfo "/html/body/div/div/div/div/div/div/table/tbody/tr" do
  symbol "/td[1]/a[1]"
  value "/td[3]/span[1]/b[1]"
end
As you can see, there are no concrete examples any more – the system has generalized the information, and you can use this extractor to scrape the data automatically whenever you like, until the moment the guys at yahoo change the structure of the page – which fortunately does not happen every other day. In that case the extractor has to be regenerated with up-to-date examples (in the future I am planning to add automatic regeneration for such cases) and the fun can begin from the start once again.
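With the generalized extractor in hand, monitoring boils down to re-running it on a schedule and diffing consecutive results. A minimal plain-Ruby sketch of that idea (the hashes stand in for two consecutive runs of the exported extractor; the numbers are invented):

```ruby
# Report the symbols whose value changed between two runs.
def changed_values(previous, current)
  current.select { |symbol, value| previous[symbol] != value }
end

# Results of two hypothetical runs of the exported extractor.
old_run = { 'Dow' => '31.16', 'S&P 500' => '2.57' }
new_run = { 'Dow' => '31.42', 'S&P 500' => '2.57' }

changed = changed_values(old_run, new_run)

# An empty new_run, on the other hand, would be the signal that
# the page structure changed and the extractor needs to be
# regenerated from fresh examples.
```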

This example just scratched the surface of what scRUBYt! is capable of – there are tons of advanced features to fine-tune the scraping process and get exactly the data you need. If you are interested, check out the community page for more information!


The first two steps of information acquisition (retrieval and extraction) deal with the question ‘how to get the data I am interested in?’ (querying). As of the present version (0.2.0), scRUBYt! implements just these two steps – however, to do even these properly, I will need a lot of testing, feedback, bug fixing, stabilization and heaps of new features and enhancements – because, as you have seen, web scraping is not a straightforward thing to do at all.

The last two steps (integration and delivery) address the question ‘what to do with the data once it is collected, and how?’ (orchestration). These facets will be covered in a future installment – most probably when scRUBYt! contains these features as well.

If you liked this article and you are interested in web scraping in practice, be sure to install scRUBYt! and check out the community page for further instructions – the site is just taking off, so there is not too much there yet, but hopefully enough to get you started. I am counting on your feedback, suggestions, bug reports, the extractors you have created etc. to enhance both the site and the scRUBYt! user experience in general. Be sure to share your experience and opinion!
