Meetu Singhal's Blog Site

Posted in Uncategorized by meetusinghal on May 27, 2010

Neither the approval nor the disapproval of any individual or group makes any real difference in the quality of your life. — Guy Finley

Posted in Uncategorized by meetusinghal on May 26, 2010

LOL…. this one is hilarious – RT @TechCrunch Venture Capitalists Get Grilled (And Pitched At Urinals) At #TCDisrupt

Posted in Uncategorized by meetusinghal on May 26, 2010

is having fun listening to the talk about Mark Zuckerberg and Facebook at #TCdisrupt

Posted in Uncategorized by meetusinghal on May 26, 2010

BREAKING: Facebook Announces New Privacy Features

Facebook Open Graph: A new take on semantic web – O’Reilly Radar

Posted in facebook by meetusinghal on May 25, 2010

Facebook Open Graph: A new take on semantic web

Facebook’s Open Graph is both an important step and one that still needs work.

by Alex Iskold | @alexiskold

A few weeks ago, Facebook announced an Open Graph initiative — a move considered to be a turning point not just for the social networking giant, but for the web at large. The company’s new vision is no longer to just connect people. Facebook now wants to connect people around and across the web through concepts they are interested in.

This vision of the web isn’t really new. Its origins go back to the person who invented the web, Sir Tim Berners-Lee. This vision has been passionately shared and debated by the tech community over the last decade. What Facebook has announced as Open Graph has been envisioned by many as the semantic web.

The web of people and things

At the heart of this vision is the idea that different web pages contain the same objects. Whether someone is reading about a book on Barnes and Noble, on O’Reilly or on a book review blog doesn’t matter. What matters is that the reader is interested in this particular book. And so it makes sense to connect her to friends and other readers who are interested in the same book — regardless of when and where they encountered it.

The same is true about many everyday entities that we find on the web — movies, albums, stars, restaurants, wine, musicians, events, articles, politicians, etc. — the same entity is referenced in many different pages. Our brains draw the connections instantly and effortlessly, but computers can’t deduce that an “Avatar” review on one site is talking about the movie also described on a page on another.

The reason it is important for things to be linked is so that people can be connected around their interests and not around websites they visit. It does not matter to me where my friends are reading about “Avatar”, what matters is which of my friends liked the movie and what they had to say. Without interlinking objects across different sites, the global taste graph is too sparse and uninteresting. By re-imagining the web as the graph of things we are interested in, a new dimension, a new set of connections gets unlocked — everything and everyone connects in a whole new way.

A brief history of semantic markups

The problem of building the web of people and things boils down to describing what is on the page and linking it to other pages. In Tim Berners-Lee’s original vision, the entities and relationships between them would be described using RDF. This mathematical language was designed to capture the essence of objects and relationships in a precise way. While it’s true that RDF annotation would be the most complete, it also turns out to be quite complicated.

It is this complexity that the community has attempted to address over the years. A simpler approach called Microformats was developed by Tantek Celik, Chris Messina and others. Unlike RDF, Microformats rely on existing XHTML standards and leverage CSS classes to mark up the content. Critically, Microformats don’t add any additional information to the page; they just annotate the data that is already on the page.
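As an illustrative sketch of that approach (the person and URL here are placeholders, not taken from any real page), an hCard Microformat labels the contact information already visible on the page using nothing but agreed-upon class names:

```html
<!-- hCard Microformat: the visible contact info is annotated purely
     with conventional class names; no extra data is added to the page -->
<div class="vcard">
  <span class="fn">Jane Doe</span>,
  <span class="org">Example Media</span> —
  <a class="url" href="http://example.com/jane">example.com/jane</a>
</div>
```

A parser that knows the hCard vocabulary can extract a structured contact record from this, while ordinary browsers render it as plain text.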

Microformats enjoyed support and wider adoption because of their relative simplicity and focus on marking up existing content. But there are still issues. First, the number of supported entities is limited: the focus has been on marking up organizations, people, events and reviews, but there is no way to mark up, for example, a movie, a book or a song. Second, Microformats are somewhat cryptic and hard to read. There is cleverness involved in figuring out how to do the markup, which isn’t necessarily a good thing.

In 2005, inspired by Microformats, Ian Davis, now CTO of Talis, developed eRDF — a syntax within HTML for expressing a simplified version of RDF. His approach married the canonical concepts of RDF and the idea from Microformats that the data is already on the page. An iteration of Ian’s work, called RDFa, has been adopted as a W3C standard. All the signs point in the direction of RDFa being the solution of choice for describing entities inside HTML pages.
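A minimal RDFa fragment (illustrative only — the Dublin Core vocabulary, book title and URL are example choices, not from any real page) shows how the same in-page text acquires typed properties drawn from a declared vocabulary:

```html
<!-- RDFa: existing page text annotated with vocabulary terms; the dc:
     prefix maps to the Dublin Core vocabulary -->
<div xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.com/books/some-book">
  <span property="dc:title">Some Book</span> by
  <span property="dc:creator">Jane Doe</span>
</div>
```

The `about` attribute ties the properties to a canonical URL, which is exactly the hook needed to link the same entity across different sites.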

Until recently, despite the progress in the markups, adoption was hindered by the fact that publishers lacked the incentive to annotate the pages. What is the point if there are no applications that can take advantage of it? Luckily, in 2009 both Yahoo and Google put their muscle behind marking up pages.

First Yahoo developed an elegant search application called Search Monkey. This app encouraged and enabled sites to take control over how Yahoo’s search engine presented the results. The solution was based on both markup on the page and a developer plugin, which gave the publishers control over presenting the results to the user. Later, Google announced rich snippets. This supported both Microformats and RDFa markup and enabled webmasters to control how their search results are presented.

Still missing from all this work was a simple common vocabulary for describing everyday things. In 2008-2009, with help from Peter Mika from Yahoo research, I developed a markup called abmeta. This extensible, RDFa-based markup provided a vocabulary for describing everyday entities like movies, albums, books, restaurants, wines, etc. Designed with simplicity in mind, abmeta supports declaring single and multiple entities on the page, using both meta headers and also using RDFa markup inside the page.

Facebook Open Graph protocol

The markup announced by Facebook can be thought of as a subset of abmeta because it supports the declaration of entities using meta tags. The great thing about this format is simplicity. It is literally readable in English.

The markup defines several essential attributes — type, title, URL, image and description. The protocol comes with a reasonably rich taxonomy of types, supporting entertainment, news, location, articles and general web pages. Facebook hopes that publishers will use the protocol to describe the entities on pages. When users press the LIKE button, Facebook will get not just a link, but a specific object of the specific type.
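Using the attributes just listed, a movie page’s head section might carry markup along these lines (the URLs and description text are placeholders; the og:* property names are from the published protocol):

```html
<!-- Open Graph protocol meta tags, placed in the page's <head> -->
<meta property="og:type" content="movie" />
<meta property="og:title" content="Avatar" />
<meta property="og:url" content="http://example.com/movies/avatar" />
<meta property="og:image" content="http://example.com/avatar-poster.jpg" />
<meta property="og:description" content="A science-fiction epic set on an alien moon." />
```

When a user clicks LIKE on this page, Facebook receives not just a URL but a typed “movie” object with a title, image and canonical link.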

If all of this computes correctly, Facebook should be able to display a rich collection of entities on user profiles and show you friends who liked the same thing around the web, regardless of the site. So by publishing this protocol and asking websites to embrace it, Facebook clearly declares its foray into the web of people and things — aka, the semantic web.

Technical issues with Facebook’s protocol

As I’ve previously pointed out in my post on ReadWriteWeb, there are several issues with the markup that Facebook proposed.

1. There is no way to disambiguate things. This is quite a miss on Facebook’s part, and it is already resulting in bogus data on user profiles. The ambiguity arises because the protocol lacks secondary attributes for some data types. For example, it is not possible to distinguish a movie from its remake. Typically, such disambiguation would be done using either a director or a year property, but Facebook’s protocol does not define these attributes. This leads to duplicates and dirty data.

2. There is no way to define multiple objects on the page. This is another rather surprising limitation, since previous markups, like Microformats and abmeta, support this use case. Of course if Facebook only cares about getting people to LIKE pages so that they can do better ad targeting, then having multiple objects inside the page is not necessary. But Facebook claimed and marketed this offering as semantic web, so it is surprising that there is no way to declare multiple entities on a single page. Surely a comprehensive solution ought to do that.

3. An open protocol can’t be closed. Finally, Facebook has done this without collaborating with anyone. For something to be rightfully called an Open Graph Protocol, it should be developed in open collaboration with the web. Surely Google, Yahoo!, the W3C and even small startups playing in the semantic web space would have good things to contribute here.

It sadly appears that getting the semantic web elements correct was not the highest priority for Facebook. Instead, the announcement seems to be a competitive move against Twitter, Google and others with the goal to lock-in publishers by giving them a simple way to recycle traffic.

Where to next?

Despite the drawbacks, there is no doubt that Facebook’s announcement is a net positive for the web at large. When one of the top companies takes a 180-degree turn and embraces a vision that’s been discussed for a decade, everyone stops and listens. The web of people and things is now both very important and a step closer. The questions are: What is the right way? And how do we get there?

For starters, it would be good to fill in some holes in Facebook Open Graph. Whether it is the right way overall or not, at least we need to make it complete. It is important to add support for secondary attributes necessary for disambiguation and also, important to add support for multiple entities inside the page (even if there is only one LIKE button on the whole page). Both of these are already addressed by Microformats and abmeta, so it should be easy to fix.
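To make the disambiguation fix concrete, a hypothetical extension (the movie:year and movie:director property names are invented here for illustration and are not part of Facebook’s protocol) could distinguish a film from its remake:

```html
<!-- Real og:* tags plus hypothetical secondary attributes -->
<meta property="og:type" content="movie" />
<meta property="og:title" content="Some Remade Movie" />
<meta property="movie:year" content="1984" />          <!-- invented property -->
<meta property="movie:director" content="Jane Doe" />  <!-- invented property -->
```

With a year and a director attached, two LIKEs of a film and its remake would land on two distinct objects instead of being merged into one.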

Beyond technical issues, Facebook should open up this protocol and make it owned by the community, instead of being driven by one company’s business agenda. A true roundtable with major web companies, publishers, and small startups would result in a correct, comprehensive and open protocol. We want to believe that Facebook will do the right thing and will collaborate with the rest of the web on what has been, for many of us, important work spanning years. The prospects are exciting, because we just made a giant leap. We just need to make sure we land in the right place.

Facebook Like Box – enables your site visitors to like your Facebook page without leaving your website

Posted in facebook by meetusinghal on May 24, 2010

Posted in Uncategorized by meetusinghal on May 24, 2010

Catch TechCrunch Disrupt live all the time here #TCDisrupt

Posted in Uncategorized by meetusinghal on May 22, 2010

“Trying to get everyone to like you is a sign of mediocrity.” – Colin Powell

Posted in Uncategorized by meetusinghal on May 21, 2010

Android’s daily activation run rate (devices per day) has now passed 100,000 worldwide. So now who is competing with Android?

Google’s WebM draws praise, critiques

Posted in Uncategorized by meetusinghal on May 20, 2010

(05-20) 10:57 PDT — The news this week of Google’s open-source release of the HD video codec VP8 plays into the ongoing debate over which video codec browsers can use to display high-definition video without a plug-in, such as Adobe Flash or Microsoft Silverlight.

Google obtained the codec when it acquired On2 Technologies in February. Under the name of WebM, Google, along with other contributors, will maintain the source code, specifications and application programming interfaces of VP8.

Observers have noted that should Google release VP8 as an open standard, it could potentially resolve the performance and legal questions surrounding other high-definition video codecs that could be used with the HTML5 standard.

Thus far, browser adoption of VP8 has been quick — at least when market leader Microsoft is not factored in — though performance concerns have arisen about the codec.

Both Opera and Mozilla have built test versions of their browsers that can run videos with the .webm suffix, as does a beta version of Google’s own Chrome browser. Adobe has also pledged that Flash will be able to run VP8 video.
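In those test builds, playing a VP8 file requires nothing more than the HTML5 video element (the file name and dimensions here are placeholders):

```html
<!-- HTML5 video with a WebM/VP8 source; no Flash or Silverlight plug-in -->
<video controls width="640" height="360">
  <source src="clip.webm" type="video/webm" />
  Your browser does not support WebM playback.
</video>
```

Browsers without WebM support fall back to the text inside the element, which is why codec choice matters for anyone publishing plug-in-free video.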

The Free Software Foundation has praised Google for putting VP8 into open source, a move that FSF had urged Google to make in February.

“The world would have a new free format unencumbered by software patents. Viewers, video creators, free software developers, hardware makers — everyone — would have another way to distribute video without patents, fees, and restrictions,” Holmes Wilson wrote on the organization’s site in February.

Technically speaking, however, video codec experts seem to be split over how well VP8 stacks up against the H.264 codec favored by Apple and Microsoft for HTML5 playback. They do seem to agree that the format does not hit the performance claims made by On2.

In terms of compressing files, “VP8 appears to be significantly weaker than” H.264, wrote Jason Garrett-Glaser, one of the developers behind the x264 open-source library for rendering video into H.264, in a blog analysis. In his tests, VP8 decompression seemed to require more processing power as well.

Garrett-Glaser noted that these results are especially problematic because Google has finalized the specification and organizations such as Mozilla and Adobe have rushed to support it, which will make it difficult to make the changes needed to improve performance.

“It would have been better off to have an initial period during which revisions could be submitted and then a big announcement later when it’s completed,” he wrote.

Garrett-Glaser did note that VP8 performance is better than that of Ogg Theora, the other potential choice for Web video.

Streaming media consultant Jan Ozer also compared the performance of VP8 and H.264, and found the two to be roughly equivalent.

Copyright (c) 2010, IDG News Service. All rights reserved. IDG News Service is a trademark of International Data Group, Inc.
