Some Beautiful Nuggets From Web 2.0 Summit White Paper

A beautiful paper that serves as a motivational introduction to the whole world of Web 2.0. I strongly recommend reading it in full. The actual white paper can be found here:

On what we have learnt:

In our first program, we asked why some companies survived the dotcom bust, while others had failed so miserably. We also studied a burgeoning group of startups and asked why they were growing so quickly. The answers helped us understand the rules of business on this new platform.

Chief among our insights was that “the network as platform” means far more than just offering old applications via the network (“software as a service”); it means building applications that literally get better the more people use them, harnessing network effects not only to acquire users, but also to learn from them and build on their contributions.

On how Web 2.0 is really collective intelligence (the ability to take feedback and respond better):

Consider search – currently the lingua franca of the Web. The first search engines, starting with Brian Pinkerton’s webcrawler, put everything in their mouth, so to speak. They hungrily followed links, consuming everything they found. Ranking was by brute force keyword matching.

In 1998, Larry Page and Sergey Brin had a breakthrough, realizing that links were not merely a way of finding new content, but of ranking it and connecting it to a more sophisticated natural language grammar. In essence, every link became a vote, and votes from knowledgeable people (as measured by the number and quality of people who in turn vote for them) count more than others.

Modern search engines now use complex algorithms and hundreds of different ranking criteria to produce their results. Among the data sources is the feedback loop generated by the frequency of search terms, the number of user clicks on search results, and our own personal search and browsing history. For example, if a majority of users start clicking on the fifth item on a particular search results page more often than the first, Google’s algorithms take this as a signal that the fifth result may well be better than the first, and eventually adjust the results accordingly.
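To make the two ideas above concrete, here is a minimal Python sketch, not Google's actual algorithm: a PageRank-style loop that treats each link as a weighted vote, plus a crude re-ranking step that promotes a result when users keep clicking it despite its lower position. The page names, link graph, and click counts are all invented for illustration.

```python
# Sketch of (1) links as weighted votes and (2) click-feedback re-ranking.
# Illustrative only; not a real search engine's ranking pipeline.

def link_votes(links, iterations=20, damping=0.85):
    """PageRank-style scoring: a link is a vote, and votes from
    well-voted pages count for more."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for src, targets in links.items():
            if not targets:
                continue
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new_rank[dst] += share
        rank = new_rank
    return rank

def rerank_with_clicks(results, clicks):
    """If users click a lower result more than the ones above it,
    promote it (a crude stand-in for the feedback loop)."""
    total = sum(clicks.get(r, 0) for r in results) or 1
    ctr = {r: clicks.get(r, 0) / total for r in results}
    position_score = {r: 1.0 / (i + 1) for i, r in enumerate(results)}
    blended = {r: 0.7 * position_score[r] + 0.3 * ctr[r] for r in results}
    return sorted(results, key=lambda r: blended[r], reverse=True)

if __name__ == "__main__":
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(link_votes(links))
    results = ["r1", "r2", "r3", "r4", "r5"]
    clicks = {"r1": 10, "r2": 5, "r5": 60}       # users keep choosing r5
    print(rerank_with_clicks(results, clicks))   # r5 moves up the list
```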

Once we read the above, the "network as platform" idea makes sense: we treat the network as the basic entity that holds everything we want, and use our devices (whatever they may be) to consume it as services from the network.

To get a glimpse of how much smarter it has become:

Now consider an even more current search application, the Google Mobile Application for the iPhone. The application detects the movement of the phone to your ear, and automatically goes into speech recognition mode. It uses its microphone to listen to your voice, and decodes what you are saying by referencing not only its speech recognition database and algorithms, but also the correlation to the most frequent search terms in its search database. The phone uses GPS or cell-tower triangulation to detect its location, and uses that information as well. A search for “pizza” returns the result you most likely want: the name, location, and contact information for the three nearest pizza restaurants.
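As a toy illustration of the signals described here, and in no way Google's implementation, the sketch below re-weights speech-recognition hypotheses by how often people actually search for each phrase, then uses the phone's coordinates to return the nearest matches. The hypotheses, query frequencies, and restaurants are all invented.

```python
import math

def pick_query(hypotheses, query_frequency):
    """hypotheses: list of (text, acoustic_score); blend the acoustic score
    with how frequently the phrase appears in the search logs."""
    def score(h):
        text, acoustic = h
        return acoustic + math.log1p(query_frequency.get(text, 0))
    return max(hypotheses, key=score)[0]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(places, lat, lon, n=3):
    return sorted(places, key=lambda p: haversine_km(lat, lon, p["lat"], p["lon"]))[:n]

if __name__ == "__main__":
    hypotheses = [("pizza", 0.8), ("piazza", 0.9)]   # acoustically "piazza" wins...
    freq = {"pizza": 500_000, "piazza": 2_000}       # ...but "pizza" is searched far more
    query = pick_query(hypotheses, freq)
    places = [
        {"name": "Slice House",   "lat": 37.78, "lon": -122.41},
        {"name": "Napoli",        "lat": 37.80, "lon": -122.44},
        {"name": "Corner Pie",    "lat": 37.74, "lon": -122.42},
        {"name": "Faraway Pizza", "lat": 40.71, "lon": -74.00},
    ]
    print(query, [p["name"] for p in nearest(places, 37.79, -122.42)])
```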

On how things are still haphazard:

It’s easy to forget that only 15 years ago, email was as fragmented as social networking is today, with hundreds of incompatible email systems joined by fragile and congested gateways. One of those systems – internet RFC 822 email – became the gold standard for interchange.

We expect to see similar standardization in key internet utilities and subsystems. Vendors who are competing with a winner-takes-all mindset would be advised to join together to enable systems built from the best-of-breed data subsystems of cooperating companies.

On how the learning component adds even more value to the web 2.0, how learning happens and an example of its application:

Speech recognition and computer vision are both excellent examples of this kind of machine learning. But it’s important to realize that machine learning techniques apply to far more than just sensor data. For example, Google’s ad auction is a learning system, in which optimal ad placement and pricing is generated in real time by machine learning algorithms.
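A highly simplified sketch of what "the ad auction is a learning system" can mean: rank ads by bid times a predicted click-through rate, and keep updating that prediction from observed clicks. Real ad systems are vastly more sophisticated; the ads, bids, and click rates below are invented.

```python
import random

class LearningAuction:
    def __init__(self, ads):
        # ads: {ad_id: bid}. Smoothed counters so new ads start near CTR 0.5
        # and get shown a little before the observed data takes over.
        self.bids = dict(ads)
        self.shows = {ad: 2 for ad in ads}
        self.clicks = {ad: 1 for ad in ads}

    def predicted_ctr(self, ad):
        return self.clicks[ad] / self.shows[ad]

    def pick(self):
        """Show the ad with the highest expected value: bid * predicted CTR."""
        return max(self.bids, key=lambda ad: self.bids[ad] * self.predicted_ctr(ad))

    def feedback(self, ad, clicked):
        self.shows[ad] += 1
        if clicked:
            self.clicks[ad] += 1

if __name__ == "__main__":
    auction = LearningAuction({"cheap-but-clicky": 0.50, "pricey-but-ignored": 2.00})
    true_ctr = {"cheap-but-clicky": 0.30, "pricey-but-ignored": 0.02}
    for _ in range(2000):
        ad = auction.pick()
        auction.feedback(ad, random.random() < true_ctr[ad])
    print(auction.shows)   # the cheap-but-clicky ad ends up shown far more often
```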

In other cases, meaning is “taught” to the computer. That is, the application is given a mapping between one structured data set and another. For example, the association between street addresses and GPS coordinates is taught rather than learned. Both data sets are structured, but need a gateway to connect them.
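A small illustration of "taught" rather than "learned" structure: the link between addresses and coordinates is supplied explicitly as a lookup table, and a thin gateway function connects the two structured datasets. The table below is a hypothetical stand-in for a real geocoding service, and its entries are only approximate.

```python
def normalize(address: str) -> str:
    """Canonicalize an address string so both datasets agree on the key."""
    return " ".join(address.lower().replace(",", " ").split())

# Explicitly taught mapping: address dataset -> coordinate dataset.
TAUGHT_GEOCODES = {
    normalize("1005 Gravenstein Hwy N, Sebastopol, CA"): (38.4112, -122.8426),
    normalize("1600 Amphitheatre Pkwy, Mountain View, CA"): (37.4221, -122.0841),
}

def geocode(address: str):
    """Gateway from the 'address' dataset to the 'coordinates' dataset."""
    return TAUGHT_GEOCODES.get(normalize(address))

print(geocode("1005 Gravenstein Hwy N,  Sebastopol,   CA"))  # (38.4112, -122.8426)
```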

It’s also possible to give structure to what appears to be unstructured data by teaching an application how to recognize the connection between the two. For example, You R Here, an iPhone app, neatly combines these two approaches. You use your iPhone camera to take a photo of a map that contains details not found on generic mapping applications such as Google maps – say a trailhead map in a park, or another hiking map. Use the phone’s GPS to set your current location on the map. Walk a distance away, and set a second point. With two reference points, the application can calibrate the photographed map against the GPS and track your position on it.
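Here is a sketch of the two-point calibration this description implies (not the app's actual code): from two (GPS, pixel) correspondences we can solve for the scale, rotation, and offset that place the photographed map under the phone's GPS, and then track the user on the photo. All coordinates below are invented.

```python
import math

def to_local_xy(lat, lon, lat0):
    """Rough equirectangular projection (metres), good enough for a park map."""
    r = 6371000.0
    return (math.radians(lon) * r * math.cos(math.radians(lat0)),
            math.radians(lat) * r)

def calibrate(gps1, px1, gps2, px2):
    """Return a function GPS -> pixel, from two (gps, pixel) correspondences."""
    lat0 = (gps1[0] + gps2[0]) / 2
    g1 = complex(*to_local_xy(*gps1, lat0))
    g2 = complex(*to_local_xy(*gps2, lat0))
    p1, p2 = complex(*px1), complex(*px2)
    a = (p2 - p1) / (g2 - g1)           # scale + rotation
    b = p1 - a * g1                     # translation
    def gps_to_pixel(lat, lon):
        z = a * complex(*to_local_xy(lat, lon, lat0)) + b
        return (z.real, z.imag)
    return gps_to_pixel

# Two calibration points: where the user stood, and where they tapped on the photo.
gps_to_pixel = calibrate((37.8000, -122.4500), (120, 860),
                         (37.8030, -122.4460), (640, 240))
print(gps_to_pixel(37.8015, -122.4480))   # current position, in photo pixels
```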

On why this learning component is necessary:

Some of the most fundamental and useful services on the Web have been constructed in this way, by recognizing and then teaching the overlooked regularity of what at first appears to be unstructured data.

Ti Kan, Steve Scherf, and Graham Toal, the creators of CDDB, realized that the sequence of track lengths on a CD formed a unique signature that could be correlated with artist, album, and song names. Larry Page and Sergey Brin realized that a link is a vote. Marc Hedlund at Wesabe realized that every credit card swipe is also a vote, that there is hidden meaning in repeated visits to the same merchant. Mark Zuckerberg at Facebook realized that friend relationships online actually constitute a generalized social graph. They thus turn what at first appeared to be unstructured into structured data. And all of them used both machines and humans to do it.
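The CDDB insight lends itself to a tiny sketch, though this is not the actual CDDB disc-ID algorithm: the ordered list of track lengths is so unlikely to repeat across albums that it can serve as a lookup key into a crowdsourced metadata table. The track lengths and album below are invented.

```python
import hashlib

def disc_signature(track_lengths_seconds):
    """Turn the sequence of track lengths into a stable lookup key."""
    key = ",".join(str(t) for t in track_lengths_seconds)
    return hashlib.sha1(key.encode()).hexdigest()[:16]

# A tiny stand-in for the crowdsourced database.
KNOWN_DISCS = {
    disc_signature([214, 187, 301, 254]): ("Some Artist", "Some Album"),
}

def identify(track_lengths_seconds):
    return KNOWN_DISCS.get(disc_signature(track_lengths_seconds), ("unknown", "unknown"))

print(identify([214, 187, 301, 254]))   # ('Some Artist', 'Some Album')
```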

It looks like we are very close to building a Terminator robot: 🙂

The Wikitude travel guide application for Android takes image recognition even further. Point the phone’s camera at a monument or other point of interest, and the application looks up what it sees in its online database (answering the question “what looks like that somewhere around here?”) The screen shows you what the camera sees, so it’s like a window but with a heads-up display of additional information about what you’re looking at. It’s the first taste of an “augmented reality” future. It superimposes distances to points of interest, using the compass to keep track of where you’re looking. You can sweep the phone around and scan the area for nearby interesting things.
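A rough sketch of the overlay logic this describes (not Wikitude's code): compute distance and bearing from the phone to each point of interest, and display only those whose bearing falls inside the camera's field of view as reported by the compass. The landmarks below are invented.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees) to a landmark."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * r * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def visible(pois, lat, lon, heading, fov=60):
    """Landmarks whose bearing lies within the camera's field of view."""
    out = []
    for name, plat, plon in pois:
        dist, bearing = distance_and_bearing(lat, lon, plat, plon)
        if abs((bearing - heading + 180) % 360 - 180) <= fov / 2:
            out.append((name, round(dist, 2), round(bearing)))
    return out

pois = [("Old Fort", 48.212, 16.373), ("Clock Tower", 48.204, 16.385),
        ("Museum", 48.205, 16.360)]
print(visible(pois, 48.205, 16.370, heading=90))  # only the landmark roughly due east is in view
```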

A word on Information Shadows:

All of these breakthroughs are reflections of the fact noted by Mike Kuniavsky of ThingM, that real world objects have “information shadows” in cyberspace. For instance, a book has information shadows on Amazon, on Google Book Search, on Goodreads, Shelfari, and LibraryThing, on eBay and on BookMooch, on Twitter, and in a thousand blogs.

A song has information shadows on iTunes, on Amazon, on Rhapsody, on MySpace, or Facebook. A person has information shadows in a host of emails, instant messages, phone calls, tweets, blog postings, photographs, videos, and government documents. A product on the supermarket shelf, a car on a dealer’s lot, a pallet of newly mined boron sitting on a loading dock, a storefront on a small town’s main street — all have information shadows now.

As the information shadows become thicker, more substantial, the need for explicit metadata diminishes. Our cameras, our microphones, are becoming the eyes and ears of the Web, our motion sensors, proximity sensors its proprioception, GPS its sense of location. Indeed, the baby is growing up. We are meeting the Internet, and it is us.

On the role of massive data in learning:

There’s a fascinating fact noted by Jeff Jonas in his work on identity resolution. Jonas’ work included building a database of known US persons from various sources. His database grew to about 630 million “identities” before the system had enough information to identify all the variations. But at a certain point, his database began to learn, and then to shrink. Each new load of data made the database smaller, not bigger. 630 million plus 30 million became 600 million, as the subtle calculus of recognition by “context accumulation” worked its magic.
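A toy model of why adding records can shrink an identity database: when a new record shares identifiers with previously separate records, those "identities" merge, and a union-find structure makes this "context accumulation" effect easy to see. This is a drastic simplification of Jonas's work, and the records below are invented.

```python
class IdentityResolver:
    def __init__(self):
        self.parent = {}   # union-find over record ids
        self.by_key = {}   # identifier value -> record id that first carried it

    def _find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def _union(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[rb] = ra

    def add(self, record_id, identifiers):
        """identifiers: e.g. {'phone:555-1212', 'email:jo@example.com'}."""
        self.parent[record_id] = record_id
        for key in identifiers:
            if key in self.by_key:
                self._union(self.by_key[key], record_id)  # merge identities
            else:
                self.by_key[key] = record_id

    def count_identities(self):
        return len({self._find(i) for i in self.parent})

r = IdentityResolver()
r.add("rec1", {"phone:555-1212"})
r.add("rec2", {"email:jo@example.com"})
print(r.count_identities())   # 2 apparent people
# A third record carries both identifiers: the database "learns" they are one person.
r.add("rec3", {"phone:555-1212", "email:jo@example.com"})
print(r.count_identities())   # 1
```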

Sensors and monitoring programs are not acting alone, but in concert with their human partners. We teach our photo program to recognize faces that matter to us, we share news that we care about, we add tags to our tweets so that they can be grouped more easily. In adding value for ourselves, we are adding value to the social web as well. Our devices extend us, and we extend them.

It’s not just Web 2.0:

But as is so often the case, the future isn’t clearest in the pronouncements of big companies but in the clever optimizations of early adopters and “alpha geeks.” Radar blogger Nat Torkington tells the story of a taxi driver he met in Wellington, NZ, who kept logs of six weeks of pickups (GPS, weather, passenger, and three other variables), fed them into his computer, and did some analysis to figure out where he should be at any given point in the day to maximize his take. As a result, he’s making a very nice living with much less work than other taxi drivers. Instrumenting the world pays off.
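The taxi driver's analysis is easy to imagine in miniature: log each pickup with an hour and a zone, then for each hour find the zone with the most pickups. The log below is invented, and the real log also included weather and other variables.

```python
from collections import Counter

pickups = [
    {"hour": 8,  "zone": "airport"},
    {"hour": 8,  "zone": "airport"},
    {"hour": 8,  "zone": "cbd"},
    {"hour": 18, "zone": "cbd"},
    {"hour": 18, "zone": "cbd"},
    {"hour": 18, "zone": "waterfront"},
    {"hour": 23, "zone": "courtenay place"},
]

def best_zone_by_hour(log):
    """For each hour of the day, return the zone with the most pickups."""
    counts = {}
    for p in log:
        counts.setdefault(p["hour"], Counter())[p["zone"]] += 1
    return {hour: c.most_common(1)[0][0] for hour, c in counts.items()}

print(best_zone_by_hour(pickups))
# {8: 'airport', 18: 'cbd', 23: 'courtenay place'}
```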

Consider the so-called “smart electrical grid.” Gavin Starks, the founder of AMEE, a neutral web-services back-end for energy-related sensor data, noted that researchers combing the smart meter data from 1.2 million homes in the UK have already discovered that each device in the home has a unique energy signature. It is possible to determine not only the wattage being drawn by the device, but the make and model of each major appliance within – think CDDB for appliances and consumer electronics!
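The "CDDB for appliances" idea can be sketched in the same spirit, though real load disaggregation is far more involved: each appliance produces a characteristic step change in the whole-house wattage, so step changes in the meter reading can be matched against a library of known signatures. The signatures and readings below are invented.

```python
SIGNATURES = {          # appliance -> typical wattage step when switched on
    "kettle (BrandX K2)": 2400,
    "fridge compressor": 150,
    "LCD TV (BrandY 42in)": 180,
    "washing machine heater": 2000,
}

def label_events(readings, tolerance=60):
    """readings: whole-house watts over time. Label on/off step changes."""
    events = []
    for t in range(1, len(readings)):
        step = readings[t] - readings[t - 1]
        if abs(step) < tolerance:
            continue            # ignore noise
        best = min(SIGNATURES, key=lambda a: abs(abs(step) - SIGNATURES[a]))
        if abs(abs(step) - SIGNATURES[best]) <= tolerance:
            events.append((t, "on" if step > 0 else "off", best))
    return events

readings = [300, 300, 2700, 2700, 300, 450, 450, 300]
print(label_events(readings))
# [(2, 'on', 'kettle (BrandX K2)'), (4, 'off', 'kettle (BrandX K2)'),
#  (5, 'on', 'fridge compressor'), (7, 'off', 'fridge compressor')]
```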

On real time events:

Real-time search encourages real-time response. Retweeted “information cascades” spread breaking news across Twitter in moments, making it the earliest source for many people to learn about what’s just happened. And again, this is just the beginning. With services like Twitter and Facebook’s status updates, a new data source has been added to the Web – realtime indications of what is on our collective mind.

It’s not just the web that is learning:

Even without sensor-driven purchasing, real-time information is having a huge impact on business. When your customers are declaring their intent all over the Web (and on Twitter) – either through their actions or their words – companies must both listen and join the conversation. Comcast has changed its customer service approach using Twitter; other companies are following suit.

Some applications:

But in his advice on the direction of the Government 2.0 Summit, Federal CTO Aneesh Chopra has urged us not to focus on the successes of Web 2.0 in government, but rather on the unsolved problems. How can the technology community help with such problems as tracking the progress of the economic stimulus package in creating new jobs? How can it speed our progress towards energy independence and a reduction in CO2 emissions? How can it help us remake our education system to produce a more competitive workforce? How can it help us reduce the ballooning costs of healthcare?

Twitter is being used to report news of disasters, and to coordinate emergency response. Initiatives like InSTEDD (Innovative Support to Emergencies, Diseases, and Disasters) take this trend and amp it up. InSTEDD uses collective intelligence techniques to mine sources like SMS messages (e.g., Geochat), RSS feeds, email lists (e.g., ProMed, Veratect, HealthMap, Biocaster, EpiSpider), OpenROSA, Map Sync, Epi Info™, documents, web pages, electronic medical records (e.g., OpenMRS), animal disease data (e.g., OIE, AVRI hotline), and environmental feeds (e.g., NASA remote sensing) for signals of emerging diseases.
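A very rough sketch of the signal-mining idea behind systems like InSTEDD, not their implementation: scan text items arriving from many feeds for disease-related terms plus a location, and count which clusters are spiking. The messages and keyword list below are invented.

```python
import re
from collections import Counter

DISEASE_TERMS = re.compile(r"\b(cholera|dengue|flu|measles|fever outbreak)\b", re.I)

def extract_signals(items):
    """items: (source, location, text). Count disease mentions per location."""
    signals = Counter()
    for source, location, text in items:
        for term in DISEASE_TERMS.findall(text):
            signals[(location, term.lower())] += 1
    return signals

items = [
    ("sms", "district-7", "three children with dengue at the clinic"),
    ("rss", "district-7", "local paper reports dengue cases rising"),
    ("email-list", "district-2", "seasonal flu update, nothing unusual"),
]
print(extract_signals(items).most_common(2))
# [(('district-7', 'dengue'), 2), (('district-2', 'flu'), 1)]
```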

Companies like 23andMe and PatientsLikeMe are applying crowdsourcing to build databases of use to the personalized medicine community. 23andMe provides genetic testing for personal use, but their long term goal is to provide a database of genetic information that members could voluntarily provide to researchers. PatientsLikeMe has created a social network for people with various life-changing diseases; by sharing details of treatment – what’s working and what’s not – they are in effect providing a basis for the world’s largest longitudinal medical outcome testing service. What other creative applications of Web 2.0 technology are you seeing to advance the state of the art in healthcare?

How do we create economic opportunities in reducing the cost of healthcare? As Stanford’s Abraham Verghese writes, the reason it’s so hard to cut healthcare costs is that “a dollar spent on medical care is a dollar of income for someone.” We can’t just cut costs. We need to find ways to make money by cutting costs. In this regard, we’re intrigued by startups like CVsim, a cardiovascular simulation company. Increasingly accurate data from CAT scans, coupled with blood flow simulation software running on a cloud platform, makes it conceivable to improve health outcomes and reduce costs while shrinking a multi-billion dollar market for angiography, an expensive and risky medical procedure. If CVsim succeeds in this goal, they’ll build a huge company while shrinking the nation’s healthcare bill. What other similar opportunities are there for technology to replace older, less effective medical procedures with newer ones that are potentially more effective while costing less?

