Ahh, now I get it. I messed with Google Base the other night, and I didn't get what the deal was, like most everyone else really. For a test, I exported my blog database as a tab delimited file and uploaded it here to see how it'd all look and work, and came away thinking what everyone else thought: Meh.
Ahhh, but then I read Bill Burnham's explanation of what was going on this morning, and the light clicked on. (How could I be so blind?!?)
One need look no further than the detailed XML Schema and extensive RSS 2.0 specification to realize that Google intends to build the world's largest RSS "reader," which in turn will become the world's largest XML database.
To facilitate this, I suspect that Google will soon announce a program whereby people can register their "Base compliant" RSS feeds with Google Base. Google will then poll these feeds regularly just like any other RSS reader. Publishers can either create brand new Base-compliant feeds or, with a bit of XSLT/XML Schema of their own, transpose their existing content into a Base-compliant feed.
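To make the "transpose your own content" idea concrete, here's a rough sketch of turning existing blog records into a Base-style RSS 2.0 feed. The g: namespace URI and the field names (price, item_type, etc.) are my assumptions for illustration, not taken from any published Google Base schema:

```python
# Sketch: transposing existing blog records into a Base-style RSS 2.0 feed.
# The g: namespace URI and field names are illustrative assumptions only.
import xml.etree.ElementTree as ET

G_NS = "http://base.google.com/ns/1.0"  # assumed namespace
ET.register_namespace("g", G_NS)

def to_base_feed(records):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "My Items"
    ET.SubElement(channel, "link").text = "http://example.com/"
    ET.SubElement(channel, "description").text = "Items republished for Base"
    for rec in records:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = rec["title"]
        ET.SubElement(item, "description").text = rec["content"]
        # Extra typed fields ride along in the g: namespace,
        # next to the plain RSS title/description.
        for field, value in rec.get("extra", {}).items():
            ET.SubElement(item, f"{{{G_NS}}}{field}").text = value
    return ET.tostring(rss, encoding="unicode")

feed = to_base_feed([{
    "title": "1998 Honda Civic",
    "content": "Runs great, new tires.",
    "extra": {"price": "3500 usd", "item_type": "Vehicles"},
}])
print(feed)
```

The point being: the plain RSS elements keep working in any ordinary reader, while the namespaced extras carry the structured data Base would index.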
Ahh, that makes perfect sense! I can't understand why I didn't see it right away! It's exactly what I was talking about doing back in May of this year:
Imagine in your aggregator you could receive not only "Posts" but forms as well. And calendaring info, and images, etc. And this stuff wasn't just HTML formatted inside the Description tag, but actually processable by the aggregator itself. I guess then the Aggregator becomes a Universal Data Reading Client instead. On the other side of the equation, I currently have a weblog which has only one way to create new items, a button called "New Post" which has just two fields, Title and Content. Now what would happen if I added more ways to create items: "New Calendar Item" and "New Review" and "New Classified", etc. Each one of these extra types of posts would all have Title and Content, but they'd also have fields filled with additional arbitrary information which was included in the RSS also. If your aggregator didn't understand these fields, it could just display them.
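That "Universal Data Reading Client" behavior is easy to sketch: process the fields you recognize, and just display the ones you don't. The feed contents and the g: namespace below are made up for illustration:

```python
# Sketch of an aggregator that handles extra typed fields gracefully:
# unknown fields aren't discarded, they're just displayed as-is.
# The feed and namespace are invented for this example.
import xml.etree.ElementTree as ET

G_NS = "{http://base.google.com/ns/1.0}"  # assumed namespace

FEED = """<rss version="2.0" xmlns:g="http://base.google.com/ns/1.0">
<channel><item>
  <title>Lunch meeting</title>
  <description>Planning session</description>
  <g:item_type>Calendar Item</g:item_type>
  <g:start_time>2005-11-18T12:00</g:start_time>
</item></channel></rss>"""

def read_item(item):
    lines = [item.findtext("title"), item.findtext("description")]
    for child in item:
        if child.tag.startswith(G_NS):
            field = child.tag[len(G_NS):]
            # An aggregator that doesn't "understand" a field
            # can still show it to the reader.
            lines.append(f"{field}: {child.text}")
    return lines

root = ET.fromstring(FEED)
for item in root.iter("item"):
    print("\n".join(read_item(item)))
```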
This is definitely what the GOOG is doing. It just didn't dawn on me that anyone would centralize the data from all those feeds into one massive online database. I guess I don't have that All Encompassing Googlebot view of the world that I need. Chalk that one up to experience. But hey... So after I format my data, publish my fields or otherwise submit my info to Base, then what? Anyone besides me notice that there are no "export" options anywhere? Not even an RSS feed of the data. That may not be "evil" per se, but it ain't good either.
Anyways, I'm not too worried about Base (and I don't think eBay or Craig's List is either). We're all sort of learning that where Microsoft is a 3.0 company (they don't get it right until the third try, and then they take over), Google is a 1.0 company (if they don't get it right on the first try, it's unlikely they ever will). They'd rather go off and find something else shiny to work on (that's how engineers think, and they're an engineering-driven company after all).