Saturday, December 01, 2007

Using Blogs for Web Site Design

It seems to me that lists are adequate to represent most of the content on our web sites, and blogs serve adequately as lists. Blogs are easy to manipulate, their content can be exported in standard formats, and they make sense on their own. With processors like Yahoo! Pipes out there now, content from blogs can be processed in useful ways.

My content consists of lists of
--my bio (a singleton)
--my various degrees
--my publications
--work experience
--books I've read
--photos
--movies I've seen
and so on.

I'm going to create a blog corresponding to each list I wish to maintain and use Yahoo! Pipes-like software to create a coherent web site out of them. Why do I need to maintain a resume? I'll just have a processor merge the feeds I need, export the result into a format from which I can generate a PDF, and send it around.
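To make that concrete, here is a rough sketch of the merge step in Python, using the feedparser library instead of Pipes itself. The per-list feed URLs below are placeholders for whatever blogs I end up creating.

# Merge several per-list blog feeds into plain-text resume sections.
# Placeholder feed URLs; requires the feedparser library.
import feedparser

RESUME_SECTIONS = {
    "Education":    "http://example-degrees.blogspot.com/feeds/posts/default",
    "Publications": "http://example-publications.blogspot.com/feeds/posts/default",
    "Experience":   "http://example-work.blogspot.com/feeds/posts/default",
}

def build_resume(sections):
    """Return resume text: one section per feed, one item per post."""
    lines = []
    for heading, url in sections.items():
        feed = feedparser.parse(url)
        lines.append(heading)
        lines.append("-" * len(heading))
        for entry in feed.entries:
            lines.append("* " + entry.title)   # each post title is one resume item
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_resume(RESUME_SECTIONS))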

There are various other ways in which others can use my content to create mashups. Why should people add to their HTML pages anymore? Just blog and mash.

Saturday, February 17, 2007

Disseminating Research

One suggestion:
---------------------
It would be awesome if, instead of maintaining an HTML publications page, a researcher maintained Atom or RSS feeds of her publications. Yes, there could be multiple feeds, e.g., by area of research. This way, fellow researchers could subscribe to her feeds for updates. They could mash up her feeds with others through a service such as Yahoo! Pipes to create more interesting feeds. Plus, with the growing set of tools and services around feeds, it should be easier to maintain feeds than HTML pages.
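To make the suggestion concrete, here is a minimal sketch of such a publications feed being generated with nothing but Python's standard library; the researcher, URIs, and paper details are all made up.

# Build a tiny Atom publications feed with xml.etree (standard library).
# All names, URIs, and dates below are hypothetical examples.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

def atom_el(parent, tag, text=None, **attrs):
    """Append an Atom element to `parent` and return it."""
    el = ET.SubElement(parent, "{%s}%s" % (ATOM, tag), attrs)
    if text is not None:
        el.text = text
    return el

feed = ET.Element("{%s}feed" % ATOM)
atom_el(feed, "title", "Publications of Jane Researcher")
atom_el(feed, "id", "http://example.edu/~jane/publications")
atom_el(feed, "updated", "2007-02-17T00:00:00Z")

entry = atom_el(feed, "entry")
atom_el(entry, "title", "A Hypothetical Paper About Feeds")
atom_el(entry, "id", "http://example.edu/~jane/publications/feeds2007")
atom_el(entry, "updated", "2007-02-17T00:00:00Z")
atom_el(entry, "link", href="http://example.edu/~jane/papers/feeds2007.pdf")
atom_el(atom_el(entry, "author"), "name", "Jane Researcher")

print(ET.tostring(feed, encoding="unicode"))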

The other day I tried to create a Yahoo! Pipes mashup of the feeds of fellow researchers whose papers I frequently cite. However, since there were no feeds available, I just couldn't do it. Hence, consider this an exhortation to all researchers to start publishing feeds. They have much to gain from it, and nothing to lose.
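For what it's worth, once such feeds exist, the mashup I wanted amounts to little more than this sketch (placeholder feed URLs again, feedparser again).

# Interleave several researchers' publications feeds by date.
import feedparser

CITED_RESEARCHER_FEEDS = [
    "http://example.edu/~jane/publications.atom",
    "http://example.org/~smith/publications.atom",
]

def latest_papers(feed_urls, limit=20):
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            # updated_parsed is a time.struct_time, which sorts like a tuple
            entries.append((entry.get("updated_parsed"), parsed.feed.get("title", url), entry))
    entries.sort(key=lambda item: item[0] or (), reverse=True)
    return entries[:limit]

for updated, source, entry in latest_papers(CITED_RESEARCHER_FEEDS):
    print("%s -- %s" % (source, entry.title))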

One idea:
-------------
Entering bibitems in bibliographies and managing them is so tedious and error-prone, don't you think? Well, here is a solution that alleviates this burden to an extent. An author should make the accurate BibTeX of her publications dereferenceable by URIs. Then, any other author's local bibliography should logically consist of only id:URI pairs. A BibTeX processor should be smart enough to fetch using those URIs (and automatically "populate", if needed, each bibitem in the bibliography for offline use).
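Here is a sketch of what such a processor could do, under the assumption that each URI resolves directly to a plain BibTeX entry; the citation keys and URIs are hypothetical.

# Dereference id:URI pairs and write an ordinary .bib file for offline use.
import urllib.request

# The local "bibliography": citation key -> URI of the authoritative BibTeX.
LOCAL_BIB = {
    "jane2007feeds":  "http://example.edu/~jane/bib/feeds2007.bib",
    "smith2006pipes": "http://example.org/~smith/bib/pipes2006.bib",
}

def populate(bib, out_path="references.bib"):
    """Fetch each BibTeX entry by its URI and write a local .bib file."""
    with open(out_path, "w", encoding="utf-8") as out:
        for key, uri in bib.items():
            with urllib.request.urlopen(uri) as resp:
                bibtex = resp.read().decode("utf-8")
            # A fuller processor would also rewrite the entry's key to `key`.
            out.write(bibtex.strip() + "\n\n")

if __name__ == "__main__":
    populate(LOCAL_BIB)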

A point to note is that the information in a paper's BibTeX is a subset of the information in the corresponding entry of the author's publications feed. As long as each entry has its own URI, a BibTeX processor can fetch an entry in the feed directly and process it to extract the relevant elements. Hence, a paper's author has to work no harder to create the BibTeX.
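And here is a sketch of that extraction step, deriving a bibitem from a feed entry; the mapping from feed elements to BibTeX fields is my own guess, not an established convention.

# Turn one publications-feed entry into a BibTeX entry (crude field mapping).
import feedparser

def entry_to_bibtex(entry, key):
    authors = " and ".join(a.get("name", "") for a in entry.get("authors", []))
    year = entry.get("updated", "")[:4]   # crude: take the year from the date
    return ("@misc{%s,\n"
            "  title  = {%s},\n"
            "  author = {%s},\n"
            "  year   = {%s},\n"
            "  url    = {%s}\n"
            "}" % (key, entry.get("title", ""), authors, year, entry.get("link", "")))

feed = feedparser.parse("http://example.edu/~jane/publications.atom")  # placeholder
if feed.entries:
    print(entry_to_bibtex(feed.entries[0], "jane2007feeds"))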

Authors often move from institution to institution, and therefore the mappings from URIs to URLs could change. I'll need to think about how to manage these mappings without imposing any additional burden on authors.