Monday, November 27, 2006

A Blog In Transition



As the term draws to a close I want to begin transitioning this blog to something a little more general. Libraries, social software, and web 2.0 are my interests, and as such they will remain staples of this blog. However, I feel that I can now begin to give it a slightly more personal touch.

To kick off the transition I'm posting a photo I took in Colorado over the summer. I went to Colorado to run the Pikes Peak Marathon, but also found myself with quite a bit of time on my hands. As the race starts at 6,295 feet above sea level and has an elevation gain of 7,815 feet on top of that, I went a week prior to acclimatise as much as my schedule would allow. During the week, when I wasn't on the top of the mountain, I explored the small town of Manitou Springs at its base. Among the interesting things I found was the town's public library.

As it turns out, the library is an original Carnegie library, as you can see from the picture, built in 1910. The first day I was there the library was closed, but I returned later in the week to take a look inside. It was a fantastic space with the adult collection on the main floor and a children's area in the basement. One of the things I do when I have the opportunity to visit a new library is to check out which OPAC they're using. In this small, century-old library, I expected to find some small system of which I'd never heard, but much to my chagrin the screen displayed the same Dynix window that the Toronto Public Library computers do.

Anyway, as it happens, the Start/Finish line of the race was only a few yards from the library. So, after being thoroughly checked over for potentially fatal injuries and spending some time hunched in a chair recuperating, I hobbled over to the library and immortalized the moment on film.

I had a great many experiences during that race and the week prior, but I can't think of a better way to remember it than with a picture of the library.

[Edit: Oh yeah, for those of you in London, go vote!!!]

Sunday, November 26, 2006

To Shirk These Mental Shackles

I wanted to quickly follow up my last post with something of an update to one from a few weeks ago. I received a number of excellent comments pointing me to instances of projects that in many ways fit the criteria I had suggested. iToolbox and Academici are both social networks of sorts that serve a more restricted audience with more specific functionality than MySpace. iToolbox is a site directed primarily at IT professionals. In implementation, it is structured much less like MySpace and more like a traditional forum or modified wiki. This is not to put it down; the site has certainly made great leaps in improving upon forum implementations of the past. I also don't want to pigeonhole it into one particular methodology. The site has a wide variety of useful functionality and it seems to be continuing to experiment and improve.

The other site that was suggested, Academici, is a little closer to what I had envisioned. To put it in as few words as possible, Academici is a MySpace for knowledge workers. I haven't used the site, but from what I gathered from the documentation it shares many of the same features as the mainstream social networks. However, in addition to the more traditional functionality, it also offers more targeted features such as the ability to share abstracts and papers as well as open them up to discussion. It is difficult to tell from the description, but the site also seems to have search and contact functionality tailored specifically for academics and other research-intensive professionals. Generally, I'm impressed with both these sites and am thrilled that there are those out there working to develop more targeted, and I think more useful, social networks.

My concern, however, is that it is not libraries working to develop these sites. I think the cause for my concern is largely evident in the Academici implementation. While I have nothing but praise for the work they have done, I can't help but imagine how much better it could be if it were run in the context of a library. Firstly, for research networks to be effective they simply can't exclude, and while Academici does have a free option, one still has to pay for the full functionality.

That having been said, my monetary concerns are secondary to my firm belief that the network's functionality could be better as well. The ability to share papers or abstracts is wonderful, but in truth it is nothing more than what many researchers are already doing with blogs. Perhaps there is a certain virtue to the simple feature consolidation that Academici has achieved, but it seems so small as to be almost not worth mentioning. What would dramatically increase the value of such feature consolidation is if, rather than providing access only to the few unpublished works, drafts and preprints that a user has created since registration, users could access their potential collaborators' entire opus. Furthermore, would it not be fascinating if one could view in a user's profile an impartial ranking of their authority, determined through some measure based on citation information? Similarly, it is one thing to open up a publication to debate on a social network; it is quite another to attempt to draw conclusions, or in fact any value at all, from that debate. But what if each comment was accompanied by a ranking, again determined through the use of citation information? To bring my suggestions even closer to the profession itself, what if users could enlist the aid of a trained librarian, within the very context of the network, to locate relevant colleagues to befriend or contact?
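
To make the citation-ranking idea a little more concrete, here is a minimal sketch in Ruby of how such an authority score might be computed. The Publication structure and the weighting scheme are entirely my own invention, meant only to suggest the shape of the thing, not to describe how any existing system works.

```ruby
# A toy citation-based authority score for a profile. Hypothetical
# throughout: a real system would pull citation counts from a
# citation database rather than hard-coded values.

Publication = Struct.new(:title, :citations)

def authority_score(publications)
  return 0.0 if publications.empty?
  total = publications.sum(&:citations)
  # Dampen the raw count so one blockbuster paper doesn't dominate,
  # while still rewarding a sustained record of cited work.
  Math.log(1 + total) * Math.sqrt(publications.size)
end

profile = [
  Publication.new("On Document Retrieval", 120),
  Publication.new("Social Networks as Documents", 45),
]

puts authority_score(profile).round(2)
```

Displaying a number like that beside a profile, however crude the formula, would give the "impartial ranking" I have in mind.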

I find this last suggestion the most interesting as it is by far the purest application of library and information science to the online realm. In fact, the suggestion I'm really making is that in the context of the online social network a person, or more precisely their profile, is nothing more than a document. To broaden the suggestion, and I'm certainly not the first to make it, in an online context everything is a document. Given this universality of the documentary format, there seems little reason that librarians should limit the application of their profession to the retrieval of only that which has remained unchanged between the physical and virtual worlds. In a social network like Academici, users, at some level at least, are simply documents, and as such should be well within the purview of the librarian. While in the past a patron might have approached a librarian looking for publications on a particular topic, now in the social network context it seems equally reasonable that a librarian might better serve the patron by locating an authority on the topic, or better yet a community of authorities.

I must admit there is a certain futility in postulating about the potential of things that don't exist, but my goal is not to make idle feature requests or spin tales of times to come. I suggested these possibilities merely to illuminate the differences between advertising decades-old practices online and actually applying the tenets of a centuries-old profession to a new digital world. Finally, I would like to suggest that we should not confuse the intellectual paradigms that bind us to the brick and mortar with the librarian profession itself. To shirk these mental shackles is not to abandon the profession, but to free it.

Consuming Libraries with MySpace

A few days ago I bought the latest issue of the MIT Technology Review, a magazine I discovered a few months ago and have been enamored with ever since. I have for some time been a fan of Wired, as regular readers know, in part because of its content and in part because of its connection with the late great UofT professor Marshall McLuhan. For the benefit of those who haven't had the opportunity to browse the publication, I would describe the Technology Review as presenting a slightly more technical take on many of the issues often addressed by Wired.

My general strategy for reading both publications is to peruse the table of contents and cherry pick a few of the more interesting articles, returning later to read the publication from cover to cover. As it happens, this time round my attention was drawn to an article featured on the front cover under the title "What's Wrong With MySpace".

Now I'm no fan of MySpace, and there are many, I'm sure, who would be more than happy to list their grievances with the site. The article's author, however, has a rather unique perspective, and one quite relevant to libraries. The gist of the argument is that social networks are, or should be, fundamentally about people, not products. Their function, or what one would expect it to be, is to facilitate things like communication and relationship building. MySpace, however, is populated by a growing number of artificial profiles. These artificial profiles are for movies, companies, products, and fictional characters, none of which can participate in or contribute to the communication and relationship activities for which social networking is often lauded. Ultimately, this bastardization of the community-building functionality of the site has led to a system that encourages its members to increasingly define themselves by the products and services they consume, manifested in their "friendship" with a multitude of inhuman profiles. To add an analogy of my own, MySpace is much like high school: in theory a place for the transmission of knowledge and the betterment of one's person, but in practice nothing more than a convenient forum for competitive consumption.

If one resists the urge to place values on the various goods and services that MySpace users choose to define themselves with, and instead focuses exclusively on the broader issue of personal definition through consumption, one cannot help but lump those libraries with profiles in as part of the problem. This is quite close to the point I was inarticulately attempting to make a few weeks ago. Libraries are not members of communities; in the physical or virtual world, they are fundamentally platforms for community. The physical communities that allow for the transmission of culture and facilitate research have many aspects, an important one of which is the library, but one would never expect to see the name of a library listed as the author of a paper or on the cover of a bestselling novel. For the same reason one could contend that the name of a library doesn't belong on a social network user's list of friends. In moving to the digital realm, libraries should be looking for ways to continue to act as facilitators and spend less time attempting to be participants.

Wednesday, November 22, 2006

Now Hear The Evolution of My Thoughts

My post last week generated a number of interesting comments, all of which raised important points. They also caused me to realize that I hadn't done a particularly good job of explaining what I was proposing specifically or where I was coming from philosophically. With those comments in mind I had initially planned to write this week in elaboration of my previous post, but have since decided to take a different approach. I still plan to return to the issue in a more formal fashion, but will leave that for another post. What I do intend to do is leverage the connective, communicative power of the internet to provide those so kind as to read this with some insight into the foundations of my thinking on the issue.

For those who don't know me well, I am something of a podcast addict. I am perhaps one of the few individuals who actually purchased their iPod with the express purpose of listening to podcasts. After my most recent iPod purchase I went months without cluttering its small drive with music files. While I had no intention of becoming a podcast connoisseur, I have become one, to some degree, simply because of the volume of podcasts I go through. In an average week I estimate I consume somewhere between 10 and 15 hours. For those ready to click away in disbelief, I should explain that the vast majority of this auditory consumption is due to my weekly commute from London to Toronto, with the remainder lost to many hours running the trails and streets of both cities. These many hours of listening have significantly shaped my thinking on any number of issues, including the role and function of social networks. While many of the podcasts I have listened to are lost in a haze of sweat or highway, thanks to iTunes I have been able to identify a number whose contents weighed heavily on my mind as I composed my last post.

Venture Voice 40 - Reid Hoffman of LinkedIn :

This interview with Reid Hoffman was perhaps the most important of the podcasts I've listed in shaping my views on the use of social networks. Hoffman talks about how demographics are important in the operation and construction of social networks. He explains that what people need out of a social network can change dramatically based on a great many things, including age or life stage. He also explains how, by properly structuring a social network, one can meet a community's needs in ways that often would not be possible with more generic structures.

Inside the Net 35: Digication:

Digication is another example of a social network that has been constructed to meet the needs of a specific community. In the interview one of Digication's founders also explains how important the simplicity of single-purpose tools is to their adoption and use. This is also an interesting example for libraries because the target audience for the Digication software is schools and other educational institutions.

Inside the Net 5: 37Signals:

This is an interview with Jason Fried of 37Signals, the company behind the popular productivity applications Backpack, Campfire and, perhaps most importantly, Basecamp. In truth I could have selected any number of podcast interviews with Jason Fried because his message is quite consistent. He firmly believes that most software is too complex, which hinders both its use and adoption. He believes that the success 37Signals has had recently is due to the simplicity and focus of their products, a success he thinks others could share if they adopted a similar approach.

Web 2.0 Show - David Heinemeier Hansson - Episode 19:

This podcast is a little more technical than the others, focusing on the actual creation of the software that drives social networking sites. Hansson is an employee of 37Signals and the programmer who developed their initial product, Basecamp. During the initial development process Hansson started to put together a software framework to help speed and standardize many of the technical steps involved in putting together a web application. The framework he created has since been dubbed Rails and works in conjunction with the programming language Ruby. In the interview Hansson explains how much easier the Ruby on Rails framework has made the construction of web applications and how that has contributed dramatically to the Web 2.0 explosion. While technical skills are still required, Ruby on Rails has drastically reduced the time, effort and cost of taking an idea from planning to implementation and deployment. While I'm not sure my fundamental opinions would change in the absence of Rails, I am certain my confidence in the ability of libraries to actually achieve what I suggest would be significantly reduced.
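
For readers who haven't seen Rails, a toy example may help show where the savings come from. The sketch below is plain Ruby, not actual Rails code, and every name in it is mine; it only illustrates the convention-over-configuration idea at the heart of the framework, in which the dispatcher infers which code to run from the URL itself, so no routing boilerplate ever gets written.

```ruby
# A toy illustration (not real Rails) of convention over configuration:
# the path "/posts/show" maps to PostsController#show purely by naming
# convention, with no hand-written routing table.

class PostsController
  def index
    "listing all posts"
  end

  def show
    "showing one post"
  end
end

def dispatch(path)
  resource, action = path.delete_prefix("/").split("/")
  controller = Object.const_get("#{resource.capitalize}Controller").new
  controller.public_send(action || "index")
end

puts dispatch("/posts/index")  # => listing all posts
puts dispatch("/posts/show")   # => showing one post
```

Multiply that small saving across routing, database access and templating, and you get a sense of why a one-person project can now ship a working web application in weeks.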

Wednesday, November 15, 2006

Time to be Leaders not Users

Recently I jumped on the Ruby on Rails bandwagon, having played around with PHP for a year or two. Ruby on Rails, a framework for developing online applications, was created by a programmer at 37Signals, a web design firm turned software development house. Part of the Ruby on Rails indoctrination, as I've come to discover, is understanding and accepting the 37Signals design gospel. Even prior to the Rails explosion 37Signals was respected in the online community for their blog, Signal vs. Noise, and other writings. One of the things they preach is what I would describe as "specificity in design". By this I mean that applications should be developed with a very narrowly construed purpose. The example they use quite frequently is that of the master craftsman. The craftsman has tools, many tools, each with a very specific function. The true craftsman does not use a Black & Decker all-in-one, or a Swiss Army knife, but a tool meant only for that single job. I increasingly find myself looking at the tools I use and the things I encounter on the web, including social networks, and evaluating them by the 37Signals criteria.

From what I've seen, most social networks would have a real problem meeting the 37Signals standard. MySpace is perhaps the worst feature-glut offender, but the same can be said of most of the others, to one degree or another. This seems to be a problem mostly because of the lack of definition in most social networks. They're not sure just who they appeal to or what people are going to use them for. The lack of focus causes them to crowd their sites with features and various visual components with little cohesiveness. Then users come in and further confuse things by adapting the sites' disparate features for all sorts of functions. I'm not sure this is a good thing, in fact I'm almost positive it's not, and from the readings it seems libraries are as guilty as anyone else.

I am not saying that libraries shouldn't use social networks or that social networks are a negative thing for libraries, but one has to consider for just what purpose they intend to use a social network and whether they're using the right tool for the job. I think in most cases the answer is no, but more importantly it seems the question isn't being asked. If libraries were asking the question and thinking about it critically, I think far fewer would be spending their time and resources putting up MySpace profiles.

So what should they be doing?

This brings me to my second idea/inspiration. I was recently reading an article in this month's Nature entitled "2020 Computing: Science in an exponential world". The gist of the article is that the amount of data being produced by modern science is overwhelming the traditional methods of scientific communication and storage. While a portion of the article addresses simply storage and creation, it also addresses the scholarly use of data, particularly the increasing degree to which original research is conducted by data mining rather than experimentation. As one would expect from a science journal, the authors come at the issues from a very "preserving the scientific method" perspective. They are very concerned with issues of reproducibility and long-term preservation and less with how researchers themselves are coping.

I couldn't help but think while reading the article that the solution to many of the issues they were raising, and many they weren't, might be found through the use of a social network. If research is happening entirely online through databases and data mining, it seems silly that researchers should have to step offline to participate in scholarly communication and collaboration. It also strikes me as silly that they should have to use inappropriate tools such as Google and blogs to keep themselves connected and up to date. There should be a social network for academics that is designed specifically to facilitate productive scholarly communication and collaboration. It also strikes me that the construction and maintenance of such a social network should be the responsibility of the library. It was revealed to me a few months ago that it was the special libraries at Canada's big Toronto banks that constructed some of Canada's first corporate intranets. While this might not seem to require anywhere near the same level of skill and understanding as the creation of an effective social network, it did a decade ago.

Libraries today have no excuse for passivity. The cost and time to develop applications such as social networks are decreasing rapidly as both the tools and platforms improve. Libraries need to think critically about where they spend their dollars. Is it more valuable to spend millions on electronic journals that place you at the mercy of vendors and publishers, or is it better to spend a fraction of that on developing a system that truly makes use of modern technology and ensures a role for libraries in the future as more than simply spending committees? It seems that it is no longer enough for libraries to be the users of generic or inappropriate tools.
I have less time to think on this than I would like, but I hope to refine my understanding over the next little while and post more on the topic.

Wednesday, November 08, 2006

Do All Roads Lead To Facebook?

This is actually the second post I’ve written today. After writing the first, I decided that I wasn’t happy with the thought, particularly as to how it addressed libraries. Anyway, you’ll have to forgive me if this is somewhat cursory in its treatment of the subject of social networks, but it’s being done on the fly.

I recently listened to a podcast of a presentation on transportation networks. The presenter, the author of a book on the subject, had a number of interesting things to say about historical transportation networks, as well as networks in a more modern context. One of the networks he dedicated some time to was road networks, particularly toll roads. Aside from describing the origin of the term "turnpike", fascinating in itself, he described how many original English toll roads were designed to capture revenue from outsiders traveling into a jurisdiction rather than from the area's residents. This description gave me pause when I considered it in the context of the library MySpace and Facebook profiles I had looked at. I wondered just who the libraries were trying to attract or contact, outsiders or locals. This may seem like something of a non sequitur, but give me some time.

As far as I can tell, and admittedly I'm not a heavy user, online social networks allow individuals to form social ties over increasingly great distances. For organizations, social networks provide a means of advertising to large groups of people dispersed over vast distances at relatively little cost. The question that I'm driving at is why a library would have any interest in attracting attention from users that aren't close by geographically. I know, the simple answer is that libraries offer services online that don't require physical proximity, virtual reference for instance. However, this issue of proximity raises two questions. The first is: who is using these services? If the users are local to the library's jurisdiction, then why waste time advertising to the entire English-speaking world on MySpace rather than focus on more effective local advertising, such as public transportation signage, physical mailings, email lists, etc.?

It would seem that for a service to be legitimately justified in its delivery through Facebook or MySpace rather than through the library webpage, the service should be accessible to the entire population of the social network on which it's hosted. However, that raises my second question: just who is going to pay for its delivery? The English-speaking population is a fairly large group to provide a service to, and a service like that doesn't come cheap. This was much the same situation that many a seventeenth-century English county was in with their roads; their solution, the toll road. (I told you I'd get to it.)

Now I may be putting any future I have with OCLC on the line, but I have some difficulty with libraries, particularly public ones, charging for library services. So you can see, I’m stuck in something of a bind about social networks and libraries. If one isn’t going to provide the service to the network's user base, why not stick with the traditional library web site? If one is going to provide the service, just how is it going to be paid for? At this point unfortunately, I don’t have answers, only questions.

I apologize for the rather rambling structure of this post; these are just a few things that sprang to mind. Maybe I've missed the boat on the rationale behind the library social network profile; I certainly welcome rebuttal.

Saturday, October 28, 2006

A bug in Digg

I've been using Digg a lot in the last few hours trying to generate interest in my last, designed-for-Digg post, and I think I may have found a bug. I'm sure there are other ways of reporting bugs, but it seems very much in the Digg spirit to do so through a blog post/Digg submission. Anyway, the bug seems to show when one tries to digg the "my number 1" story of a user they've just befriended. A message appears saying that you've already dugg the story, regardless of whether this is true or not. If you then leave your new friend's profile and go to the story's page itself, Digg will once again let you digg it. The critical point is that the bug is only apparent when digging from the page to which you are sent after initially befriending someone. Well, there it is; I can now rest peacefully knowing that I've done my bit to make Digg better.

Digg!

Top Ten Wired Software Stories of All Time

Well, here it is, my guaranteed-to-make-it-to-the-Digg-front-page post.

1. Scripting on the Lido Deck
Issue 8.10 | Oct 2000
By Steve Silberman

One luxury cruise ship. A hundred hard-coding geeks. Seven sleepless nights. Welcome to the floating conference called Perl Whirl 2000.

This was the article that inspired me, so it gets the top spot.

2. O, Engineers!
Issue 8.12 | Dec 2000
By Evan Ratliff

Twenty years ago, Tracy Kidder published the original nerd epic. The Soul of a New Machine made circuit boards seem cool and established a revolutionary notion: that there's art in the quest for the next big thing.

Kidder's Soul of a New Machine is one of the best books written about the software/hardware development process. Reading it changed my world, at least for a little while. I was maybe 13 when I read Soul of a New Machine. After that, what had been a mild interest in computers turned into a passion. The engineers working at Data General were my heroes. I have a few books that I reread again and again, and Kidder's masterpiece is pretty close to the top of the list. It was also the first Pulitzer Prize winner I read and I've been a faithful follower of the prize ever since.

3. The Java Saga
Issue 3.12 | Dec 1995
By David Bank

Sun's Java is the hottest thing on the Web since Netscape. Maybe hotter. But for all the buzz, Java nearly became a business-school case study in how a good product fails. The inside story of bringing Java to the market.

Java was among the first languages I tried as a young computer science student, and while I may have resented it at the time, the story of its creation is both fascinating and enlightening. Wired describes the story well and with a mild geek factor. In respect and remembrance of the dot-com bubble I've tried to downplay the business side of things in my choices, but in some articles it still shines through.

4. Leader of the Free World
Issue 11.11 | November 2003
By Gary Rivlin

How Linus Torvalds became benevolent dictator of Planet Linux, the biggest collaborative project in history

I don’t know how you can have a list of inspirational tech stories without putting one about the little guy from Finland who changed the world in the top 5. If this were a list based entirely on inspiration I think this would be a contender for number 1.

5. Code Warrior
Issue 7.12 | Dec 1999
By Chip Bayers

Microsoft's head honcho for Windows 2000 seeks perfection. It's a lonely crusade.

O.K., so rounding out the top five is an article about the great Satan. Some may question how a story about Microsoft can be inspiring. I ask you to think back to the days before the antitrust case and before the browser wars. I remember reading Coupland's Microserfs and thinking, perhaps wrongly, this is really cool. Bill Gates is a nerd that made it and we should respect him for that. It's a good article about an interesting story. The geek factor is moderate, so it takes spot number 5.

6. The Quest for Meaning
Issue 8.02 | Feb 2000
By Steve Silberman

The world's smartest search engine took 250 years to build. Autonomy is here.

Having strayed a little from the computer science world, I'm shortly to become a librarian, and as a librarian how can I not put a story about searching on the list? Making it even better is that it's not about Google, every librarian's worst nightmare. This also fits well with the next story on the list. Academia can be inspiring too.

7. The Cutting Edge
Issue 4.03 | Mar 1996
By Michael Meloan

In computer science at Stanford, academic research can be a battle - to the death.

It takes Wired's impassioned writing to make computer science faculty interesting. Nonetheless, it's good to find the inspirational in stories that aren't about making money.

8. When We Were Young
Issue 6.09 | Sep 1998
By David S. Bennahum

In the Golden Age of ASCII, kids could be king.

O.K., so maybe I wasn't alive in the era this article is about, but I wish I had been. Some people dream of the good old days when there were no computers, because there were no computers. I know a librarian or two that fits this description. Others, however, dream of those days because they wish they could have witnessed the technology's birth. I don't long for the past because of safer streets, but because I want an Altair.

9. Open War
Issue 9.10 | Oct 2001
By Russ Mitchell

It started as a crusade for free source code. Linux zealots turned it into a full-frontal assault on Microsoft. Now the battle for the desktop could snatch defeat from the jaws of moral victory.

I think the flagship of the open-source era deserves to make it onto the list twice, and heck, what's more inspiring than a real-life David and Goliath battle?

10. Meet the Bellbusters
Issue 9.11 | Nov 2001
By Steve Silberman

Network-geek power couple Judy Estrin and Bill Carrico helped build the Internet as we know it. Now they want to safeguard its soul.

Not so much about coding, but still an interesting story and certainly important to all those web 2.0 web service developers out there.

Digg!

This all started a few weeks ago when I began looking through old Wired articles for mentions of Marshall McLuhan. I had been reading some of his older stuff out of interest and decided to turn it into a paper on new social technology and the institution of the library in North American society. In Wired's early days they often published articles about their patron saint and I was curious what they had to say.

Anyway, as I was paging through these dusty volumes I recalled, and subsequently found, an article that had inspired me to such a degree that, after a year-long hiatus from anything computer related, I jumped back in and started taking courses at the University of Toronto. It felt strange being the only political science major in a room full of math and computer science geeks, but it was a world which I had always longed to be part of. It took a couple of months but I got back into the swing of things and had a great time. I loved swiping my way into the computer lab or logging into the UofT machines via SSH. I got a great deal out of the experience and ironically bumped up my GPA.

I write this because I owe it all to the excellent writing and editing of the Wired staff and contributors. I have since met a number of others who, while not interested in computer science as a career or field of study, have benefited or could benefit from a similar experience. So, as a reward to those who have, and a source of inspiration to those who have not, I've compiled a list of the ten best or most inspirational programming and software engineering articles from Wired's past.

I've done my best to look through as many of Wired's featured articles as possible. However, I may have missed some and would certainly appreciate suggestions. Just so I don't get a host of emails from irate readers, I'll explain my criteria. Firstly, I looked for articles that spoke to the software development process, the development of some particular piece of software or language, or a personality important in the software development world. I judged the articles based on their level of inspirational content, their level of detail and the degree to which they "got their geek on". By the last criterion I mean the degree to which they got into the technical details.

Some may disagree with my rankings based on these criteria, and to them I say: the overriding criterion for any top ten list is the personal preference of the reviewer. If I liked it, it made the list. Having said that, I still want input, both on the articles and on your own sources of written inspiration. The more comments the better.

Wednesday, October 25, 2006

First Google, now Amazon; attacked from all sides

This is just a quick follow-up to my FRBR post. Amazon, it appears, is leveraging their Mechanical Turk to steal more than just the jobs of catalogers. Their new service NowNow is essentially a reference question service that works on mobile devices such as cell phones and BlackBerries. Unlike Google, which uses trained and vetted reference librarians (or equivalent), or Yahoo, which allows anyone to answer, NowNow farms the questions out to Mechanical Turk. I'm not sure what the quality will be like, as anyone can sign up to work for Mechanical Turk, though according to the FAQ a response rating system will regulate the system to some degree. I'm not sure what the pay structure is, but it might be a quick way for an impoverished library student to earn a few bucks. I'm thinking that if described creatively it might also look interesting on a resume, particularly to someone not in the know.

Wednesday, October 18, 2006

FRBRising with the folks

During last semester's advanced cataloguing class I spent a great deal of time thinking about the relationship between traditional cataloguing and modern collective systems like "tagging". Eventually I began to think about FRBR and the changes coming out of OCLC. In particular, I focused on the various attempts being made to identify the "work" under the new system. OCLC has begun to test catalogue FRBRisation with what they call the "work set" algorithm and has met with some success, but certainly far from 100%. The problem that I just don't see them getting around is that often the "work" is simply not represented in the traditional bibliographic record, not even as a combination of elements. If this is the case, no amount of processing by computer or librarian will be able to accurately and consistently identify and group "works". What the FRBRisation process needs is just a little added information about each record. This seems like a perfect task for a social bookmarking application. I'm not suggesting that social bookmarking or tagging should replace the more traditional details of the cataloguing process. Any information that already exists in the bibliographic record or can be found on the chief source of information should still be dealt with in the traditional fashion. However, the "work" to which an item belongs is neither currently found in any pre-FRBR databases nor easily derived from the chief sources of information.

I do recognize that the devil is in the details with these things. The problem of multiple overlapping "works" would undoubtedly arise in the absence of a predefined set, among other problems. However, Google and others have had a great deal of success devising algorithms that use user input, but don't do so literally. User input is filtered and analyzed to identify trends, commonalities, etc. This is how the Google spell checker works: Google suggests a word based on the aggregate misspellings of the searching community. The idea behind a social FRBRisation project would not be to let the community definitively define the work of an item the way they might tag a picture on Flickr; it would be to generate the information lacking in a traditional bibliographic record so that something like the "work set" algorithm might perform more efficiently.
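
As a toy illustration of that filtering idea, here is a minimal Ruby sketch, entirely my own invention, in which community-suggested "work" labels for a record count only once they reach a clear majority; anything short of consensus is left for the algorithm to handle.

```ruby
# Hypothetical sketch: aggregate community-suggested "work" labels for
# one bibliographic record, accepting a label only when a clear
# majority of taggers agree, so no single noisy tag defines the work.

def consensus_work(suggestions, threshold: 0.6)
  counts = suggestions.tally
  label, count = counts.max_by { |_, c| c }
  count.fdiv(suggestions.size) >= threshold ? label : nil
end

suggestions = ["Hamlet", "Hamlet", "Hamlet (Branagh film)", "Hamlet", "Hamlet"]

puts consensus_work(suggestions) || "no consensus; defer to the work set algorithm"
# => Hamlet  (4 of 5 taggers agree, clearing the 60% threshold)
```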

The other problem I foresee arising is that a library undertaking a FRBRisation project might not have the time or resources to develop and shape the kind of community that would be required to pull something like this off. However, I believe this has a solution as well. I was recently listening to a podcast/interview with Jeff Bezos, the founder of Amazon.com. He was addressing the various new non-consumer products that Amazon has begun to offer. All the various services were interesting to hear about, but one caught my attention immediately: Mechanical Turk. I had remembered reading about it when Amazon first introduced it but hadn't paid any real attention to its development. The Mechanical Turk allows organizations to programmatically farm out small tasks to large groups of independent contractors. Each task itself is worth only a few cents, but individuals who sign up can, in theory, perform many tasks in a very short span of time, enough to make a reasonable sum of money. Bezos called it artificial artificial intelligence, because from the programmer's point of view the Amazon computer is doing all the work. In reality the Amazon computer is asking a person and then sending the result back to the service-subscribing third party. My point is that a library that wanted to FRBRise its database quickly could employ the Mechanical Turk instead of waiting to build its own community.

Interestingly, Amazon developed the Mechanical Turk initially for internal use, to do much the same thing as I'm suggesting. Amazon had a problem with duplicate records. They realized that many products were virtually the same and could be sold and inventoried as a single product, but existed in their database as two items. It was too large a problem to give to one person, or even a group of people, so they created a task marketplace, which evolved into the Mechanical Turk. A program would identify similar records and then submit them to the marketplace as a task. All an Amazon employee had to do to earn a few extra bucks was glance at each record and answer yes or no. If the answer was yes, the records were merged; if no, the program moved on. All I'm suggesting is that something like the "work set" algorithm replace the Amazon program. Sure it would cost, but looking at how things are priced, not as much as one might think.
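
Here is a rough sketch of how that yes/no loop might look from the library's side. To be clear, TurkClient below is a stand-in stub of my own devising, not Amazon's actual API; a real implementation would post each candidate pair as a paid task and wait for a worker's answer.

```ruby
# Hypothetical sketch of the yes/no merge loop described above.
# TurkClient is a stub: it fakes a human judgment by loosely comparing
# titles, where a real system would ask a paid Mechanical Turk worker.

class TurkClient
  def same_item?(a, b)
    normalize(a[:title]) == normalize(b[:title])
  end

  private

  def normalize(title)
    title.downcase.gsub(/[^a-z0-9]/, "")
  end
end

# Candidate pairs would come from something like the "work set"
# algorithm; the human supplies only the final yes/no judgment.
candidates = [
  [{ id: 1, title: "Moby-Dick" }, { id: 2, title: "Moby Dick" }],
  [{ id: 3, title: "Walden" },    { id: 4, title: "Walden Two" }],
]

turk = TurkClient.new
candidates.each do |a, b|
  if turk.same_item?(a, b)
    puts "merge records #{a[:id]} and #{b[:id]}"
  else
    puts "keep records #{a[:id]} and #{b[:id]} separate"
  end
end
```

The point is how thin the library's side of the code can be: the algorithm proposes, the human disposes.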

Sunday, October 15, 2006

Digging for wikis

Library science student and avid blogger Jason Hammond has an interesting post on his blog HeadTail regarding the similarities between Google and wikis. I bring it up not only because Jason always has interesting and insightful things to say, but also because he's set himself the challenge of beating my record of 8 Diggs on the social news bookmarking site digg.com. My digg got him to seven, so he's not far from leaving me in the dust. I encourage everyone not only to read his post but to give it a digg. I think it would be a first if we could get a Western LibSci blog to the Digg front page. For those of you in a hurry, here is the link to digg the story.

Wednesday, October 11, 2006

How much structure is too little structure?

I just finished reading Brian Lamb's article on wikis. Generally I was impressed; Lamb took a fairly critical view that I think was lacking in a number of the other articles. However, I do take issue with his brushing off of the structure issues that can arise when collaborating via a wiki. He seemed to suggest that user problems with wiki structure stem mainly from unfamiliarity rather than genuine issues with usability. Much of his argument in this regard rested on his faith in the inherent search functions of a wiki. This attitude seems counter to the library and information science perspective. I certainly don't want to go through the many arguments for proper cataloguing and structured browsing over search, but they do exist and are prevalent enough that they shouldn't be ignored.

Lamb also suggests that recent-additions and update lists, as well as other assorted tools, would somehow address the lack-of-structure issue and ultimately improve findability. I couldn't disagree more. If anything, tools such as the recent changes list hinder findability by focusing user attention on a small subset of the total information contained in the wiki. In terms of searching, the recent changes list is no different from a new books list in the library: while interesting and full of helpful information, it is not a tool for finding things.

Contextual links also come with their own problems, particularly when combined with unsupervised editing. If the only structured means of finding a page is through a contextual link, all that is required to orphan the page is for the link to be edited out of the related article. Given the known problems with free-text searching, a page that has lost its single contextual link may never be found.
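
To see how little it takes to strand a page, here is a minimal Ruby sketch of the orphaning problem; the page names and link structure are invented for illustration.

```ruby
# Each wiki page's outgoing links. A page that no other page links to
# can only be reached by search, which is exactly the orphaning problem.

links = {
  "Home"     => ["Projects", "Meetings"],
  "Projects" => ["Meetings"],
  "Meetings" => [],
  "OldNotes" => [],  # its one inbound link was edited away
}

linked_to = links.values.flatten.uniq
orphans   = links.keys - linked_to - ["Home"]  # treat Home as the entry point

puts "Orphaned pages: #{orphans.join(', ')}"   # => Orphaned pages: OldNotes
```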

None of this is to say that wikis are not a wonderful collaborative tool, but to ignore the issue of structure risks wasting a lot of positive effort. It would also seem that the inherent self-organization of groups fades as the group grows increasingly diverse. As libraries of all sorts generally deal in diversity, the issue of structure, it would seem, should be even more relevant to them.

Vista's RSS platform podcast

Just thought I would post a link to an interesting podcast/interview with Amar Gandhi, a program manager on the IE7 team. It's about a year old but still very interesting. In it they talk about RSS integration with Windows Vista, potential uses of RSS and some potential pitfalls, among other things. A word of warning: the first five minutes are very technical, but after that it gets fairly easy to follow. http://weblog.infoworld.com/udell/gems/ju_ghandi.mp3

Sunday, October 01, 2006

Looking for delusions in Google

Just a quick post for any who are interested. I found, while searching for something completely unrelated, that Google Print has made available the full text of the book "Extraordinary Popular Delusions & the Madness of Crowds" by Charles Mackay. The book is notable for a number of reasons, not the least of which is its mention in James Surowiecki's book "The Wisdom of Crowds". Surowiecki, in part at least, structures his best-selling book as a response to the incidents that Mackay recounts in his work. Both books are interesting reads and in many ways fundamental to the development of social software both inside and outside the library.

Digg!

Wednesday, September 27, 2006

It's What's Not There That Matters

After reading the rosy pictures painted by Luke Rosenburger and Steven Cohen, I'm concerned that we're not approaching RSS in as critical a fashion as we should be. Even Robin Good's article, while more balanced, wasn't particularly critical. RSS is a new medium in the same way that the telephone, television, and telegraph were once new. Each radically changed the way we interact with each other, the way we think and even the way we organize ourselves. I think that a recognition of this has begun to ingrain itself in our culture, for which we are in great debt to Marshall McLuhan, and for that reason we approach most new mediums critically. Blogs are a great example of this; as much time and print is spent analyzing the medium of blogging as is spent addressing any particular aspect of its content. The same, I don't think, can be said of RSS. Perhaps because of its ultimate simplicity, many fail to recognize RSS as something new, or at least as a medium all its own.

I haven't given this as much thought as I would have liked to, but I do have some initial thoughts on RSS the medium. RSS is minimalist in the information it distributes: headlines, summaries, mere suggestions of the whole story. The intent, obviously, is to draw people to the full text, the totality of what is available, but is that what really happens? What I fear is equally likely is that rather than drill down into the story, article, post, or update, RSS readers will increasingly rely on the feed itself as their source of information. In the past a person might not have experienced information of such diversity, at such speed, but because they were required to actually seek it out they experienced it in its entirety. If I'm right, this only becomes more of a problem as the popularity of RSS grows. The more feeds a person subscribes to, the less time they will ultimately have to dedicate to each of the items in those feeds. In the end we will have traded immediacy for completeness. My real concern is that this RSS overload will further the Fox News/talk radio effect: ignorance of ignorance. People reading hundreds of RSS feeds a day will feel as though they are in the know, when in fact they've garnered very little from their efforts.

Anyway, because of the above, RSS worries me and I think it should be approached with care. Libraries want potential patrons to visit their site and I'm not sure RSS is the way to do it. The key, I believe, may be in treating the RSS feed not as a headline, but simply as advertising; summaries out, catch phrases in. The headline works for the newspaper because the article is underneath; the same cannot be said of RSS. Finally, if it's something that can't or shouldn't be advertised, then leave it out and let them come to the site.

Friday, September 22, 2006

This blog is still not fact checked

It has been pointed out to me that it might be unwise to follow the precepts of my last post whilst blogging for an organisation of any kind, a library for example. To be honest this is probably quite true. To make rash personal and potentially inaccurate statements on an organisation's official website is obviously both wrong and unwise. This is precisely why, and I alluded to this in my post on the New Republic, blogs written on behalf of organisations are in fact not blogs at all. They may look as though they are blogs, appear so in form, but they are not. A blog sponsored by an organisation is no more a blog than an infomercial is a documentary.

In writing this, I'm not suggesting that organisations like libraries shouldn't run blog-like websites; they have and do, occasionally with some success. And if organisations are going to run blog-like sites, it would only be prudent to have a code of some sort for the employee they mislabel a blogger to follow. However, to suggest this is not to suggest a code for bloggers, but a code for employees. I am not an employee, and as a blogger, if the form is to operate as is suggested it does, I should not have to follow the ethical code of any particular organisation.

As for future employment, from what I have heard, it's true: employers occasionally do examine the blogs or past blogs of potential hires. Does this mean that bloggers who feel that at some point their blog might influence their job prospects should be ethical? No. What it suggests is that they should be prudent. However, to be prudent is certainly not to be ethical. To be ethical would be to blow the whistle on the organisation by which you are employed; to be prudent would be to remain silent. The difference, while quite grey in the abstract, is stark in reality.

If I might add one additional point on the ethical implications of employment: to suggest that we behave ethically because we fear reprisal from employers, future or current, is to suggest that somehow our ethics are a function of this economic relationship. While some might agree, many, I think, would not. I am of the latter. The thought that our employer defines our ethics is a scary one; it is to suggest that those with money, and thus free from the bonds of traditional employment, are also free from traditional, or at least conventional, ethical restraint. This may at times seem true in our society, but that certainly does not mean that it is right.

Wednesday, September 20, 2006

This blog is not fact checked

I'm frustrated by Blood's Weblog Ethics. My frustration stems from a number of her individual points as well as the piece as a whole.

On the whole, the first thing that came to my mind in reading both her piece and that of Karen Schneider was that they're missing the point. Blogs are about speed, individual preference and, as is mentioned all too often, conversation. Are they about fact-checked information, written with the lofty goal of unbiased dissemination to the public? No! To insist that bloggers adhere to some code of ethics, particularly one constructed as a sort of journalism-ethics lite, is to eliminate the form altogether. A blogger who would painstakingly conform to the code Blood describes ceases to be a blogger and instead becomes an underpaid journalist with low ethical standards. Perhaps I'm wrong in this, and I can say for certain that my information has not been fact checked, but no one ever seriously suggested that newsletter publishers back in the early days of desktop publishing adhere to a common code; a blog is no different. Blood and others confuse the issue by suggesting blogging is a form of journalism, when in fact it's best considered a form all its own.

More specifically, I have difficulty with a number of Blood's individual points. Firstly, the issue of truth and authority. Blood suggests that fact checking and the validity of a blog's assertions are the sole responsibility of the blogger, and to some extent I agree. To make a statement in full knowledge of its falsity is lying and is unquestionably unethical, not, however, because it was written and recorded on a blog, but because making false statements in any circumstance is unethical. This is not part of my code as a blogger, but as a person. Where my difficulty arises is in her suggestion that the ultimate authority of the blog should rest in the ethical motivations of the author. This is similar to suggesting that the imposition of a code of ethics is what ensures the accuracy of the science and history we read in academic journals. This is simply not the case; the authority academic journals carry is derived from the system of peer review they employ. Similarly, any authority that blogs have or will have is derived from the system of immediate comment and feedback they employ.

While codes of ethics are positive in that they suggest that some, at least, are taking the form seriously, they also confuse the issue of authority. Bloggers should promote blogs on the merits of their own unique systems of authority, not on those of any other form, and not on some ethereal notion of ethics.

Sunday, September 17, 2006

Blogging The Symptom Not The Problem

At this point some must be asking: what's wrong at the New Republic? The magazine has been a staple of American journalism since 1914, but in the last few decades has suffered from one humiliation after another. The most recent in the magazine's long string of troubles is the suspension of one of its senior writers, Lee Siegel, after he began posting increasingly outrageous comments to his blog under a pseudonym. However, little has been made of the fact that yet another New Republic writer has found himself at the center of scandal. The response to this latest journalistic failing has been to suggest that real journalists can't handle the bidirectional nature of the blog medium; the give and take of the blogosphere is too much for the pride-filled professional writer. This is a convenient explanation for the magazine as it allows the old ghosts of its past failings, Stephen Glass and Ruth Shalit, to remain buried. But is the blog really to blame? It's hard to look at the publication and not wonder if this most recent debacle isn't part of a larger problem, one that has reared its head more than once in the past. Perhaps the blog itself isn't the problem, but rather merely a symptom.

The Ruth Shalit affair was the publication's first major modern misstep. Shalit, as it turned out, had not only plagiarised pieces of a number of her works, but had also, on occasion, manufactured some of the more critical facts; this was, of course, before Fox made it fashionable. After the affair, the magazine instituted an official fact-checking department to exercise the control its editors seemed unable or unwilling to exercise themselves. This response seemed reasonable both at the time and even now in hindsight. Plagiarism is bound to occur in any endeavor that relies on the intellectual product of the human mind, so when engaged in such endeavors it seems only reasonable to guard against such human failings. However, one question might have been why the editors weren't guarding against this already, but that was lost in the magazine's efficient response to the crisis.

The same question should have been asked during the Stephen Glass affair. Glass, young and pressed for time, increasingly resorted to fabrication, and not of the minimal sort of which Shalit was guilty. Glass constructed not only the facts, but often the context as well. As a New York Times writer pointed out after the public revelation of Glass's fraud, one is prepared for the interview to be falsified or misconstrued, even for the interviewee to be nothing more than a mere character, but not for the institution the article was about to be an invention. The suggestion then was that Glass had escaped detection because of his audacity and the scale of his fictions. However, while audacity may explain a lack of popular scepticism, it fails to explain just what happened to the fact checking and the guiding authority of editorial control. In many instances Glass was only one phone call from being found out. One can't hide the nonexistence of entire corporations for very long. The problem was the phone call was never made, and Glass remained, for quite some time, undiscovered.

Given this history, it seems that the most recent failings of the publication might be better explained from an editorial perspective than a technological one. Siegel's blog is not an isolated incident, but part of the much larger pattern outlined above. In the case of Ruth Shalit the New Republic lacked the editorial controls necessary to guard against her mistakes. In the instance of Stephen Glass it failed to use them. Now, in giving Siegel a blog, the New Republic removed any pretense of editorial control. Having always and often erred on the side of carelessness, in creating official blogs the magazine chose to institutionalise the practice.

The New Republic has for some time relied on the impassioned and unbridled expression of its writers. To foster this image the magazine hires young, fresh writers, often newly graduated from the country's finest schools. The use of the raw medium of the blog was simply another tool to keep the publication's image alive, but as in the past, with Glass and Shalit, the strategy backfired. The only difference between then and now is that it happened online.


Digg!

Wednesday, September 13, 2006

How Easy is Too Easy?

I just finished reading Meg Hourihan's 2002 piece "What We're Doing When We Blog". I think anyone who has read my previous posts knows that I disagree in principle with her highly positive assessment of bloggers and their readers. I won't belabor the points I've already elaborated in the past few days. I also don't have any quarrel with the more descriptive elements of her piece. However, I will briefly make one point.

As anyone reading this blog will have noticed, I've been quite caught up in the recent Digg.com controversy. As each day seems to bring endless commentary on the situation, something to which I am a party, much of my morning surfing has been dedicated to just keeping up with the debate. This morning I was reading a particularly well-written article entitled "Digg's Design Dilemma". While I can't say that I agree with the author's overall conclusion, for reasons which I will spare you, he did have a number of interesting things to say. Particularly relevant to Hourihan's article was his comment on Digg's ease of use.

Specifically, he points out that a credible argument exists that the more difficult it is to comment on something, the better the comments themselves tend to be. This is not the first time I have heard the case made for making individual participation more difficult. In political science there is a substantial school of thought that views the ease-of-use problem as the strongest argument against the use of phone, mail or internet voting systems in elections or referenda. Make things too easy and people stop thinking about them, be it the comments they leave on blogs or the votes they cast in an election.

Hourihan's suggestion that comments allow for a dialogue to emerge on a blog is indubitably true. What isn't so clear-cut is whether the dialogue created by a blog's comment feature actually lends itself to a better intellectual product in the end. What might have begun as a potentially valuable protothought on a blog could easily devolve rather than develop when contorted in response to a series of thoughtless comments.

Surowiecki agrees with blogger

Or at least that's what the title of this USA Today article should have been, as I beat Surowiecki to the punch with my post Monday entitled "The Wisdom of Digg from Crowd to Mob". As it happens, in an interview with USA Today, Surowiecki agreed with my assessment that the changes made by Digg.com to its algorithm should result in a system that better reflects the principles of good group decision making outlined in "The Wisdom of Crowds".

The proof just keeps rolling in

I know my last post may have sounded a little harsh, and perhaps it was. However, in my defense, the evidence just keeps rolling in. In an ironic twist of fate I came across this post in my web wanderings this morning, "It's all a farce, anyway". In it the author is more interested in the effects of web surfer apathy as it applies to ad revenue than to blog quality, but she does begin with a very interesting anecdote. She apparently had been speaking with a number of Mac programmers, during which they revealed that they were party to a blogger's successful attempts to game the social bookmarking site Digg.com. For anyone who follows Digg that's not a big shock; gaming seems to have been a problem on the site for some time. However, the blogger's next statement is significantly more interesting. He reveals that his gaming strategy depends largely on the fact that after a point Digg users cease to read the articles themselves and simply digg anything that catches their fancy. This behavior suggests that Digg users are less interested in the content than they are in the activity of "digging", in effect, "killing time". I must admit, this comes as less of a revelation to me than it appears to have been to the author of the post, but it is nice to hear it from the horse's mouth. As a student first and blogger second, evidence still trumps opinion.

Tuesday, September 12, 2006

Where did all the links go?

Reading Rebecca Blood's "Hammer, Nail: How Blogging Software Reshaped The Online Community", I can't help but long for the times when a blog post almost always contained a link of some kind. Perhaps I failed to play enough Nintendo as a child and my attention span has thus remained at a level now uncommon, but I rarely ever want to read only a sentence or two on a topic of interest to me. Even three hundred words, the length of originality indicated by Blood, seems too short to truly communicate anything of substance.

Don't get me wrong, I'm all for headlines and teasers (movie trailers often proving better than the movie). I like to know what I'm getting into before I start to read. Frankly, I don't have time to read everything, and a good title/summary makes separating the wheat from the chaff all the easier. That's why the link was there: if you're interested, go read more. Now, sadly, without the link, my attention caught by a witty title and that smashing first line displayed in the Google results, I'm left with some blogger's 300 words, not even enough for a thought, more a thoughtlet.

However, bloggers and readers alike seem unperturbed by this, the loss of the link. My theory why: no one is really interested in what they read on blogs anyway. In fact, I don't think most bloggers are much interested in what they write. Both groups, equally uninterested, are just killing time.

The consumption and production of many a blog has become something akin to the consumption of romance novels. I work in a public library and regularly witness the romance novel selection process. People choose romance novels because the cover is shiny and new, because the title is interesting, or even because they've taken the time to read the first sentence on the back cover. At times I've witnessed patrons simply take all the books that were askew or misaligned. This, in my limited experience, is not dissimilar from the manner in which most people choose the blogs or blog posts they read. Why? Because without the link, blog consumption is little different from reading a romance novel, just killing time.

Perhaps I'm wrong; maybe a lot can be said in only a few words, and genuine interest expressed without the desire to know more.

Monday, September 11, 2006

The Wisdom of Digg from Crowd to Mob Part 1

In recent days a controversy has arisen over steps taken by digg.com to reduce the influence of its top contributors. The algorithm that moves stories to the coveted front page of the site is being altered to reduce the weight formerly given to top diggers, as well as to blunt the effectiveness of reciprocal digging by requiring a greater diversity among the participating users. Top users have not only let loose with a torrent of blog articles criticizing the move, but are also "resigning" from the site in protest. The question this controversy has left the community with is just how important top contributors are to web 2.0's social production formula. The answer, a resounding "not very", can be found in two of web 2.0's seminal works.
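To make the change concrete, here is a minimal sketch, in Python, of what a diversity-aware promotion rule might look like. Digg has not published its actual algorithm, so every function name, data structure and number below is my own invention; the point is only that a story's score can be made to depend on how independent its diggers are, not merely on how many of them there are.

    from itertools import combinations

    # Hypothetical sketch: score a story by its diggs, discounted when the
    # diggers habitually digg together. All names and figures are invented.
    def promotion_score(diggers, co_digg_history):
        # diggers: set of user ids who dugg the story.
        # co_digg_history: dict mapping frozenset({u, v}) to the number of
        # past stories u and v have both dugg (a crude reciprocity measure).
        if len(diggers) < 2:
            return float(len(diggers))
        pairs = list(combinations(diggers, 2))
        # Average pairwise overlap is high when the same clique diggs together.
        overlap = sum(co_digg_history.get(frozenset(p), 0) for p in pairs) / len(pairs)
        # Each unit of habitual co-digging shrinks the effective vote count.
        return len(diggers) / (1.0 + overlap)

    history = {frozenset({"alice", "bob"}): 40}  # alice and bob digg in tandem
    assert promotion_score({"carol", "dave"}, history) > \
           promotion_score({"alice", "bob"}, history)  # strangers count for more

Under a rule of this shape, a pile of reciprocal diggs from a tight clique can be worth less than a handful from strangers, which is precisely the rebalancing the top users are protesting.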

When one scours Amazon for the popular intellectual origins of the social production movement that has recently culminated in "web 2.0", one's search eventually and inevitably lands on James Surowiecki's The Wisdom of Crowds. Surowiecki's work presents a compelling case for the effectiveness of social collaboration in achieving any number of goals. His arguments have since been employed repeatedly by bloggers, pundits and academics to justify an apparent infinity of new online collaborative tools. What is often lost in this evangelising of group power is that the book also serves as a warning that the wisdom of the crowd can very quickly devolve into the irrationality of the mob.

Surowiecki outlines three preconditions necessary for a collaborative project to succeed: diversity, independence and decentralization. Groups lacking even one of these traits fail miserably at producing even passable results. In the case of Digg, having the top users in control of the front page violates not one, but all three of Surowiecki's tenets. The reciprocal digging that has grown increasingly common among Digg's top users has produced behaviour more akin to that of a clique than a collection of independent netizens. Diversity disappears as the group grows closer and becomes more difficult to break into. Finally, one can already see decentralization disappearing as leaders and personalities emerge from among the top user ranks. The recent debate further illustrates this last point, with the Digg community looking to see what a select few individuals will do. In fact, if the top users succeed in driving traffic from Digg, it will only serve to prove how little independence and diversity there is within the community.

Next post to come soon; just think "long tail".


Sunday, September 10, 2006

Just a Few Things

Just finished reading the "Amorality of Web 2.0" post on Rough Type. A strange post, and I don't think it really hung together, but then again what can you expect from blog posts. Personally, I strongly disagree with him on the amorality point, which, given his final paragraph, may have been his primary point. Notions of community, equality, and expression are not only heavily value-laden concepts, but also very much a part of our society beyond and before the web. We live in a participatory democracy, a system founded on these ideas; it would seem the very definition of hypocrisy to suggest that we would not hold them in the highest esteem. True, the web is "just happening", but it's happening because it allows us to tap into and further many of our core ideals.

On a very different note, I found what he had to say about the coexistence of the blog and traditional media interesting. Not interesting because of what he was saying about media, but because the same might also apply to the conflict between libraries and bookstores. Often the suggested solution seems to be to alter the library to emulate the bookstore. This seems like no better an idea than the suggestion that the New York Times should cede the mantle of journalism to the bloggers. In both cases overlap may exist, but in neither case is there an excuse for the extinction of either.

Laws That Can't Be Ignored

I just finished reading the April Newsweek article "The New Wisdom of The Web" by Levy and Stone. While it provides an interesting and optimistic overview of a number of the major web 2.0 players, the authors' enthusiasm seems to prevent them from taking a more critical look at the companies they are profiling. True, the ability of the web to overcome some of modern economics' hard and fast rules regarding incentive and self-interest is astounding; one only has to look at the success of the open source software movement. However, much of the evidence suggests that these economic improbabilities are built on the most tenuous of contextual changes. Sure, the web may have lowered the personal costs of contributing to a collective effort by such a degree that altruism, enjoyment and recognition prove sufficient motivators, but an equally small push in the other direction may be all that is required to break the system. This is why so many, if not all, of these web 2.0 enterprises are burning holes in the pockets of their owners and investors.

What the article fails to mention is that many of the web 2.0 flagship sites have yet to turn a profit, relying on cash from venture capitalists much like their 90's dot com bubble counterparts. YouTube, recently valued at one billion dollars, continues to lose money hand over fist, not to mention the potential for further dramatic losses due to intellectual property lawsuits. Business Week recently featured digg.com founder Kevin Rose on its cover with the headline "How This Kid Made $60 Million in 18 Months". However, Rose was quick to respond in his weekly podcast that he can barely afford to buy a couch, the site still drawing on its venture capital cash.

In response to the "show me the money" demand, one often hears that the magic of Chris Anderson's "long tail" will come to the web's rescue. One is often directed to look at Amazon and Netflix for proof of every web 2.0 company's viability, but to leverage the "long tail" one has to have a product, and flickr, YouTube and MySpace don't seem to. Without their users (and their content) these sites have nothing, and the minute they place an impediment in the way of their users' participation they'll start a classic race to the bottom. Make the site less appealing to use, through more intrusive ads or charging for access, and users will cease to generate content. Once the content begins to dwindle, so too will users' willingness to tolerate the ads and/or cost, which will only further the loss of content, and users.
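The dynamic I'm describing is just a feedback loop, and it can be sketched in a few lines of Python. Every parameter below is invented for illustration; the sketch only shows how a constant level of friction (ads, fees) compounds through the users-beget-content-begets-users cycle.

    # Toy model of the race to the bottom; all figures are made up.
    users, content = 1_000_000.0, 100_000.0

    def step(users, content, friction):
        # Users stay in proportion to the content on offer, less the
        # annoyance of ads or fees; content is produced by whoever remains.
        new_users = users * min(1.0, content / 100_000.0) * (1.0 - friction)
        new_content = content * (new_users / users)
        return new_users, new_content

    for month in range(1, 13):
        users, content = step(users, content, friction=0.10)
        print(f"month {month:2d}: {int(users):>9,} users, {int(content):>9,} items")
    # The loop compounds on itself: a steady 10% annoyance factor doesn't
    # merely trim the audience, it collapses it well before the year is out.

The exact numbers are meaningless; the shape is the point. Because users and content feed each other, the penalty for adding friction is not linear but compounding.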

Adding to all this is the fact that much of the aging infrastructure on which these sites rely was largely built during the dot com bubble and was thus significantly subsidized by all those investors who lost their shirts and pensions in the bust. Today's cheap broadband is largely a product of bargain basement bankruptcy sales. Unless the middle class of North America is willing to again turn over its life savings, such as they are, for the betterment of broadband access for all, costs for these sites are only going to rise. Rises in infrastructure costs will hurt sites like YouTube the hardest because of the data-intensive content they provide, but rising costs for any money-losing startup cannot be good.

I do not mean to spell doom and gloom for the web 2.0 industry. All I am suggesting is that to look at it uncritically, considering only the value and not the costs, is a mistake, one that was made by many not so long ago. The "long tail" provides an economic model for many an online business, but not all. Appealing to the masses is great, but it does not give one carte blanche to ignore economics.


Wednesday, September 06, 2006

First Post (My Bio)

Hi,

I am an MLIS student at the University of Western Ontario, currently in my third and final term (hopefully). Last year I completed a BA in Political Science and American Studies at the University of Toronto. I am, and have been for the last seven years, an employee of the Toronto Public Library system at the Burrows Hall branch. Besides school and work, much of my time over the last several years has been dedicated to training for and running in a number of marathons and races of other assorted distances. I spend many of my days attempting to reconcile my passion for libraries and my love of running; unfortunately, reading while running results in way too many injuries.

As for my social software experience, I have blogged, if only briefly. I have subscribed, and do subscribe, to RSS feeds both through a variety of aggregators and through Firefox's live bookmark feature. I'm not a fan of del.icio.us, not because of its underlying principle but because the interface is atrocious. By far my favorite piece of social software is the social news aggregation site digg.com, not simply for its functionality and interface but because of the various personalities involved in its creation. I'm a reader of Wikipedia but have only contributed once, in aid of Stephen Colbert's abortive sabotage attempt (in my defence, I think he was making a valid point).