Harvard Critical Digital Conference 2008 paper

In April I presented a paper at the GSD Critical Digital Conference at Harvard University. The paper was co-authored by my supervisor, Mike Donn. The conference itself was pretty good considering it was the first time it had been run. You can find my paper, along with all the others, online at the Critical Digital website. However, for posterity (and Google), I have included the text of my paper below.

Using Project Information Clouds to Preserve Design Stories within the Digital Architecture Workplace


During the development of an architectural design a series of design stories forms. These stories chronicle the collective decision making process of the diverse project team. Current digital design processes often fail to record these design stories because of the emphasis placed on the concise and accurate generation of the virtual model. This focus on an all-encompassing digital model is detrimental to design stories because it limits participation, consolidates information flow and risks editorialisation of design discussion. Project Information Clouds are proposed as a digital space for design team participants to link, categorise and repurpose existing digital information into comprehensible design stories in support of the digital building model. Rather than a discrete tool, the Project Information Cloud is a set of principles derived from a proven distributed information network, the World Wide Web. The seven guiding principles of the Project Information Cloud are simplicity, modular design, decentralisation, ubiquity, information awareness, evolutionary semantics and context sensitivity. These principles, when applied to the development of existing and new digital design tools, are intended to improve information exchange and participation within the distributed project team.

The 7 (f)laws of the Semantic Web (aka Web 3.0)

I have just been going through some old articles I have lying around and came across this:

The 7 (f)laws of the Semantic Web

In the article Dan Zambonini lists seven issues he believes need to be addressed by Semantic Web proponents in order to improve its chances of adoption.

  1. Not all Semantic Web data are created equal.
  2. A technology is only as good as developers think it is.
  3. Complex Systems must be built from successively simpler systems.
  4. A new solution should stop an obvious pain.
  5. People aren’t perfect.
  6. You don’t need an Ontology of Everything. But it would help.
  7. Philanthropy isn’t commercially viable.

Personally I think these are all excellent points which, if addressed, would considerably improve the adoption chances of many Semantic Web related technologies.

The Semantic Web is an ideal that has been around for a long time but has never reached critical mass. Recently a number of American journalists announced that the ideals of the Semantic Web were alive and kicking in the guise of Web 3.0. Personally I think this is a fairly naive claim, for a couple of reasons.

Firstly, Tim Berners-Lee's concept of the Semantic Web existed well before the term 'Web 2.0' was even a glimmer in Tim O'Reilly's eye. So to launch a rebranding exercise and announce it as the new big thing ignores the fact that it actually lost out to the simple, socially motivated read/write concepts which are the embodiment of 'Web 2.0'. Whatever evolves from the hype that is 'Web 2.0' will certainly not be the Semantic Web as previously envisioned. Instead it will inherit many of the aspects which have come before it whilst simultaneously adding a new twist, perhaps Semantic Web related (or not), which gets Web users and investors excited.

Secondly, launching a crusade for the next big thing well before the benefits and lessons of 'Web 2.0' have been disseminated will ensure this next version of the Semantic Web meets the same fate as its previous incarnation. At issue is the fact that the Semantic Web concept represents a number of digital information and relationship ideals. Striving to meet these ideals should be the long term ambition of all future iterations of Web technologies, including 'Web 3.0' if it ever materialises. Attempting to simply package up these ideals into a set of technologies which are forced on an unsuspecting, and to a certain degree unwilling, user base will only result in one thing: rejection.

What is the answer to HTML, Web 2.0 and everything?

Well, it may not be 42, but this great video by Michael Wesch, an Assistant Professor at Kansas State University, does an excellent job of visually explaining what Web 2.0 is all about and how it differs from conventional media and the Web we all got used to prior to this century:

Web 2.0 ... The Machine is Us/ing Us

The Search for Web 3.0

The buzz around Web 2.0 may have only started in the last year or so, but already industry commentators are throwing their opinions into the hat for what will constitute Web 3.0. Such talk strikes me as more than a little premature, and what is being discussed appears to be a regurgitation of the technologies proposed during the dot-com boom of the mid-nineties rather than original thinking on how to build on what we have learned from the previous two incarnations of the Web.

Discussing Web 3.0 is premature because no one has yet come to grips with exactly what the concept of Web 2.0 is. There are loose ideas of community, interaction and the writeable Web, but no simple, easy-to-understand description has yet crystallised. Until this occurs it is hard to tell where one set of conceptual ideas finishes and another begins. The bursting of the dot-com bubble signalled the end of one distinct period of Web development, much like the K-T boundary marked the end of the dinosaurs (mostly). This intense moment of destruction followed by relative calm gave those on the Web time to pause, digest what came before and evaluate the best way forward.

To make matters worse, discussion about what Web 3.0 could be appears to be centred around the relatively old concepts of the Semantic Web. Whilst a nice idea, such arguments ignore the fact that Semantic Web ideas existed well before Web 2.0 concepts, and in terms of realising these grand ideas not a great deal has changed. From a technical perspective the enabling technologies are still overly complicated, and at a practical level no clear upgrade path exists from our current dumb Web to this idealised space (apart from millions of hours of painful, manual classification). Most significantly, the Semantic Web relies on our ability to generate classification systems for many different forms of data. Given that a single office document standard cannot be agreed upon, and development of in-depth, domain-specific semantic languages such as Industry Foundation Classes has stalled, such a proposition seems far off.
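To see why that classification burden matters, it helps to remember that the Semantic Web's core data model is the triple: subject, predicate, object, each identified by an agreed-upon URI. The sketch below, in plain Python with invented URIs rather than any real RDF library or published ontology, shows how much explicit annotation even a trivial statement requires:

```python
# A minimal, illustrative triple store. Every fact must be spelled out
# as a (subject, predicate, object) statement using shared URIs.
# All URIs below are hypothetical, invented purely for this example.
triples = [
    ("http://example.org/building/42",
     "http://example.org/schema#hasFloorArea", "350"),
    ("http://example.org/building/42",
     "http://example.org/schema#designedBy",
     "http://example.org/people/jane"),
    ("http://example.org/people/jane",
     "http://example.org/schema#name", "Jane Smith"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Who designed building 42?
designers = query("http://example.org/building/42",
                  "http://example.org/schema#designedBy")
print(designers[0][2])  # http://example.org/people/jane
```

The pattern-matching query is the payoff the Semantic Web promises; the cost is that every one of those triples, for every document on the Web, has to be authored against a vocabulary everyone agrees on, which is exactly the manual classification effort discussed above.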

Swoogle semantic search

Swoogle is a semantic search engine project by the Computer Science and Electrical Engineering Department at the University of Maryland. They have taken a sensible approach by ripping off Google's interface, so that new users understand how to use the tool from the very beginning. Unfortunately the major limiting factor of the experience is the results page. Rather than presenting human-readable snippets of information below each link, the results simply output a fragment of the RDF code from the returned file. RDF is difficult enough for computers to understand, so asking people to make sense of a brief excerpt is expecting the impossible. As a consequence it is difficult to meaningfully interrogate the results to find the content most relevant to you; in practice the value of the returned results seems almost zero.
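Generating a human-readable snippet from RDF is not even particularly hard, which makes the raw-markup results page all the more frustrating. As a rough sketch, here is how a results page could pull the rdfs:label and rdfs:comment properties out of an RDF/XML fragment using only Python's standard library (the fragment itself is invented for illustration; only the rdf: and rdfs: namespaces are real):

```python
import xml.etree.ElementTree as ET

# A small RDF/XML fragment, invented for illustration. This is the kind
# of raw markup a semantic search engine might show beneath each result.
rdf_xml = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                      xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <rdf:Description rdf:about="http://example.org/doc/1">
    <rdfs:label>Sustainable Housing Case Studies</rdfs:label>
    <rdfs:comment>A collection of passive-design examples.</rdfs:comment>
  </rdf:Description>
</rdf:RDF>"""

RDFS = "{http://www.w3.org/2000/01/rdf-schema#}"

def human_snippet(xml_text):
    """Build a readable summary from rdfs:label and rdfs:comment values."""
    root = ET.fromstring(xml_text)
    parts = []
    for description in root:
        for tag in ("label", "comment"):
            element = description.find(RDFS + tag)
            if element is not None and element.text:
                parts.append(element.text)
    return " - ".join(parts)

print(human_snippet(rdf_xml))
# Sustainable Housing Case Studies - A collection of passive-design examples.
```

Real-world RDF files will not always carry a convenient label or comment, but falling back to raw markup only when nothing human-readable exists would already be a big improvement over the current results page.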

What is in your Piggy Bank?

Piggy Bank is an interesting little concept as it tries to bring together ideas about what the Semantic Web could be into a user-friendly Firefox plugin. There is also a server component for sharing your 'semantic banks' with others. The concept seems quite nice, but I have really struggled with the user interface. If anything, the interface is too transparent and gives a view of the information that feels too raw for the casual user. For somebody who understands RDF Schema some of the attributes are probably very useful, but for the general user these things just confuse matters.