Meeting with Mike 13/9/05

Last Tuesday Mike and I met for a discussion on things. I had been meaning to put an overview of what we talked about online sooner, but it slipped my mind. Actually, a far more interesting thing entered it: the home theatre system I bought the next day...

For the most part we discussed how the concept of 'rich and unobtrusive' connections could be made a reality. The 'tagging' concept so well implemented in systems like Flickr would definitely be a real benefit in such a system, especially when it comes to the difficult task of categorising resources (text, images and CAD files) for searching. The ability for users to easily tag resources (be they theirs or others') would enable a degree of human searching not present in a contemporary model. One use instance of this could be the client tagging product fittings they like and then having the architect place style tags next to them (e.g. postmodern, classic). Using these tags you could then pull slices out of the conceptual work, such as 'classic, low-cost fittings that we (as in the client and architect) like'. Alternatively, in a construction scenario, the ability to tag documents pertinent to a specific contractor would ease some documentation headaches (if I change this drawing, who will be affected?).
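To make that slicing idea a little more concrete, here is a minimal sketch in Python of how tagged resources might be queried. The resource names and tags are purely hypothetical examples, not part of any actual system:

```python
# A minimal sketch of tag-based 'slicing' of project resources,
# assuming each resource simply carries a set of free-form tags.
# All resource names and tags here are hypothetical examples.

resources = {
    "fitting-07.jpg":   {"fitting", "classic", "low-cost", "client-likes"},
    "fitting-12.jpg":   {"fitting", "postmodern", "client-likes"},
    "drawing-A102.dwg": {"drawing", "electrical-contractor"},
}

def slice_by_tags(resources, required_tags):
    """Return the resources whose tag sets contain every required tag."""
    required = set(required_tags)
    return [name for name, tags in resources.items() if required <= tags]

# 'Classic, low-cost fittings that the client likes':
print(slice_by_tags(resources, ["classic", "low-cost", "client-likes"]))

# 'If I change this drawing, who will be affected?' becomes a tag lookup:
print(slice_by_tags(resources, ["electrical-contractor"]))
```

The appeal is that a slice is just a set intersection over tags, so any combination the client and architect dream up comes essentially for free.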

A problem highlighted in Mike's thesis is the unreliability of hyperlinks as reference items over a long period of time. Whilst very reliable in the short term, hyperlinks are prone to break over time through domain changes, site outages and corporate rebranding. Consequently the use of Internet-based references for this thesis, and arguably for the entire concept, should be backed up by some form of archived copy to prove the original hyperlink's existence. In the case of my thesis this will be achieved by saving Internet-based resources as PDFs. In the case of the sharing concept as a whole the problem is a little more complex, and I will go into that a little later. Mike brought up the existence of the Wayback Machine. Not only do they run this archive on some fairly serious hardware (http://www.archive.org/web/hardware.php), but they also keep a fairly comprehensive archive of the web going back over seven years. I have just entered my site into the Wayback Machine's web crawler, so hopefully in the future copies of this site will be around for posterity and to cringe at.
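As a rough illustration of that backup habit (a sketch only: the function name and archive directory are my own inventions, and the URL is just the hardware page mentioned above), something like this could file a dated raw copy of each referenced page:

```python
# A minimal sketch of keeping a dated local copy of each Internet-based
# reference, so the original can be proven even if the link later breaks.
# The archive layout and the example URL are illustrative assumptions.
import urllib.request
from datetime import date
from pathlib import Path

def archive_reference(url, archive_dir="reference-archive"):
    """Save a raw copy of the page at `url`, stamped with today's date."""
    Path(archive_dir).mkdir(exist_ok=True)
    safe_name = url.replace("://", "_").replace("/", "_")
    target = Path(archive_dir) / f"{date.today().isoformat()}_{safe_name}.html"
    with urllib.request.urlopen(url) as response:
        target.write_bytes(response.read())
    return target

archive_reference("http://www.archive.org/web/hardware.php")
```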

Our last topic of discussion was how this concept could be tested. The major problem with a quality test (is my thing better than the conventional process?) is that it is almost impossible to prove conclusively. In theory a quality test such as this could be run on hundreds of real-world projects and the results, in the form of questionnaires and evaluation forms, compared and summarised. Unfortunately such a test is not practical, and anything similar performed on a small scale would not provide conclusive evidence.

A case study approach could be a better strategy as it substitutes depth for breadth. In this form of testing a few projects could be evaluated extensively using interviews, discussion groups and observation to determine the impact (if any) of the proposed system. In order to prove the feasibility of the system, the scalability of the concept would need to be demonstrated. To do this, the system could be tested in mock small and large practice environments to see how it performs under a variety of conditions. The common belief is that something that can be 'scaled up' is successful, but I think the reverse principle should be used for this test. I have found in practice that anything can be 'scaled up' given enough investment in resources, support and training. The real test of a system is how it can 'scale down' into low-resource environments, such as that of a New Zealand architecture practice, and then be scaled back up into a full-scale enterprise environment (100+ users from many different companies participating).

The other test of how successful the concept will be is how robustly it performs when part of the network fails. This idea touches on the problems Mike has experienced with Internet-based references no longer being accessible for one reason or another. In practice this same problem will be faced by users of the system, especially in environments where multiple organisations are sharing information. In this scenario it would not be unreasonable to suggest that, at least once during the development process, one of the companies would temporarily (or permanently) drop out of the information network for technical or business reasons. When this circumstance arises the system must still be able to provide the information submitted by that party, or the integrity of the system as a whole would be severely compromised. No user wants to be told to 'come back tomorrow' when time-critical information is not available, and likewise a project's information should not be held to ransom by the threat of one business failing to live up to its service agreement. Consequently the system must use caching not only to ensure robustness of service but also to boost the overall user experience through speedy responses, especially when large data files are involved. There is an interesting talk by Justin Chapweske of Onion Networks available at IT Conversations that deals with this subject (http://www.itconversations.com/shows/detail462.html). The complete Onion Networks SDK costs US$25,000, which puts it out of reach for this application, but the concepts (and the Public Edition) are definitely worth keeping in mind.
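As a very rough sketch of the caching behaviour I have in mind (all names here are illustrative, and `fetch_from_origin` stands in for whatever transport the real system would use), a read-through cache could keep a party's documents available even after that party drops off the network:

```python
# A minimal sketch of a read-through cache that keeps serving a party's
# documents even when that party has dropped off the network.
# `fetch_from_origin` is a hypothetical stand-in for the real transport.

class DocumentCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._store = {}  # doc_id -> last known copy

    def get(self, doc_id):
        try:
            document = self._fetch(doc_id)   # try the owning company first
            self._store[doc_id] = document   # refresh the local copy
            return document
        except ConnectionError:
            # Origin is down (temporarily or for good): serve the cached
            # copy rather than telling the user to 'come back tomorrow'.
            if doc_id in self._store:
                return self._store[doc_id]
            raise  # never seen this document; nothing we can do
```

The same local store also covers the performance side: large CAD files would normally be served from the cache rather than pulled across the network on every request.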