The next generation of web standards made significant progress this month as the W3C announced that the draft HTML5 specification has been opened up to a "Last Call" for comments from members of the W3C and the wider community. This is on target for the schedule the W3C set last year for HTML5, and is based on the assumption that the priority is to get a recommended HTML5 specification out as soon as possible.
Whilst attending the World Wide Web Consortium Advisory Committee meeting in Bilbao (today and tomorrow), I realised that one small part of the semantic web initiative, pushed for many years by the W3C, has started to see wide-scale adoption.
RDFa is now claimed to be embedded in 3.6% of all public URLs (October 2010), and growing fast. Yahoo! led the way with SearchMonkey indexing, Google followed with rich snippets, and Facebook have used it to enable their Open Graph protocol.
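To make the idea concrete, here is a minimal sketch of RDFa: ordinary HTML attributes carrying machine-readable statements (the vocabulary here is FOAF; the name and URL are placeholders of my own, not from any of the deployments above):

    <!-- RDFa sketch: a person's name and homepage made machine-readable -->
    <div xmlns:foaf="http://xmlns.com/foaf/0.1/" about="#me" typeof="foaf:Person">
      <span property="foaf:name">John Smith</span> maintains a
      <a rel="foaf:homepage" href="http://example.org/">homepage</a>.
    </div>

A crawler like Yahoo!'s or Google's can extract the name and homepage as data triples from markup like this, with no separate metadata file needed.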
For more information see:
Europe set to put data in the cloud – Google Apps for Business.
BRUSSELS–The American computer scientist John McCarthy — who coined the phrase “artificial intelligence” — predicted in 1961 that computing power might someday become a public utility, much like electricity or water.
The idea that you could flick a switch for data-crunching was as futuristic at the time as humanoid robots or flying cars.
But 50 years on, a robot has won the U.S. quiz show “Jeopardy,” multiple jet-propelled skycars are under development and McCarthy’s computing vision is slowly taking shape.
The idea of siphoning computing power from afar — a concept called “cloud computing” — has been in active development for much of the past decade, with companies such as Google, IBM and Amazon playing a central role.
Now the European Union is rapidly adopting the idea, hoping that it can streamline businesses, rid the 27-country union of overlapping infrastructure and ultimately save time and money.
But it’s also causing intense headaches for EU regulators, who are troubled by issues of privacy and jurisdiction, including tough questions about who owns information and who bears responsibility for how European laws are applied.
Anyone who uses Gmail, Flickr or other services where the data are not saved on their computers is already taking advantage of cloud computing. Businesses are increasingly switching their entire networks to such Internet-based systems.
While it saves money, it also means that personal data can essentially be stored anywhere in the world and that the ability to reach them depends solely on the cloud provider working properly. It’s a potential privacy and logistical nightmare.
Still, the economic argument is striking: The Centre for Economics and Business Research predicts that Europe’s five largest economies could save 177 billion euros (US$257.1 billion) — roughly the output of Ireland — each year for the next five years if all their businesses were to switch over at the expected rate.
In response to the new business possibilities and in an effort to head off the impending privacy concerns, the EU executive is putting together its first cloud-computing strategy. The target for completion is next year, and there’s a sense of urgency to get ground rules in place.
“Normally I prefer clearly defined concepts,” Neelie Kroes, the EU’s top official responsible for information technology and the digital agenda, said as she announced the unveiling of the EU’s cloud strategy in January.
“But when it comes to cloud computing I have understood that we cannot wait for a universally agreed definition. We have to act.”
Compared to the United States, Europe has been a slow adopter of the new technology.
Last year, western Europe accounted for less than a quarter of the US$68 billion spent globally on cloud-computing services, according to technology research consultancy Gartner. The United States occupied nearly 60 percent of the market.
That leaves plenty of room for the technology to expand in Europe, but it’s the privacy issue that is likely to prove the biggest hurdle to a rapid and successful expansion.
One significant problem is that there’s no way for users to verify where their data are sitting, whether on a server in São Paulo, Siena, Singapore or Seoul.
This raises a particular problem for EU member states, which under EU law can only send personal data outside EU borders if the receiving country meets “adequate” privacy standards.
It’s also unclear under EU regulations whose privacy laws would apply in any dispute where the end-user, the cloud-provider and the actual data servers are all in different countries.
For that reason, the EU’s 27 member states are first trying to align their privacy laws and close jurisdictional gaps.
“This is a necessary condition for cloud computing to be effective in the near future,” said Daniele Catteddu, a communication security expert working on the EU’s cloud computing strategy.
“The major obstacles are legal barriers, the enormous levels of bureaucracy, the difficulties of being compliant with 27 different sets of rules,” he said.
One Sunday last February, tens of thousands of Gmail users opened their e-mail accounts only to find them completely empty — data stored in the cloud had temporarily disappeared.
“In some rare instances, software bugs can affect several copies of the data. That’s what happened here,” Google’s vice president of engineering explained on the company’s blog.
Although the data were recovered, it was a jolting reminder that using the cloud means giving up control and that the technology is only as good as its stability and reliability.
Similar incidents have happened to businesses. In 2006, the British-Swedish gaming services company GameSwitch lost access to its software and data following a police raid on a different company that happened to use the same data center.
“It basically comes down to the degree to which you trust the cloud-provider,” said Giles Hogben, a communication security expert for ENISA, the EU’s Internet security agency.
Determining whether to trust can be tricky for customers because providers are wary of disclosing their exact security infrastructure, arguing that to do so would make them more vulnerable to cyber-attack, Hogben said.
Customers also do not have much bargaining room with cloud providers, according to a study conducted at Queen Mary, University of London. As with electrical companies or other utilities, it’s “take it or leave it.” EU regulators have taken note of the potential issues.
“We can’t simply assume that voluntary approaches like codes of conduct will do the job,” Kroes said in a speech last month. “Sometimes you need the sort of real teeth only public authorities have.”
However the regulation shakes out, big computing companies like Microsoft say cloud computing will be the next big thing for Europe and the rest of the world.
“It really is the future. All of our products will run on the cloud,” Microsoft associate general counsel, Ron Zink, told Reuters this month.
Zink said 70 percent of Microsoft’s research and development funds were already devoted to cloud computing, with the figure expected to rise to 90 percent soon.
Kroes is also pushing for more of Europe’s public sector to switch to cloud computing, following the United States, which is planning to close 800 of its 2,100 data centers, almost 40 percent, by 2015 as part of a new “cloud-first” policy.
“I want to make Europe not only ‘cloud-friendly’ but ‘cloud-active’,” Kroes said.
The aim and the ambition are there, but negotiating the legal and privacy maze may take time.
Retrospection: Cross-Platform Mobile Development at EclipseCon - Heiko Behrens (Blog)
I met Heiko recently in Kiel, Germany, where we were both presenting our own visions of cross-platform mobile app development to a local chapter of the ACM. I was very impressed with his breadth of knowledge and his analysis of the field.
This blog post covers his most recent session at EclipseCon this month.
Now, in my view, the hybrid approach of using web technologies for the user interface, and generating local code to integrate with the features of the mobile device (camera, local storage, contacts list, accelerometer, compass, GPS, and so on), as FeedHenry does, is the best medium-to-long-term bet. Web technologies have proved again and again that they can solve cross-platform UI issues, and they remain the safest bet. I'd much rather learn standard JavaScript, CSS, and HTML/HTML5, with a few extra local API calls to get at the device's features, than learn a limited subset of JavaScript or some newly invented DSL, when developing mobile apps.
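As a sketch of that hybrid pattern: the geolocation call below is the real W3C API, while a wrapper such as FeedHenry's would expose camera, contacts and so on through similar JavaScript calls (the element id here is a placeholder of mine, and nothing below is FeedHenry's actual API):

    // Standard web code; the device feature is reached via a JavaScript API.
    // Here the W3C Geolocation API supplies the device's position.
    navigator.geolocation.getCurrentPosition(function (pos) {
      document.getElementById("where").innerHTML =
        "You are at " + pos.coords.latitude + ", " + pos.coords.longitude;
    });

The point is that the UI and logic stay in ordinary JavaScript and HTML; only the thin device-access layer differs per platform.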
2011-03-28 @ 15:42 Heiko Behrens responds: Thank you for the reference, Mícheál.
I fully agree on the long term: as with the desktop, mobile web technology will eventually be able to do real-time rendering and tight integration with the host system. At the moment, though, neither desktop browsers nor their mobile counterparts can even access a webcam or the built-in camera with pure web technology. It will take some time to catch up with native capabilities. And as we go, a significant number of outdated browsers (such as WP7) ask for compromises.
We'll see what the future holds...
Interaction Design Centre - Computer Science and Information Systems Department - University of Limerick - Limerick, Ireland
Tag-it-Yourself™ is a journaling platform that supports the personalization of self-monitoring practices in diabetes. TiY is developed at the Interaction Design Centre (www.idc.ul.ie) and is part of a broader research project called FutureCOMM, which is funded by HEA Ireland under the 4th PRTLI program.
The TSSG led the FutureComm project, and UL's IDC were partners. This is a very interesting output of the research programme. Now all I have to do is persuade them to migrate to FeedHenry to allow it to work across multiple devices such as Android, BlackBerry and Nokia WRT, as well as iPhone.
HTML5 Is An Oncoming Train, But Native App Development Is An Oncoming Rocket Ship
This article argues that at best lip service, and at worst derision, is being paid to web apps and HTML5 apps, despite an underlying belief that HTML5 is the way of the future. The debate is summarised in terms of the major platforms: Apple (fully native for iPhone and iPad), Google (half and half for Android), Facebook (fully HTML for Facebook).
FeedHenry's offering is effectively a halfway house between the two, as the FH client API wraps the local device functionality, and allows the resulting app to be delivered via app stores as a real app, despite using web technologies rather than native development languages to build it.
The W3C have unveiled an HTML5 Logo: W3C News Archive: 2011 W3C
W3C unveiled today an HTML5 logo, a striking visual identity for the open web platform. W3C encourages early adopters to use HTML5 and to provide feedback to the W3C HTML Working Group as part of the standardization process. Now there is a logo for those who have taken up parts of HTML5 into their sites, and for anyone who wishes to tell the world they are using or referring to HTML5, CSS, SVG, WOFF, and other technologies used to build modern Web applications. The logo home page includes a badge builder (which generates code for displaying the logo), a gallery of sites using the logo, links for buying an HTML5 T-shirt, instructions for getting free stickers, and more. The logo is available under "Creative Commons 3.0 By" so it can be adapted by designers to meet their needs. See also the HTML5 logo FAQ and learn more about HTML5.
The logo itself is not the important thing; it is the richness of HTML5's features that helps make media (audio and video in particular) easier to access in a standardised way on the web.
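For instance, HTML5 makes video a first-class element rather than a plugin (a minimal sketch; the file name is a placeholder):

    <!-- Native HTML5 video playback: no plugin required -->
    <video src="talk.webm" controls width="640" height="360">
      Your browser does not support the video element.
    </video>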
HTML5 also promises to help level out the differences between the various mobile devices, providing a common web-based layer of functionality that can be targeted by all mobile apps. Currently companies like FeedHenry (where I am CTO) offer a client-side API abstraction that helps cross-platform mobile app development and deployment, utilising web technologies. HTML5 will hopefully make more of these differences irrelevant over time, making it even easier to develop cross-platform mobile apps that work across a wide range of devices, using standardised web technologies (approved by the W3C).
The TSSG (where I am Executive Director Research) are members of the W3C and support these standardisation activities.
UPDATE 2011-02-21 Ian Jacobs of the W3C has blogged on the interesting discussions the unofficial release of the logo to the public has generated, see Ian Jacobs' Blog Entry.
Research Report: 2011 Cloud Computing Predictions For Vendors And Solution Providers
The authors present the challenges that cloud computing offers to traditional vendors, outlining the key comparisons between traditional and cloud development. It doesn't strike me as particularly new or insightful, but it is clear and accurate, so my guess is that this makes it worth reading - one usually underestimates how much effort it takes to articulate the zeitgeist clearly.
I am very happy to announce that a team of programmers from the TSSG have won first place in the Ericsson Mobile Application Awards. The global student competition generated over 700 registered users, 120 registered teams from 28 countries worldwide and the involvement of 1000 end-users globally. The jury selected the winners yesterday at the Nordic Mobile Developer Summit in Stockholm.
Pictured above is Robert Mullins and the two part-time students Kieran Ryan and Mark Williamson who are both members of the TSSG and students on the TSSG-sponsored MSc Communications Software in Waterford Institute of Technology.
The winning entry has an associated YouTube video and was based on the development of a "Caller Profiler application".
This success demonstrates the capability within the TSSG to translate from a high level research interest in telecommunications networks and services, through to real software development that can have an impact on industry, with a link back to WIT's teaching curriculum, in particular with specialised targeted MSc (taught) course offerings designed to enable professional development of specialist software skills.
Robert Mullins led the TSSG's engagement in the Enterprise Ireland-funded ILRP IMS-ARCS project that built enablers for next-generation IMS services. The development work for the competition was enabled by Enterprise Ireland funding linked to one of the IMS-ARCS industrial members, Vennetics, as partner.
We are grateful to Ericsson for the opportunity to take part in this competition. Ericsson, both in Ireland and in Sweden, work closely with the TSSG in supporting our Next Generation Network TSSG Centre (funded by the TSSG, Enterprise Ireland, and Science Foundation Ireland), and have worked with us on many leading edge research projects.
Well done to the winning team!
A video interview with the three winning teams; the TSSG team, as overall winners, are third in the sequence, starting at timestamp 2:20:
The official press release on the Applications Award win was released today: TSSG Press Release
Netcraft have posted an analysis showing that the UK National Rail website is suffering under the increased traffic load as worried commuters and travellers check the availability of their routes: National Rail website affected by snow - Netcraft.
In an interesting post, Patricio Robles discusses why RSS usage is dropping off in the USA and wonders about the equivalent rise in the use of specific social media platforms such as Facebook and Twitter: Is RSS dead? | Blog | Econsultancy.
He concludes that RSS never was mainstream, and that it still serves a very useful function that will not go away any time soon.
I'd agree, adding that we are actually talking about the different flavours of RSS and Atom here. Personally I use RSS/Atom every day, and have started to use Twitter quite a bit. I use tools like ping.fm (often enabled with Jabber/IM and RSS) to update Facebook rather than using Facebook directly, and similarly with Yammer, MySpace, LinkedIn, Plaxo Pulse, and Flickr. Life's too short to be logging into all of these individually to update status, or to micro-blog. Also, I still value the more thoughtful composition required for a full (even if short) blog entry, compared to a short 140-character tweet. Thus I view the social media platforms as potential hooks to engage people, rather than as my real focus.
A new product called Tracer from a Canadian start-up company Tynt allows web site owners to track how their content is being copied and pasted. Measuring reader engagement by how often they copy and paste - Nieman Journalism Lab
Tynt homepage.
In this post last April (that I've only just found), Dave Shea explains why he's moving back to HTML 4.01 strict from XHTML: mezzoblue - Switched.
This is newsworthy, as the W3C have just announced the effective merger of the XHTML2 and HTML5 efforts, as the former's charter expires at the end of this year. It's not that XHTML will go away, but the XHTML2 efforts will be de-emphasised, and having an XHTML compatible version for HTML5 will become the priority.
From W3C News Archive 2009-07-02
2009-07-02: Today the Director announces that when the XHTML 2 Working Group charter expires as scheduled at the end of 2009, the charter will not be renewed. By doing so, and by increasing resources in the HTML Working Group, W3C hopes to accelerate the progress of HTML 5 and clarify W3C's position regarding the future of HTML. A FAQ answers questions about the future of deliverables of the XHTML 2 Working Group, and the status of various discussions related to HTML. Learn more about the HTML Activity.
See also:
February 2009 Web Server Survey - Netcraft
It is interesting to note the massive impact that one large Chinese hosting site, the Qzone blogging service, has had on Netcraft.com's statistics about on-line websites.
In the February 2009 survey we received responses from 215,675,903 sites. This reflects a phenomenal monthly gain of more than 30 million sites, bringing the total up by more than 16%. The majority of this month's growth is down to the appearance of 20 million Chinese sites served by QZHTTP. This web server is used by QQ to serve millions of Qzone sites beneath the qq.com domain.
QQ is already well known for providing the most widely used instant messenger client in China, but this month's inclusion of the Qzone blogging service instantly makes the company the largest blog site provider in the survey, surpassing the likes of Windows Live Spaces, Blogger and MySpace.
The web server they use, their own customized software called QZHTTP, is now the 3rd most popular web server in the world, after Apache and MS IIS, with just shy of a 10% share of the server market; that's real impact! I guess this is just the beginning of this type of phenomenon as the Chinese start to impact on many such on-line statistics.
I am at the Advisory Committee meeting of the W3C in Mandelieu, near Nice in France. As usual it is a great buzz with loads of people trying to progress web standards and related standards. The rules of the meeting are such that I cannot blog directly about the contents of the meeting itself - fair enough!
However, the AC (Advisory Committee) meeting is co-located with a TP (Technical Plenary) meeting, many aspects of which are more public. Starting at the public W3C TPAC 2008 web page, or the news posting on the W3C front page, will lead to the stuff that is public. Some staff members are also blogging here.
Thanks to CircleID: Google Says Its Counting Over 1 Trillion Unique Pages on the Web?
"We've known it for a long time: the web is big. The first Google index in 1998 already had 26 million pages, and by 2000 the Google index reached the one billion mark. Over the last eight years, we've seen a lot of big numbers about how much content is really out there. Recently, even our search engineers stopped in awe about just how big the web is these days—when our systems that process links on the web to find new content hit a milestone: 1 trillion (as in 1,000,000,000,000) unique URLs on the web at once!"
It has been noted however that Google does not index all 1 Trillion web pages (see Michael Arrington)
Tim Bray posts On Communication, a good read. I would add printing to the list of key communications technologies, dating from maybe 1430, which would make it 577 in Tim's table.
When Tim blogs we listen, as he has a great way of simplifying complex arguments down to easily understandable metaphors that just might change the world, again... Giant Global Graph | Decentralized Information Group (DIG) Breadcrumbs
Ian Davis posted this useful comment on W3C semantic web activity and the value of keeping activity in the mainstream, with many eyes to give feedback and updates on metadata: Internet Alchemy: Is the Semantic Web Destined to be a Shadow?
In an interesting lightning talk at the W3C TP this afternoon, Håkon Wium Lie described "How web fonts can change the face of the web." He showed that there are very few fonts one can reliably assume are on a client machine. He then showed how CSS 2.0 can reference on-line server-side fonts that can be dynamically downloaded when a web page is viewed.
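The CSS mechanism behind this is @font-face; a minimal sketch (the font URL is a placeholder, not one from the talk):

    /* Fetch a font on demand instead of assuming it is installed */
    @font-face {
      font-family: "Gentium";
      src: url("http://example.org/fonts/Gentium.ttf");
    }
    h1 { font-family: "Gentium", serif; }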
I'm here in Cambridge MA, looking across at the Boston skyline from the other side of the Charles River at the World Wide Web Consortium's (W3C's) TPAC (Technical Plenary/Advisory Committee) joint meeting.
It's a stimulating environment with lots of working groups reporting on their activities, and with parallel meetings before and afterwards to progress these working groups.
As usual I'm impressed by the quality of the on-line collaboration tools allowing people around the world to take part even if they aren't here physically.
Web technologies are effectively a universal layer for services, allowing lighter-weight and heavier-weight approaches to service/application development for ubiquitous access from many types of devices with network connectivity.
I am the TSSG's Advisory Committee member for the World Wide Web Consortium (W3C). At our last meeting in Banff, Canada (May 2007) I had the pleasure of meeting, among others, Dave Raggett. He chairs the Ubiquitous Web Applications Activity in the W3C.
On the UWA blog, I noticed that Dave had delivered the opening talk on "The Web of Things" at the UWE Web Developer's Conference in Bristol, UK on 26 September 2007.
Abstract:
A look at the origins of the Web, how it has evolved, and the challenges in extending it to the Web of things as the number and variety of networked devices explodes. Changing the way we conceive of the Web. Why today's hacks will give way to more structured approaches to developing applications that allow developers to focus on what the application should do rather than the details of exactly how.
I am sorry to have missed this talk, but I could read the slides which take about 10-15 minutes to read through: Dave Raggett Slides - Web of Things
Personally, I think the arguments for declarative development (e.g. HTML, XML, ...) over procedural language development (e.g. Java, JavaScript, AJAX, ...) are very strong and will win out in the medium to long term for mobile web development. This slideset explains why. Declarative web standards expanding to cover more areas, particularly to enable the flexibility of mobile devices as limited as a remote control, are one potential future with many interesting options. It's all about making the infrastructure simple, and lowering the barriers to entry for programming, just as the web has already done for desktop applications (introducing so-called "web time" and slashing development costs for distributed systems).
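A toy contrast shows the difference in analysability (illustrative only; the file name is a placeholder):

    <!-- Declarative: the intent (a link to next.html) is visible to any
         tool that reads the markup - browsers, crawlers, screen readers -->
    <a href="next.html">Next page</a>

    <!-- Procedural: the same behaviour buried in script, opaque to those tools -->
    <span onclick="window.location.href='next.html';">Next page</span>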
Interestingly I could give almost the same talk as Dave did, with a different focus on how IPv6 can help solve the problems at a lower layer. It'll take both to really achieve both the "Web of things" and the "Internet of things".
There has been quite a buzz on the blogosphere for the past few weeks over the report on the privacy issues of Internet services produced by Privacy International, entitled "A Race to the Bottom - Privacy Ranking of Internet Service Companies".
A good summary is provided in this article: Privacy International pokes a stick in Google’s eye by ZDNet's Dan Farber -- Privacy International has poked Google in the eye with the stick. In an interim report on the privacy ranking of the major Internet services, Google was the only company found among those surveyed to receive a failing grade, which Privacy International described as conducting comprehensive consumer surveillance and having entrenched hostility to privacy. [...]
To quote part of the report that addresses Google:
It is good to see the TSSG campus company Nubiq, who have a mobile web site generator called Zinadoo, getting some good coverage in the blogosphere: Mobile Web 1.0 again and again and again. It really is very easy to set up a mobile website using Zinadoo.
In the TSSG we have been tracking a number of technologies that converge around the idea of the web browser as a flexible docking station for various applications. I am borrowing a lot from what my colleague Eamonn de Leastar has investigated for this posting. The source of these innovations stems from the desire to create a "My X", such as "My Netscape" or the Google customised home page.
You could say this trend for personalised portals started with the Netscape portal idea of the late 1990s that led to the original RSS 0.90 (Rich Site Summary) in 1999. See this History of RSS if you're interested in that journey. There are things before RSS 0.90, but they weren't called RSS (most importantly Dave Winer's Scripting News format). Since then the alternative Atom format has been developed and standardised in the IETF. This history is also linked to the early W3C semantic web standard RDF (Resource Description Framework), and some RSS versions are subsets of RDF.
So the basic idea was that these feed formats could be used to allow syndication and aggregation of content across many different types of content sources such as newspapers and later blogs, but including weather and other sorts of information flows.
The bigger picture with portals is to allow functionality to be bundled into widgets (small sets of functionality) that can allow a host platform to grow through the use of 3rd party information sources (mini-programs). So the feed is the basic low-level entry point here: a simple flow of textual data marked up with XML in RSS or Atom into headline, content and link to original source. For more complex widgets, the requirement becomes a program, rather than just an XML parser, to allow the widget to function.
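A minimal RSS 2.0 feed makes that headline/content/link structure visible (all values are placeholders):

    <?xml version="1.0"?>
    <!-- Each item is just a headline, a content summary, and a link -->
    <rss version="2.0">
      <channel>
        <title>Example Feed</title>
        <link>http://example.org/</link>
        <description>Feed for illustration only</description>
        <item>
          <title>Headline goes here</title>
          <description>Content summary goes here</description>
          <link>http://example.org/full-story</link>
        </item>
      </channel>
    </rss>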
The other concept that has become very important recently has been client-side scripting sometimes called AJAX (Asynchronous JavaScript and XML), though it doesn't necessarily require JavaScript or even XML to be branded AJAX. Basically it's about using clever client-side web programming techniques to improve the end-user experience, often by pre-fetching server-side data before the user explicitly requests it. Very clever AJAX solutions are emerging that can handle disconnection from the network for periods of time.
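The core AJAX move, stripped to its essentials (a sketch in the XMLHttpRequest style of the period; the URL and element id are placeholders):

    // Fetch data in the background and update the page without a full reload
    var req = new XMLHttpRequest();
    req.onreadystatechange = function () {
      if (req.readyState === 4 && req.status === 200) {
        document.getElementById("stories").innerHTML = req.responseText;
      }
    };
    req.open("GET", "/latest-stories", true);
    req.send(null);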
One alternative framework to JavaScript is that of Adobe Flash, most commonly associated with multimedia content, but now being used as a light deployment platform (to those with browsers with a Flash plug-in). The latest innovation here is Adobe's Apollo.
The most recent clutch of portal sites are showing some startling effects. Of particular interest were these two:
The former is very slick, the latter is intriguing - have a look at this calendar widget.
It looks like a standard calendar widget. However, look at the four buttons along the top; one of them is "Copy to Desktop", i.e. if you have installed the Adobe Apollo engine on your PC (a flash host of some kind) then there is no distinction between these widgets appearing on your desktop or in your browser.
There is a somewhat similar effect in the latest meebo. The IM (Instant Messaging) client emulation has a button which apparently permits the IM window to escape from the browser and live on decoupled. It is in fact another browser instance - but so customised that it looks more or less like Exodus (the open IM Jabber client). No downloads required of course.
Finally, AB5k (Widgets for the World) is the first Java-based widgets framework taking advantage of the major (perhaps Eclipse-inspired) renovation and improvement of Swing these past 2-3 years. This is currently in pre-alpha but it might have potential once it gets going.
All of these seem a good bit more sophisticated than Google, Yahoo! or Netvibes. Whether they are just toys or not remains to be seen...
Netcraft's survey of SSL sites has now been running for over ten years. The first survey, in November 1996, found just 3,283 sites; since then, the number of SSL sites has had an average compound growth of 65% per annum.
The survey is a good guide to the growth of online trading and services. The survey counts sites by collecting SSL certificates; each distinct, valid SSL certificate is counted in the results. Each SSL certificate typically represents one company's details, and each certificate must be approved by a certificate authority, so the data is typically more consistent and less volatile than other attributes of the Internet's infrastructure.
Netcraft: Internet Passes 600,000 SSL Sites.
This is many fewer than the total number of websites: the current Netcraft web server survey, as I access it now in May 2007, reads 118,023,363 sites.
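As a rough sanity check on the growth figure (my own arithmetic, not Netcraft's): 3,283 sites compounding at 65% per annum for the roughly ten and a half years since November 1996 gives 3,283 × 1.65^10.5 ≈ 630,000, which squares well with the survey just passing 600,000.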
I have to admit I am impressed by the setting for the W3C Advisory Committee (AC) meeting this week in the Fairmont Hotel, Banff National Park, Alberta, Canada.
We're just gathering now in the 8am-9am coffee/breakfast/registration slot and there's a good buzz about the place. Having seen the range of the discussions on the agenda I'm looking forward to a productive meeting.
Phase 1 of my involvement in web technologies began when I first heard about the web at the NSC92 conference (Network Services Conference) in November 1992 in Pisa, Italy, and went back to University College Galway (UCG), now called NUI Galway, where I worked in computer services, very enthused. Within a year I had my own webserver and I was the webmaster for UCG's first website. I was very happy when the front page image from the website, a picture of the old quadrangle in UCG, was featured in the Irish Times in an article about the emerging web with the title "The West's Awake" (a reference to a famous Irish rebel song); I think it was around 1994. At the time we would email some guys in UCD in Dublin running a server called Slarti (a reference to the Hitch-Hiker's Guide to the Galaxy character), where a list and a map of active Irish websites was maintained - my personal server and the UCG server are still listed in this list, as are the servers of other UCG web-heads, many still active, including Joe Desbonet and John Breslin. I ended up getting really into Perl and CGI programming, and published some books on this, as well as being a webmaster in the first iteration of the web.
Phase 2 of my engagement in web technologies began when I moved to Waterford Institute of Technology back in 1996, and got involved in some EU funded projects linked to the Telecommunications Software & Systems Group (TSSG) there. I became really enthused by the whole content aggregation technology suite, with the early versions of RSS from Netscape in the late 1990s, and later with weblogging/blogging, and the first version of this blog (Greymatter then MovableType).
The third phase of my web technology engagement has been via the telecommunications management work in the TSSG, based on the use of emerging semantic modelling techniques, in particular OWL-based solutions, and on applying these to the telecommunications network and service management space in the TeleManagement Forum (TM Forum) and in the new Autonomic Communications Forum (ACF), linked to our research programme on Autonomic Management of Communications Networks and Services.
So coming to the W3C for the first time is a bit like coming home for me, even though the TSSG only joined the W3C very recently, and this is my first AC meeting.
The Advisory Committee meeting is co-located with the WWW2007 conference, and I'll be staying on for two days of that before returning to Ireland.
I have just heard that my MPhil mini-thesis supervisor, Professor Karen Spärck Jones, of the Computer Laboratory in the University of Cambridge (UK) has been awarded the joint ACM and AAAI Newell Award for her work on natural language processing ACM: Press Release, March 22, 2007.
Congratulations! I am a proud alumnus of the taught masters course she co-founded with the late Professor Frank Fallside, an innovative linkage of computer science and engineering, "Computer Speech and Natural Language Processing", where we did Hidden Markov Models and Neural Networks side-by-side with Prolog and LISP in the late 1980s. I believe that where I now work in the TSSG in Waterford Institute of Technology also captures the creative energy of working where disciplines intermingle, here with telecommunications engineering and Internet technologies.
The notice also mentions that Karen is being given two further awards:
The Athena Lecturer Award, given by the ACM Committee on Women in Computing (ACM-W) recognizes women researchers who have made fundamental contributions to Computer Science [...and...] the Lovelace Medal, presented by the British Computer Society to those who have made significant contributions to advancing and understanding Information Systems.
Well done Karen, you were an inspiration to me and many other students.
UPDATE: 2007-04-26 Karen Spärck Jones, IR Pioneer, Winner of Two ACM Awards Karen Spärck Jones, recently named as the recipient of ACM/AAAI's Allen Newell Award and ACM-W's Athena Lecturer Award, passed away on April 4. As we say in Irish "Ní bheidh a leithid arís ann" - "We will not see her likes again".
Mark Pilgrim has been hired by Google, in his own words (Mon 19th March 2007):
There are two basic visions of the future of the web, and one of them is wrong. I'm going to work on the right one for a while. At Google. Starting today.
In terms of clarifying this here is an older posting of his on W3C standards: W3C and the Overton window [dive into mark].
This recent Economist article gives a good overview of what's exciting about the mobile web and the semantic web: Watching the web grow up. It is based on discussions with Tim Berners-Lee.
An article from The Mail On Sunday quoted a resident of Exton (a small town in England), Brian Thorpe-Tracey as saying:
About two years ago we noticed a real increase in drivers using the lane. Vehicles are getting stuck and having to reverse back up, damaging the wall and fence. There's even a piece of metal embedded 12ft up in a tree which looks like it's come off a lorry. When I've asked drivers why they are using the lane they say they are just following satnav.
Thanks to Brady Forrest of O'Reilly Radar for the link. This highlights one of the unforeseen problems with new technologies. Brady suggests in his post that allowing user-generated updates to be integrated into the data used by satnav systems could alleviate the problem.
I came across this interesting article the other day: An Introduction to Connective Knowledge ~ Stephen's Web ~ by Stephen Downes.
It argues that the web has allowed a new way of creating a shared knowledge based on connectivity, and it places this argument within the philosophical debate around meaning.
All very relevant to the semantic web, and an interesting forthright contribution.
How I found the article was interesting in itself. I noticed that John Breslin of DERI was publishing his slide shows on a new slide show sharing service. I browsed the other slide shows in this service and found one by Stephen Downes. I liked it, and I then searched for more material by him... That's the kind of path we can follow these days to locate interesting materials.
An excellent history of SOAP and web services: Pete Lacey's Weblog :: The S stands for Simple - laugh out loud.
For a different view from the norm, try a dose of this light reading, a very informed critique of the misplaced idealism associated with much Internet and web promotion: Macleans.ca | Top Stories | Life | Pornography, gambling, lies, theft and terrorism: The Internet sucks. Food for thought indeed.
Thanks to Elyes Lehtihet for drawing my attention to this slideset published on-line: SCAI-2006-keynote.pdf (application/pdf Object). In it Ora Lassila of Nokia (in a keynote talk at SCAI 2006, the 9th Scandinavian AI conference, held at the Helsinki University of Technology, Finland, on October 25-27, 2006) describes the potential cross-over areas between these two visions, coming from the perspective of an Artificial Intelligence true believer.
Well, I briefly made it into the top 100 blogs (by inbound links): Justin Mason: Happy Software Prole » Technorati-ranked Irish Blogs Top 100. But now I'm a goner... So it goes.
Paul Kiel posts on O'Reilly's XML.com, Profiling XML Schema, analysing what features of XML schemas are actually used in practice, and advising on the schema features to avoid. Very interesting... Not surprisingly, the vast majority of schemas analysed stuck to simplicity. To quote from Paul's conclusions: "The clearest message is one of simplicity. The most commonly used constructs involve merely creating reusable types, assembling them into sequences of elements, and augmenting them with enumerations. Many of the more complex features went unused."
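Those three constructs look like this in a minimal schema (a sketch; the type and element names are invented for illustration, not taken from Paul's analysis):

    <!-- A reusable type, a sequence of elements, and an enumeration -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:simpleType name="StatusType">
        <xs:restriction base="xs:string">
          <xs:enumeration value="open"/>
          <xs:enumeration value="closed"/>
        </xs:restriction>
      </xs:simpleType>
      <xs:complexType name="TicketType">
        <xs:sequence>
          <xs:element name="title" type="xs:string"/>
          <xs:element name="status" type="StatusType"/>
        </xs:sequence>
      </xs:complexType>
      <xs:element name="ticket" type="TicketType"/>
    </xs:schema>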
A 3CS and TSSG spinout company, Nubiq, is today launching its Zinadoo product for mobile content creation and management. This article in today's Irish Times summarises the key points: Irish Times Article - Firm makes mobile websites easy
This is the press release from 3CS:
The Centre for Converged Services (3CS) based at the Waterford Institute of Technology (WIT) will see another spin-off company come to fruition in September: Nubiq Ltd. Funded by Enterprise Ireland, the new campus company, which specialises in community mobile solutions, will launch its first product, zinadoo.com, around that time. Zinadoo provides a service that enables end-users to create their own mobile website from their computer. The service provides a full solution to end-users to enable them to take existing website content and create, deploy and manage new mobile services and mobile websites. Users don't have to write software, develop and manage connections to operators' networks and gateways, or host, manage and monitor the service.
In addition to mobile website creation, an end user can create their own text services to promote their site, invite people to see it and use it and to build community services. The zinadoo services range from group texting, to text voting and automated text response services, which set it apart from its competitors. It is an easy and effective way for people to express themselves through the creation of websites using their mobile phones.
Since February of this year, sporting clubs such as Gaelic football clubs, golf clubs and more recently the Waterford branch of the Youth Information Services Organisation have taken part in the new mobile services trials. These trials aim to bring mobile services to community groups and SMEs looking for an easier way to keep in touch with friends, family and customers.
Helene Haughney, Chief Executive, Nubiq said: "Until now internet users who create content and services on the web, be it social networking, journaling or using sites for photo distribution, have had no way of doing so for mobile apart from simple blog support.
As a solution to this, we developed zinadoo to be used by individuals, social clubs, high growth SMEs, anyone in fact, to build innovative mobile websites and community services. It allows businesses and communities to communicate using the most effective means available today - the mobile phone."
"So far uptake for this service has been high and the reaction to it from our end users has been very positive. They have found it invaluable for sending out notifications regarding, for example; golf competitions, concerts and match fixtures," Ms. Haughney continued.
Barry Downes, Centre Director, 3CS added: "Using Zinadoo is as easy as listing an item on eBay and as a result it will tap into the latent demand of end-users to use the mobile channel to publish, share information and engage in social networks. It grants users, previously denied by technical and organisational hurdles, access to an easy-to-operate solution, value added services, international communities and limitless business opportunities."
The zinadoo product is being trialled under the EU-funded eTen market validation project. It was built by the project coordinator, Waterford Institute of Technology; other partners include: AePONA (UK), Aceno (Ireland), Fraunhofer Institute FOKUS (Germany), OTEPlus (Greece) and Telefonica I&D (Spain).
The main area of research of the Centre for Converged Services (www.3CS.info) is convergent software services for next generation networks such as IMS (IP Multimedia Subsystem). 3CS is associated with the Telecommunication Software and Systems Group (TSSG) at WIT and also Fraunhofer Institute FOKUS, in Berlin. 3CS has expertise in the areas of IMS services, Internet Information Management and Syndication (RSS/ATOM) systems, architectures and services, mobile multimedia and Web 2.0 services and frameworks. 3CS currently has 12 active research projects, the funding for which has been won through competitive tenders for national and international research funding.
According to Barry Downes: "Nubiq is the first of a number of campus companies that 3CS is developing based on its research agenda and its collaborative relationships with industry and the research community. I look forward to Nubiq's success and future technology transfers based on 3CS's research that will benefit the Irish economy."
For further information please contact:
Hélène Haughney
Tel: 051 302974
www.nubiq.com
Jon Udell: A conversation with Roy Fielding about HTTP, REST, WebDAV, JSR 170, and Waka
As one of the prime movers in the W3C and the inventor of the term REST (Representational State Transfer) Roy is definitely worth listening to.
John Breslin posts adverts.ie Map at Cloudlands, a map (Google Maps mash-up) of how people are buying and selling via adverts.ie.
He also includes a table of coordinates of Irish counties - useful...
Gartner's 2006 Emerging Technologies Hype Cycle Highlights Key Technology Themes
1. Web 2.0
Web 2.0 represents a broad collection of recent trends in Internet technologies and business models. Particular focus has been given to user-created content, lightweight technology, service-based access and shared revenue models. Technologies rated by Gartner as having transformational, high or moderate impact include:
Social Network Analysis (SNA) is rated as high impact (definition: enables new ways of performing vertical applications that will result in significantly increased revenue or cost savings for an enterprise) and capable of reaching maturity in less than two years. SNA is the use of information and knowledge from many people and their personal networks. It involves collecting massive amounts of data from multiple sources, analyzing the data to identify relationships and mining it for new information. Gartner said that SNA can successfully impact a business by being used to identify target markets, create successful project teams and serendipitously identify unvoiced conclusions.
Ajax is also rated as high impact and capable of reaching maturity in less than two years. Ajax is a collection of techniques that Web developers use to deliver an enhanced, more-responsive user experience in the confines of a modern browser (for example, recent version of Internet Explorer, Firefox, Mozilla, Safari or Opera). A narrow-scope use of Ajax can have a limited impact in terms of making a difficult-to-use Web application somewhat less difficult. However, Gartner said, even this limited impact is worth it, and users will appreciate incremental improvements in the usability of applications. High levels of impact and business value can only be achieved when the development process encompasses innovations in usability and reliance on complementary server-side processing (as is done in Google Maps).
Collective intelligence, rated as transformational (definition: enables new ways of doing business across industries that will result in major shifts in industry dynamics) is expected to reach mainstream adoption in five to ten years. Collective intelligence is an approach to producing intellectual content (such as code, documents, indexing and decisions) that results from individuals working together with no centralized authority. This is seen as a more cost-efficient way of producing content, metadata, software and certain services.
Mashup is rated as moderate on the Hype Cycle (definition: provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise), but is expected to hit mainstream adoption in less than two years. A "mashup" is a lightweight tactical integration of multi-sourced applications or content into a single offering. Because mashups leverage data and services from public Web sites and Web applications, they're lightweight in implementation and built with a minimal amount of code. Their primary business benefit is that they can quickly meet tactical needs with reduced development costs and improved user satisfaction. Gartner warns that because they combine data and logic from multiple sources, they're vulnerable to failures in any one of those sources.
2. Real World Web
Increasingly, real-world objects will not only contain local processing capabilities (due to the falling size and cost of microprocessors), but they will also be able to interact with their surroundings through sensing and networking capabilities. The emergence of this Real World Web will bring the power of the Web, which today is perceived as a "separate" virtual place, to the user's point of need of information or transaction. Technologies rated as having particularly high impact include:
Location-aware technologies should hit maturity in less than two years. Location-aware technology is the use of GPS (global positioning system), assisted GPS (A-GPS), Enhanced Observed Time Difference (EOTD), enhanced GPS (E-GPS), and other technologies in the cellular network and handset to locate a mobile user. Users should evaluate the potential benefits to their business processes of location-enabled products such as personal navigation devices (for example, TomTom or Garmin) or Bluetooth-enabled GPS receivers, as well as WLAN location equipment that may help automate complex processes, such as logistics and maintenance. Whereas the market sees consolidation around a reduced number of high-accuracy technologies, the location service ecosystem will benefit from a number of standardized application interfaces to deploy location services and applications for a wide range of wireless devices.
Location-aware applications will hit mainstream adoption in the next two to five years. An increasing number of organizations have deployed location-aware mobile business applications, mostly based on GPS-enabled devices, to support key business processes and activities, such as field force management, fleet management, logistics and goods transportation. The market is in an early adoption phase, and Europe is slightly ahead of the United States, due to the higher maturity of mobile networks, their availability and standardization.
Sensor Mesh Networks are ad hoc networks formed by dynamic meshes of peer nodes, each of which includes simple networking, computing and sensing capabilities. Some implementations offer low-power operation and multi-year battery life. Technologically aggressive organizations looking for low-cost sensing and robust self-organizing networks with small data transmission volumes should explore sensor networking. The market is still immature and fragmented, and there are few standards, so suppliers will evolve and equipment could become obsolete relatively rapidly. Therefore, this area should be seen as a tactical investment, as mainstream adoption is not expected for more than ten years.
3. Applications Architecture
The software infrastructure that provides the foundation for modern business applications continues to mirror business requirements more directly. The modularity and agility offered by service oriented architecture at the technology level and business process management at the business level will continue to evolve through high impact shifts such as model-driven and event-driven architectures, and corporate semantic Web. Technologies rated as having particularly high impact include:
Event-driven Architecture (EDA) is an architectural style for distributed applications, in which certain discrete functions are packaged into modular, encapsulated, shareable components, some of which are triggered by the arrival of one or more event objects. Event objects may be generated directly by an application, or they may be generated by an adapter or agent that operates non-invasively (for example, by examining message headers and message contents). EDA has an impact on every industry. Although mainstream adoption of all forms of EDA is still five to ten years away, complex-event processing EDA is now being used in financial trading, energy trading, supply chain, fraud detection, homeland security, telecommunications, customer contact center management, logistics and sensor networks, such as those based on RFID.
Model-driven Architecture is a registered trademark of the Object Management Group (OMG). It describes OMG's proposed approach to separating business-level functionality from the technical nuances of its implementation. The premise behind OMG's Model-Driven Architecture and the broader family of model-driven approaches (MDAs) is to enable business-level functionality to be modeled by standards, such as Unified Modeling Language (UML) in OMG's case; allow the models to exist independently of platform-induced constraints and requirements; and then instantiate those models into specific runtime implementations, based on the target platform of choice. MDAs reinforce the focus on business first and technology second. The concepts focus attention on modeling the business: business rules, business roles, business interactions and so on. The instantiation of these business models in specific software applications or components flows from the business model. By reinforcing the business-level focus and coupling MDAs with SOA concepts, you end up with a system that is inherently more flexible and adaptable.
Corporate Semantic Web applies semantic Web technologies, aka semantic markup languages (for example, Resource Description Framework, Web Ontology Language and topic maps), to corporate Web content. Although mainstream adoption is still five to ten years away, many corporate IT areas are starting to engage in semantic Web technologies. Early adopters are in the areas of enterprise information integration, content management, life sciences and government. Corporate Semantic Web will reduce costs and improve the quality of content management, information access, system interoperability, database integration and data quality.
"The emerging technologies hype cycle covers the entire IT spectrum but we aim to highlight technologies that are worth adopting early because of their potentially high business impact," said Jackie Fenn, Gartner Fellow and inventor of the first hype cycle. One of the features highlighted in the 2006 Hype Cycle is the growing consumerisation of IT. "Many of the Web 2.0 phenomenon have already reshaped the Web in the consumer world," said Ms Fenn. "Companies need to establish how to incorporate consumer technologies in a secure and effective manner for employee productivity, and also how to transform them into business value for the enterprise."
The benefit of a particular technology varies significantly across industries, so planners must determine which opportunities relate most closely to their organisational requirements. To make this easier, a new feature in Gartner's 2006 hype cycle is a "priority matrix" which clarifies a technology's potential impact - from transformational to low - and the number of years it will take before it reaches mainstream adoption. "The pairing of each Hype Cycle with a Priority Matrix will help organisations to better determine the importance and timing of potential investments based on benefit rather than just hype," said Ms Fenn.
2006 Hype Cycle for Emerging Technologies
Note to editors: More information on each of the technologies identified in the emerging technologies hype cycle and on the priority matrix can be obtained from Gartner PR.
Despite the changes in specific technologies over the years, the hype cycle's underlying message remains the same: Don't invest in a technology just because it is being hyped, and don't ignore a technology just because it is not living up to early expectations.
"Be selectively aggressive - identify which technologies could benefit your business, and evaluate them earlier in the Hype Cycle," said Ms. Fenn. "For technologies that will have a lower impact on your business, let others learn the difficult lessons, and adopt the technologies when they are more mature."
About the Gartner 2006 Hype Cycles
The "Hype Cycle for Emerging Technologies, 2006" report is one of 78 hype cycles released by Gartner in 2006. More than 1,900 information technologies and trends across more than 75 industries, technology markets, and topics are evaluated by more than 300 Gartner analysts in the most comprehensive assessment of technology maturity in the IT industry. Gartner's hype cycles assess the maturity, impact and adoption speed of hundreds of technologies across a broad range of technology, application and industry areas. It highlights the progression of an emerging technology from market over-enthusiasm through a period of disillusionment to an eventual understanding of the technology's relevance and role in a market or domain. Additional information regarding the hype cycle reports is available on Gartner's Web site at http://www.gartner.com/it/docs/reports/asset_154296_2898.jsp.
Each Hype Cycle Model follows five stages:
1. "Technology Trigger"
The first phase of a Hype Cycle is the "technology trigger" or breakthrough, product launch or other event that generates significant press and interest.
2. "Peak of Inflated Expectations"
In the next phase, a frenzy of publicity typically generates over-enthusiasm and unrealistic expectations. There may be some successful applications of a technology, but there are typically more failures.
3. "Trough of Disillusionment"
Technologies enter the "trough of disillusionment" because they fail to meet expectations and quickly become unfashionable. Consequently, the press usually abandons the topic and the technology.
4. "Slope of Enlightenment"
Although the press may have stopped covering the technology, some businesses continue through the "slope of enlightenment" and experiment to understand the benefits and practical application of the technology.
5. "Plateau of Productivity"
A technology reaches the "plateau of productivity" as the benefits of it become widely demonstrated and accepted. The technology becomes increasingly stable and evolves in second and third generations. The final height of the plateau varies according to whether the technology is broadly applicable or benefits only a niche market.
The Marine Irish Digital Atlas - Spatial Ireland
The Marine Irish Digital Atlas (MIDA) has been officially launched. It provides information about geographically referenced data for the island's marine and coastal areas. The Atlas displays vector and raster datasets, associated metadata and relevant multimedia.
Tim Bray's post alerted me to this excellent slideset explaining feeds, giving a clear history of RSS and Atom.
TriXML2006-BeyondBlogging.pdf (application/pdf Object) "Beyond Blogging: Understanding feeds and publishing protocols" (Dave Johnson, Sun Microsystems, 2006).
This site processes images of barcodes you upload: Barcodepedia.com - the online barcode database. Cool. No wonder it's getting Slashdotted.
This looks like a very interesting concept: O'Reilly Radar > Gutenkarte: Geo annotation of Gutenberg texts. A sample map generated: Thucydides' classic The History of the Peloponnesian Wars. Just think, you could do this with thrillers set in a city and track the chase scenes on a map! Excellent stuff.
I am in Brussels at the NESSI General Assembly. This is a new type of process being promoted by the European Commission in advance of the new 7th Framework Programme, the next phase of the EC research funding process (due to formally cover 2007-2013). The idea is to allow key commercial interests to form a "Technology Platform" providing a forum for companies to meet and define a common research agenda that defines roadmap(s) addressing the strategic development of the industrial sector in Europe (primarily the EU 25 member states).
NESSI, the Networked European Software Services Initiative, is one of these technology platforms (or ETPs), currently comprising 22 core partners from four types of organisation: industry, SMEs (Small and Medium-sized Enterprises), academia, and user group representation.
As of now the overall strategic objectives have been documented in SRA (Strategic Research Agenda) Volume 1: Framing the future of the service oriented economy, and further documents are in progress.
It is certain that the general move of Information Communication Technologies (ICT) towards services is a global technology trend. The ambitious aim of this initiative is to help define this trend in an integrative way that encompasses end users, technologists (academic and industrial) and solutions providers.
Sala had sold only 106 paintings in 110 days. Not even one painting per day. Then he sold 315 in two days!
Cheapest one is now USD 160 plus p&p!
Tom Raftery on O'Reilly trademarks 'Web 2.0' and sets lawyers on IT@Cork!
Brady on Mobile Gaming O'Reilly Radar > Where 2.0: Pixie Hunt - looks like it is really starting to hot up as a topic. I hope we can get involved from the TSSG in designing and developing some good mobile games with a location-based element.
Bernie's at EdTech 2006: IrishEyes: Sound education in your pocket
Brady on Sphere's Blog Search.
Sphere launched a month ago, but I only just got to really check it out last week when I sat down with Tony Conrad, CEO and founder (Tony was previously involved in Oddpost and is an advisor for Automattic). The technical team is the crew that brought us Waypath.com - an early blog search engine.
Sphere is an impressive blog search engine and one that is sure to rise in traffic. In a very short time it has already reached Feedster's traffic levels and surpassed PubSub (they have a while to go before reaching Technorati).
As they build their index they are focusing on avoiding splogs and pulling in quality (reminds me of techmeme.com's approach). Their index allows for a search to run over a 4 month period and they have a very useful UI element that allows you to see the post distribution for your query. You can use this tool to focus your search on a custom date range (the default for a search is a week).
Oracle’s Identity suite is very exciting! The products that are part of this suite (at least the ones that interest me) are:
- Xellerate - acquired from Thor
- COREid - acquired from Oblix
- OWSM (Oracle Web Services Manager) - acquired from Oblix, who acquired it through Confluence (iirc)
- Virtual Directory - acquired from OctetString
The good news is that this is a spectacular set of functionality. The bad news is that you will NEVER FIND what you need on their godawful website. So - here is my “frequently wished for but rarely found” list:
....For some wacky reason, the COREid stuff is considered to be logically part of the Oracle Application Server. I know, that makes no sense, since the core service does not run in a container, and it is perfectly possible to run COREid and never have anything to do with the Application Server. Still, for browsing purposes, this information is CRITICAL.
Tim O'Reilly on TrafficGauge rocks. Looks like there is a market for dedicated hardware with GPS/Mapping functionality.
I’ve been accumulating things Atomic to write about for a while, so here goes. Item: You’ll be able to blog from inside Microsoft Word 2007 via the Atom Publishing Protocol. Item: Sam Ruby has wrangled Planet to the point where it handles Atom 1.0 properly. Item: Along the way, Sam reported a common bug in Atom 1.0 handling, and his comments show it being fixed all over (Planet, MSN, and Google Reader, but not Bloglines of course); the Keith reference in Sam’s title is to this. [Update: Gordon Weakliem extirpates another common bug from the NewsGator universe.] Item: The Movable Type Feed Manager is based on James Snell’s proposed Threading Extensions to Atom 1.0; Byrne Reese seems to think that particular extension is hot stuff. Item: Nature magazine is extending Atom 1.0 for their Open Text Mining Interface. Item: The Google Data APIs are old news now, but it looks like they’re doing Atom 1.0 and playing by the rules. Last Item: Over in the Atom Working Group, we’re getting very close to declaring victory and going for IETF last call on the Protocol document.
It has recently been announced that Virtuoso has gone open source.
Here is a weblog announcement: Virtuoso is Officially Open Source! (Kingsley Idehen's Weblog).
This is a detailed history of how the product has evolved: oWiki | Main.VOSHistory
I first heard of this announcement via Jon Udell's A conversation with Kingsley Idehen announcing a podcast interview with Kingsley Idehen, and indeed I'd heard of the product because of Jon's earlier postings dating back to 2002.
An excellent article on walled gardens and the value of open systems for sharing information/data/knowledge/apis/... and so on: Breaking the Web Wide Open! (complete story) :: AO
About the author of this piece:
Marc Canter is an active evangelist and developer of open standards. Early in his career, Marc founded MacroMind, which became Macromedia. These days, he is CEO of Broadband Mechanics, a founding member of the Identity Gang and of ourmedia.org. Broadband Mechanics is currently developing the GoingOn Network (with the AlwaysOn Network), as well as an open platform for social networking called the PeopleAggregator.
Great picture from Paul Watson. And what was he doing up so early in the morning, you might ask? The answer was catching an early bus to Dublin to attend the Irish Web 2.0 Conference organised by Enterprise Ireland. Given that Paul works on the TSSG project Feed Henry, probably the most Web 2.0-ish project in our portfolio of 30 or so projects, it was very appropriate for him to attend.
Update: this article describes the conference:
Ireland.com piece on Web 2.0
From the monthly Netcraft web server survey Netcraft: Apache Now the Leader in SSL Servers
As the original developers of the SSL protocol, Netscape started out with a lead in the SSL server market. But they were soon overtaken by Microsoft's Internet Information Server, which within a few years held a steady 40-50% of the SSL server market.
Apache has taken much longer to reach the top. Version 1 of Apache did not include SSL support: in the 1990s, US export controls and the patent on the RSA algorithm in the US meant that cryptographic support for open source projects had to be developed outside of the US and distributed separately. Several independent projects provided SSL support for Apache, including Apache-SSL and mod_ssl; but commercial spin-offs, like Stronghold by c2net (later bought by Red Hat), were more popular at that time.
Now that mod_ssl is included as standard in version 2, Apache has become more popular for hosting secure websites. The total for Apache includes other projects from the ASF including Tomcat, and includes Apache-SSL, but does not include derived products like Stronghold or IBM HTTP Server. c2net/Red Hat includes only Stronghold and Red Hat SWS.
Apache is also gaining from geographical changes. The US, where Microsoft retains a strong lead, used to have over 70% of the Internet's secure websites. Other countries have been catching up, however: countries including Japan and Germany, where Apache is preferred, have faster growth in SSL sites. As ecommerce has caught on in other countries, the US share has been diluted, and is now only 50%.
Netcraft's SSL survey has been running since 1996. It tracks the growing use of secure web servers on the Internet, and the server software, operating systems and certificates that are used. Single user and company subscriptions are available, and custom datasets can be produced on request.
Tim O'Reilly has just published a posting explaining why he, and we all, read Jon Udell's blog postings and excellent articles.
O'Reilly Radar: Nice Recognition for Jon Udell.
I heartily agree with everything Tim says here, and so I thought I'd reproduce it in full with attribution; thanks Tim for using the Creative Commons licence that allows me to do so.
I still have a cherished copy of Practical Internet Groupware, and copies of old BYTE Magazine articles by Udell. He really has charted the rise of Web 1.0 and Web 2.0 in a way no one else has done, in his typically modest fashion. Keep it coming Jon!
Jon Udell writes on his blog: "Folio, the recursively-named magazine for magazine management, has included me in its list of 40 industry influencers, in the Under the Radar category." Here's what they had to say about Jon.
Of course, for active blog readers, Jon is anything but under the radar! Jon's Radio has long been one of the leading technology blogs. But even more striking is just how far ahead of the curve Jon is. His book Practical Internet Groupware, which I published back in 1999, prefigured the whole explosion of interest in what is now called social computing. Jon was also the very first person to articulate (at least for me) the vision of what we're now calling Web 2.0. (He gave a keynote talk on what we now call "the programmable web" at our first Perl Conference in 1997!) Jon was just a bit too early! He's long been one of the people I watch to learn about what comes next.
I wrote a preface to that book (now out of print, although still available on Safari) in which I told the world what I think of Jon Udell. While the bulk of the book is out of date, Jon's vision of the programmable web and his creative approach to using existing applications in new ways are still worth reading today. It seems like an appropriate time to repeat what I said then:
At O'Reilly & Associates, we have a history of being ahead of the curve. In the mid-'80s, we started publishing books about many of the free software programs that had been incorporated into the Unix operating system. The books we wrote and published were an important element in the spread and use of Perl, sendmail, the X Window System, and many of the programs that have now been collected under the banner of Linux.
In 1992, we published The Whole Internet User's Guide and Catalog, the book that first brought the Internet into the public consciousness. In 1993, we launched GNN, the first ever Internet portal, and were the first company to sell advertising on the Web.
In 1997, we convened the meeting of free software developers that led to the widespread adoption of the term Open Source software. All of a sudden, the world realized that some of the most important innovations in the computer industry hadn't come from big companies, but rather from a loose confederation of independent developers sharing their code over the Internet.
In each case, we've managed to expose the discrepancy between what the industry press and pundits were telling us and what the real programmers, administrators, and power users who make up the leading edge of the industry were actually doing. And in each case, once we blew the whistle, the mainstream wasn't far behind.
I like to think that O'Reilly & Associates has functioned as something like the Paul Revere of the Internet revolution.
I tell you these things not to brag, but to make sure you take me seriously when I tell you that I've got another big fish on the line.
Every once in a while a book comes along that makes me wake up and say, "Wow!" Jon Udell's Practical Internet Groupware is such a book.
There are several things that go into making this such a remarkable book.
First, there is the explicit subject: how to build tools for collaborative knowledge management. As we get over the first flush of excitement about the Internet, we want it to work better for us. We're overwhelmed by email, our web searches baffle us by returning tens of thousands of documents and only rarely the ones we want, and our hard disks bulge with documents that we've saved but don't know how to share with other people who might need them.
Jon's book provides practical guidance on how to solve some of these problems by using the overlooked features in modern web browsers that allow us to integrate web pages with the more chaotic flow of conversation that goes on in email and conferencing applications. While much of the book is aimed at developers, virtually anyone who uses the Internet in a business setting can benefit from the perspectives Jon provides in his opening chapters.
How to build effective applications for conferencing and other forms of Internet-enabled collaboration is one of the most important questions developers are wrestling with today. Anyone who wants to build an effective intranet, or to better manage their company's interactions with customers, or to build new kinds of applications that bring people together, will never think about these things in the same way after reading this book.
Second, more than anyone else I know, Jon has thrown off the shackles of the desktop computing paradigm that has shaped our thinking for the better part of the last two decades. He works in a world in which the Net, rather than any particular operating system, is truly the application development platform.
All too often, people wear their technology affiliations on their sleeve (or perhaps on their T-shirts), much as people did with chariot racing in ancient Rome. Whether you use NT or Linux, whether you program in Perl or Java or Visual Basic, these are marks of difference and the basis for suspicion. Jon stands above this fragmented world like a giant. He has only one software religion: what works. He moves freely between Windows and Linux, Netscape and Internet Explorer, Perl, Java, and JavaScript, and ties it all together with the understanding that it is the shared Internet protocols that matter.
Any developer worth his salary in tomorrow's market is going to need a cross-platform toolbox much like the one Jon applies in this book.
Third, and perhaps most importantly, Jon has laid his finger on the most important change in the computer industry since the introduction of the Web.
Especially in the later chapters of the book, he lays out a vision in which web sites themselves can be considered as reusable software components. The implications of this paradigm shift are truly astonishing. I confidently predict that in the years ahead, the methodologies Jon demonstrates in this book will be the foundation of multibillion dollar businesses and whole new schools of software development.
As Bob Dylan said, "Something is happening here, but you don't know what it is, do you, Mister Jones?" Well, Jon Udell does know, and if you'd like to know as well, I can't suggest a better place to start.
The Irish post office, An Post, has published an on-line address validation system.
You have to register, and then you can search to verify any valid Irish address. The database seems to be based on the delivery route, so some addresses list "nearby" major towns that the mail is routed through. Unfortunately, no geo-encoding is provided.
Thanks to Antoin antoin@eire.com » An Post Irish national address database now available to the public
This article includes a good summary of the current state of HTTP authentication XML.com: httplib2: HTTP Persistence and Authentication, in the context of developing a Python HTTP client library (httplib2) to support what should be supported. Here's the main argument, extracted:
In the past, people have asked me how to protect their web services and I've told them to just use HTTP authentication, by which I meant either Basic or Digest as defined in RFC 2617.
For most authentication requirements, using Basic alone isn't really an option since it transmits your name and password unencrypted. Yes, it encodes them as base64, but that's not encryption.
The other option is Digest, which tries to protect your password by not transferring it directly, but uses challenges and hashes to let the client prove to the server that it knows a shared secret.
Here's the "executive summary" of HTTP Digest authentication:The problem with Digest is that it suffers from too many options, which are implemented non-uniformly, and not always correctly. For example, there is an option to include the entity body in the calculation of the hash, called auth-int. There are also two different kinds of hashing, MD5 and MD5-sess. The server can return a fresh challenge nonce with every response, or the client can include a monotonically increasing nonce-count value with each request. The server also has the option of returning a digest of its own, which is a way the server can prove to the client that it also knows the shared secret.
- The server rejects an unauthenticated request with a challenge. That challenge contains a nonce, a random string generated by the server.
- The client responds with the same request again, but this time with a WWW-Authenticate: header that contains a hash of the supplied nonce, the username, the password, the request URI, and the HTTP method.
...
The bad news is that current state of security with HTTP is bad. The best interoperable solution is Basic over HTTPS. The good news is that everyone agrees the situation stinks and there are multiple efforts afoot to fix the problem. Just be warned that security is not a one-size-fits-all game and that the result of all this heat and smoke may be several new authentication schemes, each targeted at a different user community.
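To make the contrast concrete, here is a small sketch of my own (not from the article) showing what each scheme puts on the wire, using the simplest no-qop form of the Digest response from RFC 2617; the credentials, nonce and request are made-up values:

    import base64
    import hashlib

    username, password = "alice", "s3cret"            # made-up credentials
    method, uri = "GET", "/private/feed.xml"          # made-up request
    realm, nonce = "example.org", "dcd98b7102dd2f0e"  # values a server would supply

    # Basic: just base64 of "user:password" -- an encoding, not encryption.
    basic = "Basic " + base64.b64encode(f"{username}:{password}".encode()).decode()
    print("Authorization:", basic)

    def md5(s):
        return hashlib.md5(s.encode()).hexdigest()

    # Digest (no-qop form): the password itself never crosses the wire; the
    # client proves knowledge of it by hashing it with the server's nonce.
    ha1 = md5(f"{username}:{realm}:{password}")
    ha2 = md5(f"{method}:{uri}")
    print("Digest response:", md5(f"{ha1}:{nonce}:{ha2}"))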
I have been a big proponent of XForms within the TSSG (though I haven't yet convinced our commercial development teams to deploy solutions based on XForms).
2006-03-14: XForms 1.0 Second Edition Is a W3C Recommendation.
The World Wide Web Consortium today released XForms 1.0 Second Edition as a W3C Recommendation. The new generation of Web forms, XForms separate presentation and content, minimize round-trips to the server, offer device independence, and reduce the need for scripting. This second edition adds clarifications and corrects errors as reported in the first edition errata. Second edition publications include the following documents.
This is a good overview article on the subject Why XForms Matter, Revisited - O'Reilly XML Blog
Here's an interesting reference to a New York Times article from John Battelle's blog John Battelle's Searchblog: Abortion, Adoption, Amazon.
The full article is here
Amazon Says Technology, Not Ideology, Skewed Results
By LAURIE J. FLYNN
Published: March 20, 2006
Amazon.com last week modified its search engine after an abortion rights organization complained that search results appeared skewed toward anti-abortion books.
Until a few days ago, a search of Amazon's catalog of books using the word "abortion" turned up pages with the question, "Did you mean adoption?" at the top, followed by a list of books related to abortion.
Amazon removed that question from the search results page after it received a complaint from a member of the Religious Coalition for Reproductive Choice, a national organization based in Washington.
"I thought it was offensive," said the Rev. James Lewis, a retired Episcopalian minister in Charleston, W.Va. "It represented an editorial position on their part."
Patty Smith, an Amazon spokeswoman, said there was no intent by the company to offer biased search results. She said the question "Did you mean adoption?" was an automated response based on past customer behavior combined with the site's spelling correction technology.
She said Amazon's software suggested adoption-related sources because "abortion" and "adoption" have similar spellings, and because many past customers who have searched for "abortion" have also searched for "adoption."
Ms. Smith said the "Did you mean adoption?" prompt had been disabled. (It is not known how often searches on the site turn up any kind of "Did you mean..." prompt.)
Customers, however, are still offered "adoption" as a possibility in the Related Searches line at the top of an "abortion" search results page. But the reverse is not true.
Ms. Smith said that was because many customers who searched for abortion also searched for adoption, but customers who searched for "adoption" did not typically search for topics related to abortion.
Still, the Rev. Jeff Briere, a minister with the Unitarian Universalist Church in Chattanooga, Tenn., and a member of the abortion rights coalition, said he was worried about an anti-abortion slant in the books Amazon recommended and in the "pro-life" and "adoption" related topic links.
"The search engine results I am presented with, their suggestions, seem to be pro-life in orientation," Mr. Briere said. He also said he objected to a Yellow Pages advertisement for an anti-abortion organization in his city that appeared next to the search results, apparently linked by his address.
Web software that tracks customers' purchases and searches makes it possible for online stores to recommend items tailored to a specific shopper's interests. Getting those personalized recommendations right can mean significantly higher sales.
But getting it wrong can cause problems, and Amazon is not the first company to find that automated online recommendations carry risks.
In January, Walmart.com issued a public apology and took down its entire cross-selling recommendation system when Web customers who looked at a boxed set of movies that included "Martin Luther King: I Have a Dream" and "Unforgivable Blackness: The Rise and Fall of Jack Johnson" were told they might also appreciate a "Planet of the Apes" DVD collection, as well as "Ace Ventura: Pet Detective" and other irrelevant titles.
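As a rough illustration of the spelling-similarity point in the article (a generic metric, nothing to do with Amazon's actual system), the two words are only two single-character edits apart, which is exactly the kind of closeness a spelling corrector looks for:

    def levenshtein(a, b):
        """Minimum number of single-character edits turning a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    print(levenshtein("abortion", "adoption"))  # 2 -- just two substitutions apart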
An excellent tutorial on leveraging HTTP authentication in web-based applications: REST based authentication
Sam Ruby: The REST Elevator Pitch
To recap, the REST elevator pitch is
- Identification Of Resources
- Manipulation Of Resources Through Representations
- Self-Descriptive Messages
- Hypermedia As The Engine Of Application State
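A minimal sketch of my own showing the first three ideas in practice, using httplib2 (covered elsewhere in these postings) against a hypothetical resource URI; the fourth point lives in the representation itself, which should link to the next resources a client may act on:

    import httplib2

    h = httplib2.Http()

    # Identification of resources: everything interesting has a URI.
    uri = "http://example.org/orders/42"  # hypothetical resource

    # Manipulation of resources through representations: GET a representation,
    # modify it, and PUT it back; the resource is never touched "directly".
    resp, body = h.request(uri, "GET")
    resp, body = h.request(uri, "PUT", body=body,
                           headers={"content-type": resp["content-type"]})

    # Self-descriptive messages: the method, status code and headers carry the
    # meaning, with no out-of-band contract needed.
    print(resp.status, resp["content-type"])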
The holy grail of Irish mapping | John Handelaar
This is an excellent article explaining what Ajax is and how to use it from Perl perl.com: Using Ajax from Perl.
Meng Wong in this post Internet Governance: An Antispam Perspective raises some thought-provoking reasons for operating a whitelist-only policy for all open communications, including email (i.e. refuse unless permission has been granted is the default).
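The policy itself is almost trivially expressed in code: deny by default, and accept only senders who have been explicitly granted permission. A toy sketch of my own:

    # Default-deny: a message is accepted only if its sender is on the whitelist.
    whitelist = {"alice@example.org", "bob@example.org"}  # explicitly permitted senders

    def accept(sender):
        return sender in whitelist

    print(accept("alice@example.org"))   # True
    print(accept("stranger@spam.test"))  # False: refused, permission never granted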
zZine Magazine - Your Magazine, Your Voice.
Comment on PHP vs Ruby on Rails: Quoderat » Rails vs. PHP: MVC or view-centric? (Reference: CakePHP)
cakephp vs. ruby on rails -> GOOD!
PHP vs. ruby on rails -> BAD!
This is a pointer for my MSc (taught) students who are looking for ways into the debates around REST, for an assignment I have just set: Middleware Matters: Interfaces and interop (now I wait to see if any of them spot it :-).
I like the idea of using an annotated Flickr image as a design document: moinmoin django=python.org ? on Flickr - Photo Sharing!
I'm building a simple enough web app, to manage some project related data. Without claiming the ability to see around corners, I'm quite sure this app will grow over time because the data it's working against is nebulous and that will push for more and more views. The main decision so far has been to keep data in XML+RDF.
He then went on to argue why choosing a suitable framework was so hard (e.g. not wishing to enter the Zope "parallel world"). Revisiting the situation now, he finds more hope, with 4 stacks being front runners. In none of them does he mention any Perl-based frameworks. Fair enough, I suppose, though he could have mentioned Catalyst if only to dismiss it. I like the way that "... using FastCGI, the same Catalyst application will run under IIS, Zeus, Apache or lighttpd. If you use Apache you can also run your application with any version of mod_perl". Its real power comes from the mature alternatives in CPAN, but you probably need to be familiar with these to dive in here.
The latest Netcraft Web Server Survey Netcraft: February 2006 Web Server Survey reveals 76,184,000 responding web sites (68% Apache), and 33,197,512 active web sites (66.5% Apache).
Django | The Web framework for perfectionists with deadlines
I know that it's getting harder to pick web development frameworks as they multiply, but my recent reading suggests that this may be one to watch as a Python alternative to Ruby on Rails.
Jeremy Jones' post to the O'Reilly ONLamp.com weblog gives some gory details from a real developer's perspective.
This is an excellent article summarising a talk Dyson gave at Google: Edge: TURING'S CATHEDRAL by George Dyson
A flavour of the article:
My visit to Google - Despite the whimsical furniture and other toys, I felt I was entering a 14th century cathedral, not in the 14th century but in the 12th century, while it was being built. Everyone was busy carving one stone here and another stone there, with some invisible architect getting everything to fit. The mood was playful, yet there was a palpable reverence in the air. "We are not scanning all those books to be read by people," explained one of my hosts after my talk. "We are scanning them to be read by an AI."
When I returned to highway 101, I found myself recollecting the words of Alan Turing, in his seminal paper Computing Machinery and Intelligence, a founding document in the quest for true AI. "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children," Turing had advised. "Rather we are, in either case, instruments of His will providing mansions for the souls that He creates."
Ask Jeeves Blog: Which Feeds Matter?
lists statistics on how many RSS subscribers there are to blogs syndicated on Bloglines (owned by Ask Jeeves).
I am in Brighton (UK) at a conference about mobile government EURO mGov 2005. The paper I'm presenting is on the underlying infrastructure for mobile services including the move towards IPv6 and the advantages of Open Access Networks (OANs) for public networks (such as those funded by national, regional and local governments). Colleagues from the TSSG in WIT, Jimmy McGibney, Alan Davy and Steven Davy, are presenting on how some specific projects in the TSSG can help in the development of next generation mobile government services, in particular with respect to security and service composition (M-Zones, DAIDALOS and SEINIT). In addition Yorck Rabenstain (PSI AG, Germany) is presenting on a project that the TSSG are partners in called RISER that looks at authentication and identity management mechanisms for citizens for eGovernment services.
One common message from the conference is that there are distinct subsections of mGovernment.
Some key enabling factors here are the lowering costs of the hardware for mobile devices, the prevalence of these devices (people have mobile phones and PDAs/laptops with wireless access), the lowering costs of network connectivity and increasing coverage of mobile operators and locally deployed networks (remembering that useful things can be done even without always-on access, just with docking/syncing when in range), and the interoperability of Internet-based technologies that makes it cheaper to develop and deploy distributed services that allow access to services using these devices.
The main bottlenecks are the power consumption of the mobile devices (battery life and recharging is problematic, especially for more sophisticated devices) and the coverage and cost of the various mobile technologies (it costs money to deploy and manage your own network, and/or to use an existing operator's network, and although things are improving, many areas have difficulties getting suitably priced access). The consensus seems to be that the primary barrier to deployment of new mobile services for government is the continuing high cost of integration with enterprise back-office software systems. In effect, many of these systems are proprietary and require a lot of effort to customise for mobile access. The promise is that the next generation of web technologies (Service Oriented Architectures and so on) can help reduce the cost of developing interoperable solutions in this space, just as in all areas of ICT, but few examples of real deployed systems use these advanced technologies.
My personal perspective is that, at the end of the day, all mobile services are simply ICT systems that send data over wireless networks between one location (often centralised) and another. The huge productivity gains that have been possible using Internet-technologies for development of these services (i.e. using IP at the network layer, and web-based protocols at the application layer) are equally valid in the mobile domain as in the desktop with fixed Internet. Problems with the proliferation of the number of devices mean that IPv6 is the obvious choice for such services, thus avoiding NAT-translation issues arising from IPv4 private address space being used to address these devices.
One main difference between mobile Internet (i.e. IP-based) services and fixed Internet services is that often the end device is not fully able to deploy a standard web browser. Some argue that this will end soon, and everyone who needs one will have a smart phone, PDA or laptop that can support a browser (e.g. I personally have an Opera browser on my Sony-Ericsson P910i). Others argue that simpler interfaces such as limited text-menu browsers, simple SMS messaging or even voice-enabled IVR can continue to provide useful mobile services, and thus create a space for separate style of services.
Another big debate is whether mobile operators should be involved in the loop in terms of developing and providing higher level services (other than just Internet connectivity). Internet purists feel that the operators' job should be to provide reasonably priced IP connectivity for mobile devices, and that the services should be developed in a flexible way that does not require any individual negotiations with operators. It could be argued that this is particularly important in the public sector, where it would be ludicrous for a local government to only support access to services from citizens using a subset of the mobile operators! Others argue that the mobile operators have access to additional information (such as the SIM-based authentication of identity, an established billing relationship, and potentially some forms of location-based context) that can be very usefully integrated into mobile services in the mGovernment domain, and it would be foolish to ignore the opportunities provided by close collaboration with operators.
O'Reilly Network: The Geospatial Web: A Call to Action I get really excited by the possibilities of linking location to web-based services, particularly mobile services. This is a great outline of key issues from a web development perspective.
Fascinating discussion on how RDF allows for missing data (missing is not broken) and provides a really flexible platform for future semantic systems:
RSQ: Really Simple Querying?.
In a fascinating article on 1060 NetKernel, Peter Rodgers explains what it is and where it came from: XML.com: Introducing NetKernel.
UPDATE 2005-05-01 The author of this article, Peter Rodgers, contacted me to advise me that XML.com had an editorial misfire yesterday, and provided some incorrect information about the article. I've updated the information in the main posting with the changed author, title and URL. Looks like the problem was incorrect linkage of content with header metadata (author/title); the article is what I had read, just wrongly attributed, and it's still fascinating! There's an on-line discussion forum for those interested: Discussion Forum.
whatever: Onfolio can now sync with Bloglines Looks like this provides a useful way to integrate desktop and web-based RSS-feed browsing.
Jon Udell's article End HTTP abuse | InfoWorld | Column | 2005-04-20 | By Jon Udell describes a criticism of some popular web-based services for using GET when they should have used POST (because there were side effects that changed data), thus breaking the RESTful rules. It is interesting that even in the very simple space of HTTP (where there are only four possibilities: GET, POST, PUT and DELETE) popular services (Bloglines, Flickr, and del.icio.us) don't follow the basic rules for RESTful services. Jon argues it may be because the lightweight programming toolkits (e.g. Python, Perl and JavaScript) support GET more easily than POST (and he gives examples of what he means here). So potentially these services were simply trying to make it as easy as possible for people to integrate with their services, and the fault lies in the toolkits, which should make it even simpler to use POST than it is now, thus removing any excuse for not following the RESTful rules.
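To Jon's point about the toolkits: in Python's standard library, at least, the difference is a single argument, since passing data to urlopen turns the request into a POST. A quick sketch of my own, with hypothetical URLs:

    import urllib.parse
    import urllib.request

    # A GET: safe and idempotent, fine for simply fetching a feed.
    urllib.request.urlopen("http://example.org/feed.xml")

    # A POST: supplying the data argument is all it takes, so ease of use is
    # a thin excuse for using GET on operations that change state.
    data = urllib.parse.urlencode({"url": "http://example.org/", "tag": "rest"}).encode()
    urllib.request.urlopen("http://example.org/bookmarks", data=data)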
This is an excellent article on the various social bookmarking tools available and why it is worth using them: Social Bookmarking Tools (I): A General Review
Google Maps for the US has had a big impact showing what is possible with good map coverage linked to a web infrastructure.
Now they've launched a beta version for the UK and Ireland Google Maps UK and More (Google Weblog) but the level of detail on the zoomed-in maps is still very poor. Hopefully this can be improved, and can create the same buzz that it has in the US.
IRISH-TIMES -- Daybreak reader John McCormac pointed out to Irishblogs that Robin O'Brien Lynch (ROBL) extends Caoimhe Burke's 15 minutes of fame in an article explaining Flickr to readers of The Irish Times. "There are lots of theories and ideas as to the reason for Flickr's immense popularity," says Caoimhe Burke, a researcher in multimedia at DCU. "Essentially it is the most user-friendly photo-sharing service online, but it offers far more to its users than online photo storage.
"The communities that the service facilitates help to ensure its popularity among existing users and add a unique dimension that serves to attract new members.
"Admittedly it probably has more of a trendy image than other services. the service was conceived and developed by a group of young internet enthusiasts (take a bow, Stewart and Caterina) and has become the service of choice amongst the young and hip."
ROBL postulates "the most important of all the factors is Flickr's appeal to bloggers." Add to that Flickr's open API.
ROBL observes, "More often than not, a blog will carry the phrase 'view my Flickr' rather than 'view my photos'. As blogging becomes more and more popular in the mainstream, it is carrying Flickr with it to ever-increasing exposure."
"Bloggers are attracted to Flickr because the service provides tools that are specifically used for uploading images to blogs," says Burke in the Times article.
Did you know there are groups on Flickr for Ireland and Irish blogs? Isn't it a shame that the tag word "Ireland" does not rank among the top 150 most commonly used tags by Flickr users? Gold star for the person who correctly identifies the most frequently tagged Irish county in Flickr.
Robin O'Brien Lynch -- "Flickr of personality behind website's success" in "Technology" section of The Irish Times, April 15, 2005. More coverage in the Irishblogs Yahoo! group.
XML.com: XML Namespaces Don't Need URIs
This is a good explanation for why URI-based namespaces in XML cause a lot of problems for developers/users.
Jon Udell describes how to use on-the-fly self-rewriting web pages Jon Udell: Software as a service: have it your way. This uses a Firefox plugin, Greasemonkey, and the technique that is becoming known as AJAX (Asynchronous JavaScript and XML), though the XML is optional. Jon points out that there's nothing new in this, but that the prevalence of Firefox and Greasemonkey could make it a lot more common.
Dave Orchard has an interesting discussion of the advantages of using straightforward HTTP URIs (an actual web reference) over the idealistic URNs (abstract notations for a resource that may map to a HTTP URI) Dave Orchard's Blog: Why HTTP uris are better than urns and even id: uris for identifiers
When creating a URI based identifier, perhaps the most important decision is which URI scheme to use. Two of the most common schemes are the http: and urn: schemes. A common reason given for using URNs for identifiers, such as namespace names, is that an http: identifier appears to humans as a location and hence dereferencable. Another common reason is to come up with an identifier that is location-independent or that is "movable" from one location to another.
URIs have context
The first argument, that http: uris are "locations", is based upon incomplete understanding of the use of URIs. Any data type exists in a context, in this case URIs. The context will define the use of a URI, and includes social and technical context. A URI on the side of a van will convey the social meaning that it can be typed into a browser and some good stuff will show up in the window. Other contexts for the use of URIs include namespace names, references to documents, and identifiers for *things*. There is never the case that a URI is simply "found" without a context. The key point is that every use of a URI for an identifier has a context.
The use of uris in namespace names is enlightening. Imagine 2 scenarios, one with a urn and another with an http: uri. The namespace specification defines a context, which roughly speaking says that a namespace name SHOULD not be considered dereferenceable. Any software component that is written assuming that a namespace name MUST be dereferencable is violating the namespace specification, ie the context. It may be that the namespace owner has guaranteed that they will provide a document at the namespace name, but this must be on a subset of the entire set of namespace names. Clearly generic XML software should not be written to assume dereferencability of namespace names.
It is natural for a human reading an xml document with a namespace name that they do not know about to want to understand more about the namespace. This is why the TAG recommends providing a document at a namespace name that provides both human and machine readable information.
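The point is easy to demonstrate with any namespace-aware parser: the namespace name is handled as an opaque identifier and is never fetched. A small Python illustration (the namespace URI is made up):

    import xml.etree.ElementTree as ET

    # The namespace name below is used purely as an identifier; the parser
    # never dereferences it, whether it is an http: URI or a URN.
    doc = ET.fromstring('<a:item xmlns:a="http://example.org/ns">hello</a:item>')
    print(doc.tag)  # {http://example.org/ns}item -- just a comparison key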
Trendmap for "Micheal O Foghlu"
This links to a mapping of how often my own name shows up in various search engines. The technology can be used for more useful searches too.
Bill de hÓra tries to draw together some threads in the various debates on REST and WS-* over the past few years and proposes a list of challenges for each camp in the hope that some form of mutual constructive debate can lead to synthesis: Bill de hÓra: Synthesis
This site passed me by until I was reminded of it by Jon Udell's recent post: Jon Udell: Upcoming events in Keene, NH
The idea is that anyone can join in and advertise events linked to "metros", metropolitan areas. You can create new metros as needed. Jon was bemoaning the lack of the critical mass effect for this free service (as opposed to Flickr, the service for uploading and sharing images that has really taken off, and has just been bought by Yahoo!).
The RSS feeds and iCal events that can be autogenerated (and subscribed to) from the Upcoming.org entries give it the potential to add real value to the networked blogosphere....
Middleware Matters: SOAP vs. REST: stop the madness - Vinoski and Udell agree that people should just use what they need and drop the rivalry. Hear, hear.
There used to be loads of HTML tutorials on-line, and that's how I picked up things in the early days of the web. I haven't seen so many up to date versions, but this seems close: XHTML Tutorial
Interesting that Jon Udell seems to be quoted by REST proponents and SOAP proponents Jon Udell: Don't throw out the SOAP with the bathwater
Sifry's Alerts: State of The Blogosphere, March 2005, Part 2: Posting Volume
Sean McGrath is still sceptical about SOAP success stories: Sean McGrath, CTO, Propylon
Sober reflections on REST and web services WS-* Bill de hÓra: The integrator's dilemma
Various SOAP vs REST debates rumble on.
It is good to see some sanity remaining: Middleware Matters: SOAP vs. REST -- huh? (Steve Vinoski)
1+1 = 0 (Christopher Ferris)
For information about SOAP see W3C SOAP 1.2 and SOAP Discussion.
For information about Representational State Transfer (REST) see the REST WiKi, in particular see Roy Fielding's dissertation and Fielding and Taylor's ACM Transactions on Internet Technology, Vol. 2, No. 2, May 2002, Pages 115 - 150 article.
Eric Newcomer's post Eric Newcomer's Weblog: This Week - XML at W3C TP gives an insight into the XML standardisation processes in the W3C. For example, it was interesting to hear that rescinding XML 1.1 was discussed!
This month's Netcraft web server survey Netcraft: March 2005 Web Server Survey Finds 60 Million Sites finds over 60 million web sites.
The milestone comes just nine months after the survey crossed the 50-million mark in May 2004, as the growth of the Web continues to accelerate, approaching the dizzying pace of the height of the Internet boom. During the year 2000, the number of sites found by the Netcraft survey doubled from 10 million to 20 million in just seven months. More recently, it took 13 months for the Web to grow from 40 million to 50 million sites.
Roy Fielding posted to the apache-httpd-dev email list yesterday 27th Feb 2005:
MARC: msg 'Happy Birthday, we are 10'.
The recent Netcraft Feb 2005 Web Server Survey found Apache was used by over 40 Million hosts.
Apache is one core element of the so-called LAMP platform (Linux, Apache, MySQL/PostgreSQL, Perl/PHP/Python), a set of open source software languages and platforms that have enabled the huge growth in Internet and web-based software in the past 15 years. O'Reilly maintain a portal site for news relating to LAMP: ONLamp Portal.
Debunking SAML myths and misunderstandings discusses SAML (Security Assertion Markup Language), and some common misunderstandings and myths about SAML. A good read for anyone interested in digital identity using web technologies.
Myth: SAML is an authentication authority
SAML is an authentication protocol that is used between servers. You still need something that actually performs the login for you. All SAML can say is "you have logged in." For example, when an LDAP server authenticates a user, the authentication authority is the LDAP server even though the LDAP server may be using SAML to communicate the authorization.
In a complete authentication system, you still need to write a policy decision point to decide if a user may access a Web page. Additionally, you still need to write a policy enforcement point. This is a servlet or application that receives the authorization, checks the role and authorization, and then makes an assertion. Several companies provide commercial policy decision point and policy enforcement point solutions, including IBM.
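The decision-point/enforcement-point split described above can be sketched in a few lines; this is a toy illustration of the concept only (all names hypothetical), with the roles passed in directly where a real system would take them from a SAML assertion:

    # Policy decision point: decides whether a set of roles may access a page.
    ROLE_POLICY = {"/reports": {"manager"}, "/home": {"manager", "staff"}}

    def pdp_decide(roles, page):
        return bool(roles & ROLE_POLICY.get(page, set()))

    # Policy enforcement point: sits in front of the resource and applies the
    # decision; in practice a servlet or filter rather than a function.
    def pep_serve(user, roles, page):
        if pdp_decide(roles, page):
            return "200 OK: %s for %s" % (page, user)
        return "403 Forbidden"

    print(pep_serve("alice", {"staff"}, "/home"))     # allowed
    print(pep_serve("alice", {"staff"}, "/reports"))  # denied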
Tom Murphy has been writing in the Irish Independent (sadly not available as an RSS feed as yet) on blogging. He has posted follow-up information on his own blog PR Opinions. It's good to see blogging entering the mainstream in Ireland. Thanks to Bernie Goldbach for the cross links. The focus of Tom's discussion is the danger of employee blogging, and the ethical dilemmas that could lead to the loss of your job. Excellent stuff.
UPDATE: Checked Tom's logs and found he's been blogging since March 2002, about the same length of time as I have. Cool....
Twenty to Watch in 2005 -- Interview -- CMS Watch
I wonder, if all the overlapping areas of modern ICT did a similar 20 to watch, how many names would appear in all the lists! Interesting stuff. I'll take it as 20 to read (if they blog, and quite a few of them do).
In this quirky reporter style blog Read/Write Web: Web 2.0 Weekly Wrap-up, 30 Jan-6 Feb 2005 a lot of very useful information is covered. It reminds me of a lot of the content of John Battelle's Search Blog but with an explicitly journalistic stance.
Schneier on Security: Authentication and Expiration
Here Schneier debates the idea of an ongoing relationship and a mutual interest in maintaining it, versus on-line companies keeping your details even when you no longer wish them to.
Ask Jeeves Acquires Bloglines as this posting on Bloglines explains. Many bloggers have commented on this noting the potential synergy (e.g. John Battelle).
Bernie asks about IrishEyes: Mapping Irish Bloggers. Personally, I've signed up to GeoURL (though the service is currently down). This means that there is metadata on my home page giving my co-ordinates:
<meta name="ICBM" content="52.2068,-7.4236">
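For what it's worth, extracting those co-ordinates from a page takes only a few lines of Python (a sketch of my own, not part of GeoURL itself):

    from html.parser import HTMLParser

    class ICBMParser(HTMLParser):
        """Collects the ICBM geo co-ordinates from meta tags, if present."""
        def __init__(self):
            super().__init__()
            self.coords = None

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                d = dict(attrs)
                if d.get("name", "").lower() == "icbm":
                    lat, lon = d["content"].split(",")
                    self.coords = (float(lat), float(lon))

    p = ICBMParser()
    p.feed('<head><meta name="ICBM" content="52.2068,-7.4236"></head>')
    print(p.coords)  # (52.2068, -7.4236)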
Bill de hÓra argues that it is time to stop complaining about metadata and start using it. I agree, but note that all the success stories he cites use clever forms of social engineering to make it worthwhile creating the metadata in the first place. Cory's famous rant against metacrap was arguing against heavily engineered central systems that just assumed some poor schmuck would create the metadata needed, and of course no one ever did. Typical of this type of system are those imposed by managers on employees for "tracking" without analysis of the overhead of using the system. Anyone who has used an inflexible problem logging system will know what I mean.
Harry Halpin has just published an article XML.com: Reviewing the Architecture of the World Wide Web on O'Reilly's XML.com site.
It is strange that the formal architecture document, AWWW (Architecture of the World Wide Web), released in December 2004 and authored by the TAG (Technical Architecture Group of the W3C), has come 10 years after the event, so to speak. This is because the web arose from a series of specific specifications (i.e. the definitions of a URL, HTTP, and HTML) rather than from an overall architecture. The best practice has grown, to some extent organically, around the core.
This article puts the AWWW in context and summarises its key points, and is thus very useful.
This product looks like a great innovative way of experimenting with lightweight services: NetKernel
Version 2.0.2 has now been released, though this Jon Udell posting refers to an earlier version:
The 1060 REST microkernel and XML app server (Thursday, February 26, 2004).
On 15th December 2004 the W3C released the "Architecture of the World Wide Web, Volume One" as a recommendation.
Architecture of the World Wide Web, Volume One
I have just read up on XML Security: Control information access with XACML
by Manish Verma of Second Foundation. One of our MSc students, Mike White, is working on applying these techniques to an integrated Smart Space management system we're specifying and developing in the TSSG. It looks very interesting.
JefTel is a free secure peer-to-peer email system designed to make email itself a more trusted mechanism for exchanging messages.
Jon Udell's recent article in InfoWorld InfoWorld: Under Gmail's hood: October 22, 2004: By Jon Udell : APPLICATION_DEVELOPMENT : APPLICATIONS : WEB_SERVICES describes how Gmail has succeeded in producing a very smart User Interface whilst still being limited to a browser as a front end.
I can hardly believe it: .eu Domain Name Contract Signed: Registration Could Begin in Six to Nine Months. I have been hearing that this was about to happen for nearly 3 years now. Well now it has, and we may be able to use the domain within a year.
This posting discusses the issue of RSS scalability (popular RSS sites can get hammered by requests, thus making RSS hard to scale) and describes Bloglines' new API that allows programmers to use Bloglines as a cache for popular feeds O'Reilly Network: The New Bloglines Web Services
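Client behaviour matters here too: a polite aggregator uses conditional GETs so that unchanged feeds cost almost nothing to poll. httplib2, mentioned elsewhere in these postings, does this automatically once given a cache directory; a quick sketch with a hypothetical feed URL:

    import httplib2

    # With a cache directory, httplib2 remembers ETag/Last-Modified validators
    # and sends If-None-Match / If-Modified-Since on the next request, so an
    # unchanged feed is served from the local cache.
    h = httplib2.Http(".cache")
    resp, content = h.request("http://example.org/index.rss")  # first fetch
    resp, content = h.request("http://example.org/index.rss")  # revalidated
    print(resp.fromcache)  # True when the cached copy was still valid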
Here's a nice post Explaining RSS. Includes links to a video and a resource links page.
John Battelle posts a blog entry John Battelle's Searchblog: Search Volume by Type of User with a preview of some slides from Gian Fulgoni, founder of Comscore analysing how users use Internet search engines. Very interesting..... "the heaviest users of search, who are a minority of total search users, account for the vast majority of search queries. This seems a case where the tail is not as powerful as the head....And clearly, the more folks use the web, the more they use search. What happens when the majority of us are heavy users of search? Time to buy more servers...."
Two recent issues of The Linux Journal (March, April 2004) had excellent articles by Mick Bauer on the BalaBit IT Zorp firewall as an application proxy.
Although the Leopard Project is primarily aimed at providing a baseline structure for the easy development of eGovernment open source projects, by combining in one integrated fashion, in a method that works with a variety of Linux distributions, all the components of a basic LAMP system (Linux, Apache/SSL, MySQL/PostgreSQL, Perl/Python/PHP), this project may simplify the basic setup of many types of system relying on this core architecture.
XML.com articles on News Standards: XML.com: News Standards: A Rising Tide of Commoditization [May. 05, 2004]
It looks like a new version of Groove is in beta testing. Ray Ozzie, ex-Lotus Notes architect, has been working with his team to create a really productive piece of hybrid p2p workgroup collaboration software. They've also put a Groove Blog on-line to help disseminate information.
The reviews so far look fairly positive:
Uniting under Groove (InfoWorld) is an article by Jon Udell showing how Groove has embraced .NET and thus enabled use of Groove by a much wider range of users and developers. His weblog entry announcing this article gives further examples of the new flexibility.
I have been doing some background reading on PURLs (Persistent URLs) and shortened URLs like metamark (as used by the Perl6 developers). It may well be worth setting up my own short/persistent URL server. The PURL code is free for reuse. The main difference seems to be that PURL requires you to register as a user to create new references, whereas metamark can be anonymous (but entries cannot be edited afterwards). There are some others to look at. It is interesting that PURL was created by OCLC (mainly famous for providing a library cataloguing MARC record service), who seem to be very active in the IETF working group on URNs, which may solve this issue more gracefully.
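At heart, either kind of service is just a lookup table plus an HTTP redirect, which is what makes rolling your own tempting. A minimal sketch using Python's standard library (the table entries are invented):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Persistent short path -> current location. Editing this table is the whole
    # point: the published short URL itself never has to change.
    REDIRECTS = {
        "/perl6": "http://dev.perl.org/perl6/",
        "/blog": "http://www.example.org/weblog/",
    }

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            target = REDIRECTS.get(self.path)
            if target:
                self.send_response(302)  # "found elsewhere"
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404, "Unknown persistent URL")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), RedirectHandler).serve_forever()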
I recently read Jonathan Brazil's Weblog: Space Weather, an interesting link to an on-line space weather site, which would be very useful if you were thinking of launching a satellite.
However, I have an ulterior motive in linking to this message on Jonathan's weblog: I'm experimenting with the Movable Type feature called Trackback. This system allows weblogs to cross-reference each other and maintain lists of these references. Effectively this allows other webloggers to add value to your site by citing your weblog entry as the source for their ideas.
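Mechanically, a Trackback ping is nothing more than a small form-encoded POST to the cited entry's Trackback endpoint, answered with a tiny XML status document. A sketch along the lines of the Movable Type Trackback specification (the endpoint URL and values here are hypothetical):

    import urllib.parse
    import urllib.request

    def send_trackback_ping(tb_url, entry_url, title, excerpt, blog_name):
        """POST a form-encoded ping to a weblog entry's Trackback endpoint."""
        data = urllib.parse.urlencode({
            "url": entry_url,      # the posting that cites the target entry
            "title": title,
            "excerpt": excerpt,
            "blog_name": blog_name,
        }).encode()
        with urllib.request.urlopen(tb_url, data=data) as resp:
            return resp.read()     # small XML reply; <error>0</error> means success

    # Hypothetical endpoint, for illustration only:
    # send_trackback_ping("http://example.org/mt/mt-tb.cgi/42",
    #                     "http://example.org/weblog/space-weather.html",
    #                     "Space Weather", "An interesting link...", "My Weblog")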
The system of peer referencing closely mirrors how we behave in groups. We often value people who others have referred us to, and we often judge the value of things by the number of times they are referred to by others.
Ben Hammersley has listed a number of related mechanisms in this article on his weblog "Trackback in the Saddle Again". See also Sam Ruby's article on cohesion and "manufactured serendipity".
Other weblogging systems use similar mechanisms to this. In particular Radio Userland (as it maintains a central index of all postings) can auto-create these cross-links by parsing its own feeds. Also, Radio maintains a list of the most accessed Radio weblogs on its site.
As usual, Jon Udell has been instrumental in making me see the power behind these concepts of automated social engineering, particularly his post on Crossing the bridge of weak ties.