17 January 2012

The Biggest Problem with Technology

The biggest problem with technology is complexity. A technology is only as valuable as its ability to provide value for a feasible, commensurate cost. When a technology is too complex, it creates barriers to its use. Even a technology that has great value to people and organizations will see little uptake if it is too complex. Even if the value of the technology is commensurate with its complexity, the lack of use (i.e., adoption) will be apparent. There may simply be a general lack of interest in a technology, but if interest is high and adoption is low, there's a high probability that complexity is the cause. This can be summed up in a simple equation:

adoption ∝ 1 / complexity  (technology adoption is inversely proportional to its complexity)

Much of the following comes from a discussion on complexity while working on a Semantic Web solution for a financial services client a while back. That example illustrates a common pattern I've seen repeat itself in technology time and time again, and I expect it to continue:

One technology that fits this complexity equation is the Semantic Web. If you look at the Semantic Web initiatives (standards, notations, reference implementations, technologies and even products), a few obvious observations can be made.

First, most of the "members" of the Semantic Web community are in universities, working in R&D labs for commercial companies or governments, or in high-tech fields such as bio-tech.

Second, much of the work is designed by and for individual researchers and users. If you survey the open source Semantic Web technology, you'll notice very little work has been done to support multi-threading, concurrency or even performance at Web scale; that is left as an exercise for the user. Perhaps commercial products provide this, but they are too costly, and their trial versions are insufficient: time-wise, if you have a day job and a life, and capability-wise, if you want to do something slightly complex.

Third, much of the work to date is theoretical; however, there are some excellent examples of Semantic Web solutions. Unfortunately, these real examples are too complex and esoteric for mainstream extrapolation, but they do prove that, given sufficient motivation, one can make the Semantic Web real (if not really practical).

Which brings us back to the complexity equation. If you have the time and can hire or fund research to develop Semantic Web solutions, you can do so; most people can't, and most organizations that could, won't. Why? Because it is risky: the value isn't assured for the required investment, and there are many commercially proven technologies with less risk and sufficient value to fund instead. The Semantic Web is too complex, and therefore too risky or expensive, to warrant its use in general, mainstream computing solutions.

But what if you want, need or could use what the Semantic Web promises?

Faced with the challenge, promise and opportunity of semantic technology, a Web community formed to address the practical use of semantics. This wasn't intentional at first; they had a more practical need, and semantics was an implicit byproduct of their efforts. They surveyed what they knew and what was already out there to use and reuse: HTTP, HTML, XML, XHTML, tagging, RSS, Atom, AtomPub, and so on. Then, in a brilliant move, instead of standards designed by committee over many months or years, they created agreed conventions, where anyone in the community is able to participate, discuss and offer recommendations and ideas.

The conventions are extensible, allowing others to add to them, extend them and share them. Anyone can add this metadata to their existing information, extend it to meet their needs, embed it into their existing Web information or dynamically create and combine this information. If someone receives this information and is unable to parse some or all of this metadata, they can simply ignore it. How's that for practical? This is the essence of the lower-case semantic web, and it really started with the distributed community known as microformats.org, often referred to simply as microformats.
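
To make the "embed it, and ignore what you can't parse" idea concrete, here is a sketch of an hCard microformat and a deliberately naive extraction of its properties in JavaScript. The markup, the names and the regex are illustrative only; a real consumer would use a proper DOM or microformat parser.

```javascript
// An hCard microformat: plain HTML with agreed class-name conventions.
const html = `
<div class="vcard">
  <span class="fn">Ada Lovelace</span>
  <span class="org">Analytical Engines Ltd</span>
  <a class="url" href="http://example.com">home</a>
</div>`;

// Naive extraction: find the text of an element by microformat class name.
// A regex keeps the sketch short; don't parse real HTML this way.
function extract(html, className) {
  const m = html.match(new RegExp(`class="${className}"[^>]*>([^<]*)<`));
  return m ? m[1] : null;
}

console.log(extract(html, 'fn'));  // "Ada Lovelace"
console.log(extract(html, 'org')); // "Analytical Engines Ltd"
```

Anything a consumer doesn't recognize, say a custom class the publisher added, is simply skipped, which is exactly the extensibility described above.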

Embedding meta-information in content to provide basic, but potentially very rich, meaning on the Web was a pragmatist's dream. This pragmatic, practical approach to adding structured semantics to Web content, in a manner accessible to a wide range of Web practitioners at many levels, was the opposite end of the spectrum from the approach of the formal Semantic Web.

While the big R&D shops, universities and others with deep pockets continued their theoretical modeling and implementation exercises of the Semantic Web in their ivory towers, bloggers, wikimasters, webmasters and other denizens of the Web began to structure, define, add and share meta-information using the common tools at hand to create something new, practical and usable. Since microformats are made of very basic components of the Web (and Web 2.0), they are readily understood by many and varied denizens of the Web, from PhDs to corporate developers to junior high kids, who can put them to use in whatever context they require.

In essence, the upper-case Semantic Web has found itself circumvented by the de facto conventions (still not standards) of the lower-case semantic web used by many. Again, simplicity has trumped complexity in the utilization of technology; in this case, semantic technology.

Even if the Semantic Web is better, richer, fuller, more rigorous, etc., than the semantic web, simplicity will usually win over complexity. In this context, winning is really widespread use. Widespread use infuses more content throughout the long tail of the Web with semantics through simple meta-information, which in turn leads to more meaning in Web content. With the increase in meaning in Web content, we have a richer set of meaning with which to work in the next round of Web content creation and distribution. This is recursive, as more semantic content leads to more meaning, which leads to more semantic content...

Have a look for yourself at microformats and the Semantic Web, and see which best fits your semantic needs.


06 January 2012

What's an alternative to SOA?

Thought you'd never ask (maybe you didn't). This is not new, but hopefully it will encourage a few to look deeper into this option, especially those in enterprises with enterprise architecture teams. Using your favorite search site, have a look at Web-Oriented Architecture (WOA) or Resource-Oriented Architecture (ROA). I'll use WOA, but ROA is interchangeable for this post.

The good news about WOA is you probably have many of the technologies and infrastructure components, and know enough about the standards and technologies to get a WOA up and running fairly quickly. If you are an enterprise architect, you may not like the approach or may want to combine it with SOA, big, complex EA and any number of other things, but try to keep an open mind and keep things simple.

To keep things simple, define a small set of standards and technologies that will be used in your WOA. To get started, you will need a basic set of standards/technologies. For example, a good starting point would be:
  • HTTP/HTTPS
  • URI
  • JSON (and/or others)
  • REST
If needed, you can add RSS, XML and other open standards, content types, etc., to define the fundamental set of enabling standards/technologies for your organization's WOA. The objective is to keep WOA-based solutions to a small set of common web standards, and to resist any temptation toward more complex approaches. Keep it simple. If a situation arises that can't be addressed by your chosen standards, search for solutions, ask questions, or hire an experienced web consultant (probably not a bad thing to do early in the process, as he/she can help you define your WOA standards and technology options). The web, for the most part, has been built on these standards, so whatever you need to do should be possible... and if you can't figure out how, find someone who can or has!

Looking at the above list you have fundamental distributed service architecture capabilities:
  • HTTP/HTTPS - provides messaging and methods
  • URI - provides the resource identification (endpoint addressability)
  • JSON - provides a simple data representation
  • REST - provides the resource (instance, service) model
With these, you can define a very robust WOA. We have a way to send/receive messages, package messages, define messages, and consume/respond to messages.

You'll also need to consider security. There are several options, such as TLS (SSL), OAuth, OpenID, OpenLDAP, encryption, etc., with both open source and commercial offerings. Essentially, you need authentication and authorization at a minimum, and you may want more, such as accounting, auditing and encryption.

The heart of a SOA is the Enterprise Service Bus (ESB). In the SOA model, the ESB "sits in the middle," processing all (or most, or at least some subset of) message requests and responses; it may transform, parse, enrich, strip or burst each message as it processes it. If you are using WSDL and the plethora of web standards that can be layered on top of it, you'll have a fairly large envelope to process, which takes even more time. Processing these messages while maintaining SLAs and QoS requires significant infrastructure that is scalable and fault tolerant, and the end result is likely expensive. If you are using a commercial ESB... even more costs. Complexity and cost: what are the benefits, and when will they be realized?

The heart of a WOA is the web server. There are many options here, and you probably already have your own favorite set of web server technologies. That said, this may be a good time to have a look at more modern web servers (e.g., nginx). At the simplest level, a web server processes requests, ideally quickly and efficiently, and can scale to a large number of users. In a WOA, requests can be processed locally (by/on the web server) or remotely, by another web tier that hosts/exposes REST endpoints. The REST service implementation is up to you; what it does and how it does it are also up to you. That's really it.

In SOA, a collection of web services isn't sufficient, as you need service infrastructure too. In a WOA, a web server and REST services are about it. If you want SLAs, QoS or other analytics, you can add those to your web server or your current infrastructure if you don't already have them. SOA builds on the basic web infrastructure (adding more infrastructure); WOA is the basic web infrastructure. The Internet itself runs on WOA; it should meet your needs as well.

Everybody is doing it!

Everybody is doing it...

Are they? How do you know? Just because it's popular doesn't mean everybody or even a majority are doing it. Even if everybody or even most are doing it doesn't mean it is right or always right for everything. And, if it's being done, how well is it being done? How do you know?

What are some things everybody is doing?

Service Oriented Architecture (SOA)

How's that working out for you? Are things better post-SOA than pre-SOA? Have you performed a cost/benefit analysis? Are things more efficient or cost effective? Are things more or less complex? Are you able to respond better to business requirements, with faster time-to-market? Have your software, hardware, support, maintenance and labor costs increased due to the need to implement, maintain and support SOA? Have your availability, unplanned outages, SLAs and QoS improved? Can these improvements be attributed to your SOA initiative?

What large public web sites offer SOA (SOAP, WSDL) service endpoints vs. offer REST endpoints?

I'm not saying SOA is a bad idea, just saying it could be wrong for your business model or simply overly complex and expensive for your requirements or your organization.

One obvious thing to do is look at who is promoting any particular approach, model, technology, etc., and consider why they are promoting it. It is likely they have something to gain from the use of whatever they are promoting. Yes, this includes me too.

One of my motivations, besides looking for consulting opportunities, is architectural simplification. Things have a tendency to get complex on their own, without having to start with a complex architectural approach. For my customers, I make assessments based on many factors to determine a recommended approach, solution and/or technology that will meet their requirements and expectations. Having two or more options with pros and cons is always a good idea.

Who is promoting the use of SOA? Is it your IT department's C-level / SVP execs? Is it a consulting firm? Is it your own "Enterprise Architecture" team/personnel? Is it your software vendors? I find it is not often IT execs promoting SOA or EA unless they have been influenced by some other player. IT senior execs, other than tech/innovation-focused CTO / Labs / R&D types, are predominantly cost/efficiency/risk-management focused: keeping costs under control, keeping business systems up and running, and meeting their customer requirements in a timely, cost-effective manner. Even if something promises to be better, new things introduce disruption, cost, uncertainty and risk. Even more so if these new things are complex.

SOA isn't bad or good; it fits or it doesn't, which depends on your specific requirements and your capacity to support it.

So, keep an open mind, consider alternatives and options, and consider the motivations of those who are promoting SOA, EA, Clouds or anything else for that matter. Complexity is expensive, and once in place it is difficult/costly to remove.  Be an informed, educated technology consumer. Just sayin'.

05 October 2011

SSJS = Server Side JavaScript

LiveScript was initially a very nice, general-purpose programming language; however, it was subsumed into the marketing hype of the day, renamed, and force-fit into a company's web browser product to manage the HTML DOM, and we have the often/much-maligned result: JavaScript.

I know many web developers, especially in enterprise settings, who barely tolerate JavaScript, others who "hate" it (their words, not mine), and others who have banned its use, or at least limited it, for a myriad of reasons, mostly security. Web security is a very convenient argument if you want to limit something... even if there is a kernel of truth to it... however, that is mostly an implementation issue that should not reflect on the language itself, IMO.

FWIW/IMO, ECMAScript is maturing rather nicely. The JavaScript language has a great set of programming features, some of which originated in one of my favorite languages-that-never-really-was-successful, Self. Dynamic, prototypes, functional, imperative, OO... very flexible. For more information on the language itself, I highly recommend Eloquent JavaScript.
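
A small sketch of those features working together: prototypal delegation (the Self idea) plus first-class functions. The objects here are made up for illustration.

```javascript
// Prototypal inheritance: objects delegate directly to other objects,
// an idea JavaScript picked up from Self. No classes required.
const animal = {
  describe() { return this.name + ' says ' + this.sound; }
};

// dog's prototype is animal; property lookups fall back to it.
const dog = Object.create(animal);
dog.name = 'Rex';
dog.sound = 'woof';

// First-class, higher-order functions: functional style on plain data.
const loud = ['woof', 'meow'].map(s => s.toUpperCase());

console.log(dog.describe());  // delegation finds describe() on animal
console.log(loud.join(', '));
```

Dynamic, prototypal and functional at once, in a dozen lines.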

V8, Google Chrome's JavaScript engine, has reinvigorated this space, though I'd like to see more interest from Mozilla in Rhino and SpiderMonkey... and to see other players enter this space. By far, the most interesting and promising SSJS option is provided by node.js. I'd like to hear about other options and/or experiences using node.js or SSJS; please share 'em if you got 'em!

No technology is perfect, and you have to determine which technology is the best fit for purpose for your solution.  I have a core set of technologies that I use for what I tend to call "nextweb" or realtime web solutions, and SSJS is a foundational component...and so far node.js is the best option I've found for SSJS. 

In the past, JavaScript has been limited by its implementations and where it has been "force-fit," though the number of client-side JavaScript frameworks and SSJS options indicates that these limitations no longer hold. With a small core set of technologies, most with open source options, and open standards, the nextweb can be utilized today by any organization willing to capitalize on it... if only they can give up some of their reliance on enterprise "bloatware" for their web solutions.

SSJS, just think about it, and maybe, just do it!

30 September 2011

Wow! What a wild ride the last few years! Digging into my old web presence I found this blog. I haven't used it since 2008, and I'm surprised to see it is still here. Then, I was involved in Semantic Web solutions; since then, I refocused on realtime web solutions which are also amenable to the Cloud.

The last few years have been a wild ride technology-wise and I have an updated perspective on the web...nothing earth shattering or paradigm shifting if you already know this stuff...here is a high-level summary:

  • Microformats are a very good thing; the tactical, practical semantic web (lower case)
  • JavaScript is a wonderful programming language, in spite of it being forced into the HTML DOM
  • jQuery is an amazing and useful framework.
  • REST is all you need for distributed web architectures (sure, there are a few more foundational standards)
  • Realtime web is viable, but not well-known in terms of value and technologies to most mid-large size organizations who stick with their big name, high cost technology solution providers and bloatware.
  • Realtime web solutions can be designed to use a small set of common standards and a wide range of open source software.
  • Enterprise architecture - where's the value realization? For those who have embarked on enterprise architecture initiatives and have large teams of enterprise architects with all the necessary certifications (and/or consultants too): have you seen real ROI? Have you broken even? Are you even tracking this?
  • WOA > SOA
  • Artificial Intelligence and Knowledge Engineering provide implicit value but don't sell well if stated explicitly and directly.  One needs to use more popular buzzwords and euphemisms.
Going forward, I will focus on my experience and findings regarding WOA and realtime web architectures, designs, technologies and web standards, as well as using the same on the server side with mobile and desktop clients (web, native), JavaScript and related frameworks, and server-side JavaScript. Of course, I will continue to ask for information on enterprise architecture value realization and ROI.

Thanks for reading!

28 July 2008

Semantic Web Open Source Software Hole?

If you have been working or playing with any of the open source software (OSS) efforts that implement the W3C Semantic Web standards, you may have noticed there is a major hole: thread safety and concurrency! Most of this OSS is very good for learning, experimenting or creating single-user solutions, but beyond that, you are out of luck.

That is, if you want to use semantic OSS libraries or code, you have to provide the multi-threading support yourself. Impossible? No, of course not, but to do so you have to provide a threadsafe/concurrent environment or get the OSS source code and rewrite it. Neither of these options is appealing. This is extra work, a means to an end. If you want to provide an OSS-based semantic solution on a website where there is a possibility of two or more concurrent users ;) you will need to do some work.

Of course, buying a commercial product is an option if you can afford it and are willing to live with the vendor's release pace, no access to source code, etc. That's fine... but then you aren't using OSS anymore either.

If you are going to provide open source software for semantics, or just about anything else, and you want your software to become popular and enjoy widespread use in the "real world" - you need to provide thread safety, concurrency and even consider multi-user environments (e.g., like a web server!). If not, then your OSS will not likely see much use or notice.

I think the main reason for the slow uptake of semantic technology (and some other advanced technologies) on the web is not complexity, standards or usefulness of the concept, but rather the state of Semantic Technology OSS (specifically, the lack of thread safety and concurrency support) and the cost of commercial semantic technology.

It's time many OSS projects recognized the realities of software for the web: multi-user environment support is a good place to start.

What do you think?

10 January 2008

REBOL 3, where art thou?

Nearly a year since I blogged about my anticipation of REBOL 3 and its imminent release, and I'm still waiting. Anyone else waiting too? The latest status update was last October. Now, the reason things "appear" to be so quiet on REBOL 3 is simple: since I'm not an active participant in the development of REBOL 3, I don't have access to the information flow. This cuts down the noise the actual, active developers need to deal with, and with the promise of new, open AltME worlds once REBOL 3 is released, it all makes sense.

While waiting and looking in from the outside, I took a quick look around the REBOL sites and found (from here) that the alpha is available (scroll to the bottom of the blog entry to get the link). Kudos to the REBOL 3 team! (And many thanks, too!)