Every day, it seems, I become more RESTful. As I look out over the ever-increasing morass that is the SOA / web service standards, emanating from a plethora of standards bodies, consortia and vendor alliances, I often wonder when the complexity mavens will cease their complexity expansion. Unlike global warming, this complexity is man-made.
Having worked in, and continuing to work in, SOA, EAI, JavaEE, CORBA and other distributed solutions and technologies for many years, I see one thing they all have in common: each has suffered from design-by-political-committee and/or a level of complexity that limited its usage and market growth potential. What has the IT industry learned from these efforts?
Well, I learned, I experienced, and I still live in this morass-filled enterprise SOA world. Unlike some, after I learn and experience, I like to improve; I like to simplify what I have learned and experienced. So I thought: how does one make sense of all these past attempts, of the ever-growing number of standards and technologies for web computing for business, government, science and more? How does one separate the necessary from the unnecessary? How can one do SOA in a simplified manner? What are the basic elements required to enable SOA, b2b, m2m, s2s, c2b, c2c, etc., in a scalable, fault-tolerant, manageable manner with dynamic discovery, production and consumption? What is a viable, basic, simple model? One doesn't have to look too hard to find examples of a simple, successful model for SOA web services, but one needs to distinguish the real thing from the mere usurpers of the term, who often live in marketing... The simplification? Web 2.0.
I won't provide definitions for SOA, Web 2.0, REST, etc.; those are all over the web. What I want to do is look at where SOA / Web Services emerged and where REST / Web 2.0 emerged: a comparison that I hope will show the utter futility of one approach, destined for limited success like many of its enterprise predecessors until it collapses under its own weight and complexity, versus the simple, lightweight alternatives that are providing value now, at a much lower cost of ownership, and that continue to grow their value proposition exponentially! This is just a stream of thought. You will notice that I am a firmly convinced proponent of REST and Web 2.0 approaches, though I still work in a world dominated by SOA web services and the never-ending standards explosion, so here I vent:
Let's start by abstracting and describing the basic problem model. First, let's divide the web world into two roles: Consumers and Producers. Next, we'll divide this world into two perspective (or approach) camps: Simple and Complex. For each role, either perspective, or a combination, may be used. On rare occasions the two are combined; invariably, though, the complexity takes over.
Consumers and Producers
The web world is divided into producers and consumers, though these roles may be, and often are, dynamic and reversible depending on context, i.e., a producer may also consume and a consumer may produce information in a given interaction. For example, when one uses a search engine, one enters search criteria that are consumed by the search engine, which produces results for the requester (the original consumer), who consumes the response. The search engine consumed the request, but it may also use the original search request and subsequent selections to do more than simply create a search response; in effect, it consumes the request in a way that assists its production / servicing of future requests. By providing value to consumers, the service provider also receives value from those who consume its services.
Producers produce content or information that some consumers may want to consume, so producers need to decide what they want to produce and how to let potential consumers find and consume it. Producers typically produce information with the goal that it be consumed, so they must give potential consumers a way to find and consume the information they produce.
As a producer, or service provider, I need to let potential consumers find me, learn what information I produce, and learn what I require in order to produce it. For example, if a producer offers to provide a stock quote, it should tell potential consumers that fact, as well as what consumers must do to get a quote and what the quote response will look like in form and content. Provide a valid stock symbol, perhaps, a currency, and maybe a stock exchange? What are the defaults and acceptable values? Does the consumer get back a dollar amount? Are these delayed or real-time quotes? Are there charges or disclaimers, etc.?
Consumers need to be able to find producers that produce the information they are interested in consuming, and to understand the requirements for consuming a producer's service, the expectations and terms and conditions of use, and the request and response format(s) supported, required and offered by the producer.
So far this is simple. There are plenty of web sites that provide static and/or dynamic content, aggregator web sites, web services that provide content / information, and mashups that combine and consume the products of multiple producers to provide a new whole... then the complexity can start in, if one is so disposed to allow it.
Sometimes the interactions between consumers and producers have formal definitions, possibly one or more alternatives. For example, a producer may provide a URI to which one may simply send an HTTP GET, or a description of how to format an HTTP POST request, or maybe it is an RSS or Atom feed at this URI. Maybe it's more formalized: you may have a set of parameters and values, or an XML document described by XSD, and/or you need to authenticate via HTTPS. Or maybe it's a SOAP endpoint whose address is defined using a WS-Addressing endpoint reference, whose operations and requirements are described in WSDL, according to WS-RF or WS-Transfer or both via WS-Resource Transfer (are you with me on the growing complexity?!), with a formalized SLA defined by some other standards, with transactions defined by other standards, with parameters and formats defined by several schemas and namespaces and resources and... and... and... Whew!
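To make the contrast concrete, here is a minimal sketch of what the simple end of that spectrum looks like for the stock-quote producer described above. The endpoint URL, parameter names, defaults and response format are purely hypothetical; the point is that the entire contract is a URI, a few query parameters and a response, with no WSDL, WS-Addressing or heavyweight tooling anywhere in sight.

    # A minimal sketch of the "simple camp" stock-quote consumer. The endpoint,
    # parameter names, defaults and response format here are hypothetical.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def get_quote(symbol, currency="USD", exchange="NYSE"):
        # Everything the consumer needs to know: the URI, the parameters, the defaults.
        params = urlencode({"symbol": symbol, "currency": currency, "exchange": exchange})
        with urlopen("https://quotes.example.com/quote?" + params) as response:
            return response.read().decode("utf-8")  # e.g. "<quote symbol='IBM' price='97.45'/>"

    print(get_quote("IBM"))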
Now, I'm all for consistency and clearly defined descriptions, definitions, formats, etc., but at some point all this complexity adds up: it adds cost, reduces performance, increases error potential, becomes more difficult to use, etc., all of which overshadows the whole reason a consumer wanted to consume the producer's product in the first place... to get something of value done. That value may be profit, learning, etc., but it is useful work. If I have to put a dollar in and my ROI is twenty cents, why would I do that? That's like making charitable contributions to the Red Cross where much of what you donate goes to pay top executives, not to the people who actually need it, you know, the ones you're making the donation to help!
So, why work so hard and get back less value than you put in? If you go down the standards-based SOA web service path...well, there you are and there goes your ROI!
If you want more consumers to consume your products, you need to provide access as easily and cheaply as possible, so that your products are consumable by more consumers. It goes without saying that your information product must be accurate, correct, precise and valid! If your producer is not easy to use, then someone else will provide a product that is better, cheaper, faster and/or easier, and it will get used by more consumers. If a consumer needs a CS PhD to use your producer, to consume your content, well, there are fewer CS PhDs out there than entrepreneurs, high school students, Moms, Dads, Little League teams, small businesses and other types of information consumers... you need to make it easier for anyone and everyone... so, three simple rules:
Rule #1: Simple is Better than Complex. Always.
Rule #2: Simple does not imply less rigor, just less work, i.e., work smarter, not harder.
Rule #3: If it's getting more difficult or complex, you are probably doing something wrong (or you have a CS degree and just can't help yourself. I have a CS degree and I often need to remind myself of these rules!)
Perspectives: Simplicity vs. Complexity
Survey the Web. Look at the most successful information producers / consumers / mashups out there and see what they are using, what their technologies are, where the information they consume comes from, and the form of the produced/consumed information. Now look at what enterprise IT shops are doing: just do a job search on “web services, SOA, architect, enterprise” or look at the IT job postings of the Fortune 1000 companies. Better yet, go read some specs at OASIS, then try to put those specs to good use in a simple, easy-to-use (produce/consume) solution.
What you will find are the two basic camps mentioned before, which I call: 1) the simplicity camp and 2) the complexity camp.
First, I'll describe the complexity camp. Many in the complexity camp work in enterprise IT. The complexity camp is exemplified by big companies with big IT budgets and staff that do much processing over the Web, both internally within their organization and externally with customers, partners and suppliers. They have a lot of legacy systems made up of all those prior technology efforts that linger on, seemingly forever... there is never time to do more than keep them on life support, so they in turn continue to provide a source of complexity that grows over time. The complexity camp likes standards. It likes standards so much that it wants lots of 'em: closed, old, new, contradictory, overlapping, vague, constraining and open standards. All these standards then require complex products that are used to create solutions using all these myriad standards. Many of these standards have alternatives from other standards bodies that do some of the same and some different things, so you need people on staff to help you navigate all the different standards, their status and their future direction, and to coordinate all these standards, etc. Once you have that figured out, you'll need to add your own corporate, division and area standards and interpretations, maybe some Six Sigma or other fifth-element processes and procedures, and of course more staff to oversee this too. Next, you'll need some heavyweight tools to develop solutions using all these standards. Yes, there are some open source tools and technologies you can use, but to be most efficient you'll probably need to buy some commercial tools too, and these will cost you time and money, maybe small money, maybe big money... it all depends on what you want to do, and how, and why, and how frequently, and other factors. Then, of course, you'll need weeks of education to learn how to use those tools. Finally, you are prepared to develop your customer solution. What were those requirements again? So the complexity camp is often found in traditional enterprise IT, staffed with alchemists of the BS-in-CS variety, where complexity is enterprise IT's rationale for existence; just business as usual.
Ask any CEO or business executive if they would be interested in reducing the overhead, cost, size, etc. of IT and/or improving its responsiveness, ROI and effectiveness. Care to guess what their answer might be? And so many people in IT wonder why IT is being outsourced and offshored? Hello! Hello? Anybody home?
Like so many human organizations that become self-fulfilling and self-sustaining, carrying on the work, form and vestiges of the past long after they cease to be required (non-profits to cure incurable diseases, civil rights organizations that now advocate for preferential treatment rather than actual equality, etc.), not all, but many, IT shops are like this too. Most enterprise CIOs and IT shops are like dinosaurs that haven't realized they are extinct! Case in point: what enterprise IT shop has had to deal with the growth rates and capacity expansion experienced by inventors / practitioners of Web 2.0 concepts such as Google, YouTube and MySpace? Few, if any! Nor could their IT shops assimilate such growth and speed if they had to! They would collapse! Case closed.
I make no attempt to hide where my sympathies lie; just look at the reality of the Web today: who are the players, the movers, the shakers and the leaders? Web 2.0 companies! The Simplicity Camp is the only realistic, practical and pragmatic way to go if you want to provide viable solutions in a cost-effective manner.
The Simplicity Camp
Tim O'Reilly initially defined Web 2.0, and many others have adopted and expanded the concept. Much is being written about it and, more importantly, much is being done with it now, today!
Driven initially by constraints (money, time, other resources, etc.), the simplicity camp looked for alternative ways to approach the problem of web computing: doing more with less. It started with the tried and true technologies of the Web, HTTP, HTML, then XML, XHTML, etc., plus low/no-cost open source software stacks such as LAMP, MAMP, SAMP or even WIMP. Then more open source emerged that could assist the simplicity camp: simpler, more efficient technologies and approaches, often based on those same tried and true Web technologies: HTTP, URI, XML, Namespaces --> a seminal dissertation by Roy Fielding --> REST, and with the advent of Web 2.0 a new, simpler way was defined.
Then came the emergence of complementary technologies such as RSS/Atom, wikis and AJAX, plus producer services made available by some big companies in a simple, consumable format, i.e., one did not need a CS degree and arcane alchemic knowledge of SOA web services (SOAP, WSDL, UDDI, and more WS-* specs and standards) in order to consume the services offered by Google, Amazon, Yahoo and many other well-known industry giants. In fact, more consumers consume information from producers using plain old HTTP GET/POST, REST and/or Web 2.0 than via standards-based, enterprise SOA Web Services. Recently I heard Google was "sunsetting" (phasing out) its SOAP API for search in favor of an AJAX API. (AJAX is a cornerstone Web 2.0 technology which has been so successful that commercial and open source vendors of web-services-based SOA products have added it to their products! Wonder what that really says?!)
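And "plain old HTTP GET" really is as plain as it sounds. Here is a rough sketch of consuming an Atom feed that way; the feed URL below is made up, but the whole stack is a GET and a little XML parsing, with no SOAP toolkit anywhere in sight.

    # A hedged sketch of consuming a producer the Web 2.0 way: plain HTTP GET plus
    # a little XML parsing. The feed URL is hypothetical.
    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    ATOM = "{http://www.w3.org/2005/Atom}"  # the Atom XML namespace

    def latest_entry_titles(feed_url, limit=5):
        with urlopen(feed_url) as response:
            feed = ET.parse(response)
        # Pull the titles of the most recent entries out of the feed.
        return [entry.findtext(ATOM + "title") for entry in feed.iter(ATOM + "entry")][:limit]

    for title in latest_entry_titles("https://example.com/feed.atom"):
        print(title)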
While the standards morass around web services based SOA plods along meandering down its many political paths, the REST / Web 2.0 proponents are providing simple, elegant solutions that enable the web for all to consume, produce and more.
If you're in an enterprise IT department doing SOA and Web Services, and "Enterprise" and/or "Architect" is in your title, you need to have a look around outside your walled garden.
Which camp are you in?
29 January 2007
06 December 2006
Whither AI?
I took a few classes on AI back in the '80s, did some Prolog, Lisp, expert system, neural net and genetic algorithm work in the late '80s, and I've been hooked ever since. (I also recently did work on mobile, intelligent, multi-agent systems for which my employer submitted patent applications. But agents are another blog topic!) Something about software that seems to reason, seems to "analyze and think," derives solutions, adapts, works with incomplete information, refines, etc., is intriguing. I had a professor who said the best form of enterprise computing was batch processing (at that time interactive systems such as TSO, CMS and CICS were significantly more expensive than batch). When asked why, he said, "Because there is little or no human interaction involved, and people, especially skilled people, are expensive." Batch systems processed data unattended, eliminating many human jobs (long before the term IT was commonplace, it was known as DP - data processing). Automation was a key objective in data processing. I spent many of the early years of my career automating and maintaining automation. You don't hear the terms automation or data processing mentioned much anymore, let alone as a key objective.
Unfortunately for automation proponents in the DP days, not everything lent itself to automation with the technology of the day. Knowledge workers - workers who had knowledge about the business - were skilled resources who were expensive to develop and to maintain. If only there were a way to automate the thought processes of these knowledge workers. This was one of the key business drivers behind the AI wave of the '80s.
Essentially, the processing power was not available at a price point that allowed AI to deliver on its promise, so the AI bubble burst. I have an issue with the position that "AI failed to deliver on its promise": while it may not have lived up to all the marketing hype (what technology has?), it did deliver much of its promised value. I know, because we delivered working expert system, GA, and neural net solutions; however, with the "enterprise" thinking of the day ("bigger is better"), solutions did not come cheap. Nor were they easy to implement, due more to corporate politics than to the AI technology of the day.
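And the core idea was never exotic. The sketch below is not one of the systems we shipped (those were Prolog and far larger); it is just a toy forward-chaining engine with made-up rules, to show that encoding a knowledge worker's if-then expertise as data and letting an engine derive conclusions from facts is a small, tractable idea.

    # A toy forward-chaining rule engine: the heart of a simple expert system.
    # Rules and facts are illustrative, not from any real system.
    RULES = [
        # (conditions that must all be present, fact to assert)
        ({"payment_late", "balance_high"}, "credit_risk"),
        ({"credit_risk", "new_customer"}, "require_deposit"),
    ]

    def infer(facts, rules=RULES):
        facts = set(facts)
        changed = True
        while changed:                      # keep firing rules until nothing new is derived
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"payment_late", "balance_high", "new_customer"}))
    # -> includes the derived facts 'credit_risk' and 'require_deposit'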
My first commercial expert system was developed using Borland's TurboProlog (released during the reign of Philippe Kahn; it is now at PDC), which, I think, was $99 a copy at the time! Now, if you had Prolog developers, or hired them, how is this not cost effective?! Noooooo, we had to buy a multi-million dollar expert system monstrosity that probably cost seven figures to acquire and more to run (with mainframe components) annually, not to mention the runtime fees (anyone else remember runtime fees?!). It was also harder to use than writing solutions in TurboProlog. Gotta love those "pointy-headed" bosses and decision makers. That's when you realize some organizations just have too much money to burn. They could have spent a fraction of that using us "knowledge engineers" and TurboProlog to do expert systems. All of us had mainframe experience; we could have made it work with the mainframe or the VSAM files, IMS and DB2 systems - and for much less!
My first neural network application consisted of a bunch of ANSI standard C shareware code and a now-forgotten ANSI standard C compiler, which I used to create a POC. When we sent RFPs to the neural net vendors of the day, the price tag was $600k+ for development, plus runtime fees, plus transaction fees (anyone else remember runtime fees?!). I had a low-cost alternative using NeuralWare (which, thankfully, is still going strong! Great products, IMHO!) for two developer seats, code generation, training for two, etc., which would have enabled us to do our own neural net solutions for a start-up price of about $20k. When I stated this, my "pointy-headed boss" at the time got angry at me and at the fact that my total estimate was about $80k to do the solution in-house. The fact that I was able to demonstrate success with shareware was not a welcome fact either (the POC's success was what led to the RFP to vendors in the first place). It was possible to deliver cost-effective AI solutions in 1989-1990, if you didn't have a pointy-headed boss! Why they paid us "Knowledge Engineers" to be knowledge engineers and to do knowledge engineering, yet green-lit very little of what we did for production, I may never know, but at least I had fun and learned much.
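None of that shareware C code survives here, but for a sense of scale: a minimal perceptron, the core building block of much of that era's neural net work, fits in a dozen lines. The training data and learning rate below are purely illustrative.

    # A minimal perceptron sketch, for scale only; not the POC code described above.
    def train_perceptron(samples, epochs=20, rate=0.1):
        n = len(samples[0][0])
        weights, bias = [0.0] * n, 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                # Step activation, then nudge the weights by the error.
                output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
                error = target - output
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # Learn a simple AND gate from four examples.
    AND_GATE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_perceptron(AND_GATE))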
Fast forward to today: AI has made a strong comeback and is being used in many solutions and many domains. You can find freeware, shareware and open source for just about any AI sub-genre. So if it is so available, where is it in the enterprise? Unfortunately, mainstream enterprise development has yet to make AI a visible part of its solutions. Aside from a few autonomic computing, operations management, BI, analytics and rules engine offerings, very little AI is mainstream (outside of gaming, I mean!). We now have the open standards, the networks and the low-cost compute power to really apply AI in a cost-effective manner. Why we don't build intelligent software as a matter of course in enterprise development is another mystery. I think this is due, in part, to a lack of awareness. Whether your CS program even had an AI course or concentration in the last ten years is an open question (I don't follow current university curricula, so I could be mistaken).
Over the years, I have had the opportunity to mentor younger/junior programmers, designers and architects. The few I considered to have the "right stuff" (and worth the time) would often ask for advice, which would often lead to the topic of AI. I encourage those who joined the IT ranks after ~1995, or who already knew about Java before graduating, to look at all the old (and new!) books, papers, conference proceedings, languages, tools, etc. on AI theory and practice. Many of these sources are freely available on the Web. Be aware, learn AI, and apply it to enterprise solution development. The time for mainstream intelligent software is long overdue.
Let me know if you are doing or applying AI in an enterprise or commercial setting... I'd love to hear about it, and hopefully others would too! Whither AI? Everywhere.
05 December 2006
REBOL, REBOL, Waiting for REBOL 3.0!
If you haven't heard of, or tried, REBOL, you should have a look. What it does, and how easily it works, is just amazing. The size-to-power ratio is incredible. This is the way much business computing could and should be done. If you are an SMB (small to mid-size business), you should consider REBOL (large enterprises too, but then the bureaucracy and politics would probably be problematic, because REBOL can actually help your organization!).
I've been dabbling with REBOL for the last few years. Based on my experience with current and prior versions, the next version, 3.0, promises to be even more amazing. So, Carl, I'm waiting patiently for version 3.0; please update the timeframe estimates!
I plan more blogs on REBOL in the future. It is a key tool in my personal toolbox. REBOL is different, innovative, simple, elegant and powerful: all things I like in a programming language, and it is more than a programming language.
Disclaimer: I don't work for REBOL Technologies, nor have I been compensated to endorse or develop REBOL solutions. (I am interested in REBOL solution development opportunities. If you are an SMB in need of enterprise software solutions, let me know.)
Labels: lightweight scripting, programming language, REBOL, SMB
Web 2.0: The new RAD
Web 2.0 is a great concept. If you take what I consider to be the seminal, definitive definition of Web 2.0, a key aspect is the coupling of data and processing logic... an extension of the object-oriented concept of encapsulated state and behavior.
Other interesting aspects of Web 2.0 are also similar to aspects of the RAD trend in the 1990's Client/Server era. Many of the Web 2.0 vendors are enabling a web version of RAD - a RAD for Web Applications (RAD4WA) or Rapid Web Application Development.
Even though they dropped support for the Mac version of their tool :( , when it comes to Web 2.0 application development I like ActiveGrid. One of the first things you'll notice about AG is that nearly every artifact is represented by some open standard; the rest are open scripting language artifacts or are represented by an XSD. Developing a Web application that integrates web services and databases is simple, straightforward and quick.
Another brilliant move by AG is their server model. I can only imagine the discussion that led to their server design: we don't want a cluster model, we want a grid of low-cost, commodity processing nodes (LAMP stack servers) that will provide enterprise scalability, fault-tolerance, etc. The big vendors are spending huge amounts of money to develop grid / utility computing; what will we do? What does nearly everyone have, or at least have access to? Apache Web Server. We'll write a grid module for Apache. Absolutely brilliant: simple and elegant.
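To be clear, I have no insight into how their Apache grid module is actually written; the toy sketch below is just my reading of the idea: treat a pool of commodity LAMP nodes as one logical server and spread requests across them round-robin. The node addresses are made up.

    # A toy round-robin dispatcher over a pool of commodity nodes; purely
    # conceptual, not ActiveGrid's implementation. Node addresses are invented.
    import itertools
    from urllib.request import urlopen

    NODES = ["http://lamp-node-1", "http://lamp-node-2", "http://lamp-node-3"]
    _pool = itertools.cycle(NODES)

    def dispatch(path):
        # Each request goes to the next node in the pool.
        node = next(_pool)
        with urlopen(node + path) as response:
            return response.read()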
Finally, AG lets you use any or all of a myriad of scripting languages and/or plug in your own. If you are interested in Web 2.0 or in developing Web 2.0 applications, AG is worth a look.
Back to RAD...
RAD worked well. We were able to offload processing from expensive mainframes onto low-cost PCs and Unix boxes (where the databases lived). For the most part, the RAD solutions were primarily data presentation and data entry with input validation (sounds like what most commercial web sites provide, no?).
RAD tools and solutions had their issues and limitations too. They worked well for a certain class of applications where the business logic was not overly complex; they didn't have separation of concerns (e.g., MVC), which was invented in, and lived nearly exclusively in, the Smalltalk world, as did many of the design patterns back then. As complexity rose and maintenance cycles accrued, they became very brittle and difficult to maintain. Not to mention they were proprietary.
During part of the RAD era, I worked for a Smalltalk vendor named Digitalk, Inc. Unlike most of the RAD tools of the time, ours was based on Smalltalk. The RAD tool was called PARTS - an instance-based RAD programming tool. That is, you built your Smalltalk classes as usual and used PARTS to create instances of any of the classes in your image and knit them together into an application. You could even combine instances into a component (called a Part) that you could save in Parts catalogues and reuse in other applications. Very simple and powerful. PARTS, though available standalone too, was part of Digitalk's Visual Smalltalk Enterprise - IMHO the best enterprise C/S RAD toolkit available at the time. I see striking similarities between what Digitalk VSE was to C/S RAD and what AG is to Web 2.0.
It remains to be seen whether the Web 2.0 application development vendors can provide more than Rapid Web Application Development and deployment tools. I think some can, are, and will. I've picked my winner; what say you?
Web 2.0 is RAD for the Web. I hope it goes better this time around!
Disclaimer: I used to work for Digitalk, Inc. and I have a fond nostalgia for their products. I do not work for ActiveGrid nor do I have any business relationship with them. But if you are hiring or planning for an ActiveGrid project, please let me know. We should talk!