
Monday, December 13, 2010

Gettin' Groovy at the IndicThreads Conference On Java

This weekend I presented at the 5th IndicThreads Conference On Java here in Pune. This was the second time I had attended the conference and the first time I presented at it. I was pleasantly surprised that it turned out to be a whole lot more informative and entertaining than I expected.

Harshad Oak kicked off with the keynote on recent changes in the Java ecosystem. It touched on topics such as Oracle's purchase of Sun, new language features, other languages on the JVM and potential avenues for the growth of Java (viz, cloud computing and mobile development). He concluded with the interesting viewpoint that Java devs would do well to wait, watch and ride the change.

Having landed the first slot of the very first day of the conference, I was pretty nervous about setting the pace right for the event with my DSLs in Groovy talk. I think I did pretty well, especially considering I hadn't gotten any time to practice the deck... at all!

The presentation spun up quite a few interesting discussions such as:

A bunch of people asked me if there were any real world examples of Grails apps; I think the Testimonials page on grails.org should answer that sufficiently.


It was also nice to be followed by another presentation relating to Groovy. Aniket Shaligram of Talentica demonstrated the benefits and caveats of the flex-scaffold Grails plugin by quickly building an app from scratch within a matter of minutes. And I must say, I was very impressed by the Flex view scaffolding, much as I always have been by the HTML scaffolding in Grails. The one concern that I did raise was around this technique landing the team with two MVC apps that need to keep their models in sync; while Aniket assured me that they faced no major hurdles there, experience forces me to maintain a healthy dose of skepticism.

Another talk that was met with particular enthusiasm was Shekhar Gulati's demonstration of Spring Roo. Shekhar also live-coded a Spring MVC app with various bells and whistles (persistence, relationships, security, deployment to GAE, et al). The productivity accelerating qualities of Roo were intriguing, but I've always been a little reserved about it and about Spring MVC in general. I'll leave that argument to a whole other post. :)

The Unconference hour in the middle of day 2 was another highlight of the event. The group discussed various interesting issues such as:
  • technology conferences should present more code and less kool-aid
  • potential Spring Roo addons
  • application profiling tools (JProfiler, VisualVM, IBM Health Center) and their shortcomings
  • with Spring gaining traction, would enterprises look to Java EE6?
  • should enterprises look at tools/frameworks beyond Spring MVC and Java EE?
In contrast to the rest of the event, the two presentations from event sponsor IBM were mostly J9 pitches: thoroughly passable.

The Sun (ok, ok Oracle) presentation on new features in Java EE6 was a welcome refresher. Considering I've been out of touch with that community for quite some time, it served to reintroduce me to a lot of tech as well as bring me up to speed with what's hot there. CDI's cute use of annotation-based qualified DI caught my eye in particular.
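For anyone who hasn't seen the qualifier trick, here's a minimal sketch of what it looks like; this is my own illustration, not something from the talk, the PaymentProcessor/@Premium names are invented, and you'd need a CDI container (Weld, say) to actually wire it up:

    import javax.inject.Inject;
    import javax.inject.Qualifier;
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // A hypothetical qualifier annotation; the container matches on type plus annotation alone.
    @Qualifier
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
    @interface Premium {}

    interface PaymentProcessor { void pay(long amountInPaise); }

    class StandardProcessor implements PaymentProcessor {
        public void pay(long amountInPaise) { /* the boring default */ }
    }

    @Premium
    class PremiumProcessor implements PaymentProcessor {
        public void pay(long amountInPaise) { /* the fancy one */ }
    }

    class CheckoutService {
        // No XML, no factory: the @Premium qualifier alone tells CDI which implementation to inject.
        @Inject @Premium
        private PaymentProcessor processor;
    }

Swap the annotation on the injection point and you get the other implementation; that's pretty much the whole trick.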

The networking and open spaces weren't bad either. I got to catch up with a few old acquaintances, make some new introductions, add a few twitter followers and earn some cool hashcred! In hindsight, I actually regretted not getting business cards printed, but the second slide in my presentation pretty much made up for that.

The ThoughtWorks brand carried me well throughout the event even though I did absolutely nothing to indicate I was a ThoughtWorker (big shout out to good ole' TW!) A lot of people wanted to know how ThoughtWorks operates, why I left and what I'm doing now, why others have left and what they end up doing after ThoughtWorks: a somewhat muted (from my end) set of discussions, but very interesting all the same. People were usually pretty surprised when I told them that I had actually quit ThoughtWorks to figure out what I want to do next and that I was currently voluntarily unemployed! :)

Well, since this is a fairly contrarian blog, here's what I wish would have gone better:
  • better gender distribution; we ended up with just 1 girl in the audience on day 2 :(
  • the turnout was pretty low at under 50; is Java as a technology really over the bump?
  • demo'ing CRUD apps and then engaging in RDBMS-bashing in a NoSQL plug
  • Enterprise still looking at dynamic languages with skepticism
  • Harshad calling Scala a scripting language :P
All in all, it was a weekend well spent. Kudos to Harshad, Sangeeta and the rest of the Rightrix team for another well organized event in the city. And thanks for keeping it eco-friendly!

Oh, and before I forget: the IndicThreads Conference On Java celebrated its 5th consecutive year this time, so we got to have cake!

Monday, July 28, 2008

Putting REST to rest

... or Why Implied Interfaces make for poor Implementations.

Representational state transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web.
- from Wikipedia

The term REST, along with the principles it implies (or 'should' imply!), was put down in a doctoral dissertation by Roy Fielding.

Actually, IMHO, REST is just SOA and WS wrapped in a layer of OOP with just a generic Resource entity at the application layer. Even then, you kinda get to roll your own with everything: serialization, interaction, responses, validation...

Well... OK, I may have over-genericised that. But, the funny thing is, that's the funny thing about REST! It is very generic.

Even the primary 'Resource' definition is vague; the interface is almost completely implied (a rough sketch of what these implied calls look like follows the list):
  • POSTing to a URL creates a new resource (POSTing what?!)
  • I GET a resource referenceable at a given URL provided an id, otherwise... I get all resources at that URL (?); unless, of course, it is a singleton resource, in which case I get the singleton instance. :S
  • PUTting causes edits, although POSTs may behave in the same manner
  • HEADs can be used to detect the presence of a resource, while DELETEs... delete![1]
  • And I'm not even sure if OPTIONS and TRACE are relevant
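Just to make the implied interface concrete, here's a rough sketch of those interactions using modern Java's HttpClient (anachronistic, I know, but it keeps things short) against a completely made-up /orders resource; the URL, ids and payloads are all hypothetical:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ImpliedRestInterface {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();
            String base = "http://example.com/orders";   // hypothetical resource URL

            // POST to the collection URL creates a new resource... POSTing whatever body we agreed on out-of-band.
            HttpRequest create = HttpRequest.newBuilder(URI.create(base))
                    .header("Content-Type", "application/xml")
                    .POST(HttpRequest.BodyPublishers.ofString("<order><item>book</item></order>"))
                    .build();

            // GET with an id fetches one resource; GET on the bare URL presumably fetches all of them.
            HttpRequest one = HttpRequest.newBuilder(URI.create(base + "/42")).GET().build();
            HttpRequest all = HttpRequest.newBuilder(URI.create(base)).GET().build();

            // PUT edits (replaces?) the resource; plenty of services accept POST here too.
            HttpRequest edit = HttpRequest.newBuilder(URI.create(base + "/42"))
                    .PUT(HttpRequest.BodyPublishers.ofString("<order><item>two books</item></order>"))
                    .build();

            // HEAD checks for existence, DELETE deletes.
            HttpRequest exists = HttpRequest.newBuilder(URI.create(base + "/42"))
                    .method("HEAD", HttpRequest.BodyPublishers.noBody()).build();
            HttpRequest remove = HttpRequest.newBuilder(URI.create(base + "/42")).DELETE().build();

            HttpResponse<String> response = http.send(one, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());   // and what the body means is, again, implied
        }
    }

Every one of those calls compiles happily; what each of them actually means for a given resource is exactly the part that stays implied.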
And that's just REST over HTTP! Wait!! Is there any other REST?!? Well actually REST is a meta-architecture, it's protocol-agnostic. It's supposed to work for all hypermedia- so we should be able to do it over HTTP, FTP, SMTP, SNMP(?)...

What is hypermedia? Media hopped up on too much sugar?!? Well, the Oxford Dictionary defines it as
an extension to hypertext providing multimedia facilities, such as sound and video.
I don't know about you, but my brain's just about ready to explode.

But wait, it gets better. Some people liked the REST over HTTP idea so much that they forked their own flavour. So Fielding has Vanilla, while (these guys) have Extra Fudge! (all puns intended) It's called Web-Friendly services: services that behave reeeeally well with HTTP. Basically, services that behave like web pages. Whoa! I'm getting asmx flashbacks!!! So I'm gonna shut up about that now.

Then there's the Hypermedia As The Engine Of Application State people[2]. These guys really love the URL. To them the body is whatever it's gonna be! Now it's an XML document, now it could be an HTML form, then it could be a JSON object or, maybe, an hCard. I'm sorry, but I can't comprehend this comically 'uniform' interface where your interaction modes are implied, reference end-points are overloaded and results are undefined, or implied, or... did you say they're documented somewhere?

The question I arrive at, then, is: why do we want to be RESTful or Web-Friendly? The loudest argument I hear is that the infrastructure has already been laid; web servers and proxies with caching, DNS servers, IP routers with path optimization algorithms, the whole kahuna! Think of all the automatic performance gains arising from all that caching! All the more reason to stick more stuff into the URL and fill up those HTTP parameters you never thought to use until now! You just ride the wave!

Hmm... it would seem that REST intends to counter the end-to-end principle by allowing for intelligence (optimizations) at the nodes, but that's a little too debatable; so <sidestep />.

But stepping back a bit, lem'me get this right. We're trying to fit an application protocol directly over a document delivery infrastructure? You've gotta be kiddin' me! I'm not saying this can't be done, just not without a cost: TANSTAAFL. Just think about all the time you're going to spend on deciding and negotiating a wire format, and corresponding responses. And that's just one wire format. REST talks of 'representations'; a resource could have more than one wire format! I don't know about you, but I've had trouble getting people to understand one wire format right, and now you're telling me they're gonna want to send 'resources' in XML, JSON, YAML, HTML, delimited strings, multi-fixed and EBCDIC? Over my dead body!!!

Plus, the wire formats are completely arbitrary. Every resource implementer gets to pick their own. So you end up having to 'understand' the specific wire format of every resource you interact with. And by 'understand' they mean RTFM. So much for convention over configuration. Just too much knowledge in the head vs. knowledge in the world.

Actually, the wire format is where it gets really funny. XML just had the attribute-element dilemma, REST elevates it to the payload-parameter dilemma! Most Web-Friendly service makers get away without a body to the requests, they just stick a bunch of 'required' HTTP parameters on to the resource URL. So much for 'representation' of 'state', we're back to representation of output. Why does this remind me so much of WS?

I think REST could find a place in internal applications where the domain model is well defined, where resources and their interactions are well defined. But I'd still be skeptical, because I tried that once at work and we kinda crashed and burned. The biggest brick wall we hit was latency. The 'resource' unicycle that REST provided and the specific verbs HTTP provided made our resource interactions multi-stage, especially since REST is stateless. When updating a resource, I wanted to GET a representation of the current state of the resource, modify it and PUT the modified state representation back into its place (roughly the dance sketched below). But others on the team felt that adhering to those principles added unnecessary communication overheads and chose to add interaction verbs, which was conveniently provided for by ActiveResource. Oh, and that's just when we were not arguing about the wire formats.
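For the record, here's roughly what that multi-stage update looked like, again sketched with modern Java's HttpClient and an invented /accounts resource; the crude string replace stands in for real (de)serialization of whatever representation we had agreed on:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class StatelessUpdate {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();
            URI account = URI.create("http://example.com/accounts/7");   // hypothetical resource

            // Round trip 1: GET the current representation of the resource's state.
            HttpResponse<String> current = http.send(
                    HttpRequest.newBuilder(account).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            // Modify the state locally.
            String modified = current.body().replace("\"limit\":10000", "\"limit\":15000");

            // Round trip 2: PUT the whole modified representation back into place.
            HttpResponse<String> updated = http.send(
                    HttpRequest.newBuilder(account)
                            .header("Content-Type", "application/json")
                            .PUT(HttpRequest.BodyPublishers.ofString(modified))
                            .build(),
                    HttpResponse.BodyHandlers.ofString());

            System.out.println(updated.statusCode());
            // Two network hops (plus conflict handling, since someone may have PUT in between)
            // for a single edit; the team wanted one call like POST /accounts/7/raiseLimit instead.
        }
    }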

Honestly, if you ask me, the only reason REST won't go down the tube like Hailstorm is that it's community driven, not governed and controlled by a megalomaniac. Everyone gets to roll their own and even fork while still calling it RESTful. But even if it does last, I think it's only going to be a buzzword that we'll all have a good laugh about in a few years:

Raj: Hey Saager, we're building this new service for so-and-so and it's accessible via, get this... RESTful resources! :D
Me : Wait, are you sure they're not Web-UNfriendly?!? =D

[1] The DELETE's about the most logical thing I've got so far! :P
[2] Sorry Dave, your post is the only one I could directly reference for HATEOAS

Wednesday, September 12, 2007

The Purpose of Software

I've been meaning to write about this for quite some time and now I've finally found a forum.

This post on Dratz's Confessions of an IT Hitman isn't the first place where I've heard this said. Although I wouldn't go so far as to say that Chet is completely biased towards a data-centric thought process and databases, his letter to the CIO was a little one-sided; probably because he was trying to drive his point home. I am more inclined to lean towards Ralph Wilson when he reiterates the old adage that if a hammer is your only tool, every problem is a nail.

I'll admit: data is the most important artifact that a business generates. But, IMHO, the most important output of the software development activity is, by far, the simulation of a business process.

This follows the analogy that every program is a process-

a systematic series of actions directed to some end.

A program/system 'does' things; most evidently, it facilitates, automates or accelerates a given business function. Well, at least good software does.

Data generated by a system changes as much as the business processes that generate it; in fact, data goes out of date faster. Today's data will be archived and stored for 'future reference', but will rarely be in 'active duty' as long as the processes that manipulate it. For all Thomases doubting my experience in this matter, I work in a team that maintains a Mortgage Origination System used in North America by an MNC bank. We recently wrote a job that archives data pertaining to about 8,000 mortgages every month; and that's just archiving unprocessed loans; add closed loans to that and the monthly figure would skyrocket.

Anyway, getting back to the point. I believe there is some merit to Chet's point of putting logic in the database, but...

you should put as much of the logic in the database as humanly possible

is a little extreme! I mean, I don't want to run an INSERT and have an ORA-06502: PL/SQL: numeric or value error thrown just to find out that I entered one-too-many zeros or entered a date in the wrong format! Trust me, there are dudes who stretch the idea this far.

Sidenote: Several situations warrant the temporary storage of data that doesn't meet all the integrity constraints. Well-designed UIs would allow users to set arbitrary savepoints, even when ALL required data has not been entered. Think of all the times you were filling in a 53-field web form and realized at field 24 that you need to run down to the drawing room to pull out the referral code for a vendor discount printed on a coupon that you got in the mail 3 months ago, but you don't want the form to time out while you go fish the coupon out of the bowl in which you keep all of your discount coupons. So you're sitting there gritting your teeth, seething away, wondering why the fr*cking developer didn't think of putting a SAVE button there! And when you do put one in, if the application is tied to the database, you have data in an inconsistent state; or you end up designing a schema that doesn't enforce all the integrity constraints declaratively. That's where a Domain model (sprinkled with a healthy dose of the NullObject pattern) comes into the picture.
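To make the sidenote concrete, here's a minimal sketch of that draft-plus-NullObject idea; it's my own illustration, every name in it is invented, and a real system would persist the draft somewhere instead of holding it in memory:

    // A "draft save" sketch: the referral code is a small domain type with a Null Object
    // standing in for "not entered yet", so a half-filled application stays savable without
    // tripping NOT NULL constraints or PL/SQL checks. All names here are made up.
    public class DraftApplication {

        interface ReferralCode {
            boolean isPresent();
            String value();
        }

        static final class KnownReferralCode implements ReferralCode {
            private final String code;
            KnownReferralCode(String code) { this.code = code; }
            public boolean isPresent() { return true; }
            public String value() { return code; }
        }

        // The Null Object: behaves like a referral code, just an absent one.
        static final ReferralCode NO_REFERRAL_CODE = new ReferralCode() {
            public boolean isPresent() { return false; }
            public String value() { return ""; }
        };

        private String applicantName = "";
        private ReferralCode referral = NO_REFERRAL_CODE;

        void enterName(String name) { this.applicantName = name; }
        void enterReferralCode(String code) { this.referral = new KnownReferralCode(code); }

        // The draft is always in a savable state; full integrity checks run only on submit.
        boolean readyToSubmit() {
            return !applicantName.isEmpty() && referral.isPresent();
        }

        public static void main(String[] args) {
            DraftApplication form = new DraftApplication();
            form.enterName("S. Applicant");
            System.out.println("Can submit yet? " + form.readyToSubmit());  // false: go find that coupon
            form.enterReferralCode("COUPON-2007");
            System.out.println("Can submit yet? " + form.readyToSubmit());  // true
        }
    }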

Good design is about tradeoffs. I'm not promoting 'Rail'ed webbers who think ActiveRecord is a Silver Bullet, but I'm also not saying that the ActiveRecord Pattern doesn't have any merit. IMHO, it can be implemented as well in PL/SQL as it can be in Ruby/Java/C#. The point is- where does it make the most sense given the constraints (environment) you are working with?
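And just to show the pattern isn't welded to Rails, here's a bare-bones ActiveRecord-style sketch in Java; it assumes the H2 driver on the classpath, and the table, columns and JDBC URL are all invented for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // A bare-bones ActiveRecord: the record object carries its own persistence logic.
    public class Loan {
        private final long id;
        private final String borrower;

        public Loan(long id, String borrower) {
            this.id = id;
            this.borrower = borrower;
        }

        // The record saves itself; that is the essence of the pattern, whichever language it lives in.
        public void save(Connection c) throws Exception {
            try (PreparedStatement ps = c.prepareStatement(
                    "MERGE INTO loans (id, borrower) KEY (id) VALUES (?, ?)")) {
                ps.setLong(1, id);
                ps.setString(2, borrower);
                ps.executeUpdate();
            }
        }

        public static Loan find(Connection c, long id) throws Exception {
            try (PreparedStatement ps = c.prepareStatement(
                    "SELECT borrower FROM loans WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) throw new IllegalStateException("no such loan: " + id);
                    return new Loan(id, rs.getString(1));
                }
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement ddl = c.createStatement()) {
                ddl.execute("CREATE TABLE loans (id BIGINT PRIMARY KEY, borrower VARCHAR(100))");
                new Loan(1, "J. Borrower").save(c);
                System.out.println(Loan.find(c, 1).borrower);
            }
        }
    }

Whether that save() body is JDBC, ActiveRecord-the-library or a PL/SQL package is exactly the 'given your constraints' question.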

The notion that data is the MOST important thing in an application leads to the development of webapps that babysit single dimensional databases.

That's just saaad! Where is the CREATIVITY?

Update (04-Oct-2004): This just hit me, given that replacing an application (medium-to-large-scale purpose built, not some COTS) almost always involves data migration effort, representation of data (database) is as transient as the process (application)!

Wednesday, October 04, 2006

Do Developers make good Managers?

Well, turns out someone took note of my last post. (They did, however, refrain from posting a comment : ) One of my colleagues hinted that my definition of Leadership was... incomplete.

Here's the deal, I wanted to illuminate some very specific issues arising out of the concept of Leadership (with capital el) in Software Development Teams. As TDM & TRL indicated– in small groups of skilled professionals, it's hard to put one person in complete control without creating sociological issues. Added to all the other little frustrations of corporate life, these can very easily take you over the edge. Especially these, as team sociology tends to get VERY personal.

I'm guessing that adding the Leader role to teams is a direct consequence of Management's excessive need to delegate out the Manager role, aka Turning Developers into Baby-sitters. Management just seems to want a 'guy on the inside'! But, how wise is it to pick an insider (someone who's already inside the team) and switch roles?

What I'm really trying to get at is that (IMHO) Developers (REAL, HARDCORE developers) don't make good Managers... or Leaders... or... whatever you want to call that abomination of delegation! I prefer Field Manager in specific and Manager in general.

In creative collaborations, leadership is always situational. Each aspect of the problem invites the creative excellence of one or more team members. I don't believe any one person can 'drive' a sizeable effort from inception to conclusion alone. The Skill Sets (sociological as well as operational) required for these two roles are just too diverse for one person to cover! They have different Objectives; their Interests and Incentives are usually in conflict... I mean, how can you put an individual in this situation? It's almost HARASSMENT!!!

Parting thoughts:

Managers give you a goal and let you figure out how to achieve it, monitoring along the way.

Leaders give you a direction to achieve your goals, walking along the way.

Thursday, September 14, 2006

Defining Leadership

From the Oxford English Dictionary–

[to] lead (v): conduct, guide, esp by going in front; direct movements, actions or opinions of
leader (n): person followed by others

So, let's go over that one more time.

person followed by others... directing their movements, actions or opinions

The whole thing instantly conjures up images of a Sergeant carrying his platoon through the tall grass or a Captain marching his company into battle. Respect, Trust,... Glory!!!
So what's wrong with that? Nothing, except that contemporary corporate culture seems to be the only place where the word Leader immediately follows the word Team. "So," you ask, "what's wrong with a team having a leader? Most normal thing in the world!" Or so you think...!
If you look at the above analogies closely, you will notice that neither a platoon nor a company is a team. They are hierarchical structures built into the military chain of command!
A Team is...

... made up of peers, equals that function as equals.

Peopleware
By Tom DeMarco & Timothy Lister


Even the military accepted this when they came up with the idea of special-purpose (ops) teams. Such teams comprise specialists who set aside their ranks to funnel their respective skills into attaining a single, common goal. Teams in our industry have no reason to be an exception.

For all the deference paid to the concept of leadership (a cult word in our [software] industry), it just doesn't have much place here.

Peopleware
By Tom DeMarco & Timothy Lister


Managers/Leaders are never part of the team for the simple reason that they can never be peers; another point sufficiently emphasized by TDM & TRL.
Besides, we're talking creative people here. Creative people are inherently Intelligent! They don't need to be led around like some pack, and they sure as hell don't want to be!
What's even more amazing is the kind of stuff that gets passed off as Leadership qualities, things like:
  • Charm (:P)
  • Persuasiveness (:S)
  • Getting things done (!)
  • Maintaining the status quo (:o)
  • Assuming authority (out of the blue!)
Parting thought: With all the talk of situational leadership, we seem to want the same people to pick up the flag in every situation!

Thursday, July 27, 2006

When do YOU look at the user?

Think about it. You're putting in this cool new feature, it's going to save the users a lot of time, give them a ton of functionality and, hence, save them a truckload of money. AWESOME!

So, first we nail down the functionality... right? WRONG!!! First you figure out how the users will exercise the new functionality, because that's what you're ultimately giving them: a new user experience, a new interaction mode.

Unless you make sure the user is going to find the new interaction intuitive, any new functionality, however efficient, is a burden. If you make the user jump through a lot of hoops just to do his everyday job, your new feature SUCKS!!!

I actually experienced this first hand recently. We enhanced the TIL Regulation dashboard in an LOS to allow the users to monitor loans as they get dangerously close to the limits and notify them once they fall over. I actually reordered all the controls on the screen just to keep logically related data together. The new design was so intuitive that the Business Analyst approved it in a heartbeat and sent out an appreciation note! (Pat myself on the back)

At the same time, another team member was working on the screen used to configure the regulation limits. Man, did he miss the bus! He wrote up the whole screen, new popups et al, and then sat around thinking, 'What if the user checks this but doesn't fill in that...?' WHOA! When he asked me for an opinion, I said that the new validations he was thinking of were cool, but '...wouldn't it have been much more logical to have thought about this BEFORE you built the whole thing?' Nevertheless, he ended up putting in the validations (in this screen and a few others!). Rework can be such a productivity killer!

The user always comes first. Kathy Sierra agrees in her blog post captioned Ignore the competition.

Friday, January 20, 2006

Who da Man ?!?

I read a blog comment on our corporate intranet recently that said,

Modern banking isn't dependent upon IT, modern banking is IT.
Now there's a repulsive thought! How does anyone get off suggesting that IT could, or already has, taken over or replaced the Business it services? A specific business process implementation does not subsume the business function that it defines. Business Functions (such as purchase, sale, accept deposit or lend) are independent of and supersede the means applied to realize them.

IT is not Business! IT doesn't run Business; People run Business. People are defined by their Actions; and Actions are channelized and automated using IT.

IT is merely a business facilitator/accelerator. What we really need to be focussing on is figuring out ways to shift from the Facilitator role to that of an Accelerator. That's where the real business propositions lie.

Basically, we ain't NEVER da Man!
We ain't NEVER GONNA BE da Man!!

So let's just get used to playing second fiddle to Business. Besides, why should IT be anything but IT?
We are Who We are.

Sunday, October 30, 2005

Subtext means no text! ...uuunh

I've finally gotten around to reading Jonathan Edwards' paper titled Subtext: Uncovering the Simplicity of Programming, which he presented at OOPSLA 2005, and I gotta say... it was an experience! I just don't know where to start, so I'll just start at the beginning.

Programming is HARD??!!?? I mean, COME ON!! Yeah, even I thought programming was hard, when I had just started off! Getting my fundamentals straight was the hardest thing, but that was my biggest hurdle. Programming isn't hard; visualizing relationships between loosely coupled abstract concepts, now that's hard. Saying programming is hard is like saying it's hard to put one foot in front of the other because you can't run the Olympic 600M! But the reasoning takes the cherry; programming is hard because source code is removed from a program's behavior, because we're not compilers. Damn right we're not compilers, we're BETTER; that's why WE built compilers. Compilers were built to spare us from the 'inhuman' patience required to monotonously apply a 250-page language specification to any program from 2,500,000 to 25 lines of code, not because we couldn't do it. As for source code not presenting the behavioral view of the system, well, uhh, NEWSFLASH... I don't believe it was ever intended for that purpose! Source code presents a structural view of the system. We use collaboration diagrams and statecharts for that, but admittedly not all people and platforms provide for that, so I'll admit that there's no common ground here. Nevertheless, you can't blame that on source code.

Personally, I believe it's the lack of 'technology soft skills' like visualization and programming in the large that screws up most development, not that it's hard to represent ideas in a given syntax. Maybe we need to revisit CS courses to include these.

As for Usability: usability is about making a user's experience with a system easier. Norman's Gulfs were developed as part of a user interaction study. Applying UEX concepts to source code and other programming media seems a little extreme.

The argument that programming is text-centric is a little ridiculous. Paper, or for that matter any persistence medium, is just that: a medium to store. Rich UI tools are just that: tools to program. WYSIWYG programming environments are at best visual/graphical representations or views of programs. How does WYSIWYG help if visualization is the inherent problem?

Coming to Subtext (finally!), Subtext is definitely not a language. Subtext is, as Edwards himself mentions somewhere in the paper, a programming ENVIRONMENT. It allows you to manipulate the structural components of the program directly. Kinda like playing with the AST.

Here's my gripe list on Subtext.

- Languages interface programmers to the compiler/interpreter. Subtext opens up the gamut of program flow components to the programmer, something people have been trying to abstract/encapsulate away from programmers in every paradigm shift. In my opinion, working with actual program components is a huge step backwards; maybe we should just go back to Assembly or Machine language.

- The nomenclature used to describe program structure is ambiguous. Program nodes are divided into Structures and References, where structures could be Composites or Empty aka Atoms. Only Atoms can be leaves of the program tree, and References... so are References Atoms? But References could be linked to Composites, and expanded, with Reference Envelopes... you get where this is heading?

- No types. How do you work a mature, dynamic, 'reactive' language without... Oh wait...

- Subtext is STATIC!

- Labels are text comments. So I could just copy the Difference function into a structure and re-label it Add. I mean, it's not like we have enough means of amplifying ambiguity already (!). Why is it so hard to admit that names are an identity mechanism? When I say Jonathan Edwards, we can IDENTIFY that the subject is a fellow at MIT who developed Subtext.

- Copy calling: This one is really weird. Structural changes are propagated both ways through copies, except divergences in variants are not propagated upwards. But a variant may not even have divergences. And divergences may exist outside variants as inputs. Man! I thought Polymorphism was hard, but this takes the cake, cherry and all!

- Subtext has no scopes! I can capture state from anywhere in the program flow, even in the middle of a function execution (multiple returns!). Encapsulation is simply absent. This looks like another one of those object based programming nightmares. JavaScript for the enterprise anyone?

- Subtext does away with variables by exposing (making reference-capable) entire program flow! Every line is as good as a global.

- Somehow, the data structure of a tree or even a graph falls short when projecting various aspects/views of a system. UI embellishments (compass, reference envelope, adaptive conditional table, etc) only complicate matters.

- Preliminary observation shows that programming in Subtext involves hard coding each and every flow of the program. So much for flexibility!

- Subtext suffers from IDE lock-in. You can only work in a very specialized environment. I've seen other IDE-specific platforms like PowerBuilder and Centura Team Developer (SQL Windows) hit the wall when programmers get too deeply rooted in IDE specifics or when a requirement transcends the intent of the IDE.

- Then there's the question of programming in the large. It's not just scalability, but the idea of programming multiple disconnected, black box components in multiple source codes. Being in invalid states for short periods is, to me, a viable tradeoff for flexibility.

- Interestingly, the entire paper seems to sidestep the simple idea of persisting programs. How do you intend to store Subtext program trees? Text maybe!

Ever since I started reading about Subtext, I was intrigued by the idea of Reactive Computation. Seeing your program execute as you write it caught my attention. But then I realized that I was so fascinated only because I had spent so much time explaining to people how programs work. That's when it hit me! Subtext is actually a great LEARNING tool. Rookies can use Subtext (minus the Theory of Copying) to learn complex program flows by direct state visualization. This could actually go a long way in developing the soft skills I mentioned in the beginning.

My opinions apart, I don't want to discourage anyone's inquisition, so, I wish Jonathan Edwards the best of luck in his endeavors.

PS: My initial opinions were put up on JE's blog here.

Friday, October 28, 2005

Continuous Care Vs Initial Design

In his paper titled Continuous Care Vs. Initial Design, Robert Martin expresses his growing concern regarding the lack of awareness about creating maintainable systems. Quite like I mentioned in my first blog entry 'How do you measure Quality?', he points out that we must strive to finish a task right (for everyone) as opposed to just finishing it. The article goes on to describe why systems, no matter how well designed, can be reduced to a rotting carcass simply due to negligence. Every time a design is changed as a result of requirements changing in ways that the initial design did not anticipate, new and unplanned dependencies can be introduced between the modules if the changes are made without carefully considering the system's existing state. The latter half of the paper suggests the application of Agile Methodologies to counter such 'rotting'.

Hmm... we all seem to agree to that. So, what could we be looking at wrong this time? :)
Good old CONTEXT! ;)

You see, Martin talks of designing in the small (Agile). It's about how initial designs can never keep up with changing requirements. What I'm hinting at is that all the talk of Continuous Care is applicable to new development. Projects being planned now, to be developed tomorrow!

The whole thrust of the paper is towards changing engineering attitudes, about changing Methodologies. You can't change your methodology once the plane's taken off! Besides, no volume of care could ever fix a screwed up initial design. Martin says that, according to the Agile methodology, designs must be built to change, and new requirements can change the design fundamentally too. But how much 'care' do you put in before it's officially called a redesign?

With regards to Agile development, well, I'm not a really big fan of it, but I don't particularly think it's evil. The thing about the Agile methodology is that it's just plain misunderstood. Most uninformed people think being Agile is about getting it done in the simplest possible way in the shortest possible time. Well, I suggest they either read up on Agile development or come up with their own independent manifesto!

Personally, I'm a big upfront design guy. I find safety in sitting calmly and applying 'care'ful foresight to come up with a flexible, extensible design (I know, I know... how does the customer care! Customers suck!!). That's why I think being open-minded is more important than being Agile.

Sidenote: Here's a few more options, take your pick...

Wednesday, October 05, 2005

Wot say, Gartner?

My last blog- Specialists, Generalists & now Versatilists! [http://thinkaround.blogspot.com/2005/10/specialists-generalists-now.html]- was based on the observations and predictions made by Gartner in their Research paper titled 'The IT Professional Outlook: Where will we go from here?' [ http://www.gartner.com/DisplayDocument?doc_cd=130462 ]. Besides the culmination of a strong market for Versatilists, the paper also described various changes the IT Industry is poised to face by 2010. Changes such as:

  • Segregation of the industry into defined focus areas
  • Migration towards the ISV/ISP model
  • Growth in Relationship management and other Business facing positions
  • Increase in demand for Functional experts

The predictions revolved around the percentage workforce shifts that these changes would bring. Although very captivating at first, several predictions raise more than reasonable doubts. Let's see if I can outline the important ones I identified. ;)

To start at the top, the first Prediction on the cover says that, "By 2010, the IT profession will split into four domains of expertise: technology, information, process and relationships (0.7 probability)." Strangely, page 3 sees this prediction suddenly jump 10% to 0.8! Without explanation, I might add.

The third Strategic Planning Assumption on page 3 states, “Through 2010, 30 percent of top technology performers will migrate to IT vendors and IT service providers (0.8 probability).” Does this mean we will see individuals move to vendor/service provider firms or do we expect to see corporations favor the ISV/ISP model? This article was supposed to be for the ‘IT Professional’ but this seems to be more of a corporate viewpoint. Also, I’m assuming the shift is from core Consulting, but then where do all those custom manufactured ‘Harley Davidsons’ of software that form the core IT solutions for the likes of eBay, Wal-Mart and ICICI end up?

The oddities get really interesting once the Analysis takes off. 1.0 Introduction: Setting the Stage says that, “Business skepticism toward the effectiveness of IT, the rise of IT automation, worldwide geographic labor shifts and blended service-delivery models mean that IT professionals must prove that they can understand business reality — industry, core processes, customer bases, regulatory environment, culture and constraints — and contribute real business value to their enterprises.” First of all, what the hell is the blended service-delivery model? A little consulting, some process design and maybe a product (if we can find the time)! HE! HE! But on a serious note, do IT Professionals still need to prove that they can understand business realities? Sometimes I really hate the d***heads that made it big in the early 90’s because they ended up projecting a Programmer as someone perpetually hacking away at ‘alien’ code in some maintenance project. Damn them!

Moving on; 2.1 Global Outsourcing points to the acceleration of the offshoring/cosourcing initiative in various aspects. It just got me thinking, if offshoring were to grow really big, really aggressively, India would end up facing pretty steep competition from China and Brazil!

2.2 IT Automation seems to take the cherry. Just how the f*** do you automate Software Development!! I mean, what if we come up with some sort of adaptive network (a la SkyNet) that simply 'reads' user Requirements to come up with a solution on its own? Man, that would really sound the death knell for Commercial Software Development!

In 3.0 The IT Profession Splits Into Four Domains of Expertise, the discussion on Technology infrastructure and services predicts that, “routine coding and programming activities will gradually shift to developing economies.” Like we don’t already get enough of that! What do we expect next? That they actually set up the processes whereby, they send us the code snippets, that they want us to splice, into the specified modules, ...

The focus area listing for the Technology Infrastructure and Services domain on page 8 was refreshing. To my joy, they placed Enterprise Architecture right at the top and Web Services very last. But I'm not really sure how Desktop Computing ended up under Infrastructure! On the same lines, Internet Design and Web Aesthetics somehow found their way under Information Design!

Doubts apart, one of the most absurd observations I made was that the article made no attempt whatsoever to define the sample/scope they used to deduce all these predictions! Is Gartner trying to mask a set of market hunches behind a veil of numbers, or am I just ranting out of context??!!?? :S

Monday, October 03, 2005

Specialists, Generalists & now Versatilists!

The classic debate regarding the choice between Specialists and Generalists just got bigger. In a recent Research paper titled 'The IT Professional Outlook: Where will we go from here?' [ http://www.gartner.com/DisplayDocument?doc_cd=130462 ], Gartner added a new runner to the race- The Versatilist. According to Gartner, "Versatilists, in contrast, apply a depth of skill to a rich scope of situations and experiences, building new alliances, perspectives, competencies and roles." Now that's a mouthful! :)

To set a little background: Specialists build on intensified learning/training to excel in their chosen concentration within their domain, while Generalists prefer the extensive learning/experience approach, where they accumulate limited exposure to various aspects or concentrations within their domain. But seriously, are these classifications watertight? No one could ever get anywhere by fine-tuning themselves to just one paradigm; and I don't want to discuss what kind of impact you could make being the proverbial Jack of all Trades. The former is like finding the Answer to Life in the middle of a desert and the latter, like telling the world that you know how to light a match! The idea of Specialization was introduced at the height of the Industrial Revolution, a time when Capitalism was an accepted (and on occasion necessary) social evil. Generalization was a knee-jerk reaction to over-specialization.

Now, we need to build Versatilists, Knights in Shining Armor, wielding all weapons with equal dexterity and skill! How real is the idea that a given individual could attain deep skill sets in multiple domains? According to Gartner, a Versatilist picks up greater roles and assignments as one increases the depth of current skills. But, wouldn't accepting a role outside your domain of expertise suddenly put you shallow? How much anticipation could you employ to mitigate this risk?

In my opinion, multi-faceted individuals view themselves in multiple dimensions. Each dimension represents an area of interest, a generalization; whereas the intersections of these dimensions define concentrations, specializations. Each specialization bears at least two domain aspects, automatically multiplying the scope of roles or assignments where these intensive skills can be applied. It turns out to be very difficult to graph this idea, but it can be conceived as a series of overlapping pyramids. Each pyramid represents the skill base in a given domain, peaking at the specialization in that domain. Taller pyramids would represent an individual's majors. The overlap of pyramids would represent the multiple roles applicable. So, an overlap higher up in the pyramids would exhibit a deep skill (specialization) that may be applied to the benefit of the roles/domains represented by both pyramids.