Monday, December 26, 2005

Benchmarking Expertise

Wikipedia defines an Expert as...

An expert is someone widely recognized as a reliable source of knowledge, technique, or skill whose judgment is accorded authority and status by the public or their peers.

Some pretty big phrases there! Let's just list them down for now.
  • widely recognized
  • reliable source
  • judgment is accorded authority
  • peers
Alright, on to the issue. I guess the debate over whether an expert is deemed an EXPERT by Qualification or by Acceptance has been sufficiently settled. Refer to the following for details:
What really remains to be settled is how we can benchmark a person's 'expertness'. I figure the four traits listed above could be a start, but most of them tend to raise questions. How wide is 'widely recognized'? Just who are your 'peers': the people you work with, or people of like mind?
I guess this is more of a controversy than a debate, but what the heck! :)

Sunday, November 20, 2005

Return to innocence

Remember when you were a child, trying to reach out to the world; every new sound was an experience, every word a question!

In trying to grasp the fundamentals of communication, we realized that every word was merely a representation of some real-world phenomenon, and meaning provided the necessary relation to attain understanding. Some words tend to carry more meaning than others. For example, an apple can be sufficiently defined as ...

"a white fruit with 5-7 seeds at the core and a moderately thick skin which may be red, yellow or green in color"


...but inertia would require something to the effect of ...

"the tendency of a body to remain at its state of rest or of uniform linear motion until an external unbalanced force acts upon it."

In other words, some 'words' warrant more than meaning; they need a definition. A collection of definitions of related words more or less defines the linguistic scope (vocabulary) of that field.
The IT industry, much like the rest of the engineering field, has always been riddled with a gargantuan vocabulary. In their need for precision (and sometimes exclusivity), engineers tend to name and define every little concept and action they encounter. In an environment like this it is essential that the fundamentals are not lost. Fundamentals truly are the building blocks of strong understanding. A quick observation reveals that most complex concepts either build on or combine smaller, more fundamental concepts through a set of relationships between them. The logical conclusion: if your fundamentals aren't strong, your understanding rests on shaky foundations!
By now you're probably asking yourself, "So what are we looking at wrong?"

Sad to say, but it is our outlook towards the vocabulary itself. The inhumanly fast pace of the IT industry leaves little time for the crystallization of foundations. Oversimplification of concepts through vague metaphors and analogies is just one of the corners often cut in this race. This often leads to overgeneralization of terminology by trivializing its meaning. Subsequently, learners tend to stretch terms too far without defining or (worse) understanding the context. [Refer to my previous post on Continuous Care vs Initial Design]

Building a vocabulary isn't just about knowing the words; it involves learning the words, meaning and all.

Let us return to innocence, to a time when we weren't ashamed of asking what words meant and why; and taking our time to understand them.

Sunday, October 30, 2005

Subtext means no text! ...uuunh

I've finally gotten around to reading Jonathan Edwards' paper titled 'Subtext: Uncovering the Simplicity of Programming', which he presented at OOPSLA 2005, and I gotta say... it was an experience! I just don't know where to start, so I'll just start at the beginning.

Programming is HARD??!!?? I mean, COME ON!! Yeah, even I thought programming was hard when I had just started off! Getting my fundamentals straight was the hardest thing, but that was my biggest hurdle. Programming isn't hard; visualizing relationships between loosely coupled abstract concepts, now that's hard. Saying programming is hard is like saying it's hard to put one foot in front of the other because you can't run the Olympic 600M! But the reasoning takes the cherry: programming is hard because source code is removed from a program's behavior, because we're not compilers. Damn right we're not compilers, we're BETTER; that's why WE built compilers. Compilers were built to spare us the 'inhuman' patience required to monotonously apply a 250-page language specification to any program from 2,500,000 lines of code down to 25, not because we couldn't do it. As for source code not presenting the behavioral view of the system, well, uhh, NEWSFLASH... I don't believe it was ever intended for that purpose! Source code presents a structural view of the system. We use collaboration diagrams and statecharts for the behavioral view, but admittedly not all people and platforms provide for that, so I'll admit there's no common ground here. Nevertheless, you can't blame that on source code.

Personally, I believe it's the lack of 'technology soft skills' like visualization and programming in the large that screws up most development, not that it's hard to represent ideas in a given syntax. Maybe we need to revisit CS courses to include these.

As for Usability, usability is about making a user's experience with a system easier. Norman's Gulfs were developed as part of a user-interaction study. Applying UEX concepts to source code and other programming media seems a little extreme.

The 'programming is text-centric' argument is a little ridiculous. Paper, or for that matter any persistence medium, is just that: a medium to store. Rich UI tools are just that: tools to program with. WYSIWYG programming environments are at best visual/graphical representations or views of programs. How does WYSIWYG help if visualization is the inherent problem?

Coming to Subtext (finally!), Subtext is definitely not a language. Subtext is, as Edwards himself mentions somewhere in the paper, a programming ENVIRONMENT. It allows you to manipulate the structural components of the program directly. Kinda like playing with the AST.
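
To make the 'playing with the AST' bit concrete, here's a tiny Java sketch of my own (hypothetical names, nothing to do with Subtext's actual implementation) of what editing a program as a tree of nodes, rather than as text, boils down to:

    // A minimal, hypothetical AST sketch: the program is a tree of nodes
    // that you manipulate directly instead of editing source text.
    import java.util.ArrayList;
    import java.util.List;

    abstract class Node {
        final String label;                               // a display label
        final List<Node> children = new ArrayList<Node>();
        Node(String label) { this.label = label; }
        Node add(Node child) { children.add(child); return this; }
    }

    class Call extends Node { Call(String fn) { super(fn); } }
    class Literal extends Node { Literal(String value) { super(value); } }

    public class AstDemo {
        public static void main(String[] args) {
            // Build the tree for add(2, 3) directly -- no source text involved.
            Node program = new Call("add").add(new Literal("2")).add(new Literal("3"));
            // "Editing" the program means mutating the tree in place.
            program.children.set(1, new Literal("4"));    // now conceptually add(2, 4)
            System.out.println(program.label + "(" + program.children.get(0).label
                    + ", " + program.children.get(1).label + ")");   // prints add(2, 4)
        }
    }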

Here's my gripe list on Subtext.

- Languages interface programmers to the compiler/interpreter. Subtext opens up the whole gamut of program-flow components to the programmer, something people have been trying to abstract/encapsulate away from programmers in every paradigm shift. In my opinion, working with the actual program components is a huge step backwards; maybe we should just go back to Assembly or Machine language.

- The nomenclature used to describe program structure is ambiguous. Program nodes are divided into Structures and References, where Structures can be Composites or Empty (aka Atoms). Only Atoms can be leaves of the program tree, and References... so are References Atoms? But References can be linked to Composites, and expanded, with Reference Envelopes... you see where this is heading?

- No types. How do you work a mature, dynamic, 'reactive' language without... Oh wait...

- Subtext is STATIC!

- Labels are text comments. So I could just copy the Difference function into a structure and re-label it Add; as if we didn't have enough means of amplifying ambiguity already (!). Why is it so hard to admit that names are an identity mechanism? When I say Jonathan Edwards, we can IDENTIFY that the subject is a fellow at MIT who developed Subtext. (See the sketch after this list.)

- Copy calling: This one is really weird. Structural changes are propagated both ways through copies, except divergences in variants are not propagated upwards. But a variant may not even have divergences. And divergences may exist outside variants as inputs. Man! I thought Polymorphism was hard, but this takes the cake, cherry and all!

- Subtext has no scopes! I can capture state from anywhere in the program flow, even in the middle of a function execution (multiple returns!). Encapsulation is simply absent. This looks like another one of those object-based programming nightmares. JavaScript for the enterprise, anyone?

- Subtext does away with variables by exposing (making reference-capable) the entire program flow! Every line is as good as a global.

- Somehow, the data structure of a tree or even a graph falls short when projecting various aspects/views of a system. UI embellishments (compass, reference envelope, adaptive conditional table, etc) only complicate matters.

- Preliminary observation shows that programming in Subtext involves hard coding each and every flow of the program. So much for flexibility!

- Subtext suffers from IDE lock in. You can only work in a very specialized environment. I've seen other IDE specific platforms like PowerBuilder and Centura Team Developer (SQL Windows) hit the wall when programmers tend to get too deeply rooted in IDE specifics or when a requirement transcends the intent of the IDE.

- Then there's the question of programming in the large. It's not just scalability, but the idea of programming multiple disconnected, black-box components in multiple bodies of source code. Being in invalid states for short periods is, to me, a viable tradeoff for that flexibility.

- Interestingly, the entire paper seems to sidestep the simple question of persisting programs. How do you intend to store Subtext program trees? As text, maybe?!
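
Since I ragged on the label business above, here's a tiny Java illustration of my own (hypothetical functions, obviously not Subtext) of why names are an identity mechanism: the name is what a caller resolves against, so a copied function wearing the wrong name is an outright lie.

    // "Names are identity": the compiler resolves calls by name, so copying
    // difference() and merely relabelling it add() is immediately a broken promise.
    public class NamesAreIdentity {
        static int difference(int a, int b) { return a - b; }

        // A copy of difference() relabelled "add": the name promises one thing,
        // the body does another, and the name is all a caller ever sees.
        static int add(int a, int b) { return a - b; }

        public static void main(String[] args) {
            System.out.println(difference(5, 3));  // 2
            System.out.println(add(5, 3));         // also 2, not 8: the label lied
        }
    }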

Ever since I started reading about Subtext, I was intrigued by the idea of Reactive Computation. Seeing your program execute as you write it caught my attention. But then I realized that I was so fascinated only because I had spent so much time explaining to people how programs work. That's when it hit me! Subtext is actually a great LEARNING tool. Rookies can use Subtext (minus the Theory of Copying) to learn complex program flows by direct state visualization. This could actually go a long way in developing the soft skills I mentioned at the beginning.

My opinions apart, I don't want to discourage anyone's inquisitiveness, so I wish Jonathan Edwards the best of luck in his endeavors.

PS: My initial opinions were put up on JE's blog here.

Friday, October 28, 2005

Continuous Care Vs Initial Design

In his paper titled Continuous Care Vs. Initial Design, Robert Martin expresses his growing concern regarding the lack of awareness about creating maintainable systems. Much as I mentioned in my first blog entry, 'How do you measure Quality?', he points out that we must strive to finish a task right (for everyone) as opposed to just finishing it. The article goes on to describe why systems, no matter how well designed, can be reduced to a rotting carcass simply through negligence. Every time a design is changed because requirements change in ways the initial design did not anticipate, new and unplanned dependencies can be introduced between modules if the changes are made without carefully considering the system's existing state. The latter half of the paper suggests the application of Agile methodologies to counter such 'rotting'.
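
To make the 'rot' tangible, here's a little before/after Java sketch of my own (hypothetical classes, not from Martin's paper) showing the kind of unplanned dependency that creeps in when a change ignores the existing structure:

    // Before the rushed change: the report module only knows an abstraction.
    interface SalesData { double total(); }

    class ReportGenerator {
        String render(SalesData data) { return "Total: " + data.total(); }
    }

    // After a rushed requirement change: someone wires the report straight into
    // the billing module's concrete ledger class -- a new, unplanned dependency
    // that the initial design never anticipated, quietly coupling two modules.
    class BillingLedger { double grandTotal() { return 42.0; } }

    class RottedReportGenerator {
        String render(BillingLedger ledger) {     // now depends on a concrete class
            return "Total: " + ledger.grandTotal();
        }
    }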

Hmm... we all seem to agree to that. So, what could we be looking at wrong this time? :)
Good old CONTEXT! ;)

You see, Martin talks of designing in the small (Agile). It's about how initial designs can never keep up with changing requirements. What I'm hinting at is that all this talk of Continuous Care really applies to new development: projects being planned now, to be developed tomorrow!

The whole thrust of the paper is towards changing engineering attitudes, about changing methodologies. You can't change your methodology once the plane's taken off! Besides, no volume of care could ever fix a screwed-up initial design. Martin says that, according to the Agile methodology, designs must be built to change, and new requirements can change the design fundamentally too. But how much 'care' do you put in before it's officially called a redesign?

With regards to Agile development, well, I'm not a really big fan of it, but I don't particularly think it's evil. The thing about the Agile methodology is that it's just plain misunderstood. Most uninformed people think being Agile is about getting it done in the simplest possible way in the shortest possible time. Well, I suggest they either read up on Agile development or come up with their own independent manifesto!

Personally, I'm a big upfront-design guy. I find safety in sitting calmly and applying 'care'ful foresight to come up with a flexible, extensible design (I know, I know... what does the customer care! Customers suck!!). That's why I think being open-minded is more important than being Agile.

Sidenote: Here are a few more options, take your pick...

Java is the new COBOL

Alright, now I know for sure that Java is seriously misunderstood! Don't get me wrong, I agree with the title; it's just the freaking reasons people give you for it!! Here's what I think.

Java is the new COBOL because both are...

  • Standardized through a detailed, documented specification -> the Java Language Specification
  • Platform independent, as a direct consequence of the above
  • Significantly readable. In fact, both were designed with readability in mind (COBOL was kinda Programming for Managers!)
  • Built with the intent of separating concerns. Used effectively, they let you manage the business and not the systems.
  • Expansive in their enterprise penetration. Ubiquitous.
  • Always in the background, the unsung heroes!

... not because...

  • Java has turned legacy. What idiot came up with that one?
  • Java is too verbose. OK, Java is verbose; but not TOO, THREE, FOUR...
  • Java makes programming simple. Programming IS hard. Java doesn't make it simple, it just makes it easier to write and read code (see the snippet after these lists)!

... and Java also has to its advantage that it is...

  • General purpose; Desktop, Enterprise, Mobile, Embedded, Realtime, ... did we miss anything??!
  • Deployable in a variety of ways; Application, Applets, Java WebStart, J2EE
  • Easily interfaced to most other systems
  • Open-source friendly; have you seen the number of Java projects running on the Internet!!!!
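
And just to put my money where my mouth is on the readability point, here's a throwaway Java snippet of my own (a hypothetical Invoice class, nothing official): verbose, yes, but the intent reads straight off the page.

    // More ceremony than a scripting language, but every step states its intent.
    import java.util.ArrayList;
    import java.util.List;

    public class Invoice {
        private final List<Double> lineItems = new ArrayList<Double>();

        public void addLineItem(double amount) {
            lineItems.add(amount);
        }

        public double totalAmount() {
            double total = 0.0;
            for (double amount : lineItems) {
                total = total + amount;
            }
            return total;
        }
    }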

Wednesday, October 05, 2005

Wot say, Gartner?

My last blog post - Specialists, Generalists & now Versatilists! [http://thinkaround.blogspot.com/2005/10/specialists-generalists-now.html] - was based on the observations and predictions made by Gartner in their research paper titled 'The IT Professional Outlook: Where will we go from here?' [ http://www.gartner.com/DisplayDocument?doc_cd=130462 ]. Besides predicting a strong market for Versatilists, the paper also described various changes the IT industry is poised to face by 2010. Changes such as:

  • Segregation of the industry into defined focus areas
  • Migration towards the ISV/ISP model
  • Growth in Relationship management and other Business facing positions
  • Increase in demand for Functional experts

The predictions revolved around the percentage workforce shifts that these changes would bring. Although very captivating at first, several predictions raise more than reasonable doubts. Let's see if I can outline the important ones I identified. ;)

To start at the top, the first prediction on the cover says that, "By 2010, the IT profession will split into four domains of expertise: technology, information, process and relationships (0.7 probability)." Strangely, page 3 sees this prediction suddenly jump 10 points to 0.8! Without explanation, I might add.

The third Strategic Planning Assumption on page 3 states, “Through 2010, 30 percent of top technology performers will migrate to IT vendors and IT service providers (0.8 probability).” Does this mean we will see individuals move to vendor/service provider firms or do we expect to see corporations favor the ISV/ISP model? This article was supposed to be for the ‘IT Professional’ but this seems to be more of a corporate viewpoint. Also, I’m assuming the shift is from core Consulting, but then where do all those custom manufactured ‘Harley Davidsons’ of software that form the core IT solutions for the likes of eBay, Wal-Mart and ICICI end up?

The oddities get really interesting once the Analysis takes off. 1.0 Introduction: Setting the Stage says that, “Business skepticism toward the effectiveness of IT, the rise of IT automation, worldwide geographic labor shifts and blended service-delivery models mean that IT professionals must prove that they can understand business reality — industry, core processes, customer bases, regulatory environment, culture and constraints — and contribute real business value to their enterprises.” First of all, what the hell is the blended service-delivery model? A little consulting, some process design and maybe a product (if we can find the time)! HE! HE! But on a serious note, do IT Professionals still need to prove that they can understand business realities? Sometimes I really hate the d***heads that made it big in the early 90’s because they ended up projecting a Programmer as someone perpetually hacking away at ‘alien’ code in some maintenance project. Damn them!

Moving on; 2.1 Global Outsourcing points to the acceleration of the offshoring/cosourcing initiative in various aspects. It just got me thinking, if offshoring were to grow really big, really aggressively, India would end up facing pretty steep competition from China and Brazil!

2.2 IT Automation seems to take the cherry. Just how the f*** do you automate Software Development?! I mean, what, we come up with some sort of adaptive network (a la SkyNet) that simply 'reads' user Requirements and comes up with a solution on its own? Man, that would really sound the death knell for Commercial Software Development!

In 3.0 The IT Profession Splits Into Four Domains of Expertise, the discussion on Technology infrastructure and services predicts that, “routine coding and programming activities will gradually shift to developing economies.” Like we don’t already get enough of that! What do we expect next? That they actually set up the processes whereby, they send us the code snippets, that they want us to splice, into the specified modules, ...

The focus area listing for the Technology Infrastructure and Services domain on page 8 was refreshing. To my joy, they placed Enterprise Architecture right at the top and Web Services very last. But I'm not really sure how Desktop Computing ended up under Infrastructure! Along the same lines, Internet Design and Web Aesthetics somehow found their way under Information Design!

Doubts apart, one of the most absurd observations I made was that the article makes no attempt whatsoever to define the sample/scope used to arrive at all these predictions! Is Gartner trying to mask a set of market hunches behind a veil of numbers, or am I just ranting out of context??!!?? :S

Monday, October 03, 2005

Specialists, Generalists & now Versatilists!

The classic debate regarding the choice between Specialists and Generalists just got bigger. In a recent research paper titled 'The IT Professional Outlook: Where will we go from here?' [ http://www.gartner.com/DisplayDocument?doc_cd=130462 ], Gartner added a new runner to the race: the Versatilist. According to Gartner, "Versatilists, in contrast, apply a depth of skill to a rich scope of situations and experiences, building new alliances, perspectives, competencies and roles." Now that's a mouthful! :)

To set a little background: Specialists build on intensified learning/training to excel in their chosen concentration within their domain, while Generalists prefer the extensive learning/experience approach, accumulating limited exposure to various aspects or concentrations within their domain. But seriously, are these classifications watertight? No one ever got anywhere by fine-tuning themselves to just one paradigm; and I don't want to discuss what kind of impact you could make being the proverbial Jack of all Trades. The former is like finding the Answer to Life in the middle of a desert, and the latter like telling the world that you know how to light a match! The idea of Specialization was introduced at the height of the Industrial Revolution, a time when Capitalism was an accepted (and on occasion necessary) social evil. Generalization was a knee-jerk reaction to over-specialization.

Now we need to build Versatilists: Knights in Shining Armor, wielding all weapons with equal dexterity and skill! How realistic is the idea that a given individual could attain deep skill sets in multiple domains? According to Gartner, a Versatilist picks up greater roles and assignments as they increase the depth of their current skills. But wouldn't accepting a role outside your domain of expertise suddenly leave you out of your depth? How much anticipation could you employ to mitigate this risk?

In my opinion, multi-faceted individuals view themselves in multiple dimensions. Each dimension represents an area of interest, a generalization, whereas the intersections of these dimensions define concentrations, specializations. Each specialization bears at least two domain aspects, automatically multiplying the scope of roles or assignments where these intensive skills can be applied. It turns out to be very difficult to graph this idea, but it can be conceived as a series of overlapping pyramids. Each pyramid represents the skill base in a given domain, peaking at the specialization in that domain. Taller pyramids represent an individual's majors. The overlap of pyramids represents the multiple roles applicable. So, an overlap higher up the pyramids would exhibit a deep skill (a specialization) that may be applied to the benefit of the roles/domains represented by both pyramids.

Thursday, September 22, 2005

How do you measure Quality?

Quality is not a process, but a state. In order to achieve that state we need to follow a specific path. But, to follow a path would mean not to follow other paths. Conventionally, it is this rigidity that ensures the attainment of a defined standard of quality. So, in effect, to achieve quality, we must instill among ourselves a sense of purpose greater than that of merely finishing the task at hand to free ourselves for the next; to achieve, not just a state of completion, but, a state of satisfaction.

Having said that, it is interesting to note that a state offers little to measure other than its achievement! Hence most Quality Metrics aim at measuring our adherence to the defined path (Process). But do retrospective measures prove effective in ensuring, if not enhancing, quality? Could we actually be looking at the whole problem the wrong way round?

I believe a direct measure of the reliability of the planning done to achieve a desired state would be more effective than a measure of adherence. I do not intend to downplay Tracking, but adherence to a faulty plan would logically lead to an undesired, if not faulty, result! In short, the quality achieved at the end of an activity can only be reliably determined if the corresponding planning artifacts are reliable. The following example should give you a clearer view of this approach.

Assume a scenario where billing is effected on the basis of estimates (ASSUME!). It would naturally follow that teams would attempt to provide reliable estimates to guarantee steady billing. Such reliable estimates would automatically warrant a well-defined and accurate means of deriving the size of a project; such objective sizing would in turn demand a clearly defined scope; and the stress on effective Scoping would subsequently translate to better Requirement Specifications.

I know what you're thinking: teams will end up trying to bloat estimates while the client makes it a point to shrink them to the limit. But I believe that this very conflict could result in the creation of an objective, well-defined estimation model approved by both parties. Also, such an approach would enhance not only the reliability of estimates but also adherence, since any positive variance would go unbilled.
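
A quick back-of-the-envelope Java sketch (my own made-up numbers, purely illustrative) of why estimate-based billing forces the issue: the agreed estimate is what gets billed, so any positive variance is absorbed by the team.

    // Billing on estimates: effort beyond the agreed estimate goes unbilled,
    // which is exactly the pressure that pushes teams towards reliable estimates,
    // objective sizing and, ultimately, better scoping.
    public class EstimateBilling {
        public static void main(String[] args) {
            double estimatedDays = 100.0;   // agreed, estimate-based billing baseline
            double actualDays    = 120.0;   // what the team really spent
            double dayRate       = 500.0;   // currency units per person-day

            double billed   = estimatedDays * dayRate;                          // 50,000
            double unbilled = Math.max(0, actualDays - estimatedDays) * dayRate; // 10,000

            System.out.println("Billed:   " + billed);
            System.out.println("Unbilled: " + unbilled + " (positive variance absorbed by the team)");
        }
    }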