Ruminations of a Technologist
Musings, outpourings, feelings...about Technology in the Software and Information Technology world - a world that is constantly changing, morphing, mutating, expanding...a world of infinite possibilities...
Monday, October 9, 2017
An interesting experience the other day made me stop and think about this phenomenon called “digital” that has everyone so excited.
I was driving to work and stopped at a gas station to fill up my tank. However, when I reached for my wallet, I realized that I had left it at home. I had no cash or cards with me. I panicked. I was already running late. I could neither drive away without paying, nor spare the time to go back home for my wallet! In that dark moment came inspiration. I did have my mobile! So I fired up my mobile-wallet app, scanned the QR code at the gas station, paid with a click and was on my way in a jiffy. Woohoo, “digital” to the rescue!
That set me thinking. Why was this experience so powerful? It was like having an ATM where I needed it, when I needed it! Not only that: to the vendor, it was like having a cash register connected directly to their bank account. If the gas station had an NFC-capable POS, I could have moved money from my bank to theirs just by waving my phone! “Digital” is omnipresent. Bank accounts, health data, weather information, maps and directions, public records, music, movies, books, real-time video from across the world, a dictionary or the complete encyclopedia – you name it, you have it – whenever you want it, wherever you want it. Today, it is with us through gadgets and wearables, and all around us in thousands of digitally enabled “things”. Tomorrow, it will be within us through implantable chips. And one day, at the moment of singularity, when the boundaries between brain and chip dissolve, it will be us.
Is this just about being everywhere? No, that is just the beginning. Not only is “digital” everywhere, it is also interconnected: an internet of digitally enabled “things” bound together in a myriad of connections, talking to each other, exchanging information, acting in concert. And when the light of AI and analytics is thrown on this vast pool of interconnected data and events, “digital” becomes omniscient. Google Maps in my car can see the data from all the other cars along my route to tell me that I am heading into a traffic jam, immediately sifting through all the other possible routes to come up with the best alternative - all this while my phone continuously receives positioning signals from satellites thousands of miles up in space! If that is not a pretty good definition of “all-seeing”, tell me what is.
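For the algorithmically inclined, here is a minimal sketch of the kind of live rerouting described above. Everything in it is invented for illustration - a toy road graph and hypothetical congestion reports - with shortest-path search (Dijkstra) over traffic-weighted edges standing in for whatever the real service runs:

```python
import heapq

def best_route(graph, start, end, congestion):
    """Dijkstra over a road graph whose edge costs (minutes) are inflated
    by live congestion reports (a factor >= 1.0 per road segment)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == end:
            break
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, minutes in graph.get(node, []):
            candidate = cost + minutes * congestion.get((node, neighbor), 1.0)
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor], prev[neighbor] = candidate, node
                heapq.heappush(heap, (candidate, neighbor))
    path, node = [], end
    while node != start:  # walk the predecessor chain back to the start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[end]

# Toy example: the usual route A->B->D jams, so the planner picks A->C->D.
graph = {"A": [("B", 10), ("C", 15)], "B": [("D", 10)], "C": [("D", 12)]}
congestion = {("A", "B"): 4.0}  # hypothetical report: this segment is 4x slower
print(best_route(graph, "A", "D", congestion))  # (['A', 'C', 'D'], 27.0)
```

The interesting part is not the algorithm, which is decades old, but that the congestion weights now arrive in real time from thousands of other cars.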
Now what happens when these astounding powers of digital are coupled with robotics and sensors? The results are truly magical. Scientists can effortlessly control a Mars rover millions of miles away, surgeons can conduct extremely complicated surgeries remotely, cars drive themselves and warehouses become completely automated! “Digital” is becoming omnipotent. As more things become digital, as computing power multiplies and as more and more data is collected, the possibilities of digital keep expanding.
But why now? Why did the digital curve jump the chasm only recently? To explain, let me take you back to an interesting product that I built more than fifteen years ago as part of a team at HP’s Advanced Technology Labs. We were building a banking product based on “Zero Latency Enterprise” technology. While a customer withdrew cash at the ATM, a recommendation engine would do a near-real-time analysis of the customer’s relationship history and come back with personalized up-sell and cross-sell offers on the screen. If the customer opted in, an email would be sent to the salesperson’s Palm Pilot to initiate the necessary action. This was as close to “digital” as you could get fifteen years ago. But the ecosystem that truly propels digital to its current omnipresent, omniscient and omnipotent capabilities simply did not exist in those days. Our recommendation engine did not have insight into the customer’s online activities, social circles, loyalty and credit cards, location history, online and offline purchases, club memberships, investments, online-trading transactions – the mountain of data that is available today. Nor could we reach the customer anytime, anywhere through their smartphone to make offers at the point of purchase or to send location-aware promotions. And AI and robotics were still in their infancy. You can see how the possibilities were limited even a few years ago.
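For flavor, here is a minimal, hypothetical sketch of that style of event-driven offer generation. The customer attributes and the offer rules below are invented purely for illustration; the real product's logic was, of course, far richer:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # A hypothetical slice of a relationship history, illustrative only.
    avg_balance: float
    has_credit_card: bool
    has_mortgage: bool

def offers_for(customer: Customer) -> list[str]:
    """Tiny rule engine: map relationship history to up-sell/cross-sell offers."""
    offers = []
    if customer.avg_balance > 10_000 and not customer.has_credit_card:
        offers.append("Pre-approved platinum credit card")
    if customer.avg_balance > 50_000:
        offers.append("Wealth-management consultation")
    if not customer.has_mortgage:
        offers.append("Discounted home-loan rate")
    return offers

def on_atm_withdrawal(customer: Customer) -> None:
    # Triggered while the withdrawal is processed; offers go to the ATM screen.
    for offer in offers_for(customer):
        print(f"Show on ATM screen: {offer}")

on_atm_withdrawal(Customer(avg_balance=60_000, has_credit_card=False, has_mortgage=True))
```

The rules were the easy part even then; what was missing fifteen years ago was the breadth of data feeding them and a channel to reach the customer anywhere.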
“Digital” has come of age today because the network effects of the Internet of Things have crossed a tipping point, compounding the impact of the technology. I am not omniscient enough to see where this new technology wave will take us, but I can definitely advise you to grab your surfboard and start riding it – and be prepared for a wild ride!
Sunday, May 28, 2017
So you think you are a Technology Company? Think again.
I was recently asked to speak on technology at our company town hall, and since our company is in the middle of a transformation to a “Technology Company”, I decided to make that the anchor point of my talk. Why? Because I have found that most companies and their employees do not really “get” this concept, leading to conflicting interpretations and, ultimately, lost opportunities. I wanted to make sure our employees got it right.
As I thought through my talking points, I came up with a small list of misconceptions about what a Technology Company really is. I decided to share all the thoughts I had gathered here, in anticipation of an interesting discussion and feedback from a larger forum…
So what is a “Technology Company”? The most common explanation I have heard is that a Technology Company is one that effectively uses technology to be successful in business. If you let yourself believe that, you are in deep trouble, in my humble opinion. Merely “using” technology may have been a good strategy in the 70s and 80s, but today, being technology-enabled is just “table stakes”. Can you name any business today that has not adopted technology? You can’t, right? Even local mom-and-pop shops now use mobile payments and take delivery orders on WhatsApp. So, would you call all of them Technology Companies? I am sure you wouldn’t. So, what is it that Technology Companies do differently?
Well, to answer that, let me borrow a term coined by Tom Peters. The term he uses is “Re-Imagine”, which I think perfectly captures the essence of what I am attempting to convey. Technology Companies are companies that use technology to Re-Imagine business, rather than to just automate or enable existing business. They use technology to find completely new ways of doing business; they even invent new kinds of business on the strength of technology. At a Technology Company, technology defines business, rather than the other way around. Ask the big banks, and they will tell you how they thought interconnecting their branches and installing billion-dollar “core-banking” systems to centralize their operations would be all the technology play that was needed, only to be left gaping at upstarts like Atom (UK) and Ally (US), the new “online-only” or “direct” banks that have redefined (Re-Imagined!) what the banking business is. This quote from an article in Wired magazine (http://www.wired.co.uk/article/digital-only-banks) sums it up nicely: “I could make a compelling argument to say that Atom is actually a data company that happens to have a banking license”. Well, there you have it. I could not have put it any better myself. But wait, it gets even better: “Atom's competitor, Mondo, is perhaps best known in the tech world because it has run a series of hackathons to imagine new banking functions”. See what I mean?
The second common fallacy I find is the idea that Technology Companies are about the latest and greatest technologies. I have seen companies become veritable museums of new technologies in an effort to embrace all that is new and shiny, thinking that just being on the latest technology will give them a business advantage. They keep building “next-gen” versions of their existing products using new technologies, thinking that will automatically make the product better. No. Technology is just a tool; it is how you use it that counts. Yes, the latest technologies may have capabilities that could give you advantages, but only if you apply those capabilities to business in innovative ways. Try this exercise: list the top 50 companies by R&D spend, then list the top 50 most innovative companies, and see how many names appear on both lists. Surprised at the lack of correlation? You shouldn’t be. Re-Imagination happens when business ideas and technology capabilities come together in previously unconceived ways to produce completely unforeseen outcomes. That is why it is so difficult. That is why it so often blindsides established players. And that is why it is so valuable.
As I was finishing up my preparations for the talk, I tried to anticipate counter-points that would come up, so that I could go in prepared. One of the top questions I knew folks would have was: “Hey, we have been in this industry for decades, and have unique knowledge and expertise in the domain, and that is our biggest competitive advantage. Why do we need to become a Technology Company?” Very good question, and many companies think along these same lines and go: “Oh yeah, that’s right! So, if I use technology to automate or enable these unique business strengths I have, I should be a winner on all counts, right?” Umm…hold on. I know that logic sounds very reasonable, but don’t jump to conclusions yet; follow me closely here. The problem is that your “unique business knowledge and expertise” is competitive in the context of how business is done today! It is tuned to the way the business works today. What if the context changes to a “Re-Imagined” way of doing business? Would your well-practiced orchestra go completely out of tune? Would your years of ingrained processes and culture make it all the more difficult to turn the ship and catch a new prevailing wind? You bet they would! Tell me, did Amazon have decades of experience as a book-seller? Or was Uber a taxi-fleet operator for a century? They are true examples of how Technology Companies change the game.
Well, that brings us to one final interesting question: if you are a true Technology Company, do you always remain so? Can you rest on your laurels and be assured of continuing success? Aha! Life is never that simple. Remember, you rose to ascendancy because you were the first to use technology to “change” how business was done; so if you stop changing, what happens? Yes, you are right: your competitors will soon catch on. What was unique to you will soon become established business process! Your days ahead of the pack will be numbered, and soon the herd will be all around you. And then you will be as susceptible as the others to a new upstart who changes the game again. So you see, to retain your Technology Company badge, you will have to keep using technology to change the game. Look at Facebook, the guys who wrote the rule-book on social media; they are now trying their best to catch up to the new brat in town – Snapchat! The only company (that I can think of) that has relentlessly held on to the Technology Company badge is Apple. No wonder both their customers and their shareholders love them so much.
Hope I was able to get you to think differently about this topic. So, what do you think? Do you agree? Which of the fallacies listed above have you encountered? Do you have some more examples? What are your examples of great Technology Companies? I would love to know. Or, do you disagree? If so, please enlighten me with your point of view through your comments.
Sunday, February 5, 2017
Flying the product kite - creative tussle between Product Management and Engineering
If you have built software products as part of either Engineering or Product Management, I am sure you have often wondered why these two roles sometimes seem to be at cross purposes. Product Management always seems to want the greatest features now, and engineering always seems to be explaining why a feature is technically infeasible or difficult to complete in so short a time. You are very sure both sides are trying to bring their best to the game, and you keep wondering how they can play better as a team.
These kinds of experiences led me to the need for a simple analogy describing the relationship between Product Management and Engineering - something that teams could easily grasp, something that would stick. So where did that search lead me? It led me back in time to the days of childhood and the joy of flying kites!
In my model, Product Management can be pictured as a kite, soaring among the clouds, with Engineering as the little kid on the ground, holding the string firmly. (Note: this is just my way of looking at this relationship. You readers may have other ideas or opinions; I would love to hear about them, and I look forward to your comments.)
Product Management is up there facing the winds of change blowing through the business environment, trying not to be left behind. They have their head truly in the clouds, thinking up grand new features to leave competitors languishing in the lower echelons. Being up high, they are also able to gaze at distant horizons and see the future of the industry, and they may also look through their telescopes at other kite-flying teams to see what the competitors are up to. Engineering, on the other hand, is the kid running around on the tough technology terrain, trying to avoid prickly technical problems and the hard rocks of architectural dead-ends. They are the anchor, grounding the product in solid engineering, the voice of practicality and logic and reason that keeps the kite from being torn apart by the wind or snared by the electric pole. And just like the skillful interaction between kite and flier helps them reach new heights, close coordination between Product Management and Engineering is the only way to launch a successful product and keep the organization's banners flying.
If you have flown kites, you know that the only way to make the kite rise is to pull on the string, against the wind. Similarly, Engineering needs to keep a firm hold on the string to help and guide Product Management through turbulent business scenarios. Another thing you will know from your kite-flying days is that to get the kite higher up in the sky, you need to alternately pull on the string and release it, allowing more and more of the string to play out and carry the kite higher and higher. This is very important for Engineering to understand. The pulls on the string are the periods of engineering hardening, where feature creep is kept on a tight leash and the product's resilience and performance are improved. The releases of the string are the innovations, hackathons, new-technology adoptions and release marathons that Engineering undertakes to feed Product Management's needs for better features, improved user experiences and insightful business intelligence.
Kite-flying disasters are quite common when either of the parties stops playing as a team. An unruly kite that fails to respond to the inputs of the flier ends up in tatters or high-up on a tree, and a flier that pulls too insistently on the string is left with either a stalled kite or a broken string. These are important lessons for Product Management and Engineering to keep in mind.
By the way, in no way do I want to imply that these roles of kite and flier are rigid and exclusive. Far from it. Good engineers are expected to understand business and be aware of developments in the domain, and if they do, they can also become partners to Product Management in driving features. I have seen many instances of this happening. I have also seen equally commendable cases of Product Management being cognizant of the challenges faced by technology, and working with Engineering to create a road-map that provides enough space for deep technology transformations. So yes, the parts played by the kite and flier can overlap, and they often do. However, the analogy presented here does provide a very simple story of what the generic and ideal relationship between Product Management and Engineering should be like.
What do you think? Do you see your kite-flying lessons being as handy in product development and engineering as envisioned here? Or do you feel that product development is too serious a sport to be compared with mere kite flying? Do let me know through your feedback.
Saturday, August 6, 2016
Technological Advances, or just "degrees of separation"?
I am very sure you have heard about the "Six Degrees of Separation" theory as a party topic or at the office water cooler. Interestingly, as computer hardware, software, and networking have advanced through the ages, computing has gone through its own "degrees of separation".
In the beginning, everything was one big block - the hardware, the OS, the applications - everything came from the same vendor and ran on the same box that took up the space of a house! Things were simple; it was a close-knit family living in a single room.
Then, in the mid-to-late 1960s, the first "application software" was developed and sold by a third party. This was a major step, since until this point, business software applications had been bundled along with the hardware and OS, and no one thought it could be any other way. Now the computer hardware companies were joined by a new class of software companies - the Independent Software Vendors. Thus were giants like Microsoft born.
Meanwhile, the dumb terminal had separated from the mother-ship, and we were on to the next era of separation - the client-server era! This hardware separation was soon copied into software-side separation too, and voila, we had our "two-tier architecture"! Well, three is always better than two, right? At least the architecture pundits thought so, and stretched the two-tier architecture to create the new "three-tier-architecture", leading the English dictionary to create space for a new word in our vocabulary - "middleware".
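To see what that separation buys you, here is a toy three-tier split in Python - presentation code that knows nothing about storage, business logic in the middle, and a data tier behind it. The names and the in-memory "database" are mine, purely for illustration:

```python
# A toy three-tier split: presentation, middleware (business logic), data.

DATABASE = {"alice": 1200.50, "bob": 89.99}          # data tier

def get_balance(account: str) -> float:              # middleware tier
    """Business logic: validate the request, then hit the data tier."""
    if account not in DATABASE:
        raise KeyError(f"unknown account: {account}")
    return DATABASE[account]

def show_balance(account: str) -> None:              # presentation tier
    """Client: formats what the middleware returns; knows nothing of storage."""
    print(f"{account}: ${get_balance(account):,.2f}")

show_balance("alice")  # alice: $1,200.50
```

Swap the function calls for network calls and you have the classic three-tier deployment: each tier can now live on its own machine, which is exactly where the story goes next.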
Things were going well in the three-tier world for some time, and then the industry was bitten by the separation bug again. We started hearing about "distributed" systems. Everything could now be distributed - services, servers, databases, disks - and they could be distributed around the room, around the data center, or across larger LAN/WAN setups. We were now in the world of "n-tier" architecture, and our single-room-dwelling family had now separated, divided up into hundreds of sub-members, and sprawled out across town and country.
However, the story was far from over. Before we had time to lament the break-up of the close-knit family and their scattering all over terra-firma, the separation drama reached for the clouds. And when mobile joined the party, things went really crazy! As I sit here and type on my web browser, it looks like it is all happening on my lap (now, don't get any ideas, all I mean is it is happening on my laptop...), but in reality, the server sending me this page could be anywhere on earth, the database storing my words could be at the other corner of the globe, and my precious text could be merrily flying around clouds, traveling over thin air or undersea cables. It truly is mind-boggling. The single-room dwelling is now a global village. How separated is that?
But why limit our thoughts to mere earth? The farthest computer today is probably on the Voyager 1 spacecraft, which is a mind-boggling 20 billion kilometers away from the earth and counting, and if a client on earth were to "ping" the server on Voyager 1, it would take about 38 hours to come back with a response, since that is the round-trip time for light to Voyager 1! (http://voyager.jpl.nasa.gov/where/) Talk about a really slow network! So, it is not hard to imagine the day when our computing cloud will be separated across billions of miles of interstellar space.
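The arithmetic is easy to check - a back-of-the-envelope calculation of that light-speed round trip, using the rough 20-billion-kilometer figure above:

```python
# Back-of-the-envelope check of the Voyager 1 "ping" time quoted above.
SPEED_OF_LIGHT_KM_PER_S = 299_792.458
distance_km = 20e9  # roughly 20 billion km; NASA's exact figure keeps growing

one_way_hours = distance_km / SPEED_OF_LIGHT_KM_PER_S / 3600
print(f"One-way light time: {one_way_hours:.1f} hours")      # ~18.5 hours
print(f"Ping round trip:    {2 * one_way_hours:.1f} hours")  # ~37 hours, close to the ~38 quoted
```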
Well, enough about separation, it is time for my separation from this close-to-too-long post. See you soon in my next post.
Sunday, May 19, 2013
The Success Paradox
I have been advising customer CTOs and VPs on their product strategies, product roadmaps and modernization efforts. In the past I have also led new product development, and have had engineering ownership of a mature banking product with over 500 customers. Looking back at what I learnt from all of this, I see a clear trend: the more success a product has had in the recent past, the higher the chances that engineering is nearing a dead-end. The greater the past success, the more difficult it is to achieve the next engineering leap needed for the product's future success. And, on top of that, the longer you wait to take that next leap in technology and engineering, the worse it gets.
Why does this happen?
Let us assume you head engineering, and have started building a new product. You have a clear vision of what the product needs to do, and how you are going to achieve it. The design, architecture and roadmap of the product are based on this initial vision. The choice of technologies and tools is similarly based on current needs and current availability. Everything goes well in development. You release the product into the market and sit back and relax, expecting to keep working on the roadmap at your own pace and priority. Suddenly, your product picks up! You have new customers signing up every day, and guess what, your plans are hijacked. The business wants to capitalize on the momentum and starts pressuring you to deliver new features faster, features that you had never planned for. Customers start getting pushy about their defects and their feature requests. The load on the system keeps increasing dramatically. You hire a larger team, go with the flow, start churning out releases by the dozen, add many new features, increase the infrastructure footprint, integrate with a bunch of partner products... you are running just to keep up!
The years fly by, and one fine day, business comes back and tells you: "the product is not good enough, and engineering does not seem to be able to give us what we need in time". What?! Are we talking about the same product that was beating the charts 5 years ago? Yes, we are. Unfortunately, while you were busy fixing bugs, adding new features, improving performance to meet the increasing load on the system, and fighting off impractical feature requests, the world has moved on. Competitors have come out with cooler stuff built on newer technology. Their solutions are more modular and integrate easily with other services. They are more nimble and agile. Your technology, on the other hand, which was shining-new at inception, is now rusty; your architecture looks dated and monolithic; your interfaces are not open enough. And guess what: over all those years, as you were madly keeping up with "business as usual", technical debt was silently creeping up behind you. The trickle of technical debt that you always planned to catch up with in the next release is now a mountain, blocking your way to agility, nimbleness and efficiency. Each new feature now takes longer to develop, and costs more. No wonder business is complaining!
I see this story repeated again and again.
So, what is the solution?
Well, once you get to this state, there is no easy way out. So my suggestion is: never let yourself get to this stage. Keep "watering the roots" - keep looking at ways to improve the architecture, keep refactoring and paying down tech debt, keep an eye out for new technologies and trends and adopt what is necessary, keep in tune with business strategy and align the product roadmap accordingly, and, of course, use an Agile or Lean development methodology. These will help, but will not insulate you completely. You will still have challenges. But just being aware of the paradox and taking adequate steps should make life much easier.
Saturday, January 12, 2013
Web 3.0 - are we there yet?
We are now all too familiar with Web 2.0. It has been around for some time, and we have heard a lot about how it has transformed the world of the internet. Now, with the advent of HTML5 and the rapid developments in "rich media" and the "responsive web", Web 2.0 already seems like a relic from the past. So why are we not getting to Web 3.0 yet?
Don't you know? Web 3.0 is already here! "Why did I not hear about it?" - you ask. Well... remember, Web 2.0 was more a marketing term than anything else. It was used to put a label on the state-of-the-art web of the time; it was a handle technology marketers could use. It was never really a "technical specification". So, though I find that in many ways we are already into Web 3.0, we are still waiting for someone to turn on the marketing and publicity blitz to make us sit up and take notice.
Why do I say we are already into Web 3.0? Just as Web 2.0 was defined by a few major things - the democratization of the web, asynchronous calls (AJAX), and subscriptions and feeds (RSS etc.) - Web 3.0 is supposed to be built on four key concepts: the semantic web, personalization, artificial intelligence and "anytime anywhere" access. All of these are already available today in some form or other! Twine, which was first announced way back in 2007, was a good attempt at a semantic web. Though it did not succeed, it laid the foundations. Today, many social networking and search sites use semantic search. iGoogle is the best example of personalization, and it is very much here. Artificial intelligence is evident in many of the features of popular sites, be it the graph searches of Facebook, or iGoogle, or Siri. And need I say anything about "anytime anywhere"? It is one of the most-heard terms these days.
So believe you me, Web 3.0 is already here! If you are interested in knowing more about Web 3.0, this site has links to some wonderful material.
Sunday, August 28, 2011
Technology Deja Vu
I feel like I am in the middle of that popular sci-fi film - "Back to the Future"!
Every new technology that hits the headlines seems to remind me of something I have seen before. It brings back memories of the past, it raises the same old questions, and the concept does not feel truly "novel".
Do you feel it too? If you have been around the IT industry for more than a couple of decades and have earned your programming chops on the trusty old mainframes, I bet you do get that feeling, right?
Enough of talking in the abstract. Let us look at some examples...
Let us start with that prime example of new technology - the cloud. Wow, you can now have your software and services run anywhere out there in the wild, and access them at will! You need not know where they are running, you need not worry about resource constraints (within limits, since you can set things up to be elastic), and you need not worry about downtime. Heavenly, isn't it? Yes, it is, but is the concept new? Were things very different in the "multiprocessor" days of the mainframe? We never used to bother where our processes and programs were running, and resources were not usually a constraint either. And downtime? Well, in my days as a Tandem (later Compaq NonStop, and then HP NonStop) programmer, I remember the demos at the Cupertino (California) labs, where customers were shown true redundancy - you could bring down a CPU, pull out a memory or network card, and voila! the system would continue as if nothing had happened! Was the concept of having your software programs, processes, and services running on a "processor farm" with true redundancy built in very different from the concept of your services running on a "server farm" with cross-region redundancy? The scale is different, of course, but I believe the template is the same.
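The redundancy template itself is simple enough to sketch. Here is a toy illustration (the replicas are hypothetical stand-ins, whether for mainframe processors or cloud servers): the caller's request survives the failure of any single replica:

```python
def call_with_failover(replicas, request):
    """Try redundant replicas in turn; the caller never sees a single failure."""
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError:
            continue  # this replica is down; fail over to the next one
    raise RuntimeError("all replicas failed")

# Hypothetical replicas: the first is down, yet the system carries on.
def dead(request):
    raise ConnectionError("replica unreachable")

def healthy(request):
    return f"handled: {request}"

print(call_with_failover([dead, healthy], "withdraw $100"))  # handled: withdraw $100
```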
Hadoop is making waves with its capability to split large workloads into smaller chunks and then ship them to separate machines, which can then chew on these bites in parallel. That must be a new concept, right? Well, I think not. In the mainframe world, we had the concept of DB queries being broken down into smaller chunks during the "compilation" of the queries. These chunks would then be intelligently shipped off to the "disk processes" that the RDBMS ran close to each physical disk. The idea was that each work-unit of data processing would be done by the disk process closest to the physical disk the data resided on - thereby guaranteeing the best performance. Again, the scale and infrastructure today are different, but the concept is tried and tested.
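To make the split-and-ship pattern concrete, here is a minimal sketch of the idea in Python - a toy word count, not Hadoop's actual API - where chunks are mapped in parallel worker processes and the partial results are then reduced:

```python
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Map step: count words in one chunk, independently of the others."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def word_count(lines, workers=4):
    # Split the workload into chunks, ship each to a worker, reduce the results.
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)
    return sum(partials, Counter())  # reduce step: merge the partial counts

if __name__ == "__main__":
    corpus = ["the quick brown fox", "the lazy dog", "the fox"] * 1000
    print(word_count(corpus).most_common(3))
```

Replace the worker processes with machines and the in-memory chunks with blocks of a distributed file system, and you have the Hadoop picture - or, squinting a little, the old disk-process picture.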
And what about all the excitement around responsive and interactive web pages? Aren't they making web pages heavier and heavier? Aren't they moving more and more processing to the client? Aren't the heavy JavaScript frameworks, with their AJAX calls to servers, listeners and callbacks, starting to look more like the client-server technology of the old days? I leave it to you to decide...
So, what is really happening here? In my opinion, the new capabilities of the hardware and infrastructure are allowing us to use the old concepts in new ways. The massive scaling possibilities of connected processors are feeding the cloud and technologies like Hadoop. The increasing capabilities of mobile devices are fuelling heavier clients. Thus, it is more like old wine in new (and much larger) bottles! I would really love to see some truly new wine, though.