Monday, October 30, 2006

Give your ITSM implementation a kick in the SaaS

I’m a strong believer in the ‘On-Demand’ model of software, also called ‘Software-as-a-Service’ (SaaS). When I see articles titled “IT Execs To Vendors: Your Software Stinks”, it only increases my belief that ‘On-Demand’ might be a great way to give your traditional tool vendors a kick in the SaaS and accelerate your implementation of ITSM at the same time.

Everyone knows that tools will help automate processes, but when do you apply them and at what cost? Trying to wait until some process improvement cycles are completed is the best practice approach, but the pressure to automate is unrelenting and the big gorillas only crank up the pressure even more by offering Service Desks, CMDBs and visions of nirvana.

I’ve already spewed on about CMDB madness and the savage journey that can bring (see Avoiding a Savage Journey on the Road to ITSM Excellence) so I won’t continue that here, but trying to 'do it by the book' can be just as bad. The fact is, tools can help significantly!

The sad reality is that automation is one way to help ‘do more with less’, but often customers are simply not ready to make large strategic investments in software early in the journey. Trying to go through process implementation without any tool investment can result in hitting the wall, and making investments before you’re really ready can significantly increase the risk of making a bad investment.

Leveraging SaaS tools is one way to bridge that gap. Traditional software products often require lengthy implementation cycles -- which is why SaaS may not be an option for them -- but products designed from the start for SaaS delivery can be used very effectively in ITSM implementations.

I Pity the Fool who doesn’t use a Tool!

Subscribing for a 90 day implementation can quickly provide value for the ITSM implementation, while offering a pilot to measure the value from the tool at the same time:

“One thing we do, which I think every customer should demand, is a pilot. The full implementation of a robust enterprise solution is up and running very quickly, they use it for 90 days, and if they are making money from it, they decide to subscribe for some longer period. If they haven't proven value to themselves before then, why do it?” – SaaS Skepticism is Predictable, IT Business Edge, 9/28/2006

So, don’t be ‘a fool with a tool’, but don’t be a fool without one either. Kick some SaaS and get on the Right Road.
As posted on Fear & Loathing on the Road to IT Service Management Excellence

Saturday, July 29, 2006

Implementing a CMDB is Like Blogging Alone: Why Products & Process won’t be enough to reconnect with the business

Starting your ITIL journey with a very complex, usually expensive, lengthy and often invasive technology-based initiative may only serve to increase the divide between IT silos and, more importantly, IT and the business. In a similar vein, too much focus on process may simply lead to more policy and procedure manuals that sit on a shelf.

The problem with Change, Configuration and CMDB implementations is they do not really enable a real-time connection between IT staff, and between IT and the business, which tends to perpetuate vicious cycles of tribal warfare.

“When people lack connection to others, they are unable to test the veracity of their own views, whether in the give or take of casual conversation or in more formal deliberation. Without such an opportunity, people are more likely to be swayed by their worse impulses….”
- Robert Putnam (2000) Bowling Alone: The collapse and revival of American community, New York: Simon and Schuster: 288-290

In the book Bowling Alone, “Putnam warns that our stock of social capital - the very fabric of our connections with each other - has plummeted, impoverishing our lives and communities … we sign fewer petitions, belong to fewer organizations that meet, know our neighbors less, meet with friends less frequently, and even socialize with our families less often. We're even bowling alone.”

The focus on Process (BPM, ITIL, CobiT, et al) and Products (read CMDB, SOA, et al) by IT leads me to believe we’re talking more than ever – but sometimes communicating even less than ever before.

I like the concept of blogging so much I’ve found myself actually Blogging Alone! (Personally, I’d rather bowl alone than blog alone, so please visit my blog!) The hype around the CMDB can have a similar effect on your ITIL implementation.

It’s the People

It’s the social networks that really make things happen in most companies, not those dusty old policies and procedures. It is the network of people-to-people commitments that often makes things go (or not go).

So, when looking to embark on a ‘quality journey’, remember that at the end of the day it’s the people --- and that intricate social network of commitments --- that are often the ‘current state process’, and that people may fiercely protect this tribal knowledge.

Process, Products and Paradigm Shifts

In a recent webinar more people were familiar with the CMDB than with ITIL (see EMA’s webinar: CMDB Adoption in the Real World - Just How Real Is It?), which was interesting considering that the CMDB is very much an ITIL term. Just shows you what market opportunity will do to reality.

Getting your IT staff to achieve the paradigm shift to a services orientation is going to require people skills more than anything else, and your selection of tools --- particularly early in the journey --- can significantly impact how people react to the implementation of IT service management.

Services, Stakeholders and Real-Time Analytics

Stakeholders & Services targeting is a fundamental best practice that is often ignored or skipped as customers try to “accelerate” implementation. This often means the implementation of ITIL considers the business from afar, rather than as part of a cross-functional team.

While this may provide an easier path to get the ball rolling, at some point the business had better become part of the team. Process and commitment based stakeholder analysis leveraging both business and IT tracks can ensure that all stakeholders are included and services are understood from the customer’s perspective.

Starting with the end in mind assumes IT truly understands the business process, when sometimes that process is not that well understood even by the business! It also drives participatory decision techniques, which are successful more than 80% of the time.

In addition, Product-led ITIL implementations are likely to focus on the technology, particularly when the supplier is also driving process improvement activities. (The ITIL literature has spoken at great length on this subject.)

When considering process improvements and investments in automation, consider the following:

- Investments should target the areas of highest return
- Investments should enhance ITSM process communication
- Investments should be consistent with business and IT objectives
- Investments should be driven by stakeholder input

Realizing the Paradigm Shift

I understand and agree that an ITIL journey needs to include eventual design and implementation of things like Change and Configuration Management, the CMDB, and other critical process and technology related efforts.

However, making these investments should be driven by participatory decision techniques and should enable every tribe to see the same information at the same time (a fundamental CMDB concept). The evolution to an ITIL-based CMDB is going to take time and significant effort, in most cases at least a year.

But achieving a paradigm shift involves people. Process and Product-centric implementation efforts can lead to edict-based decisions (Change freezes, etc.) which are the least successful of all decision techniques.

Service-oriented monitoring, particularly where real-time analytics can be incorporated into the solution, can provide every stakeholder with an end-to-end view of the IT business service infrastructure that is tailored to their perspective --- without the time, cost and risk of implementing a CMDB. (See the White Paper, Choosing a monitoring system for your IT infrastructure?)

It also focuses on where most companies are spending the most money: isolation and diagnosis of complex, n-tier infrastructure problems.

These kinds of solutions can provide an intelligent, virtual operations bridge which is absolutely consistent with best practice. More importantly, it provides IT and business management with a solution that can help with the hardest part of change --- people.

The ROI on People

Quickly providing a real-time source of truth, via a ‘top-to-bottom’ and ‘end-to-end’ IT business service infrastructure monitor with root-cause analytics, can help people get focused on the real problem and learn to trust each other. (When driven by the business, it can also provide political cover for IT tribes since it becomes a business-driven mandate.)

The argument of those concerned with social capital is that when harnessed it generates economic returns. More particularly, the benefits claimed include:

Better knowledge sharing, due to established trust relationships, common frames of reference, and shared goals.

Lower transaction costs, due to a high level of trust and a cooperative spirit (both within the organization and between the organization and its customers and partners).

Low turnover rates, reducing severance costs and hiring and training expenses, avoiding discontinuities associated with frequent personnel changes, and maintaining valuable organizational knowledge.

Greater coherence of action due to organizational stability and shared understanding. (Cohen and Prusak 2001: 10) (from Social Capital in Organizations)

Providing the ability to monitor what is happening at every layer of every component of an end-to-end business service, and automatically identifying which layer of which component is the source of a problem, establishes a basis of real-time truth. This is the key to establishing a real and lasting paradigm shift.

Your road to ITIL best practice does not have to be a savage journey. Consider an implementation approach based on stakeholders, services and intelligent service monitoring. By applying all the best practices -- Process, Products and People – you can achieve both ROI and a quality culture along the way.

Wednesday, July 05, 2006

CMDB interoperability: Waiting for Nirvana

Have you heard the good news?

The 800-pound gorillas have formed an 'alliance' in order to provide interoperability between their respective CMDBs. Of course, I thought OASIS~DCML had been working on that, but I admittedly couldn't tell you technical folks squat about OASIS~DCML or what the hell our 800 pound friends are up to...maybe somebody who can translate the geek-speak into a language we can all understand will help us...

as for me, I got some serious deja vu going on....but perhaps more importantly, if you're implementing --- or want to implement --- IT service management best practice based on ITIL how does this impact your Road Map? Should you shout hallelujah and just trust your 800 pound gorilla of choice to provide interoperability someday as promised? (If you do, I have a bridge I'd like to sell you)...

no, there are other (safer) options available to you. You KNOW that you must start with an analysis of your current processes first, so don't even think about tools until you've completed this step. However, if you've analyzed your processes and believe that automation (via a CMDB tool) is in order consider this:

  1. The CMDB, like any tool, must automate your processes based on 'Where You Are Today'
  2. The CMDB, like any tool, must provide a clear business case
  3. The CMDB, like any tool, should create value to the organization QUICKLY

Of course, even if you decide you want a CMDB you'll have to understand and define those nasty relationships between CIs (which really means at least some degree of SLM --- ok, ok so we buy an SLM tool right? NOT )...

and how about the fact that (according to IDC, et al) most of the savings attributed to IT service management seem to be focused on more effective and efficient problem isolation & diagnosis (see Building an ITIL Business Case?...Slow & Steady Wins the Race)

finally, ask yourself: How long will it really take to achieve a CMDB as ITIL defines it? (see some interesting discussion at the ITIL skeptic)

While it's hard to question the staying power of 800 pound gorillas, there are some tenacious little badgers in the forest that can really help focus your Journey on the Right Path without holding you hostage waiting for interoperability nirvana. One of these is one you've heard about from me many times, as I'm a former customer and 'true believer'; a small firm called eG Innovations.

This company has spent less time hyping ITIL and CMDB and much more time keeping their eye on the effective & efficient problem isolation ball...quite simply, the software leverages a patented data flow and dependency based correlation logic that enables them to monitor what is happening at every layer of every component of an end-to-end business service, and automatically identify which layer of which component is the source of a problem. No rules to write, no code, no kidding!
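To make the principle concrete, here is a minimal sketch of dependency-based alarm correlation. This is purely illustrative -- it is not eG's patented algorithm, and the component names are invented -- but it captures the core idea: when alarms cascade across tiers, the likely root cause is a failing layer whose own dependencies are still healthy.

```python
# Illustrative sketch of dependency-based alarm correlation.
# NOT eG's actual algorithm; component/layer names are hypothetical.

def root_causes(dependencies, failing):
    """dependencies maps each layer to the layers it depends on;
    failing is the set of layers currently raising alarms.
    A failing layer is a likely root cause only if nothing it
    depends on is also failing (otherwise it is just an effect)."""
    return {
        layer for layer in failing
        if not any(dep in failing for dep in dependencies.get(layer, []))
    }

# Example: the web tier depends on the app tier, the app tier on the database.
deps = {
    "web:response_time": ["app:jvm"],
    "app:jvm": ["db:queries"],
    "db:queries": ["db:disk"],
}
alarms = {"web:response_time", "app:jvm", "db:queries"}
print(root_causes(deps, alarms))  # {'db:queries'} -- the cascade is suppressed
```

Real correlation engines also weigh timing, metric types and topology discovered at run time, but even this toy version shows why a single cross-tier model beats three silo tools comparing notes in a meeting.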

As important, they do this across 75 major applications and platforms out of the box. I mean, out-of-the-box -- up and running in less than a day.

They call this 'service monitoring'. I call it 'A Good Way to Achieve the Paradigm Shift Required for Higher Levels of ITIL Process Maturity While Establishing a Foundation for a True CMDB and Letting the Gorillas Know That a Vision Won't Put Fires Out'. OK, I admit it's not a very catchy marketing slogan, but you get the point don't you?

You can wait for the 800 pound gorillas' Vision to become reality, you can Trust them now (and hope for the best) or you can focus on other areas that bring value (immediately) and gradually build the foundation of knowledge you'll need anyway to populate those beasts.

So get on the Right Road, but keep those headlights on!

Friday, June 09, 2006

Have you ever been Experienced?

Somebody asked me about 'Quality of Experience' (QoE) recently. While those in technical silos may offer a brilliant dissertation on abstract polymorphic interfaces in clients and servers (see Button, Button, Whose Got The Button? Patterns for breaking client/server relationships, by Robert Martin) or even Distributed QoE, my answer is quite a bit simpler.

In my experience users will (usually) let you know if their experience is poor. You don't like the program, you change the channel. You get bad response from a web site, you shop somewhere else. So, of course you want to know what your users are experiencing!

However, different IT tribes will want to measure QoE for different reasons. Many want to be warned of a storm brewing, and be well prepared to explain why their tribe is not the source of the problem (or fix it before they call). I call this the "it's the other bastard's fault" motivator.

QoE is absolutely consistent with best practice. However, when investing in QoE technologies one should be careful of who is defining QoE (i.e., QoE of what services?). It should be the customer (read business process).

Two things worth considering before putting your budget dollars on the line:

1) Defining 'end-to-end' - Citrix access services is NOT a business service. It may be a critical segment of an end-to-end business service, but it is rarely the entire service. So, having 'end-to-end' knowledge of the Citrix servers right to the desktop is great --- but most business service infrastructures have a dizzying array of network devices, web servers, application servers, database servers and applications. 'End-to-end' means every layer of every component required to support a business process.

2) What are you prepared to do? - So you took the plunge and purchased a QoE tool. Now that you've been warned (pray that your investment will warn you of an impending storm; otherwise your user could have told you --- for free), how will you isolate and diagnose the problem? Nice to know it's not in the Citrix server or the client, but then where is it?

This is where analytics come into the picture (see Analytics & IT Service Management on this blog), and things can get really complicated. However, it's good to focus on this objective:

The key to effective business service monitoring is the ability to monitor what is happening at each layer of the infrastructure --- across an array of distributed network, system and application components --- and automatically identify which component layer, in which domain, is the source of a problem.

QoE decisions, like many technology investments, can be tribally driven. This is particularly true if the organization has not invested in the time to understand and define 'what is a service' and performed some due diligence in analyzing processes.

Some IT tribes will display true leadership and go beyond their comfort zones by incorporating other technical silos into the equation, but I suspect this is going to be difficult for many. Taking an approach driven by best practices (such as ITIL) can help avoid experiencing the angst associated with knowing with absolute certainty where the problem isn't, but not knowing where the problem is.

Monday, February 13, 2006

Analytics & IT Service Management

I saw a number of recent articles and webinars using the term 'analytics'. One of them, 'Analytics' buzzword needs careful definition, by Jeremy Kirk, IDG News Service, 02/13/06, you may have seen in the recent NetworkWorld Newsletter.

In this article, Gartner is quoted as defining analytics as:

“Analytics leverage data in a particular functional process (or application) to enable context-specific insight that is actionable.” It can be used in many industries in real-time data processing situations to allow for faster business decisions.

In an ITIL-based context analytics can be viewed in many ways, since ITIL defines multiple processes, but the time savings associated with implementing IT service management is largely in the diagnosis and isolation of problems (IDC estimated as high as 75%). This is particularly true for n-tier infrastructures which are driving the need for ITSM in the first place.

In this case, IT service management analytics must include the ability to correlate real time data from every layer of every component of your n-tier infrastructure, automatically isolate the root cause AND present that in a way that is easily understood and actionable (to use Gartner's term).

This capability benefits many ITIL processes. The Service Desk, Incident and Problem management are obviously improved through a reduction in incidents and much greater ability to achieve proactive problem management. Service Level Management obtains a clear view of all interdependencies and real time and historical information on service performance. Capacity and Release Management are able to quickly isolate potential performance problems BEFORE changes are implemented (which, by the way, benefits both Change and Configuration Management).

When analytics is implemented in a way that multiple organizational silos can leverage the information in a way that's meaningful to them, even Application and ICT management benefit. Each obtains an easy-to-use, visual representation of each IT service, which can help them understand and shift paradigms to a service orientation. In the case of ICT management, event filtering is highly automated, which can lead to a significant improvement in the utilization of staff.

The key to effective monitoring of n-tier infrastructures is very much about analytics, but this does not mean men in white coats need to be crunching numbers, setting rules, etc. Successful analytics in monitoring n-tier infrastructures is about how quickly and easily the appropriate stakeholder can obtain actionable information about what's happening with the service(s), from whatever perspective is relevant to them --- without taking months, without writing rules, and in a way that can keep pace with the business.

Silo-ed analytics may simply make things worse, since they add cost and require much greater integration effort to achieve end-to-end service analytics. These silos can take different forms as well, such as response time 'analytics', network 'analytics', even database or application 'analytics'. In an IT service management approach, analytics should happen at the service level.

So, go ahead and leverage analytics for your environment. But understand why you're crunching the numbers in the first place.

Friday, January 27, 2006

N-tier Infrastructures, ITIL & Cross-Silo Performance Base Lines

It's no accident that the evolution to SOA and the adoption of ITIL are happening at the same time. IT Service Management is, as the name indicates, about Services. If you're evolving to Service Oriented Architectures (SOA), then adoption of IT service management based on the IT Infrastructure Library is simply common sense. The supporting infrastructures enabling SOA are typically n-tier --- web front ends, application servers, database servers, etc.

N-tier infrastructures are typically made up of multiple Configuration Item (CI) Segments, for example the Citrix segment, the backbone WAN segment, the Web front-end segment, etc. These are often the "IT silos" we hear about.

Implementing a quality framework such as ITIL is very much about establishing cycles of continuous improvement and shifting paradigms from silos to services. In fact, the sooner you can establish the concept of rapid cycles of continuous improvement within your service improvement teams the better.

Many clients focus the initial ITIL implementation efforts on Change, Configuration and Release Management, which can lead to an initial improvement cycle that is just too long. This is especially true if --- in an attempt to define services from the business' perspective --- service definition takes an 'end-to-end' view, since all the tiers are now involved.

This increases the scope and complexity of CI relationships and CMDB establishment, and (more often than not) leads to the purchase of a CMDB tool...perhaps before you're really ready, since you may not have had any improvement cycles in other ITSM process areas. Design and development of the CMDB should be carefully planned, and must support every ITIL process.

Implementing cross-silo performance monitoring (i.e., true service monitoring, not simply response time monitoring) can provide service base lines of performance across every layer of every tier of your n-tier infrastructure.

This offers several advantages:
  • Clearly establishes service performance in both business and IT terms
  • Quickly identifies cross-silo dependencies for each service
  • Helps scope and target configuration design, development & base lining activities
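As a rough illustration of the base-lining step, the sketch below derives a per-metric 'normal band' from historical samples and flags current readings that fall outside it. This is a deliberately naive approach (mean plus a few standard deviations; the metric names are invented), not how any particular product computes baselines.

```python
# Naive per-metric performance base-lining (illustrative only;
# metric names are hypothetical).
from statistics import mean, stdev

def baseline(samples, k=3.0):
    """Return a (low, high) 'normal band' from historical samples."""
    m, s = mean(samples), stdev(samples)
    return (m - k * s, m + k * s)

def out_of_baseline(current, history, k=3.0):
    """current: {metric: latest value}; history: {metric: [past values]}.
    Returns the metrics whose latest value falls outside their band."""
    flagged = {}
    for name, value in current.items():
        low, high = baseline(history[name], k)
        if not (low <= value <= high):
            flagged[name] = value
    return flagged

history = {
    "web:response_ms": [120, 130, 125, 128, 122],
    "db:query_ms": [40, 42, 41, 39, 43],
}
latest = {"web:response_ms": 127, "db:query_ms": 95}
print(out_of_baseline(latest, history))  # only db:query_ms is flagged
```

The point is not the statistics -- real monitors handle seasonality, warm-up periods and auto-tuned thresholds -- but that a baseline per layer, per tier, gives every silo the same definition of 'normal'.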

Implementing a well designed CMDB is important, but can take many months. Installing a service monitor can be accomplished in weeks. In addition, establishing service monitoring --- and obtaining cross-silo performance base lines for each critical service --- may help you get to a level of process maturity where CMDB definition and design makes more sense, and save you money along the way.

John M. Worthington, Principal
MyServiceMonitor, LLC

Friday, January 13, 2006

Monitoring - The Evolution

Monitoring their IT infrastructures has historically been an afterthought for organizations; priority has always centered around application development and deployment. Hence the monitoring industry traditionally lags a little behind as technologies evolve and the landscape of application development and delivery shifts. In recent years a disconnect has emerged in the market between how applications are designed to work and how they are being monitored. Let's take it from the beginning... In the early days applications were relatively simple. Twenty years back there were mainframes and clients, so if there was a problem it was very easy to locate: it was either in the mainframe - which affected everyone - or in the client. There were two relatively easy pieces to monitor. Most of the legacy players in the monitoring industry today evolved at this stage.

Later came the networking era, where networks became a lot more complex and problems at the network level became an industry nightmare. At this stage every single problem was blamed on the network, and most of the time it turned out to be true. So many tools cropped up specifically to deal with monitoring networks and isolating issues at the network level... Over a period of time networks stabilized as the networking technology improved. But the fiasco of the early days left such an indelible mark on the industry that even today, in most organizations, the network department is really a secretive cult and no one outside of it gets to know its internals. The legacy monitoring players took time to get the network piece right, but they eventually solved the network puzzle to a reasonable extent... so the market now has a set of key players who can do client/server, legacy and network monitoring well.

By the time this played out, the technology in the application development and rendering landscape had moved on... to n-tier architectures. N-tier architectures provide extreme flexibility, portability and scalability to application services. The IT industry has embraced the n-tier distributed architecture for its effectiveness and cost efficiency, and it is the preferred architecture for the omnipresent web services. An unattractive side effect of the n-tier architecture is that it introduced an amazing amount of complexity into the delivery infrastructure. Now multiple applications, written in multiple languages and running on multiple pieces of hardware, must co-exist for the service to be effective. Due to the interdependency, any small issue on one of these tiers tends to have a big impact on the service in a cascading effect. This, coupled with the complex nature of the systems, makes the process of isolating and identifying issues within the system a nightmare.

The solution put forward by legacy monitoring players to this problem is silo monitoring… effectively a tool to monitor every tier. In this model, even for a simple web service you would have 3 different tools monitoring 3 different tiers (web, app, db). These tools are strong in their own domains, and each needs a domain expert to run it. When there is a problem in the overall service, the different tools run by different domain experts need to be brought together to identify what the root cause is and what needs to be fixed. Since there is no transparency across the layers, most of these meetings turn into an exercise in the blame game. People tend to get defensive about their tier, and it takes an extraordinarily long time to isolate even the simplest problems in this model. Hence the approach of monitoring n-tier architectures by monitoring every tier individually, as proposed by legacy monitoring players, doesn't work. This is the primary reason for the chaos in service delivery, and it affects the quality of service delivery even for Fortune 500 companies.

The right way to do this is to monitor the entire service as a single atomic unit instead of individual tiers. Monitoring every tier end to end, and then bringing them together to view them as a single service, gives you a complete perspective of the service. This enables the tool to assess the impact of failures across the entire service. The tool also needs a sophisticated enough correlation engine to differentiate between causes and effects when an n-tier architecture goes through a cascading failure. Building a tool that monitors all the tiers of an n-tier infrastructure with equal competence and represents them in a uniform model is not an easy task; this is the primary reason you don't see many tools in the market that do it. Finally, such a tool has to provide the service operator with information that can be acted upon immediately, rather than data that puts the onus on the operator to figure out the event. This is where the future of the monitoring industry lies.
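The 'single atomic unit' idea can be sketched very simply: roll per-tier health states up into one service-level state, so a database failure surfaces as a service outage rather than an isolated tier alarm. The states and tier names below are, of course, hypothetical.

```python
# Illustrative rollup of per-tier health into one service-level state.
# States and tier names are hypothetical, not any product's model.
SEVERITY = {"ok": 0, "degraded": 1, "down": 2}

def service_health(tier_states):
    """tier_states: {tier: 'ok' | 'degraded' | 'down'}.
    The service is only as healthy as its worst tier; also report
    every tier contributing to the degradation."""
    worst = max(tier_states.values(), key=SEVERITY.__getitem__)
    affected = sorted(t for t, s in tier_states.items() if s != "ok")
    return worst, affected

state, affected = service_health({"web": "ok", "app": "degraded", "db": "down"})
print(state, affected)  # down ['app', 'db']
```

Pair a rollup like this with cause/effect correlation across the tier dependency graph and every stakeholder sees the same service-level picture at the same time, instead of three silo views that have to be reconciled in a meeting.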