Wednesday, August 13, 2008

PLM & Semantics - Part 3: Requirements Taxonomies

One of the most obvious ways to see how Semantics or Semantic Integration drives Program Lifecycle Management (PLM) is through Requirements Management or Engineering and the development of taxonomies. In practice, you are more likely to be faced with immediate project support tasks than with a global enterprise definition effort (whether as part of a data standardization, COI or MDM initiative).

Every project, though, must have some level of requirements vetting in order to satisfy expectations for return on investment or affordability. These efforts are generally manual, although some folks use models to help with cost estimation. At some point, the high-level or functional Requirements and the Work Breakdown Structure (WBS) must be defined in order to set up project schedules.

With PLM, we can tackle this by developing a Semantic blueprint or foundation. The way I've approached this before is to use a preliminary visualization tool (mind map or concept map) to illustrate the functional requirements and the relationships between them. I then designate that as a Domain Taxonomy (or ontology, depending on how detailed the relationship information is). The Domain Taxonomy then represents the pool of available terms, groups and sub-groups with which to build logically relevant WBS segments. Then I build a requirements taxonomy within my automation environment and extract the WBS from it (see the sketch after the list below). Thus I have elements of EA design, semantic correlation and project coordination all wrapped up within one activity. This makes it possible to trace from:
  • Strategy to EA
  • EA to Functional Requirements (the 1st level or elementary taxonomy)
  • Functional Requirements to WBS (abstraction of 1st level taxonomy)
  • WBS to Technical Requirements (the 2nd level or detailed taxonomy)
  • Technical Requirements (precise) to Project Schedules (schedule & detailed requirements taxonomy should map nearly one to one)
  • Project Schedules to Roadmaps & What if Alternatives
  • and everything back to Strategy
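
Here is a minimal sketch of one link in that chain - extracting WBS numbering from a requirements taxonomy. The nested-dict representation and every entry in it are invented for illustration; any tree-shaped export from a mind-mapping or concept-mapping tool would work the same way:

    # Derive WBS numbering from a requirements taxonomy by walking it depth-first.
    # The taxonomy entries below are invented; in practice they would come from
    # the domain taxonomy captured in the mind map or concept map.
    taxonomy = {
        "Logistics Modernization": {
            "Supply Chain": {
                "Inventory Visibility": {},
                "Order Fulfillment": {},
            },
            "Maintenance": {
                "Scheduled Maintenance": {},
                "Repair Tracking": {},
            },
        },
    }

    def extract_wbs(node, prefix=""):
        """Yield (WBS number, element name) pairs for every taxonomy entry."""
        for i, (name, children) in enumerate(node.items(), start=1):
            number = f"{prefix}.{i}" if prefix else str(i)
            yield number, name
            yield from extract_wbs(children, number)

    for number, name in extract_wbs(taxonomy):
        print(number, name)

Running this prints the familiar 1 / 1.1 / 1.1.1 outline, which can be handed to a scheduling tool as the WBS skeleton.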



Understanding functional requirements implies domain knowledge, both in terms of domain entities and relationships.

Copyright 2008, Semantech Inc.

Monday, August 11, 2008

PLM & Semantics - Part 2: Ontologies

What are Ontologies, and what do they have to do with Program Management? They are the hidden 'maps' that link together all aspects of process, data and system architecture. An Ontology, in our context, refers to the characterization of conceptual hierarchies and their relationships within the enterprise. ITIL, for example, is an Ontology.

An Ontology fits within a spectrum of terms used to define various levels within a Semantic framework. Many people consider the Ontology, or a Shared Upper-Level Ontology, to represent the pinnacle of Semantic constructs; however, this is not the case. As we have experienced in many enterprises 'forced to integrate,' Ontologies from diverse communities often come together in "Sets."




The Semantic Hierarchy or "Spectrum" - Most of us don't realize when we're viewing these...
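
To make the lower end of that spectrum concrete, here is a small sketch using the rdflib Python library (the pmo namespace and all class and property names are invented for the example). A taxonomy stops at the broader/narrower hierarchy; an ontology adds typed relationships between the concepts:

    # A taxonomy only needs subclass (broader/narrower) links; an ontology
    # adds typed relationships. Namespace and names are invented.
    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/pmo#")
    g = Graph()
    g.bind("pmo", EX)

    # Taxonomy level: a simple subclass hierarchy.
    g.add((EX.FunctionalRequirement, RDFS.subClassOf, EX.Requirement))
    g.add((EX.TechnicalRequirement, RDFS.subClassOf, EX.Requirement))

    # Ontology level: a typed relationship with a declared domain and range.
    g.add((EX.derivesFrom, RDF.type, RDF.Property))
    g.add((EX.derivesFrom, RDFS.domain, EX.TechnicalRequirement))
    g.add((EX.derivesFrom, RDFS.range, EX.FunctionalRequirement))

    print(g.serialize(format="turtle"))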


So, what can organizing our information within these "spectra" do for us as managers? The first thing it does is abstract your program information from the systems and sources where it currently resides. This is a much bigger issue than it seems - if your framework for running a complex enterprise depends on a set of unreconciled COTS tools and MS Office documents, it is hardly likely that your enterprise can ever truly be run through a unified Lifecycle Management approach. The Semantic layer that you develop serves as a foundation for both solution design and oversight, thereby unifying them from the start. This is a powerful bit of synergy.

In the Department of Defense, many programs have used something referred to as a Community of Interest (COI) over the last few years to help define the data paradigms behind each "functional" area of their programs. At first, these were viewed more like traditional data standardization efforts, but increasingly they are being managed using Semantic technologies and integrated with Enterprise Architecture initiatives.



This example illustrates how various taxonomies are typically mapped together in an EA-like analysis.


Copyright 2008, Semantech Inc.

Monday, July 7, 2008

Program Lifecycle Management (PLM) & Semantics, Part 1

We have discussed at some length that Program Lifecycle Management (PLM) is empowered by the ability to merge or otherwise integrate all program-related data within a single-instance database built atop a single schema that reflects the full spectrum of business processes present in most PMOs. It is important, however, to make a distinction between the technical implementation of such a solution and the underlying architectural premise. The true power of PLM is not in any one proprietary database but rather in the ability to define and merge the management of IT within a fluid, evolutionary set of definitions. This is semantics; not semantics as an arbitrary concept referring to the validation of symbolic meaning, but semantics as a facilitating technological medium, allowing correlation between processes, data sets and application logic - all modifiable by end users without development.

In many ways, the notion of PLM is dependent upon Semantics, and PLM can be considered one of a new family of practices that are "Semantically enabled" or empowered. The value inherent in these practices will become even more apparent as support for semantic interoperability increases. As PLM platforms extend their feature sets to include RDF and OWL transfer as well as visual mapping of taxonomies or Ontologies, the integration of program management with the projects entrusted to it will begin to occur in earnest for the first time. This extends to enterprise architecture as well (including complex application design). Some PLM platforms already support UML Use Cases, which can be used to help derive requirements taxonomies, project schedules, test plans and so forth. There is also an initial level of integration occurring between PLM and EA tools. I see the eventual relationship as a dependent one, i.e. the use of EA will be viewed as most relevant within the context of program oversight and management; thus EA artifacts or products will become part of a variety of PLM processes and be made available through the PLM interfaces to all participants and stakeholders in an enterprise program (or programs).
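
As a rough illustration of the Use Case idea (the use case records and the "shall" phrasing below are invented; a real PLM platform would expose this data through its own API or an RDF/OWL export), deriving a requirements taxonomy can be as simple as turning each use case into a branch and each step into a requirement stub:

    # Seed a requirements taxonomy from use case steps. The records and the
    # "shall" wording are invented; a real platform would export these itself.
    use_cases = [
        {"name": "Process Supply Order",
         "steps": ["validate the requisition", "check inventory", "issue material"]},
        {"name": "Schedule Repair",
         "steps": ["log the failure report", "assign a work order"]},
    ]

    def derive_requirements(cases):
        """Turn each use case into a taxonomy branch of requirement stubs."""
        return {
            case["name"]: [f"The system shall {step}." for step in case["steps"]]
            for case in cases
        }

    for branch, stubs in derive_requirements(use_cases).items():
        print(branch)
        for stub in stubs:
            print("  -", stub)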

I'll try to provide a real-world example of what all of this means. Starting two years ago, I began evaluating a variety of requirements management and EA platforms to assess how well they might support a project of the scale of, say, the ECSS program. ECSS is the USAF's logistics modernization effort and consists of a migration from several legacy systems to an Oracle ERP platform. Based on my previous experience as an AF IL (Logistics) PMO Chief Engineer, I estimated that there were perhaps several thousand 'modernized' or consolidated requirements to deal with and as many as 50,000 legacy requirements that still needed to be managed and/or reconciled.

I focused on one product, Accept 360, because it had the most flexible database and web architecture, but soon noticed an interesting and unexpected capability in the tool. The software allowed me to change all of the core definitions of its various application modules, as well as other definitions, labels and data properties. I soon found that I was able to take an application developed for the commercial market and tailor it completely to a federal PMO. It also allowed me to adapt development lifecycles for the requirements by defining those lifecycles in the tool. None of this required development or scripting. I soon realized that one of the most costly aspects of the systems I had previously managed was the relative inability of non-developers to make simple changes like these - simple, yet in some cases sweeping in their significance for how the tools might be used.
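
Accept 360's internals aren't public, so the following is only a sketch of the general pattern that makes this kind of tailoring possible: labels and requirement lifecycles are held as configuration data rather than code, so changing them is a data edit, not a development task. Everything here is illustrative:

    # Labels and lifecycles live in configuration data, not code, so
    # non-developers can change them. A sketch of the pattern, not the product.
    config = {
        "labels": {"module": "Program Requirement", "owner": "PMO Lead"},
        "lifecycle": {
            "Draft": ["Vetted"],
            "Vetted": ["Approved", "Rejected"],
            "Approved": ["Implemented"],
            "Rejected": [],
            "Implemented": [],
        },
    }

    def advance(state, target, lifecycle):
        """Allow a state change only if the configured lifecycle permits it."""
        if target not in lifecycle.get(state, []):
            raise ValueError(f"{state} -> {target} is not a defined transition")
        return target

    state = advance("Draft", "Vetted", config["lifecycle"])
    print(config["labels"]["module"], "is now", state)

Retargeting the tool to a different PMO then means editing the configuration, not the application.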

Part 2 of this discussion will cover how PLM functions as a semantic practice, and Part 3 will discuss how other PLM applications can become more "semantically enabled."

Friday, April 11, 2008

Conformance & Expectation Problems

Conformance problems consist of situations where system performance (process, inputs or outputs) does not match user expectations. In this context I use 'system' loosely to mean any set of systems or organizational processes working in unison. I think that IT is perhaps the most problematic arena for dealing with conformance-related issues, because user or sponsor expectations tend to diverge radically from system capability and development outputs.

The critical factors explaining why IT suffers more in this respect than other industry sectors include, but are not limited to:

  • The pace of technological change
  • The increasing level of complexity in IT solutions
  • The increasing number of variables (or supporting data)
  • The trend towards geographical distribution of workforce
  • The expectations for interoperability across radically different segments of IT

The example that comes to mind immediately is the set of conformance disappointments surrounding Service-Oriented Architecture (SOA). There are many aspects of SOA, viewed as a facilitating mechanism for enterprise integration, that simply haven't been worked out yet by anyone; yet leadership in many organizations seems to think the solutions are mature and is surprised that their SOA initiatives aren't producing the anticipated results. Some of this can be attributed to the typical technology 'Hype Cycle.' More of it, however, has to do with the ability to exploit new capabilities in existing IT environments.

The way I address these situations is the same as, or at least very similar to, the way I address most other efforts I support in IT. I view it as a problem-solving exercise, and as such try first to diagnose what went wrong and work forward from there. What we've discovered in these types of investigations regarding SOA is that the definitions of interoperability built into the standards and vendor stacks don't yet map to the interoperability expectations of most complex enterprises. The question is whether to build the remaining bridge to those expectations within any one enterprise or wait for industry to close the gap.
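
At its simplest, that diagnosis is a gap analysis between two vocabularies. A toy version, with invented feature names:

    # Compare the interoperability an enterprise expects with what the
    # standards and vendor stacks actually define. Names are invented.
    expected = {"message routing", "schema mediation",
                "semantic mapping", "cross-domain security"}
    provided = {"message routing", "schema mediation"}

    gap = expected - provided
    print("Conformance gap:", sorted(gap))
    # The remaining bridge: build these in-house, or wait for industry.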

Copyright 2008, Semantech Inc.

Thursday, March 27, 2008

PLM Defined

Program Lifecycle Management is the recognition that specialization is not the only, or even the best, answer to managing complexity. Oftentimes, an excessive focus on specializing specific areas of expertise merely adds to the level of complexity and confusion that typical PMOs face every day. The truth is that many, if not most, of the people who support PMOs need to be generalists to fully grasp the breadth of topics they are expected to deal with. It is very difficult to get work done if a parade of experts is required to fulfill everyday tasks - and worse yet if that parade constantly changes as industry expertise rapidly evolves.

The key to PLM is understanding that the PMO runs on information. That information must be easily accessible, transportable and translatable, and it must be available directly to decision makers without first passing through layers of expert interpretation. This doesn't mean that other folks don't add value to the information - there will always be a need for diverse skills in the PMO - but it does mean that the EVM analyst is no longer the primary interpreter of financial data and that the requirements analyst is not the only person who can produce requirements reports. The reality is that no matter how many specializations are created, the core processes are still all related within specific contexts. Those contexts then allow us to provide a holistic view of what's happening in the PMO and, more importantly, illustrate why it is happening.
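
A minimal sketch of those "specific contexts": when records from different PMO disciplines share a key (here a hypothetical WBS element ID), the holistic view becomes a join rather than a hand-off between specialists. All data below is illustrative:

    # Records from different PMO disciplines keyed by a shared WBS element ID.
    evm = {"1.1": {"cpi": 0.92}, "1.2": {"cpi": 1.05}}
    requirements = {"1.1": ["REQ-014", "REQ-022"], "1.2": ["REQ-031"]}
    schedule = {"1.1": "2008-10-01", "1.2": "2008-11-15"}

    def holistic_view(wbs_id):
        """Combine cost, requirements and schedule data for one WBS element."""
        return {
            "wbs": wbs_id,
            "cpi": evm.get(wbs_id, {}).get("cpi"),
            "requirements": requirements.get(wbs_id, []),
            "due": schedule.get(wbs_id),
        }

    print(holistic_view("1.1"))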

Copyright 2008, Semantech Inc.

Monday, March 24, 2008

Complexity & PLM

As one might imagine, defining complexity is a difficult philosophical question. According to Hayenga (2008), there is a significant lexical difference between what is 'complex' and what is 'complicated.' He posits that complex systems are not merely those with many moving parts; rather, complexity is inherent in systems and scenarios that are dynamic in nature or difficult to predict. This is a reasonable and pragmatic way of viewing the terminology, and it also implies that systems which are highly dependent upon human interactions are necessarily more complex in nature. Humans, being the irrational creatures that we are, often interject a high level of subjectivity into the mix.

There is no better illustration of the dynamic interaction of many subjective individuals than a typical PMO. This of course becomes even more fascinating when their scope of interaction is elevated to the enterprise level. It is somewhat ironic when one considers that PMOs have been created and chartered precisely to correct perceived issues of system complexity. The reality is that much of what we consider to be "IT" problems are not technical in nature at all.

Recognizing a problem or a challenge is not enough. Many folks have hit the nail on the head in identifying the PMO or its associated management processes as the likely culprit behind many of the failures of IT projects and programs; however, to date, no one has presented a comprehensive solution for this seemingly obvious problem area. There is recognition now, though, that such problems are solvable using new enterprise integration technology and techniques.

Complexity is implicit within each element of PLM (the other mini-PLMs or Ps). Over the years, the notion of "Portfolio Management" migrated from the financial world to IT and has now become a new process discipline. As noted previously, Project Portfolio Management (PPM) popped up about ten years ago to address the obvious need to consolidate PMO processes. Product Lifecycle Management emerged over the last decade as an IT practice to address the very tactical aspects of design and innovation. Process management has been interpreted many ways - some schools of thought have advocated fairly sophisticated methodologies such as CMMI, while others are adopting a more flexible "Agile" approach.

Copyright 2008, Semantech Inc.

Saturday, March 22, 2008

A Troubling Trend

So why is PLM important - why is it necessary? The motivation behind PLM has been with us for decades, and despite many attempts the underlying problem remains largely unresolved. IT projects are getting more complicated, not less - and this trend is accelerating, not decelerating.

Only 34 percent of IT projects are considered to be truly successful, according to the Standish Group. Project Portfolio Management (PPM), which came on the scene in the late 1990s to help IT projects become more successful, has not measured up to its promise to solve the problem. Without a suitable alternative to PPM, should we be resigned to IT mediocrity? (Carlson, 2007)

IT projects have become more complex & difficult to manage

PLM directly addresses the root causes of this trend and has been developed to attack them in a comprehensive fashion. PPM was an early attempt to resolve the matter, but it only addresses two of the five "P's" and only about half of the associated PMO processes.

Copyright 2008, Semantech Inc.