We have discussed at some length that Program Lifecycle Management (PLM) is empowered by the ability to merge or otherwise integrate all program-related data within a single-instance database built atop a single schema that reflects the full spectrum of business processes present in most PMOs. It is important, however, to distinguish between the technical implementation of such a solution and the underlying architectural premise. The true power of PLM lies not in any one proprietary database but in the ability to define and merge the management of IT within a fluid, evolving set of definitions. This is semantics; not semantics as an abstract concept concerned with the validation of symbolic meaning, but semantics as a facilitating technological medium that allows processes, data sets, and application logic to be correlated – all modifiable by end users without development.
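To make the idea of "definitions as a facilitating medium" concrete, here is a minimal sketch in Python. It is hypothetical and not modeled on any specific PLM product: the point is simply that the schema lives as data an end user can edit at runtime, while the application logic stays generic and reads whatever the definitions currently say.

```python
# A hypothetical sketch: program artifact definitions are data, not code,
# so correlating processes, data sets, and logic needs no development work.

from dataclasses import dataclass, field


@dataclass
class EntityDefinition:
    """A user-editable definition of one program artifact type."""
    label: str                                        # display name, e.g. "Requirement"
    properties: dict = field(default_factory=dict)    # property name -> datatype
    lifecycle: list = field(default_factory=list)     # ordered workflow states


# The PMO's "semantic layer": a registry of definitions.
registry = {
    "requirement": EntityDefinition(
        label="Requirement",
        properties={"id": "string", "priority": "enum", "owner": "string"},
        lifecycle=["Draft", "Reviewed", "Approved", "Implemented"],
    ),
}

# An end user (not a developer) re-labels the type and extends its schema.
registry["requirement"].label = "Program Requirement"
registry["requirement"].properties["funding_source"] = "string"


def describe(kind: str) -> str:
    """Generic application logic: it interprets whatever the registry says."""
    d = registry[kind]
    return f"{d.label}: fields={list(d.properties)} lifecycle={d.lifecycle}"


print(describe("requirement"))
```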
In many ways, the notion of PLM depends upon semantics, and PLM can be considered one of a new family of practices that are “semantically enabled” or empowered. The value inherent in these practices will become even more apparent as support for semantic interoperability increases. As PLM platforms extend their feature sets to include RDF and OWL transfer as well as visual mapping of taxonomies and ontologies, the integration of program management with the projects entrusted to it will begin to occur in earnest for the first time. This extends to enterprise architecture as well, including complex application design. Some PLM platforms already support UML use cases, which can be used to help derive requirements taxonomies, project schedules, test plans, and so forth. There is also an initial level of integration occurring between PLM and EA tools. I see the eventual relationship as a dependent one: the use of EA will be viewed as most relevant within the context of program oversight and management, so EA artifacts or products will become part of a variety of PLM processes and be made available through the PLM interfaces to all participants and stakeholders within the context of an enterprise program (or programs).
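As a rough illustration of what RDF/OWL transfer of a requirements taxonomy could look like, the sketch below uses the open-source rdflib library and a made-up namespace. It is an assumption-laden example, not a description of how any particular PLM or EA tool actually exchanges data.

```python
# Hypothetical example: a tiny requirements taxonomy expressed in RDF/RDFS
# and serialized to Turtle, the sort of artifact PLM and EA tools could exchange.
# The http://example.org/pmo# namespace and REQ_001 identifier are invented.

from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/pmo#")

g = Graph()
g.bind("ex", EX)

# A small class hierarchy plus one requirement instance.
g.add((EX.Requirement, RDF.type, RDFS.Class))
g.add((EX.PerformanceRequirement, RDFS.subClassOf, EX.Requirement))
g.add((EX.REQ_001, RDF.type, EX.PerformanceRequirement))
g.add((EX.REQ_001, RDFS.label, Literal("System shall process 10,000 transactions per hour")))

# Serialize to Turtle for transfer to another tool.
print(g.serialize(format="turtle"))
```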
I’ll try to provide a real-world example of what all of this means. Starting two years ago, I began evaluating a variety of requirements management and EA platforms to assess how well they might support a project on the scale of, say, the ECSS program. ECSS is the USAF’s logistics modernization effort and consists of a migration from several legacy systems to an Oracle ERP platform. Based on my previous experience as an AF IL (Logistics) PMO Chief Engineer, I estimated that there were perhaps several thousand ‘modernized’ or consolidated requirements to deal with and as many as 50,000 legacy requirements that still needed to be managed and/or reconciled.
I focused on one product, Accept 360, because it had the most flexible database and web architecture, but I soon noticed an interesting and unexpected capability in the tool. The software allowed me to change all of the core definitions of its various application modules, as well as other definitions, labels, and data properties. I found that I was able to take an application developed for the commercial market and tailor it completely to a federal PMO. It also allowed me to adapt development lifecycles for the requirements by defining those lifecycles in the tool. None of this required development or scripting. I soon realized that one of the most costly aspects of the systems I had previously managed had been the relative inability of non-developers to make simple changes like this – simple, yet in some cases sweeping in their significance for how the tools might be used.
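The sketch below illustrates the configuration-over-code idea described above: a requirements lifecycle captured as editable data that a generic engine enforces. It is not Accept 360’s actual data model; the state names and structure are assumptions chosen for illustration.

```python
# Hypothetical lifecycle-as-configuration: states and allowed transitions are
# plain data a PMO could edit without development or scripting.

LIFECYCLE = {
    "name": "Federal PMO Requirement Lifecycle",
    "states": ["Draft", "In Review", "Baselined", "Verified", "Retired"],
    "transitions": {
        "Draft": ["In Review"],
        "In Review": ["Draft", "Baselined"],
        "Baselined": ["Verified", "Retired"],
        "Verified": ["Retired"],
        "Retired": [],
    },
}


def advance(current: str, target: str, lifecycle: dict = LIFECYCLE) -> str:
    """Move an item to a new state only if the user-defined lifecycle allows it."""
    if target not in lifecycle["transitions"][current]:
        raise ValueError(f"{current} -> {target} is not a permitted transition")
    return target


state = "Draft"
state = advance(state, "In Review")   # allowed
state = advance(state, "Baselined")   # allowed
print(state)                          # Baselined
```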
Part 2 of this discussion will cover how PLM functions as a semantic practice, and Part 3 will discuss how other PLM applications can become more “semantically enabled.”
Monday, July 7, 2008