Open Source Maturity Model

by James A J Wilson on 9 October 2006


Introduction

The Navica Open Source Maturity Model (OSMM) was developed to help IT procurement managers better compare and assess Open Source Software (OSS). Traditionally, firms producing proprietary software have employed sales staff to respond to tenders and answer questions about their products. OSS is not usually supported by large companies with the resources to provide such services. This can lead to open source solutions being overlooked. Even when managers are alert to the potential of OSS, choosing viable software can pose difficulties. The OSMM was developed to help determine whether a given OSS application had been developed to the point at which it was ready for use for a given task, and how it compared with its peers. Like other such models, it is not designed for comparing OSS with closed-source proprietary equivalents.

Open Source Maturity Model(s)

Navica’s maturity model is not the only one that an open source software evaluator might turn to. Rather confusingly, the commercial consultancy firm CapGemini developed a different assessment method, also called the Open Source Maturity Model. Navica’s Open Source Maturity Model grew from a book written by its CEO, Bernard Golden, entitled Succeeding with Open Source, which was published in 2004. Bernard Golden is also credited as one of the inspirations behind the rival Business Readiness Rating (BRR) evaluation method. Another similar model is the Method for Qualification and Selection of Open Source software (QSOS), developed by a French team working for the commercial outsourcing company Atos Origin.

Each of these assessment models shares certain common features:

  • Maturity tests: Each model sets out criteria against which software is to be assessed for maturity, usually a mixture of quantitative and evaluative questions. Depending upon how well the software meets each test, a score is awarded.
  • Requirements weightings: The different tests are weighted according to the relevance of each test to the intended use (and users) of the software.
  • Final scores awarded: Each item of software examined is given an overall score, which indicates both comparative merit and whether the software is ready for deployment.

The assessment models differ in regard to the complexity of the process and the balance of the tests to be conducted.

The first phase of the Navica OSMM is to select the software that you are going to evaluate. As with equivalent systems, the OSMM does not provide much guidance on compiling the short list.

The second phase involves assigning weighting factors. The OSMM breaks the key aspects of the software under evaluation into six categories. The weightings assigned to each of these categories must be decided according to what the software is to be used for, and who will be using it. The default weighting for each category is as follows:

  • Software: 4
  • Support: 2
  • Documentation: 1
  • Training: 1
  • Integration: 1
  • Professional Services: 1

TOTAL: 10
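
As a rough sketch, these default weightings might be captured in code as follows. This is purely illustrative: the dictionary and its names are not part of the OSMM itself, but it shows the constraint that the weightings, however adjusted, should still total 10.

    # Default OSMM category weightings, as listed above (illustrative only).
    DEFAULT_WEIGHTS = {
        "Software": 4,
        "Support": 2,
        "Documentation": 1,
        "Training": 1,
        "Integration": 1,
        "Professional Services": 1,
    }

    # Whether the defaults are kept or adjusted for a particular use case,
    # the weightings are expected to total 10.
    assert sum(DEFAULT_WEIGHTS.values()) == 10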

Each of these categories comes with its own Template, which suggests which aspects within the category should be evaluated and the maximum score that should be awarded to each aspect (again totalling 10). Assigning these scores constitutes the third phase of the OSMM.

Taking the Documentation template as an example, up to two points should be awarded for the documentation provided by the software creators themselves, up to three points for the availability and extensiveness of Web postings, and up to five points for the amount of commercially published documentation. So a piece of software with excellent built-in help files and a reasonably extensive online forum, but which has never had a book written about it, might score 4 out of 10: 2 points for creator documentation, 2 points for community documentation, but no points for commercially published documentation.
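
A minimal sketch of that worked example is shown below, assuming a hypothetical helper that caps each aspect at its template maximum; neither the function nor the aspect labels are prescribed by the OSMM.

    # Documentation template: maximum score per aspect, totalling 10.
    DOCUMENTATION_TEMPLATE = {
        "creator documentation": 2,    # help files and manuals from the project itself
        "web postings": 3,             # forums, wikis, mailing-list archives
        "commercial documentation": 5, # published books and similar
    }

    def category_score(awarded, template):
        """Sum the points awarded per aspect, capping each at its template maximum."""
        return sum(min(awarded.get(aspect, 0), maximum)
                   for aspect, maximum in template.items())

    # Excellent built-in help (2), a reasonable forum (2), no published books (0).
    example = {
        "creator documentation": 2,
        "web postings": 2,
        "commercial documentation": 0,
    }
    print(category_score(example, DOCUMENTATION_TEMPLATE))  # prints 4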

If the quality of documentation is likely to be a very important factor in getting the most out of the software, one might want to change the weighting to reflect this greater significance. An organisation that does not have the resources to implement the software itself might, on the other hand, look to a professional services firm to help out, in which case they would give greater weighting to the Professional Services category.

The fourth phase consists of multiplying the score for each category by its weighting, producing a final score between zero and one hundred. This is then compared against a maturity table.

The maturity table takes into account whether the users are early adopters or pragmatists, and whether the application is intended for experimental or pilot purposes or for use in a production environment. The recommended minimum score for early adopters using software in a production environment is 60; for more risk-averse pragmatists it is 70.
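
As an illustration of the final calculation and the production-readiness check, the sketch below uses invented category scores and the default weightings; the variable names are not part of the OSMM.

    # Default weightings repeated from above (illustrative).
    DEFAULT_WEIGHTS = {
        "Software": 4, "Support": 2, "Documentation": 1,
        "Training": 1, "Integration": 1, "Professional Services": 1,
    }

    # Hypothetical category scores out of 10 from the third phase.
    category_scores = {
        "Software": 7, "Support": 6, "Documentation": 4,
        "Training": 5, "Integration": 6, "Professional Services": 3,
    }

    # Fourth phase: multiply each category score by its weighting and sum,
    # giving a final score between 0 and 100.
    final_score = sum(category_scores[c] * w for c, w in DEFAULT_WEIGHTS.items())
    print(final_score)  # 58 with these example figures

    # Recommended minimum scores for production use, as given above.
    if final_score >= 70:
        print("meets the recommended minimum for pragmatists in production")
    elif final_score >= 60:
        print("meets the recommended minimum for early adopters in production")
    else:
        print("below the recommended minimum for production use")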

How the OSMM compares against the BRR

The OSMM and BRR are in many respects very similar, but there are two key differences:

  • Development model: The OSMM is largely the work of one man, whereas the BRR favours an evolving community-development model.
  • Detail: The BRR model is more prescriptive, going into greater detail about what tests to carry out, and assigning specific scores to commensurable statistics. The OSMM leaves the scoring mechanism more open to interpretation.

Both models will take time if they are to be fully applied across a range of open source products, although the OSMM’s relative lack of detail leaves more of the assessment and scoring up to the user. The BRR tests, on the other hand, specify exactly how to score each item under consideration.

Whichever method is preferred, both make it clear that they are there to assist judgement rather than replace it with a simple score. The act of reviewing software using the frameworks is in itself likely to greatly improve one’s understanding of it, even leaving aside the scoring element.

The Open Source Maturity Model today

The Open Source Maturity Model has not seen the level of adoption that was hoped for. It could be argued that the lack of interest in this and related approaches to open source project evaluation indicates a flaw in the approach. Certainly, the fact that these evaluation methods cannot be applied equally to closed source products makes the process less attractive to those wishing to evaluate open source alongside closed source products. Despite this, OSS Watch considers the Open Source Maturity Model and similar tools, such as the Business Readiness Rating, to be useful for those new to open source project evaluation.

As the Open Source Maturity Model wanes, the Qualification and Selection of Open Source Software (QSOS) project is slowly growing in popularity, although even QSOS cannot yet be considered a mature project. Another alternative model is the Reuse Readiness Levels, developed by the NASA Earth Science Data Systems Software Reuse Working Group. Whilst the OSMM considers non-technical criteria, QSOS considers both technical and non-technical criteria, and the Reuse Readiness Levels consider technical criteria.
