Business Readiness Rating

by James A J Wilson on 3 July 2006

Archived: this page has been archived and its content will not be updated.

Introduction

This document has been archived because the Business Readiness Rating website has been inactive for some time. There are several alternative models available, such as the Software Sustainability Maturity Model.

How does an institution or company decide which software it should use? One traditional approach is to invite representatives from software companies to come and demonstrate their products' capabilities. If the institution were looking for something more specific and customised, it might need to put its requirements out to tender. The process for procuring open source software from outsourcing companies is no different. However, open source offers many more options than closed source, ranging from fully outsourced to wholly internally supported.

Furthermore, some open source projects offer only a community support model; even without commercial support, however, a project may be of significant interest to organisations willing to engage with it directly.

The BRR framework

A number of frameworks exist to help IT and purchasing managers make informed choices about open source software (OSS). These include the Open Source Maturity Model (OSMM, from Navica), Qualification and Selection of Open Source Software (QSOS), NASA's Reuse Readiness Levels (RRL), and the Business Readiness Rating (BRR), which is the subject of this article. All these frameworks examine issues of OSS maturity, i.e. whether software is ready for mission-critical enterprise use. There are hundreds of thousands of open source projects hosted on SourceForge and other such sites, all in various stages of development. Some have effectively been abandoned, some are at early stages of development, some are good enough to be used in limited ways in a commercial or institutional environment, and others are mature enough to be used in a full production environment. The BRR helps determine which stage a project is currently at and whether it is likely to progress to a later stage.

BRR is sponsored by Carnegie Mellon West, O'Reilly, SpikeSource, and the Intel Corporation. It is designed along open source lines, encouraging feedback and community development. It is intended to provide a standard framework, it suggests that users report their evaluations back to the open source community, and it weights success factors to suit specific settings and user groups.

How it works

The BRR evaluation model involves four steps: a quick assessment to draw up a shortlist of software packages to evaluate; the ranking and weighting of the selection criteria; data gathering for each criterion; and the calculation and publication of results.

The BRR posits twelve criteria that can be used to evaluate software once an initial shortlist of products has been established, and suggests that only the six or seven most important criteria are actually used in the assessment. The criteria are as follows (one way of recording them, with weights, is sketched after the list):

  • Functionality - does the software meet user requirements?
  • Usability - is the software intuitive, and easy to install, configure, and maintain?
  • Quality - is the software well designed, implemented, and tested?
  • Security - how secure is the software?
  • Performance - how does the software perform against standard benchmarks?
  • Scalability - can the software cope with high-volume use?
  • Architecture - is the software modular, portable, flexible, extensible, and open? Can it be integrated with other components?
  • Support - how extensive is the professional and community support available?
  • Documentation - is there good quality documentation?
  • Adoption - has the software been adopted by the community, the market, and the industry?
  • Community - is the community for the software active and lively?
  • Professionalism - what level of professionalism does the development process and project organisation exhibit?
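
To make the weighting step concrete, here is a minimal sketch (in Python, which the BRR itself does not prescribe) of one way an evaluator might record the twelve criteria and their weights. The weights shown are invented purely for illustration; the BRR leaves their values entirely to the evaluator.

    # Hypothetical weights for the twelve BRR criteria. The BRR leaves
    # the weighting to the evaluator; these numbers are illustrative,
    # not taken from the BRR white paper. Criteria weighted at zero are
    # dropped from the assessment, per the BRR's advice to use only the
    # six or seven most important criteria.
    CRITERIA_WEIGHTS = {
        "functionality": 0.25,
        "usability": 0.15,
        "quality": 0.15,
        "security": 0.10,
        "performance": 0.00,
        "scalability": 0.00,
        "architecture": 0.10,
        "support": 0.10,
        "documentation": 0.05,
        "adoption": 0.00,
        "community": 0.10,
        "professionalism": 0.00,
    }

    # Sanity check: weights should sum to 1 so that the final score
    # stays on the BRR's 1-5 scale.
    assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9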

The evaluator decides which of these criteria are most important for the software to succeed in the environment in which it will be used and for the purpose it is to fulfil, weighting the criteria accordingly. Next, the evaluator assesses each piece of software using around 20 specific tests (the precise number depends on which criteria are being evaluated). Some of these tests rely on the kind of information that is readily available from most open source project communities; others require more work.

As an example: one of the tests that the BRR recommends awards 5 points to a piece of software's community metric if the average volume of its general mailing list over the last six months has been greater than 720 messages per month. If this figure is less than 30, only 1 point is awarded.
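
As a rough illustration, that test might be coded as follows. Note that only the two boundary values (more than 720 messages per month scoring 5, fewer than 30 scoring 1) come from the description above; the intermediate cut-offs below are assumptions made purely for the sake of a runnable example, not the official BRR thresholds.

    def community_mailing_list_score(avg_messages_per_month):
        """Score the mailing-list-volume test on the BRR's 1-5 scale.

        Only the boundary values (> 720 scores 5, < 30 scores 1) come
        from the BRR test described above; the intermediate cut-offs
        are assumed for illustration.
        """
        if avg_messages_per_month > 720:
            return 5
        if avg_messages_per_month > 360:   # assumed threshold
            return 4
        if avg_messages_per_month > 120:   # assumed threshold
            return 3
        if avg_messages_per_month >= 30:   # assumed threshold
            return 2
        return 1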

Evaluators should be aware that undertaking a full BRR assessment takes time. Functionality testing requires the evaluator to consider carefully what the standard feature set required by an average user might be, and to apply this rigorously. Each piece of software being evaluated needs to be installed, used, and in some instances performance-tested. To properly assess usability, the evaluator will, naturally enough, have to test the software on real end users. Expect the full evaluation process to take several days.

Once the evaluator has worked their way through the relevant tests, and applied the appropriate weightings to the score for each criterion, they arrive at a final score for each product. The complete scoring chart, along with full best-practice guidelines for each of the four steps, is provided in the BRR white paper, available from the BRR wiki.
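
A minimal sketch of that final calculation, assuming each criterion has already been scored on the BRR's 1-5 scale (itself derived from the individual test scores) and weighted by the evaluator. The names and numbers below are hypothetical, not drawn from the white paper.

    def business_readiness_rating(scores, weights):
        """Weighted average of per-criterion scores on the 1-5 scale."""
        total_weight = sum(weights.values())
        return sum(scores[c] * w for c, w in weights.items()) / total_weight

    # Hypothetical criterion scores and weights for one product.
    scores = {"functionality": 4.2, "usability": 3.5, "community": 5.0}
    weights = {"functionality": 0.5, "usability": 0.3, "community": 0.2}
    print(round(business_readiness_rating(scores, weights), 2))  # 4.15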

The BRR does not claim to be a finished, polished framework for software evaluation, although it does claim to be complete enough to allow IT staff from different companies and organisations to set their own assessment criteria and perform their own full evaluations. Some of the tests that the BRR recommends cannot be applied to proprietary software, so it is only really viable for comparing and assessing OSS. Users are encouraged to contribute to the project's discussion forum and feed back their findings.

An example from the Open University

The Open University used the BRR evaluation framework to assess which Virtual Learning Environment (VLE) they should adopt. VLEs are software packages that facilitate teaching and learning by providing access to a number of components via a single interface. Typically these components include tools for course management, noticeboards, chat rooms, self-assessment quizzes, and repositories of learning objects such as texts or videos. VLEs are of particular value to distance learners such as those enrolled at the Open University.

After assessing their specific needs and using the BRR to evaluate the various VLEs on the market, the Open University concluded that the open source VLE Moodle met their requirements far better than any of its rivals. The Open University has since demonstrated its commitment to Moodle by deciding to invest more than £4M in developing core Moodle components and customising the software for its particular users.

To see how Moodle scored against its rivals using the BRR framework, see slide 5 of Niall Sclater's presentation (pdf) to the OSS Watch Open Source and Sustainability Conference.

The Business Readiness Rating today

The Business Readiness Rating has not seen the level of adoption its creators hoped for. As of July 2010, the website has been taken offline, reportedly because the project was unable to build the thriving community it needed. The creators are currently implementing plans to revitalise the community.

It could be argued that the lack of interest in this and related approaches to open source project evaluation indicates a flaw in the underlying approach. Certainly, the fact that these evaluation methods cannot be applied equally to closed source products makes them less attractive to those wishing to evaluate open source alongside closed source software. Despite this, OSS Watch considers the Business Readiness Rating and similar tools, such as the Open Source Maturity Model, to be useful for those new to open source project evaluation.

As the Business Readiness Rating wanes, the Qualification and Selection of Open Source Software (QSOS) project is slowly growing in popularity, although even QSOS cannot yet be considered a success.
