OpenStack DefCore: Review Time
Review Time for the DefCore Capabilities Scorecard & Core Identification Matrix
Attribution Note: This post was collaboratively edited by members of the DefCore committee.
The OpenStack Core definition process (aka DefCore) is moving steadily along, and we’re looking for feedback from the community as we move into the next phase. Until now, we’ve been mostly working out the principles, criteria and processes that we will use to answer “what is core” in OpenStack. Now we are applying those processes and actually picking which capabilities will be used to identify Core.
TL;DR! We are now RUNNING WITH SCISSORS because we’ve reached the point where you can review early thoughts about what’s going to be considered Core (and what’s not). We now have a tangible draft list for community review.
While you will probably want to jump directly to the review draft matrix (red means needs input), it is important to understand how we got here because that’s how DefCore will resolve the inevitable conflicts. The very nature of defining core means that we have to say “not in” to a lot of capabilities. Since community consensus seems to favor a “small core” in principle, that means many capabilities that people consider important are not included.
The Core Capabilities Matrix attempts to find the right balance between quantitative detail and too much information. Each row represents an “OpenStack Capability” that is reflected by one or more individual tests. We scored each capability on a 100-point scale using 12 equally weighted criteria. These criteria were selected to respect the different viewpoints and needs of the community, ranging from popularity to technical longevity and quality of documentation.
While we’ve made the process more analytical, there’s still room for judgement. Eventually, we expect to weight some criteria more heavily than others. We will also be adjusting the score cut-off. Our goal is not to create a perfect evaluation tool – it should inform the board and facilitate discussion. In practice, we’ve found this approach to bring needed objectivity to the selection process.
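To make the scoring mechanics concrete, here is a minimal sketch of the kind of weighted scoring and cut-off described above. The criterion names, ratings, weights and the cut-off value are illustrative assumptions, not the committee’s actual criteria or scores; the real matrix uses 12 criteria, and the weights and cut-off are still subject to adjustment.

```python
# Hypothetical sketch of a DefCore-style capability score.
# Criteria, weights, and cut-off here are invented for illustration.

def score_capability(ratings, weights):
    """Weighted sum of per-criterion ratings, normalized to a 100-point scale.

    ratings: dict mapping criterion name -> rating in [0, 1]
    weights: dict mapping criterion name -> relative weight
    """
    total_weight = sum(weights.values())
    raw = sum(ratings[c] * weights[c] for c in weights)
    return 100.0 * raw / total_weight

def select_core(capabilities, weights, cutoff):
    """Return the capability names whose weighted score meets the cut-off."""
    return [name for name, ratings in capabilities.items()
            if score_capability(ratings, weights) >= cutoff]

# Equal weighting across three illustrative criteria (the real matrix uses 12).
weights = {"popularity": 1, "longevity": 1, "documentation": 1}
capabilities = {
    "compute-servers": {"popularity": 1.0, "longevity": 0.9, "documentation": 0.8},
    "experimental-api": {"popularity": 0.2, "longevity": 0.1, "documentation": 0.3},
}
print(select_core(capabilities, weights, cutoff=70))  # -> ['compute-servers']
```

Weighting some criteria more heavily than others, as the committee expects to do eventually, is just a matter of changing the values in `weights`; adjusting the score cut-off changes which capabilities clear the bar without re-scoring anything.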
So, where does this take us? The first matrix is, by design, old news. We focused on getting a score for Havana to give us a stable and known quantity; however, much of that effort will translate forward. Using Havana as the base, we are hoping to score Icehouse ninety days after the Juno summit and score Juno at the K Summit in Paris.
These are ambitious goals and there are challenges ahead of us. Since every journey starts with small steps, we’ve put our feet on the path while keeping our eyes on the horizon.
Specifically, we know there are gaps in OpenStack test coverage. Important capabilities do not have tests and will not be included. Further, starting with a small core means that OpenStack will be enforcing an interoperability target that is relatively permissive and minimal. Universally, the community has expressed that including short-term or incomplete items is undesirable. It’s vital to remember that we are looking for evolutionary progress that accelerates our developer, user, operator and ecosystem communities.
How can you get involved? We are looking for community feedback on the DefCore list on this first pass – we do not think we have the scores 100% right. Of course, we’re happy to hear from you however you want to engage (Twitter, ask.openstack.org, etc.): we intentionally named the committee “defcore” to make it easier to cross-reference and search.
We will eventually use Refstack to collect voting/feedback on capabilities directly from OpenStack community members.