What is the scope of the kernel?

2009/12/18

Our first blog entry on the kernel has started to generate some great discussions!

As we look through the initial comments and read some of the published support materials (for instance Philippe Kruchten: http://www.semat.org/pub/Main/PubsandRefs/Kruchten_conceptual_model.pdf), it becomes clear that there are a number of things that are essential to all software development efforts. Unfortunately we don’t have a widely shared vocabulary to describe those essential things. Establishing this basic vocabulary is one of the primary goals of the kernel, so please keep adding your thoughts to the discussions. The creation of a widely shared vocabulary would, by itself, be a valuable output from our community.

It is also clear that although the scope of the kernel is primarily software development and software engineering, it will have to touch on related disciplines such as teamwork, project management and perhaps process improvement. A successful kernel will also integrate with wider concepts of engineering, drawn first from systems engineering but also from other disciplines (civil, mechanical, etc.).

Software engineering is a team effort aimed at delivering value to customers (“users”), so it is no surprise to see suggestions in the blog that these elements be included in the kernel. The trick is going to be limiting ourselves to the truly essential, and keeping ourselves focused on the needs of the software development community.

There is a temptation with any effort of this kind to start to search for some universal, unifying theory of everything. The usual result of such a search is something so abstract and generic that it doesn’t really describe anything at all, and doesn’t help unify the field.  This is not the intention of the SEMAT kernel – to be effective the kernel must be kept concrete, focused and small.

Perhaps we can focus first on what the features of the kernel ought to be. Here are a few of the candidate features we consider integral to any successful kernel.

A Candidate Feature List

  • Practice Independence – the goal of SEMAT is to enable people to use whatever practices they want or need to use, regardless of the source or heritage of those practices.  At this point it is not about identifying which practices are the best practices, but enabling people to share, use and evaluate practices.  To this end the kernel must be practice independent (“agnostic”).
  • Lifecycle Independence – the vast majority of methods seem to define some sort of lifecycle concept (waterfall, iterative, etc.). Therefore the kernel must be able to support all of these divergent concepts.
  • Language Independence – again the kernel needs to be unconstrained by and independent from the programming and/or modeling languages that a team chooses to use.
  • Concise – the kernel must focus only on the small set of truly essential elements that provide the framework to align, compare and combine our practices.
  • Scalable – as a framework for assembling practices the kernel must be scalable. Although it must support the very smallest of projects – one person developing one system for one customer – it must also support the largest of projects, in which there may be systems-of-systems, teams-of-teams and projects-of-projects.
  • Extensible – as the kernel is focused on those truly essential elements central to all software development, it necessarily will only cover a subset of the concepts that are needed to develop a system. The kernel needs to be extensible in a number of ways including:
    • the ability to add practices, to add detail and extend the coverage of the kernel
    • the ability to add lifecycle management, to govern how work is undertaken when applying the kernel and a set of associated practices
    • the ability to tailor the kernel itself to be more domain-specific, or to integrate the software development work into a larger endeavor.
  • Formally Specified – in order to conform to the above requirements, and to form a unifying set of concepts for the practitioner, research and academic communities, the kernel must be rigorously defined. (A minimal illustrative sketch of what such a specification might look like follows this list.)
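
To make “Formally Specified” concrete, here is a minimal, purely illustrative sketch in Python. All of the names are invented for this post – they are not part of any agreed SEMAT specification – but they show how kernel elements could be modeled rigorously enough that practices from different heritages remain composable:

    # Hypothetical sketch only: one way a practice-independent kernel
    # might be rigorously modeled. All names are invented illustrations,
    # not part of any SEMAT specification.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class KernelElement:
        """An essential concept shared by all software endeavors."""
        name: str      # e.g. "Requirements", "Team", "Work"
        states: tuple  # ordered progress states, e.g. ("Conceived", "Bounded", ...)

    @dataclass
    class Practice:
        """A reusable unit of know-how, expressed against kernel elements only."""
        name: str
        extends: list = field(default_factory=list)  # elements it adds detail to

    # Practices from different heritages can be combined because both are
    # expressed against the same small, formally defined set of kernel elements.
    requirements = KernelElement("Requirements", ("Conceived", "Bounded", "Addressed"))
    use_cases = Practice("Use-Case Modeling", extends=[requirements])
    user_stories = Practice("User Stories", extends=[requirements])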

Are there key concepts we missed?  Other features that you consider essential?

Ivar Jacobson, Bertrand Meyer, Richard Soley


28 Responses to “What is the scope of the kernel?”


  1. Hi Ivar,

    One comment I would make about the candidate feature list is why include waterfall? Or more accurately, the accidental waterfall. Does it not fly in the face of the nature of tacit knowledge extraction as it represents an open loop “project system” configuration? And if it does somehow make sense to use an open-loop configuration for trivial problems, is this really engineering?

    I would suggest, as I have articulated in my blog ( http://www.fourth-medium.com/wordpress/?p=43 ), that it is time to retire the waterfall misinterpretation going back to 1970. If not, what science can proponents bring to explain why the delivery effects (and business results) are optimal? And if firms are forced somehow to continue its use, the corollary would be: what risks are they incurring – i.e., how suboptimal is it? The bottom line is “why” or “why not”, beyond “just because”.

    If we view knowledge as the aggregated system input, and software as the output, I would propose that systems theory suggests a closed-loop configuration should always be leveraged. And if you look at queuing theory and Little’s Law, batching is sub-optimal past critical WIP. These statements however are agnostic as to the specific practices that realize the closed-loop system (timeboxed iterative vs deliverable-based iterative vs micro-incremental).
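
    To make the Little’s Law point concrete, a toy calculation (steady state assumed; the numbers are invented) shows how, for a fixed throughput, allowing more WIP directly inflates average cycle time:

        # Little's Law: average WIP (L) = throughput (lambda) * cycle time (W).
        # Invented numbers, steady-state assumption; illustrates why batching
        # past critical WIP is sub-optimal for a fixed completion rate.
        throughput = 5.0          # work items completed per week
        for wip in (10, 25, 50):  # increasing batch/WIP levels
            cycle_time = wip / throughput  # W = L / lambda
            print(f"WIP={wip:3d} -> average cycle time = {cycle_time:4.1f} weeks")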

    I think this issue represents very important common ground among the communities. Just a comment looking for what others think. Other than that looks like a good start on a feature list.

    Mark


    • Mark,

      I think that is exactly the opposite of what SEMAT should do.

      I believe that, to be truly accepted in the whole community, SEMAT should stay neutral.

      You may or may not be right about the waterfall, but the fact remains that a lot of software is developed in a waterfall way, and a lot of software will be developed the same way in the future.

      If SEMAT does not make it possible to specify the “Waterfall Practice” within the constraints of its kernel, then it has failed already.


      • Geert,

        Thanks for your comment. On the points you make about the current reality in the industry: while I agree that there exists a large legacy of bad practices, why would we base guidance and forward-thinking research, theory and explanation on such a foundation? And if waterfall is to be included for the reasons you cite, then, as I already mentioned in my original post, at the very least the essence of this practice should be explained – the quantitative price you pay and the risks you incur when you use it. Otherwise, its inclusion without any reasoning or guidance would imply that SEMAT considers it generally acceptable practice.

        Perhaps you could explain how you would include this practice in such a kernel.

        Mark


    • Since we are being descriptive (rather than prescriptive) we must include existing practices from waterfall lifecycles because a large number of teams still use them.


  2. Try reading Philippe Kruchten’s Conceptual Model (except the Revisited sections) with the word ‘software’ replaced by any other form of project.

    While this model appears proper (especially if we assume Cost as a component of Work and Value as a component of Intent), do we see anything particularly “Software” about this model except in the instance terminology of the Revisited section?

    The model seems to be a valid rediscovery of existing knowledge about projects in general — which would be an example of what we described in the problem statement. If a conceptual model of a project is a sufficient basis for our kernel, then ought we rather to search existing project management literature for an existing, validated model and adopt that as our own?


    • So … we probably also need to adopt a working definition of software and to make our kernel dependent on that definition for its practical understanding. This would help to exclude practices unrelated to software development.


  3. re Practice Independence and Extensibility. Certainly there is a need for the kernel to be independent of the practices, but we also may require means (within the Extensible feature) for clustering interdependent practices and for filtering incompatibilities among practices.

  4. Chris Says:

    It is not clear what this “kernel” is supposed to be or what it is supposed to do. Is it just a collection of definitions to serve as a basis for later discussion?

    Regarding the earlier comment about how Kruchten’s conceptual model isn’t software-specific: yes, I totally agree.

    Question: do other non-software disciplines organized around projects have a guiding theoretical foundation? For example, do civil engineers have a theory of civil engineering? Do aerospace engineers have a theory of aerospace engineering? I think not, though I am certainly not an expert.

    I imagine that they have (physical) theories that predict the properties of the media that they work with. They have theories that describe aluminum and steel, for instance. But do they have theories of how humans work together as engineers with their tools and media to perform a project? I think not.

    Why should software engineering be any different?

    We have theories of our (virtual) media. We have, for instance, quite excellent theories of formal language, type theories, analysis of algorithms, and so forth. Just as a civil engineer can use theories describing steel to predict if his building will fall down, we have theories to predict the scalability of our systems. We are in the same boat as other engineering disciplines, insofar as we are guided by theory. So why are we worse off than they?


    • Chris,

      On the point regarding other engineering disciplines – each applied science (engineering) leverages a body of knowledge (science) to create new solutions to problems in society. So Civil Engineering has as their foundation statics, dynamics, and the like. Aerospace Engineering leverages control theory, fluid dynamics and the like. Software Engineering leverages Computer Science.

      In terms of the study of the project delivery system (project organization), MIT has studied this for quite some time – even in Construction and Civil problem spaces. For example, see http://web.mit.edu/jsterman/www/SDG/project.html . This all derives from work at the System Dynamics Group founded by Jay W. Forrester. A more recent reference, by Bernardo A. Huberman and Denis M. Wilkinson of HP Labs/Stanford University, is titled “Performance Variability and Project Dynamics”.

      I note that nothing in the meta-model description from Philippe describes dynamics. The types articulate the statics and structure of a delivery project organization. The study of system dynamics focuses on understanding the temporal aspects of the system, so that leading indicators can be identified and practices applied to alter outcomes.
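
      For a flavor of what a dynamic model adds over a static meta-model, here is a toy “rework cycle” simulation, loosely in the spirit of the System Dynamics Group’s project models (every coefficient below is invented for illustration):

          # Toy rework-cycle project model; all rates are invented.
          work_remaining = 100.0     # tasks believed to be left
          undiscovered_rework = 0.0  # flawed tasks not yet found
          productivity = 5.0         # tasks attempted per week
          defect_rate = 0.2          # fraction of attempted work that is flawed
          discovery_rate = 0.3       # fraction of hidden rework found per week

          week = 0
          while work_remaining > 0.5 and week < 100:
              attempted = min(productivity, work_remaining)
              flawed = attempted * defect_rate
              undiscovered_rework += flawed
              discovered = undiscovered_rework * discovery_rate
              undiscovered_rework -= discovered
              # discovered rework flows back into the backlog: the closed loop
              work_remaining = work_remaining - attempted + discovered
              week += 1

          print(f"apparent completion after {week} weeks; "
                f"{undiscovered_rework:.1f} tasks of hidden rework remain")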

      On the issue of why we are different from, say, architecting bridges – bridge building has a much higher maturity in terms of leveraging patterns and reference architectures than software, and those reference architectures have been forged over a much longer time horizon. Therefore the uncertainty is lower in terms of outcomes – even though the impact of risk materialization is higher. But when precedent is low in both problem spaces, I would argue that the gap is not so apparent. Yet we seem to learn better from failures in bridges than from failures in software. When a bridge fails, we have the mathematical toolkit to model and understand why; we haven’t yet leveraged as robust a toolkit to understand software failures. With software, when a project “fails”, the root cause is people – tacit or hidden knowledge that has not been made explicit and known, meaning that the apparent convergence of requirements and the actual convergence of requirements differed.

      Mark


    • A lot of engineering uses thermodynamics as one important foundation. Energy is present everywhere, and that’s why you deal with it in chemistry, electricity, mechanics, mining, …

      Comparing thermodynamics and software engineering theories is quite hard. Thermodynamics is proven to work.

  5. John Siegrist Says:

    May I put forward this simple proposal for the development of the software engineering “kernel”:

    1) Define what work products a software engineering project will have generated at its successful completion.

    2) Use a reverse planning process to work backward from these work products in as few logical steps as possible to the initial project inception.

    The list of final work products should be fairly straightforward to assemble and something around which consensus can quickly be built.

    From there, the questions of “What is needed to produce this work product?” and “Which work product does this kernel feature support?” will allow a reasonable basis for identifying and selecting the features of the software engineering kernel.
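
    As a sketch of how that reverse-planning step might work mechanically (the work products and “needs” links below are invented examples, not a proposed standard):

        # Hypothetical sketch of reverse planning: walk backward from the
        # final work products through "what is needed to produce this?" links.
        needs = {  # work product -> its prerequisites (invented examples)
            "delivered system": ["tested build"],
            "tested build": ["source code", "acceptance criteria"],
            "source code": ["design decisions"],
            "acceptance criteria": ["stakeholder needs"],
            "design decisions": ["stakeholder needs"],
            "stakeholder needs": [],  # project inception
        }

        def plan_backward(product, seen=None):
            """Print prerequisites before the products that depend on them."""
            seen = seen if seen is not None else set()
            if product in seen:
                return
            seen.add(product)
            for prerequisite in needs.get(product, []):
                plan_backward(prerequisite, seen)
            print(product)

        plan_backward("delivered system")  # prints inception-to-delivery order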


    • If we were to take a Lean Thinking approach to this, we would end up with “working, tested software” as the only work product that should be the output. All other outputs would be either Type 2 muda (unnecessary waste) or Type 1 muda (necessary waste, given how work is currently performed). Anything that is not waste is what the customer is willing to pay for. If design artifacts or persisted requirements documents are of value in their eyes then great, but this varies based on circumstance.

      As such, I think we will have difficulty defining a set of work products that can be deemed Type 1, as this is not static and is contextual. Depending upon the practices that we instantiate for the circumstances on the ground, differing work products will be produced. And unless there is intrinsic value in the eyes of the customer, anything other than working software is waste. It is therefore highly improbable that we can arrive at a kernel articulation from this one output alone.

      To my mind, we need a system model based on practices and their effect on the project delivery ecosystem. The work products are coincidental to the application of practices, as it is the practices/patterns that answer the question “why” we are producing something other than working software.

      • John Siegrist Says:

        Ok, “working, tested software” constitutes a reasonable starting point for discussion. What is required to demonstrate that some delivered software works? What is required to demonstrate to some level of confidence that the software is tested? Is ‘tested’ even the appropriate term? If by tested you mean that the software has been shown to provide some minimum assurance that it is free from errors, then I agree with that.

        You have now provided three primary work products that are essential to providing “working, tested software”.

        1) The complete set of source code needed to constitute the finished system.

        2) Functional acceptance criteria for the system. Basically a statement of all the things that the system must do to be considered “properly working”.

        3) Proof that quality control measures were applied to the software, and that this quality control process ensures a certain minimum level of quality. That is to say, that the quality controls prevented or eliminated defects in the software.

        To conclude, I can agree with your starting minimal definition because at first approximation it provides an adequate starting point for branching out into all the other “essential” issues of software engineering.

      • John Siegrist Says:

        By the way, I should say that I don’t disagree with looking at some form of ecosystem model. However, at the present time I don’t have a good idea about what the contours of that ecosystem might be. It seems like a reasonable first step to building such a model could be identifying all of the major work products around which all software engineering activities are organized.


        • John,

          Check out a new post on my blog:

          http://www.fourth-medium.com/wordpress

          From your above list of “outputs”, I would argue that tests do not represent outputs of the system, but rather implement the negative feedback mechanism of the project delivery system, leveraged by a compensator to “steer” or influence the outcome of the dynamics of the delivery organization.

          This would then bring us back down to one “output” work product of primary value to the customer; all else you mention is used to converge on actual scope, and to bring steady-state error to within an acceptable range.
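
          In control terms, the claim looks roughly like this toy closed-loop model (the gain and numbers are invented; it is an analogy, not a calibrated model):

              # Toy closed loop: tests/demos implement negative feedback and the
              # team acts as a proportional compensator. Numbers are invented.
              actual_scope = 100.0  # what the customer actually needs
              delivered = 0.0       # working software delivered so far
              gain = 0.5            # how aggressively feedback is acted upon

              for iteration in range(1, 11):
                  error = actual_scope - delivered  # revealed by the feedback loop
                  delivered += gain * error         # corrective work next iteration
                  print(f"iteration {iteration}: steady-state error = {error:6.2f}")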


    • There are multiple products of value. Delivered software (data and logic), a source library (with some sort of understandability for reuse), a test library (for reuse on a future project), and a usage library (user/sysadmin/tech manuals) for operation and maintenance are all specific to the purpose of the project; the project history (in whatever form) becomes a resource for estimation and improvement of projects with other goals. Each of these work product sets can be scaled (back) to an appropriate (minimum essential) size and (in)formality. Even if external clients do not need all the libraries, they offer value internally. I was told that when the Kendall facility for USCS was delivered they bought just the accepted software — and regretted it daily until Hurricane Andrew destroyed the place.

      An artifact-centric approach, however, may not get us to a framework. Work products (including the delivered software) exist to fill some need; other products might satisfy that need. We have all produced high-quality software which didn’t meet the needs of the business. And the order of work products is not determined; for example, risk-based post hoc testing is good, unless we do risk-based development, with or without test-driven development. Unit tests don’t resemble acceptance tests. The ‘test set’ satisfies a need which might be phrased as ‘to receive useful feedback on the quality of runtime activity’ (or some better wording).

      Perhaps an artifact-centric approach would better serve as a method for testing whether the framework is complete enough to support production of standard work products.

      • John Siegrist Says:

        You’ve raised some good points about the limits to an artifact-based approach to building a framework/kernel. What then would constitute a more profitable approach to identifying the software engineering kernel? Use OOAD methods to build an object model for software engineering? Start from the existing literature and identify an existing model to use as the SEMAT kernel?


  6. Ivar seems to be following the UP optional task to develop a vision. OpenUP describes this as follows:
    – Identify Stakeholders
    – Gain agreement on the problem to be solved
    – Gather stakeholder requests
    – Define the scope of the solution
    – Define features of the system
    – Achieve concurrence
    – Capture a common vocabulary

    So it may be a little premature to define the approach. That said, I suggest we blend existing models (which, ideally, have each demonstrated some practical value) to wring (not abstract) the gist from them and provide some (probably new) terminology to articulate their commonalities. Because of derivation, the mapping from our new framework to these existing models is a given. This is comparable to Hersey & Blanchard’s Situational Leadership Theory mapping to earlier forms of management theory. We can make a fine blended cognac from multiple provenanced wines.

  7. Jarkko Viinamäki Says:

    I think this is a good initiative. I came up with a similar kind of kernel concept a few years ago (based on agile manifesto/DOI/lean values and principles, a company business-level PM steering model, the essential core ideas + phase model of RUP, Scrum as the iteration management model, FDD thinking, selected XP practices and some selected practices from libraries like Crystal Clear, bound together as one complete integrated approach). However, although it’s quite well applicable to a number of different projects, it doesn’t meet your listed criteria.

    I think the Feature List is such that the end result will be so abstract and high level that you end up with something like ([] = optional): “[Using iterative and incremental approach] find out and prioritize what the users need, build and deliver a [part of the] system to support those needs and verify that it really satisfies the balanced needs of all stakeholders.”. Anything else easily violates some of your listed requirements.

    Before defining a kernel, you need to ask: “What is the problem?” Why do you want to define it? Really? Why? Why? Why? Why? Why?

    Software development is extremely complex and very highly context dependent. It is really hard to define process elements or guidelines that are applicable in all situations – regardless of the context, constraints and drivers.

    But there are certainly things that I consider almost “universally good”. For example the RUP phase model with different risk focus, risk/business value driven development, IID, walking skeleton pattern, feature teams etc. are all at least very useful – even if not always applicable.

    I would be more pragmatic with this issue. When you start a new project (or a new member joins), the first question is:

    “How are we going to carry out this project?” (process/methods point of view)

    That question breaks down into several smaller questions. Answers to those questions define the team’s more or less _context specific_ approach. I call this “FAQ driven process tailoring”.

    The best value can be provided by:

    a. Creating a model that will help the team to discover what the key questions are

    b. Creating guidelines for how to document answers to those questions, and providing examples (e.g. some prethought models for a change/release management process).

    So in my mind the kernel need not tell you how to run a project. It can outline the core values, principles, properties, requirements and general goals for the process, but mainly it should be a tool that helps the team quickly find and document a tailored approach (“kernel configuration”) for their project, and outline a mechanism for continuous improvement.
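
    A toy illustration of the “FAQ driven” idea (the questions, answers and practice mappings below are invented placeholders):

        # Hypothetical sketch of "FAQ driven process tailoring": key questions
        # map each answer to a candidate practice. All content is invented.
        faq = {
            "How will we manage changing requirements?":
                {"fixed upfront": "change control board",
                 "evolving": "prioritized product backlog"},
            "How will we verify the system?":
                {"at the end": "dedicated test phase",
                 "continuously": "automated regression tests"},
        }

        def tailor(answers):
            """Turn a team's answers into a context-specific kernel configuration."""
            return {q: options[answers[q]] for q, options in faq.items()}

        config = tailor({
            "How will we manage changing requirements?": "evolving",
            "How will we verify the system?": "continuously",
        })
        for question, practice in config.items():
            print(f"{question} -> {practice}")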


    • Hi Jarkko,

      I agree with your perspective. In fact, with respect to your post, I have just finished a new book titled “SDLC 3.0: Beyond a Tacit Understanding of Agile.”

      You can order it now at http://www.fourth-medium.com/sdlc3_0.htm

      Anyone who has been poking around my blog will see a domain model much like the one you seem to have come up with. And I believe many pragmatic practitioners out there came to the same conclusions some time back.

      With respect to the issue of a model or foundation, in the book I put forth an application of the body of knowledge of Control Systems Engineering to begin the process of going beyond the rhetoric and the anecdotal “fashion industry” phenomenon.

      It is great to see similar “centrist” philosophies starting to emerge.

      Regards,

      Mark Kennaley
      President & Principal Consultant
      Fourth Medium Consulting Inc.

  8. scottwambler Says:

    I think that the discussion which we’ve read here points to the difficulty of the challenge that this effort faces. There is wide range of opinions as to how things should work, what’s important, what isn’t, what the scope should be, and so on.

  9. scottwambler Says:

    I believe that part of our scoping discussion must be to define the context in which practices will be applied, because the context determines the applicability of the practice as well as how it is tailored. An approach which works well for a medium-sized co-located team in a regulatory compliance situation is likely to fare poorly for a small distributed team developing an informational website. Furthermore, context is particularly important for the research behind the practices, because without a clear indication as to the context in which a practice was evaluated it will be very difficult for practitioners to identify which strategies are best suited for them. It borders on inane to argue whether one practice or approach works better than another without defining the context. For example, religious fervor aside, classical/traditional approaches are more effective than agile in some situations. Few, mind you, but it is still an important observation to make.

    I recently wrote an IBM white paper entitled The Agile Scaling Model (ASM): Adapting Agile Methods for Complex Environments at ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/raw14204usen/RAW14204USEN.PDF which defines what I’ve been referring to as the 1+8 scaling factors to communicate the context faced by project teams. The “1” is life cycle scope: at a minimum you should consider the full delivery life cycle and not just the construction life cycle. The eight other scaling factors are team size, geographical distribution, regulatory compliance, domain complexity, organizational distribution, technical complexity, organizational complexity, and enterprise discipline. A tenth factor, paradigm (e.g. agile vs. classical vs. …), is implied.
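
    To show what recording that context might look like in practice, here is a hypothetical sketch (the factor names follow the white paper; the 1–5 ratings and the code itself are my invention, not part of ASM):

        # Hypothetical sketch: capturing the 1+8 scaling factors for a team.
        # Factor names follow the ASM paper; the 1 (simple) to 5 (complex)
        # rating scale is invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class ProjectContext:
            life_cycle_scope: str                 # the "1", e.g. "full delivery"
            team_size: int = 1                    # each factor rated 1..5
            geographical_distribution: int = 1
            regulatory_compliance: int = 1
            domain_complexity: int = 1
            organizational_distribution: int = 1
            technical_complexity: int = 1
            organizational_complexity: int = 1
            enterprise_discipline: int = 1

        ctx = ProjectContext("full delivery", team_size=2, regulatory_compliance=5)
        print(ctx)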

    • John Siegrist Says:

      Hello Scott,

      This process appears not just hard but more or less impossible. Judging by the contributions here on the blog, there are only a few software developers who both have an interest in software methodology and know about this effort. Of these few participants, most either have already devised their own pet methods or otherwise have definite opinions about what successful software engineering processes look like. At this point, SEMAT would be better served if information about best practices were solicited from each of the signatories. Properly documented in a “patterns” format that includes the contextual information you describe as so essential, a collection of software practices might provide a better starting place for discussion. Perhaps with enough documented practices the SEMAT community may eventually build a much more general software engineering “kernel” through induction.

      • scottwambler Says:

        Yes, this is going to be a hard process as it could potentially engender a significant shift in the way people think. However, the Agile Manifesto had a fairly large impact on the industry so who’s to say?

        Just putting together a list of best practices won’t help much, and pretty much anyone with access to a search engine and a few hours could do so pretty quickly. The pattern community has documented thousands of patterns over the years in more or less of a consistent format, yet how many practitioners are aware of more than the GoF design patterns plus a few more perhaps?

        Getting a collection of industry thought leaders to agree on something and then work towards it would likely be of significant value. Like the Agile Manifesto it could provide a significant leverage point for the industry.

        Or perhaps nothing will come of the effort. Time will tell.



