Establishing a Kernel?


One of the key foci of the SEMAT Call for Action is to agree “a kernel of widely-agreed elements, extensible for specific uses”.  In this discussion we will explore the idea of a kernel. The goal is to gather requirements related to the kernel.  Please join the debate by commenting on this entry.

What is the Kernel?

An important step in fulfilling the promise of the SEMAT initiative to “refound software engineering” is to establish a framework for thinking about software development.

This framework must provide concrete representations of the acts and artifacts of software development, representations that are applicable no matter the size or scale of the software under development, nor the size, scale or style of the team involved in the development.

We call this framework the “kernel” as it captures the set of elements that are inherent to all software development efforts. In essence, it provides a practice-independent framework for thinking and reasoning about the practices we have and the practices we need.

The goal of the kernel is to establish a shared understanding of what is at the heart of software development. We must discover the set of elements that are essential to all software development efforts, a shared body of knowledge for academics, researchers and practitioners.

The kernel will allow us to think about software development in a number of different styles.  It might comprise:

  • A list of the key elements that always need to be addressed, in all projects;
  • A map of the practice territory, so that we understand how to build from the kernel;
  • An understanding of the skills needed to build teams and software;
  • A framework for using the kernel, both to develop extensions to the practice and to develop software;
  • A set of definitions to be shared by all practices.

The kernel is the core of the underlying theory for software development. While rigor is important, the kernel must also be usable; that is, it can be taught, measured and put into practice in real projects. To academics, the kernel might be seen as a language to describe practices. To the developer, the kernel underlies what he does every day.

It’s important to remember that SEMAT is focused on building theory that allows different practices to work together – not defining new or “best” practices.  As Leonardo da Vinci once said, “He who loves practice without theory is like the sailor who boards a ship without a rudder and compass and never knows where he may cast.”

So what are the key components of such a kernel? What is extraneous or unnecessary in all descriptions of practice? What are the right starting points in existing research or practice?

– Ivar, Bertrand and Richard

53 Responses to “Establishing a Kernel?”

  1. Hassan Diab Says:

    At a very high level, what should a software engineer learn or know, compared with any other domain engineer?

    Perhaps part of the answer can be found by discussing the NEEDS that lead us to think about having and developing “new things” nearly every year. Many keys should be considered:

    A- Customer: knowledge and culture about software, application area (industry sector) and nature of software (real-time, embedded, AI, etc.), particularities of companies, implementation of new technological solutions
    B- Dev. companies: maturity, tools, methods and habits (bad and good), people.

  2. Small proffering for “A list of the key elements that always need to be addressed, in all projects.”

    Every project needs to address:

    – Owners of the project
    – Owners of the product
    – Development team
    – Others affected by project/product (stakeholders)
    Even if it is a very small project:
    o “I wrote this app for my phone to tell me when to refresh my phone book”

    – “Projects” cost money. This results in expectations relative to cost, which may affect practice selection.

    – Software requires host hardware (for development, test, and operation)

    – We have to do things somewhere, and geographic distribution may affect practice selection.

    – Most projects begin with two things: a name and a deadline

    Business purposes
    – Two sets of purposes for the project itself and for the software solution.
    o “A key purpose of this project is to reduce call waiting times for our customers”
    o “A key purpose of the IVR solution is to efficiently direct callers to automated responses vice CSRs whenever possible”

    Sequence of activities
    – The ‘instruction pointer’ for the business process
    o “The missile software acquires its designated target, then tracks, then detonates”

    Business Entities/Vocabulary
    – The named objects of the domain
    o “Each Division (geographic area) has Departments (functional areas) which operate from Branches (physical storefronts) ”

    Business Rules
    – Decision-making criteria
    o “Every loan must have a borrower”

    – Representation of some attributes of Business Entities

    – Someone wants to know how the project is going and how well the product is received.

  3. Johnny Hermann Says:

    I cannot see a unified S/w Eng way being generally accepted (at least in industry) without integrating the practices of project management, from both the traditional (PMI) and agile (Scrum) perspectives. Techies and management both need to be on-board.

    • Luc Pezet Says:

      That’s an interesting point, Johnny. Do you think the kernel should be so specific as to even specify its use and/or implementation?
      For example, why Scrum? If I use something else (e.g. non-iterative development), it’s no longer software engineering?
      Finding that boundary would help greatly in defining the kernel I believe.

      • Johnny Hermann Says:

        “PMI” and “Scrum” were actually meant as examples. Nothing particularly special about those methodologies – I just happen to have experience with them.

        So no, I would not presume to impose an impl. More like the concept of the PMI processes, a subset of which can be applied as the situation dictates (no enforced order).

        We may not get away with much more detail than that used by say the Agile Manifesto principles. And perhaps S/w Eng will never really be elevated to a first-class Eng discipline until personal legal liability is imposed.

        From my experience, it is difficult to inject processes into a commercial Eng department, esp. without top-down champions. But I figure the PM is a key role closest to where the rubber meets the road. PM processes are somewhat accepted, and perhaps could be piggy-backed to inject a S/w Eng way.

        • Luc Pezet Says:

          I agree with you.
          Just like Construction Engineers have to do some overall planning, co-ordination and control of a project from inception to completion (as I understand it, I’m not a Construction Engineer), some “management process” should be part of Software Engineering.

  4. June Park Says:

    One essential element of any software is the “value” that it creates for the users of the software. The value should be greater than the price that a user pays to use the software. The total sum of prices that all users pay to the developer of the software should be greater than the cost that the developer spends in developing and delivering it. Software engineering should be driven by, among other things, the value created to the intended users.

  5. John Keklak Says:

    My best wishes for success with this effort. I’ll be happy to contribute what I can.

    To respond to the question, “What are the right starting points in existing research or practice?”:

    It seems that the place to start is to acknowledge these two realities:

    (1) Whatever the kernel prescribes has to be enjoyable for programmers. It is almost universally true (but perhaps not generally acknowledged) that programmers actually *like* how they do things now; those who don’t go somewhere else. Programmers get to play with a lot of code that generally does neat things, and a check arrives regularly, and maybe not even that. Most programmers see software engineering as prescribed by SWEBOK, etc., as drudgery and definitely not fun. Programmers like what they have now, and they will have to like what the kernel prescribes.

    Moreover, the data shows that programmers are very good at dodging, and at doing badly, (a) the things they don’t like to do and (b) the things they see as needless, just extra work. For instance, my experience with programmers being asked to do a task analysis and time estimate for an Agile story is that they don’t like to do it, so they dodge it, they do it badly, and they make the bare minimum effort only because they have to.

    My experience with situations where, for instance, extreme programming practices stick corresponds with situations where the programmers believe — through their own experience — that things like pair programming and continuous builds are the best way to get things done, and programmers actually *like* working in this way.

    (2) Whatever the kernel prescribes has to work better than what programmers do now. When the heat is on, what programmers do is what gets the job done the most quickly. The data shows generally that Agile melts away when the heat comes up.

    Can we do better than we are doing now? I do think so. However, changing practices will require meeting these two conditions.

    • As most comments so far have indicated, software engineering does not include “programming” per se. The distinction is between engineering and construction; by analogy, civil engineers understand concrete but do not mix or pour it. Given that perspective, could we amend the above to say that programmers (who consume the results of engineering) need to enjoy (i.e. find readily useful) the products from engineering?

      • Luc Pezet Says:

        That’s interesting, Skip.
        As you say later here, things that are straightforward today were not yesterday. Making Kevlar came from trial and error in DuPont labs, and only the best scientists at DuPont could do it. Then, with pattern recognition, it became easier to do, and today, with quantum theory, it’s down to a rules-based approach and anyone with a BS degree in chemistry can do it.
        The same sort of thing happens (and happened) in software, I believe, and it moves the engineering part up to “higher” grounds.

        Do you still think we should entirely disregard “programming” from Software Engineering?

      • John Keklak Says:

        Skip: By my first criterion, I mean that whatever process people who create software are asked to use needs to be enjoyable. Perhaps a better way to say it: whatever method or process one would choose from the “kernel” would need to be perceived as genuinely useful, i.e., does it get the job done when the heat is on? People who create software tend to enjoy using things that get the job done when the heat is on.

        With regard to the implied analogy to civil engineering: I think we can readily agree that creating software is quite a different business from civil engineering and construction. This should be no big surprise, since each engineering discipline has its unique characteristics.

        Can we exclude “programming”? Is it possible to design software and expect someone else to go off and write the code? It seems quite clear that the answer is ‘no’.

        • Luc Pezet Says:

          John: I agree with you that the different engineering disciplines involve different sets of skills and sometimes very distinct domains of expertise.
          However, they are all about engineering, and as such they all have some fundamentals in common.
          It just takes a couple of minutes to look at the details of multiple engineering disciplines and see that (1) a perfect understanding of the fundamentals (of Computer Science in this case), (2) a strong understanding of the particular domain (if any) and (3) management of projects in such domains are all expected (if anything, look at the Fundamentals of Engineering Exam).

          As for “programming”, I have yet to see Construction Engineers make concrete or dig foundations.
          It is true however that Software Engineers tend to do (or at least touch) everything during software development.
          Most likely it’s because we’re in the very early years of Software Engineering, and in this industry we have yet to see the clear distinction we can see today between Construction Engineers and construction workers.
          We’ve been building things since 2000 BC, and we can trace proper Civil Engineering back to perhaps the 18th century.

          Should the kernel (or meaning of Software Engineering) be defined for today or for tomorrow?

  6. Luc Pezet Says:

    When I think of *acts* in software development, the following come to mind (in no particular order):
    (1) Prototyping: creating proofs of concepts to validate initiatives before further investments.
    (2) Modeling: representation of the software to build to define the solution for the problem at hand.
    (3) Coding: creating or using instructions to implement the model at hand
    (4) Validating: testing the current solution against its requirements

    They all seem very “applicable no matter the size or scale of the software under development, nor the size, scale or style of the team involved in the development,” but they are also very much related to the very definition of Engineering (besides coding ;)).

    Is the kernel intended to be an “implementation” of the definition of Engineering? (i.e. use Computer Science terms in place of generic ones in the Engineering definition)
    Would it be helpful to see “kernels” of other engineering disciplines?
    What would be the kernel for Civil Engineering for example?

  7. In producing software we have three layers. The first layer is that of user requirements (or product specifications) and the last layer is the code. The middle layer is the engineering. In my humble opinion, we have attained maturity in the top as well as the bottom layers, but we are woefully short of satisfaction as far as the middle layer is concerned. While many attempts have been made to develop tools for software engineering, those tools are nowhere near the precision or clarity of the engineering drawings used in other disciplines of engineering, such as mechanical engineering or electrical engineering. What we ought to focus our effort on is developing a set of tools that match up with engineering drawings for software development.

    An engineer is one who converts product specifications or user requirements into a form that can assist the programmer in constructing the necessary code to build the product. Engineering is the combination of processes and tools that assist us in so doing.

    The first step is to come up with a clear and unambiguous definition of software engineering from which we can build further.

    Engineering is defined thus, “the process by which the requirements of the customer are converted into drawings, parts lists, material specifications, process descriptions, and test specifications for use by the manufacturing organization in the fabrication, assembly, and testing the finished product” (The Handbook of Industrial Engineering and Management, by Eugene L. Grant and W. Grant Ireson, 1955).

    I personally like this definition and would like to substitute the words “drawings, parts lists, material specifications, process descriptions, and test specifications” with equally powerful software engineering tools and the words “fabrication, assembly” with “construction, building” in the above definition to come up with a definition of “software engineering”.

    Best wishes
    Murali Chemuturi

    • John Keklak Says:

      Creating software products is somewhat different from the process of creating mechanical products, buildings, bridges, or petroleum cracking systems. There are even significant differences between producing software and computer hardware.

      One of the key differences is economic. In cases where the end product is a physical item, making changes requires significant expense. This leads to a natural motive to think things through, and prove with reasonable certainty that the idea will work before the expense is committed to producing the physical item. For instance, in the case of a bridge, a very significant amount of modeling is performed to verify the concept meets deflection, vibration and even aesthetic requirements.

      In software, the cost of making a correction, even late in the process of “building” the product, is relatively minor. The data, as evidenced by the widespread practice of code and fix, clearly illustrates the lack of a motive to prove that ideas are sound before committing them to the product.

      A second economic motive is legal liability and the need to adhere to regulations, but I will skip this rather significant consideration for the present moment.

      Another key difference is the level of complexity and the difficulty of capturing the requirements completely at the start of a project. Each term, students who take my Boston University software engineering course must carry out a project to design and implement a game of some sort. Despite what I believe to be a very clear statement on my part of how I expect the game to appear, its rules, and how it is to be played, I marvel at how the working games differ from my original concept. My initial requirements, no matter how thoroughly stated, contain more than ample “wiggle room”. For this reason, the first few weeks bring many requests from students to clarify one point or another. Even so, the requirements still remain ambiguous to a significant degree.

      Moreover, my concepts usually contain “requirements bugs”, such as an inconsistency in the rules that becomes evident first to the students as they design, implement and play the games. In such cases, the requirements may have to be changed late in the project.

      Let’s look at these problems in terms of your “three levels”.

      I suspect that readers will agree that the game development scenario above roughly corresponds to their experience with requirements. Most of the time, most of the requirements are established in the early part of a project through rounds of discussion with the customer. Nearly always, some sort of “requirement bugs” become apparent as the software is written.

      The matter becomes much worse at your “second level”. Requirements usually look stellar compared to any sort of design specification when considered from the point of view of completeness, conceptual integrity, and outright correctness. Any attempt to specify a design, no matter how hard one tries, is always riddled and rife with errors, wildly mistaken notions, and missing pieces to the extent that it is virtually impossible to proceed to the “third level” to construct the code.

      Your “three-layer” idea can be made to work, however. The trick is to recognize that the concepts in level two must be tested, just as the concepts for a bridge, auto braking system, petroleum cracking process, or aircraft must be tested and revised when conceptual errors and omissions become evident.

      How does one test software concepts? By writing software exactly according to the concepts, of course. Usually only a few minutes elapse before coding reveals painfully significant and obvious conceptual issues.

      A second trick is to merely take note of these issues, and not immediately fix them in the code (thereby escaping stepping onto the slippery slope of code and fix). Instead, armed with a list of conceptual errors, the programmer returns to the concepts, fixes the problems there, then returns to the code to update it, once again, to match the concepts. As before, only a few minutes will pass before additional conceptual errors become obvious. Repeating the process of returning to the concepts, fixing the problems there, then updating the code, the programmer evolves the concepts to a largely sound and proven state.

      As distasteful as this may sound, in this process the programmer becomes the “human concept compiler”. However, this is no more demeaning than the role of lab technician or finite-element engineer in other engineering disciplines.

      Note that the code being generated is not necessarily the product, but rather the experimental apparatus that serves to verify the concepts in the design. Of course, much of this code can ultimately be used for the end product as well. Thus, in effect, level two and much of level three are merged when taking this approach.

      There is a step that roughly corresponds to your “third level”. Once the concepts have been developed in the fashion above, it is usually necessary to refactor the code to a professional state. Additionally, “manufacturing” of the software consists of making copies, either onto physical media, or by downloading from a server (thus software is perhaps the least expensive type of product to manufacture).

      The procedure I describe above is not really anything new. We, as programmers, do this now, but with less accuracy, and in a way that makes it more difficult to discuss ideas with other programmers — most of the conceptual information exists only in the heads of the programmer(s) most familiar with the code, and is quite difficult to share accurately with colleagues.

      Space prevents me from providing details about the format for recording conceptual information. I will elaborate in follow-up posts if anyone is interested.

      Related to this, a little-acknowledged statistic in the software industry is that 75% of resources applied to software development are wasted on the time that programmers merely *study* code to obtain a sufficient level of understanding and fluency so they can meaningfully fix a bug or add a feature. The by-product of the proposed merge of levels two and three (i.e. the concepts in written form) vastly reduces this “wasted 75%”. I know this from much experience.

      • John Keklak Says:

        To follow up on this comment and on an earlier comment (2009/12/11 at 14:13) to this post:

        (1) Is this process enjoyable for programmers?

        (2) Does this process work better than what programmers do now?

        Despite my clear bias, I believe the answer to both of these considerations is ‘yes’.

      • I come from the software development industry with about 20 years of software experience, having worked on a few projects, some of which were software maintenance projects. I worked both as a programmer and as a project manager. I spent 15 years in the manufacturing industry before switching to IT and software. I handled data processing, training and consulting besides software development. I am still a hands-on programmer. Regarding validating software requirements and design, the explanation needs a lot of space. There are many scenarios where the design cannot be validated through construction of even a bare-bones product, except by mathematical simulation. However, I did document them in a book, “Mastering Software Quality Assurance: Best Practices, Tools and Techniques”, which is likely to be released in Feb 2010. I also contrasted the manufacturing discipline with that of the software industry in that book.

        It is a misconception to think that fixing code is relatively cheaper in software than in physical products. I agree that breaking and remaking code doesn’t generate the noise it generates in physical products. But think of a scenario where a requirements defect is uncovered when the product is in the system testing stage! First the cost of testing and uncovering the defect, then analyzing it to trace the origin of the defect, then locating the defect in the code, then fixing the defect and finally the regression testing to ensure that the bug fix did not inject new defects. Regression testing is much more costly in software development than in physical products. All in all, the cost of fixing a defect is relatively similar in either a hard product or a soft product.

        However presently I am advocating that the present set of software engineering tools need significant improvement before they can attain the precision and clarity of tools used in other engineering disciplines. I think that this is possible too.

        Best wishes
        Murali Chemuturi

      • Luc Pezet Says:

        Hi John,

        “One of the key differences is economic.”.
        What about software development at NASA? Do you think it could match the economics of building a bridge? Or maybe even beat it?
        What about Mechanical Engineering or other projects than bridges and other big things?
        On Wall Street in New York, they built some kind of rotating gates (2 giant wheels flat on the floor) to block cars for inspection (explosives and such). It’s a decent piece of engineering, I believe, yet it would not match the “economy” of building a bridge or skyscrapers. Guess what? It broke down a week or two after they started using it.
        I can’t believe Civil Engineers build bridges every day, just like Software Engineers don’t build Nuclear Power Plant management/control systems or controllers and interfaces for brain-surgery/medical devices every day.

        I agree with you that SE and the other engineering disciplines differ in some respects.
        Do you think it’s because all other Engineering disciplines have to deal with the laws of physics mostly?
        Isn’t software kinda dictated by the hardware it runs on?
        If laws of physics go wild, bridges won’t stay up very long, just like if hardware “explodes”, software won’t run well anymore.
        What do you think?

  8. I have been keenly following the SEMAT movement since its announcement. I feel it is a great initiative. While defining the kernel, we should not limit ourselves to find just the common denominator. I feel it is also important to take care of the following.

    While building and maintaining software, we need to take care of two diametrically opposite systems, namely the production system and the innovation system.

    The production system needs to be predictable, repeatable, measurable, deterministic, hierarchical and low risk. On the other hand, innovation systems are uncertain, exploratory, judgemental, ambiguous, cross-functional and high risk.

    By whatever name we call it, software engineering MUST help us balance these two systems in the most appropriate way for a given situation.

  9. As we distinguish between engineering and coding, ought we also to distinguish between engineering and management? The phrasing “key elements that always need to be addressed, in all projects” implies a project orientation. The phrase “to build teams” implies a management orientation. The UP and Scrum suppose lifecycles, not designs. Are we to focus on the engineering of the (software) product or the (team) process? Is our purpose to define methgods for modules, objects, and aspects or for milestones, activities, and workflows? And having asked, does it make any difference, since these are interdependent? Finally, if the two are considered separately, newer methods of engineering will continue to arise just as do newer methods of surgery. We knew nothing of objects before 1960, or of aspects before 1990. We speak of 3GL and 4GL; someday there may be a 7GL which supports engineering paradigms we cannot yet imagine. I wonder whether this initiative is more about SDLCs: the processes (or sets of practices) which employ software engineering methods.

    • sorry, “methgods” is a typo for methods (not an implication)

    • Luc Pezet Says:

      Are other Engineering disciplines “project management agnostic”?
      Or are you saying Software Engineering should be?
      Or are you saying project management should be like a “sub-discipline” of Software Engineering?

      For Construction Engineers, for example, construction management seems to be required knowledge in order to produce a functionally and financially viable project.
      Building a solution usually comes with many steps or sub-solutions, or trials, etc. Would you say putting it all together is part of project management?
      If so, don’t you think it’s then part of the solution?

      • I’m more interested in isolating the terms. Project management practices have been abstracted by PMI (among others). Looking at their ‘project of the year award’ list shows tremendous versatility in application of those practices far beyond software development. I think SEMAT is an effective acronym, but are we limiting ourselves to engineering or do we remain open to the entire scope of software ‘development’ which includes engineering, construction, and peculiarities of managing a software project? Trying to clarify the vocabulary.

        • John Keklak Says:

          Skip: My vote is for a broader scope that includes project management. There are some peculiarities to software projects that impinge on the project management, and it may be that there are software-specific “proven principles” that software project managers should master. However, project management and software engineering are two different things, and it seems that SEMAT’s “kernel” should keep them distinct.

  10. Agreeing on the very concept of a kernel is critical if this is going to work. We can define the kernel in very different ways. For example, I tend to think in terms of situational method engineering (SME), which constructs appropriate, situation-specific development methods by selecting relevant method components from a repository of reusable method components, tailoring them as appropriate, and integrating them together.

    A kernel could be the smallest set of method components (a.k.a., fragments, chunks) that we can get the majority of the SEMAT community to agree on: work products (e.g., models, documents, software) to be produced, work units (e.g., tasks, activities, techniques) to be performed, and workers (e.g., requirements engineers, architects, designers, and other roles played by people and teams) who perform the work units to produce the work products.
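    Purely as an illustration of the SME idea above (my own hypothetical sketch, not a proposed SEMAT design; all names are invented), a repository of method components and situation-specific selection might be modeled like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: method components (work products, work units,
# workers) kept in a repository, from which a situation-specific
# method is assembled by selecting the relevant components.

@dataclass
class MethodComponent:
    name: str
    kind: str                               # "work product", "work unit", or "worker"
    tags: set = field(default_factory=set)  # situations it applies to

class Repository:
    def __init__(self):
        self.components = []

    def add(self, component):
        self.components.append(component)

    def select(self, situation_tags):
        # Keep only the components whose tags overlap the situation
        return [c for c in self.components if c.tags & situation_tags]

repo = Repository()
repo.add(MethodComponent("use-case model", "work product", {"requirements"}))
repo.add(MethodComponent("write use case", "work unit", {"requirements"}))
repo.add(MethodComponent("requirements engineer", "worker", {"requirements"}))
repo.add(MethodComponent("architecture document", "work product", {"architecture"}))

# A "requirements" situation selects the three matching components
method = repo.select({"requirements"})
```

    The point is not the code itself but the shape of the question: is the kernel the metamodel (here, MethodComponent and Repository), the library of components, or both?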

    A kernel could also be the process metamodel underlying these method components. (Both the OMG and ISO have metamodels, which naturally are inconsistent.)

    There are obviously more potential definitions that are less related to process / method engineering.

    There is also the issue of how the kernel is to be used. For example, there are two diametrically opposing philosophies:
    1) Create a minimal kernel and then stop, leaving it up to the user to extend it. This appears (at least to me) to be the philosophy behind RUP and OpenUP (based on the Eclipse Process Framework tool).
    2) Create the entire class library of reusable method components built on and consistent with the kernel (either the metamodel or the lowest-level abstract classes of method components) so that users have everything they need and merely have to select, tailor, and integrate the relevant bits. This is the philosophy behind the OPEN Process Framework and The Method Framework for Engineering System Architectures (MFESA).

    There are pros and cons to both philosophies, but the SEMAT community will have to address such foundational issues.

  11. This initiative should not end with just another theory of how to develop software. We need to address this industry by industry. For example, the software engineering practice for creating ERP applications is totally different from that for creating system software.
    Also, we need more tools to develop applications. The new initiative should help in reducing documentation.

  12. Watts Humphrey Says:

    Here are some of my thoughts on software engineering and how we must proceed.

    Engineering in general concerns the application of scientific principles to the development, production, or support of products and services to meet society’s needs. Typically, engineers must consider the practical aspects of their work, including the cost, schedule, predictability, and quality of the engineering and the dependability, reliability, usability, maintainability, safety, and security of the resulting products and services.

    Development of the software engineering field has been seriously constrained by the lack of an agreed set of scientific principles on which to base our work. As a consequence, the software engineering community has no firm foundation on which to build and we have no agreed criteria by which to judge the validity, quality, or suitability of the concepts, tools, and methods we use. This also means that we have no framework for prioritizing the research and development needs of the field. Therefore, there has been a proliferation of methods and processes that do not build on each other and are typically viewed as competing. Without some agreed framework and evaluation system, we have no way to establish an accepted and robust family of tools and methods that have been proven to be effective. Some of the many examples of software engineering methods, life cycles, and processes that have been introduced over the years are the following.
    – The popularly-known life cycles are waterfall, prototyping, incremental, spiral, and RAD
    – Open source is also viewed by some as a combined form of life cycle and method.
    – Some popular methods are structured programming, SSADM, OOP, RAD, SCRUM, TSP, XP, RUP, AUP, and integrated methodology.

    The fact is that most of these “new” methods primarily consist of repackaged and renamed techniques that have been around for many years. However, our field uncritically accepts them as new without any rigorous examination of their derivation or any attempt to measure and evaluate their effectiveness. To enable our community to build a coherent and growing family of compatible processes, practices, tools, and methods, we should start basing our research and development efforts on the needs of the practicing software engineering community. While product topics must continue to be important, and while no development effort can be successful without a solid technical product foundation, the principal software engineering practice issues concern project and not product topics. These include the cost, schedule, predictability, and quality of the engineering work.

    To establish such a foundation, we must first define a set of long-term objectives for software engineering. Then we must agree on a set of measures to use in evaluating progress toward these objectives. The SEI’s experience with the PSP and TSP suggests a way to start, and it also indicates some of the principal challenges ahead. The principal measures used in the PSP are time, size, and defects.
    – The time spent on each product and by each engineer on each engineering task or phase
    – The size of every product produced
    – The defects injected into or removed from each product by each engineer during each engineering task or phase

    Each of these measures must be unambiguously defined and our community should reach consensus on these definitions. While there can be endless debates about every one of these measures, we have found that, by focusing on objective, repeatable, and precise measures that can be gathered, stored, and processed by automated tools, practicing engineers have no trouble using the PSP measures. In fact, we now have data on over 30,000 programs that were developed by practicing engineers during PSP training [Rombach].
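    The three base measures described above – time, size, and defects, recorded per engineer and per task or phase – can be sketched as a small record type. The following is an illustrative sketch only, not an official PSP tool schema; all names and the summary structure are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """One engineer's measures for one engineering task or phase."""
    engineer: str
    phase: str            # e.g. "design", "code", "test"
    minutes: int          # time spent on the task
    loc_added: int        # size: new or changed lines of code
    defects_injected: int
    defects_removed: int

def summarize(records):
    """Aggregate the three base measures across a set of task records."""
    return {
        "minutes": sum(r.minutes for r in records),
        "loc": sum(r.loc_added for r in records),
        "defects_outstanding": sum(r.defects_injected for r in records)
                               - sum(r.defects_removed for r in records),
    }

records = [
    TaskRecord("alice", "code", 120, 200, 5, 0),
    TaskRecord("alice", "test", 60, 10, 0, 4),
]
print(summarize(records))  # {'minutes': 180, 'loc': 210, 'defects_outstanding': 1}
```

    The point of such a schema is exactly what the comment stresses: each field is objective, repeatable, and mechanically aggregatable, so consensus is needed only on the definitions, not on any particular tool.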

    With an agreed set of measures, software engineering students should be required to gather these measures on their work and they should be urged to continue gathering these data, even after they graduate and are working in industry. While it may seem that gathering these data would be difficult, that turns out not to be the case. From experience training hundreds of students and thousands of practicing engineers, we have found that almost anyone who can program can quickly learn to accurately gather these data. As long as the faculty themselves know how and why to gather and use these data and as long as they require their students to properly gather and use these data, data quality is not a problem [Humphrey].

    While the needed data and analyses could be performed today for individual software engineers and the resulting data could be very useful, gathering such data on development projects would be much more difficult. The reason is that, in the real world, requirements change, commitments are renegotiated, and the complexity and difficulty of the work can vary enormously. For example, even for the same teams, the times required to develop the same sized program can vary as much as five or six times depending on the kind of program (application versus control) or on the status of the work (new code or modification) [Flaherty].

    Even with individual data, however, we would be much better off than today. We would be able to study the effects of education level, experience, language, development method, development environment, and supervision on the predictability, quality, and productivity of engineering work. These data could be of great value in guiding future software engineering research and development and in maintaining a relevant software engineering curriculum, even as our methods and technologies evolve.

    In conclusion, the development of the software engineering field has been severely hampered by the lack of an accepted theoretical foundation by which to judge the suitability of our methods. While developing such a foundation would be enormously helpful, it would likely take a considerable time. In the meantime, a helpful first step would be to start gathering and analyzing data on the work of software engineers and to then use these data to guide our future research and development work.


    [Flaherty] M. J. Flaherty, Programming Process Productivity Measurement System for System/370, IBM Systems Journal, vol. 24, no. 2, 1985.
    [Humphrey] Watts S. Humphrey, TSP: Coaching Development Teams, Reading, MA: Addison Wesley, 2006.
    [Rombach] Dieter Rombach et al, Teaching Disciplined Software Development, Journal of Systems and Software 81 (2008) 747-763.

    • Watts,

      Thanks for joining us in blogging. We need your feedback and proposals.

      Two questions:
      1) You want to limit the scope of the work to process and not to product. “While product topics must continue to be important, and while no development effort can be successful without a solid technical product foundation, the principal software engineering practice issues concern project and not product topics.”
      Maybe I am mistaken, but wasn’t this a major concern people had with CMM and CMMI? An organization can be at level 5 (the top level) and still develop poor products with poor architecture, poor components, poor contracts, etc.

      2) Continued: “These [issues] include the cost, schedule, predictability, and quality of the engineering work.” How can the engineering work have quality without resulting in quality products? I am not suggesting that the kernel should include particular guidelines on how to work or on what a good product is, but isn’t it necessary that projects and products both be in the kernel, one way or the other?

      Moreover: “While developing such a foundation [an accepted theoretical foundation] would be enormously helpful, it would likely take a considerable time.” I am sure it could, if we don’t find a good path to follow. In the charter we have proposed, we believe we can get tangible results within 12 months.

      A kernel can be developed within a reasonable time (say 12 months) by studying existing methodologies. The kernel doesn’t need to identify best practices (patterns, methodologies, etc.), just make it possible to define and measure them (good or bad). This is one of the unique ideas in the initiative.

      • Watts Humphrey Says:

        Ivar, I agree with you that work on product issues must continue. There is no question in my mind that sound technical work is essential or all the process ideas in the world will be wasted. However, that said, the research focus in computer science and software engineering to date has largely ignored process issues, and that is where I believe the field has made a serious mistake. With a little more knowledge, we have the opportunity to truly transform the practice of software engineering.

        There are many challenging issues concerning how best to manage, motivate, and utilize people and how to maximize their capabilities and performance. Based on experience in other fields, human performance is essentially unlimited – world records are broken every year. Why don’t we see this in software engineering? How is it that the personal performance of people in our field not only seems to be static but is unknown? There are many research opportunities there that have not been addressed or even discussed.

        So while I agree that product research must continue, I see no likelihood that anything we do will eliminate or even significantly reduce such work – it is too much fun. The problem is to build interest in the kinds of research work that have not been done. To do that, we must have measures and we must start using those measures to explore and to see where we can stretch the existing limits of our field.


        • Dima Semensky Says:

          Watts, I’m a bit confused about your direction. I thought the goal of this effort is to “commoditize” software engineering. Just as during the industrial age we transitioned from one-offs built by unique craftsmen to boring electrical, mechanical, and other engineering disciplines, we need to transform one-off development efforts into a predictable and repeatable process that applies to all situations and is extensible as needed.

          IMHO, people motivation is not part of software engineering – just as it is not part of electrical engineering. There are specialized disciplines responsible for management and leadership, and I don’t see why software engineering is so special. Yes, it’s different, but only because the product is different – the process to get there is really the same: get the specs, document, analyze, design, code, implement, and support. All we have to do is break these steps down into a set of elements that are the common denominator of all existing methods.

          I honestly do not see a significant difference between the automobile industry and the software industry in the context of this discussion. In the first, the customer community demands a new type of vehicle and automobile companies execute on that demand using a variety of skills – the primary one being engineering. And we know that Ford, Toyota, and Chrysler will go through very similar steps and produce similar artifacts. The same thing should be happening in our world, and it does, but it’s not consistent across the industry.


          • Would it be fair to say that what we wish to ground is the nature of the software delivery ecosystem? Or, in other words, don’t we want to better and more concretely understand the “business system” that results in software products? Some instantiations of this ecosystem amount to engineering or include “architecture”. Some don’t. However, this system always has inputs, outputs, and dynamics.

            The dynamics, and therefore the results, obviously differ depending on whether we are doing engineering or reusing the engineering that preceded the instance of the ecosystem. And we can influence the dynamics through our practices and thereby influence product delivery outcomes. These practices are not limited to project management, but span all the disciplines that make up the software delivery ecosystem “village”. And viewing it as a system, I think it would be fairly low-hanging fruit to agree that it has to be a closed-loop system, and a quick win would be to agree on “iteration” as it relates to negative feedback.

            If this seems reasonable, shouldn’t this initiative first and foremost frame the right model for approaching this system, so that we can intelligently answer questions like: what are the effects on system dynamics when we change feedback frequency (i.e., iteration length)? Which practices dampen or optimize our delivery response? Which practices lead to instability? These questions span both project and process – you can change the dynamics by changing the project system “compensator/controller” or by changing the “plant”.


          • Dima,
            I do not believe that your car analogy holds. We engineer software for every conceivable domain as well as level of size, complexity, and criticality. We don’t build just cars. We don’t even just build vehicles from go-carts to tanks or boats to aircraft. We build everything and software now provides the majority of the functionality of most of what gets built. Therefore, approaches that are similar will not always be best. We don’t need a single standard development method, no matter how tailorable. We need a framework for producing one or more methods that are appropriate for what we are specifically building.

          • Dima Semensky Says:


            Agreed, car building is a simplistic analogy. What I’m saying is that we need to look to the past, understand the drivers and processes that bricks-and-mortar industries went through to transition from craftsmen to engineers, and repeat that process for software.

            So, what I’m looking for is that common place and I believe this is what the intention of the kernel is.

            Here is OO analogy:

            The kernel is like an abstract class with methods and fields: you can’t instantiate it for use in a real situation, but you inherit from it to produce your particular implementation – or method. Maybe we need many abstract classes, like “abstract class Waterfall” and “abstract class Agile”, and then we can inherit for a particular project: “public class LeanMethod : Agile”… override a couple of methods (like StartProject), etc. You get the idea…
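            The analogy above can be made concrete with a small sketch (rendered here in Python rather than C#; all class and method names are hypothetical illustrations, not a proposed kernel design):

```python
from abc import ABC, abstractmethod

class Agile(ABC):
    """Abstract 'method family' – the kernel-level contract.

    Like the kernel in the analogy, it cannot be instantiated directly;
    it only defines what every concrete method must provide."""

    @abstractmethod
    def start_project(self) -> str:
        ...

    def iterate(self) -> str:
        # Behavior shared by every method derived from this family.
        return "run a time-boxed iteration"

class LeanMethod(Agile):
    """A concrete method: inherits the contract, overrides the specifics."""

    def start_project(self) -> str:
        return "map the value stream, then pull the first feature"

# Agile() would raise TypeError – the abstract 'kernel' is unusable on its own.
method = LeanMethod()
print(method.start_project())
print(method.iterate())
```

            The design point the analogy captures is that the kernel supplies the shared contract and common behavior, while each practice or method fills in only the parts that genuinely differ.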


          • Dima Semensky Says:

            Last post was addressed to Donald, sorry.

        • John Keklak Says:


          While I hear you saying that evaluation of techniques for developing products must continue (within SEMAT?), you also seem to be saying that a majority of the focus should be on process (“…the research focus in computer science and software engineering to date has largely ignored process issues, and that is where I believe the field has made a serious mistake”).

          Perhaps we should not prematurely determine the focus of SEMAT. There is an increasing consensus that programmers spend a very large fraction of their time merely studying code [Hallam], although opinions differ about how to enable programmers to more quickly understand the code they are working on [Atwood]. It is not clear even whether the roots of the problem lie in the processes programmers use, or in their programming techniques. However, it is clear that an effective solution to this problem would significantly increase the pace and effectiveness of software development. This is an area where research and quantitative assessment would be enormously helpful.

          [Atwood] Jeff Atwood. “When Understanding Means Rewriting.” Coding Horror. 16 Sep. 2006

          [Hallam] Peter Hallam. “What Do Programmers Really Do Anyway?” Peter Hallam’s WebLog. 4 Jan. 2006

  13. So, on the issue of a kernel…

    I equate this pragmatically to the common ground of modern software engineering. This means common ground among Lean, Agile, and the Unified Process – not a meta-model like SPEM. We already have one of those. And we can say that applying SPEM can sometimes lead to prescriptive work configurations – which is ridiculous, because varying context and the co-evolution of the project’s Complex Adaptive System require situational practices. And many have probably seen “Business Use Case Diagrams” that describe the high-level business mission of software development and the various capabilities that realize it, so we are not talking about agreeing on the disciplines that exist. We already have that as well.

    What I describe in “SDLC 3.0: Beyond a Tacit Understanding of Agile” is the notion of a system of patterns. I know that others are swirling around this idea as well. These patterns come from the experience of various splintered communities. It is high time we integrate this experience rather than reinvent each other’s practices under different labels and brands. This system of patterns also implies that we evolve to something more fundamental – practice-orientation rather than wholesale method-orientation. Again: practice = pattern, and so we are talking about reviving the pattern movement.

    But even more fundamental is a theoretical model for the software engineering “system”. And luckily, we have Control Systems Theory to ground this. Ultimately, the software engineering system is described by the body of knowledge of Adaptive, Non-Linear Control. The mathematical models that we leverage today (EVM and, to some degree, COCOMO) are totally insufficient to describe what we know is common ground – negative feedback and iterative/incremental development.
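    As a toy illustration of this closed-loop framing – not a claim about any real project model – iterative planning can be sketched as a simple proportional feedback loop, where each iteration’s plan is corrected by the throughput actually measured in the previous one. All numbers, the “efficiency” of the plant, and the controller gain are invented for illustration:

```python
def simulate(iterations, target=20.0, efficiency=0.8, gain=0.5):
    """Closed-loop sketch: adjust each iteration's plan from measured output."""
    plan = target
    actuals = []
    for _ in range(iterations):
        actual = efficiency * plan     # the "plant": what really gets delivered
        error = target - actual        # negative feedback signal
        plan = plan + gain * error     # the "compensator": correct next plan
        actuals.append(round(actual, 2))
    return actuals

out = simulate(10)
# measured throughput climbs steadily toward the target of 20
```

    With these invented parameters the measured throughput converges toward the target instead of staying stuck at 80% of plan – which is the negative-feedback behavior that iteration is meant to exploit.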

    See my recent blog postings leading up to the release of my book:

    • I strongly believe in patterns. For example, I created the PLOOT pattern language for testing OO software. I also believe in the value of process as well as product patterns. However, we have to remember that patterns appropriate in one context are not appropriate in another, and they can in fact limit progress as well as promote it. Let’s be careful not to treat patterns as more than a part of the solution; they are not the entire solution.

      • I agree – that is why I refer to this system of patterns as “Complex Adaptive”: we are dealing with a stochastic system, not a deterministic one. The “adaptive” means that not only does each instantiation of a delivery ecosystem result in the application of different patterns (a pattern being, by definition, a solution to a problem within a context – meaning situational), but the mix of patterns applied also co-evolves with the environment.

  14. Dima Semensky Says:

    I’d like to start with the original question. Let’s define “software” and let’s define “engineering”.

    My simplistic definition:
    – Software – a product that is “soft” and open to interpretation, unlike most “bricks-and-mortar” products
    – Engineering – using existing methods, “formulas”, and approaches to achieve the goal of creating software; it is not inventing a new approach for every project

    Theory implies abstraction and levels of abstraction. Posters above mentioned customers, project management, development techniques, etc. – this is too specific for a kernel, IMHO.

    I see at least two core elements in my mind:

    – Context (e.g. customer, goals, rules, industry, stakeholders, intended use)
    – Approximation – most software I know is a discrete implementation of some non-linear physical process, and if I were asked to “imagine” software, I would imagine a “projection” of a 3D object onto a 2D plane.

    Context must have some special properties – it should be a fractal-like entity, with various levels of abstraction and detail.

    Approximation must be a recursive process to satisfy a variety of real-life situations.

    To test this, here are few examples:

    – Any project management approach is simply a method to “approximate” a certain business need (“goal context”) in a certain time range (“time context”) using a certain budget (“resource context”).
    – Coding is approximating specs (“context”) into a discrete model that will execute in a certain “context”.

    As far as I understand kernel, there are additional possible elements:
    – Intention (“why are we doing this?”)
    – Validation (“are we there yet?”)

    Eventually I would not want to deal with “context”, “intention”, etc. in the final “theory”, but rather in very specific terms of software patterns, PM methods, and the like – yet these must find their foundation in a much more abstract world.

  15. Deepak D Rao Says:

    This is a good idea and a need for the future.
    Keep it up.

  16. June Park Says:

    The kernel of software engineering should address at least four areas: value (what software is created for), process (how work is organized and progresses to create software), people (the only critical resource required to create software), and technology. The kernel should show the fundamental relationships among these four elements (acknowledging that their actual interaction will differ in different circumstances). People, using technology, play roles specified in the process, which materializes the artifacts composing the software that realizes measurable value for the user.

  17. Jim Maher Says:

    Thus far (12/20), the body of comments here mirrors my own thinking about software engineering: it’s all over the place. I find this an interesting discussion, but not a productive one.

    I suggest that Ivar, Bertrand and Richard need to clarify the requirements for this effort.


    1. There is no agreement on the scope of context. Are we talking about the entirety of software development, or just what Murali Chemuturi calls the “middle layer”?

    2. It is unclear (at least to me) what deliverables we should expect from this conversation. Are we producing a list of “acts & artifacts”, or just talking? If the former, where’s the whiteboard that lists the major elements already proposed? If the latter, is someone else producing this deliverable and who is doing it and where is it being done?

    3. Who is the arbiter? Is it Ivar, Bertrand and Richard?

    My own interest is in DOing something. I’ve been TALKing about the lack of generally accepted principles of software development for 35 years. Most of that talk has been much like the comments here – unbounded and unproductive.

    This should proceed like any other project any of us have done. We need to work on definitions and scope containment continuously from the outset and we must produce physical deliverables with regularity and without fail. Otherwise, we risk wasting both time and money.

    Ivar, Bertrand and Richard?

  18. I found some work by Jiang and Eberlein which seems to parallel this initiative. They published a sketched framework (CHAPL) for comparing SE methodologies.

  19. If you cannot access it, the paper is titled: “Towards A Framework for Understanding the Relationships between Classical Software Engineering and Agile Methodologies”.

  20. I had been formulating my ideas on software engineering for some time and composed them in a document of about 7 pages. It enumerates the understanding of the term “engineering” in the manufacturing and construction fields, and it also details the present state of software engineering. If you wish, you may download it from my web site – here is the link – 1 – Software Engineering.pdf – please copy and paste the link ending with “pdf”. I welcome your comments and feedback. Perhaps we can use it as a starting point for our discussion.

    Best wishes

    Murali Chemuturi

  21. Deepak D Rao Says:

    The kernel of software engineering should have these five pillars:
    1. People skills
    2. Systems and Tools
    3. Process and guidelines
    4. Quality
    5. Objective, Goals, Scope

    That means the kernel of software engineering should address these core areas and the future directions for mature software engineering.

    • One of the key goals of software engineering should be to ensure that the software has adequate quality in terms of quality characteristics (ilities – types of quality) and quality attributes (measurable parts of quality). This raises the need to have an appropriate (for the system and project) quality model.

      However, although we often refer to them as software quality characteristics/attributes, they are all a function of at least software and hardware, and are often a function of other system components such as people. Thus, they are really system qualities rather than software qualities, even though they are often most strongly impacted by software.

      Thus, if we want to greatly improve software engineering, we need to ensure that software engineering takes its rightful place within systems engineering.

    • Carol Long Says:

      I also have reservations about Quality as a separate topic. Quality must be integral, as Donald Firesmith highlights. To use a metaphor: security, when considered separately, leads to architectures that lack integrity because the security bits get bolted on afterwards. We cannot afford to have quality bolted on – it must be designed in from minute 1 (not just day 1!)

  22. Carol Long Says:

    Let’s face it, software is a fashion industry: software businesses make money by selling the next version. But we haven’t learned the lesson that “Quality is Free” – so many projects fail because they don’t deliver the requirements. Getting it right will reduce costs and so improve margins (profit).

    There used to be university IT courses that taught engineering principles (and probably compared and contrasted COBOL, Fortran, Pascal, Assembler, and Lisp, for example), but these courses could not attract sufficient students or funding because they were not producing the graduates employers wanted (able to code in the latest languages, using the latest methods) to pursue competitive advantage.

    Engineering education should be more than training. One of the disadvantages of training for PRINCE2, Agile, C#, or ASP.Net is that those approaches don’t expose the underlying principles. We need to break down the silos in academia so that we give graduates the underlying information theory, computing principles, and the skills to understand more than one method or language.

    I welcome this initiative to go a step or so further than

    • Jim Maher Says:


      “information theory, computing principles and skills”

      I agree, and I really hope this discussion moves in that type of direction.

      To me, the principles we need to delineate are the WHAT of software development, not the HOW. HOW is fashion-conscious. WHAT is seminal.

      But, I’ve been watching SEMAT and I’m not seeing it move in that direction (yet).

  23. I agree with your post. We need to fix that situation. I think this initiative is a step in that direction. Recognizing the problem would pave the way towards correction.

    Best wishes
