I have been testing software since the mid-1990s, and I am constantly amazed at how far we have come. This is from a slide presentation I did a few years back. Most of it is based on a great online article, but I couldn’t find the original site, so I will have to go with my modified version.

1960s   “Code and Ship”

Software began to find its way into systems procured by the US Dept. of Defense (DoD).  These projects were consistently behind schedule, over budget, and had many technical problems.

Frequently, software never worked as intended and many projects were cancelled before anything was delivered. The DoD was frequently unaware of schedule, budget, and technical problems until late into the program – when they were often unable to understand them and assess their impact.

An “independent software tester” was hired to “perform additional, unbiased testing of the software”. By employing someone who was totally separate from the software development contractor, the DoD hoped to get a more accurate and objective technical assessment of the project’s status.

1970s–80s  “Code and Ship Crisis”

During the 1970s, as software development expanded into the private sector, software companies began experiencing the same poor results that government agencies had a decade earlier. They had difficulty delivering software within the constraints of schedule, budget, and quality, and most relied on developers to test their own code and report on issues.

During the 1980s, we experienced what became known as the “software crisis” – the point at which spending on software maintenance exceeded spending on creating new software products. The “software crisis” brought with it a host of changes and the emergence of software quality assurance (SQA).

While SQA was viewed as the “poor stepchild” of software development, many enlightened companies saw measurable benefit from integrating SQA into the software development process.

1990s   “Code, Test and Ship”

By the 1990s, many software companies had SQA functions within their organizations. Yet high-profile software failures continued to occur, for several reasons:

  • The complexity of software developed during the 1990s increased significantly.
  • Competitive business pressures also increased significantly.
  • Software was being used in many new areas – especially safety-critical areas where failures could be life-threatening.

Many people working in SQA received little formal training in SQA. SQA engineers were expected to learn their craft primarily through on-the-job training.

Universities failed to recognize that SQA is a legitimate discipline in its own right, one that requires specialized training.

2000s  “SQA Processes Adopted”

An SQA department is a standard part of any major software development lifecycle and is mandated by any standard software outsourcing contract.

  • Government legislation requires that critical software undergo standard SQA analysis, particularly regarding identity and security issues.
  • Various standards bodies and models (such as ISO, IEEE, and CMM) have been created to evaluate and improve an organization’s ability to produce quality software.
  • Universities offer a wide range of SQA courses and areas of specialization.
  • SQA certification (CSTE, CSQA, CMSQ) and/or a Computer Science degree from an accredited institution is a standard expectation for new engineers entering the SQA field.
  • SQA user groups and conventions are common in major cities around the world.

 

***********************************************************

Additional information on the history of QA:

http://www.testingreferences.com/testinghistory.php

http://www.softwaretestpro.com/Item/4537/History-of-Ideas-in-Software-Testing/Agile-Performance-Automation-Metrics-Development-Acceptance-Exploratory-Functional-Integration-Process-Software-Strategy-Testing-Teams-Unit-Six-Sigma