Software Reliability Requirements



Software reliability testing is a field of software testing that relates to testing a software's ability to function, given environmental conditions, for a particular amount of time. Software reliability testing helps discover many problems in the software design and functionality.


Overview

Software reliability is the probability that software will work properly in a specified environment and for a given amount of time. Using the following formula, the probability of failure is calculated by testing a sample of all available input states:

Probability = Number of failing cases / Total number of cases under consideration

The set of all possible input states is called the input space. To find the reliability of the software, we need to find the output space for a given input space and software.[1]
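As a rough illustration of the formula above, reliability can be estimated by executing the software on a random sample drawn from the input space and counting the failing cases. This is only a sketch: the function under test and the input space here are hypothetical stand-ins, not part of the article.

```python
import random

def estimate_failure_probability(software, input_space, sample_size, seed=0):
    """Estimate P(failure) by running the software on a random sample of
    the input space: probability = failing cases / total cases sampled."""
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    sample = [rng.choice(input_space) for _ in range(sample_size)]
    failures = sum(1 for x in sample if not software(x))
    return failures / sample_size

# Toy stand-in: a "software" that fails (returns False) for inputs >= 95,
# sampled over the input space 0..99, so the true failure probability is 0.05.
inputs = list(range(100))
p_fail = estimate_failure_probability(lambda x: x < 95, inputs, 10_000)
```

With a large enough sample, the estimate converges on the true proportion of failing input states.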

For reliability testing, data is gathered from various stages of development, such as the design and operating stages. The tests are limited by cost and time restrictions. Statistical samples are obtained from the software products to test for the reliability of the software. Once sufficient data is gathered, statistical studies are performed. Time constraints are handled by applying fixed dates or deadlines for the tests to be performed. After this phase, the design of the software is frozen and the actual implementation phase starts. Because of the restrictions on cost and time, the data is gathered carefully so that each data point has a purpose and its expected precision.[2] To achieve satisfactory results from reliability testing, one must take care of certain reliability characteristics. For example, Mean Time to Failure (MTTF)[3] is measured in terms of three factors:

  1. operating time,
  2. number of on/off cycles,
  3. and calendar time.

If the restrictions are on operating time, or if the focus is on the first factor for improvement, then one can apply compressed time acceleration to reduce the testing time. If the focus is on calendar time (i.e. if there are predefined deadlines), then intensified stress testing is used.[2][4]

Measurement

Software availability is measured in terms of mean time between failures (MTBF).[5]

MTBF consists of mean time to failure (MTTF) and mean time to repair (MTTR). MTTF is the time between two consecutive failures and MTTR is the time required to fix a failure.[6]

MTBF = MTTF + MTTR

Steady state availability represents the percentage of time the software is operational.

A = MTTF / (MTTF + MTTR) = MTTF / MTBF

For example, if MTTF = 1000 hours for a piece of software, then the software should work for 1000 hours of continuous operation.

For the same software, if the MTTR = 2 hours, then MTBF = 1000 + 2 = 1002 hours.

Accordingly, A = 1000/1002 ≈ 0.998.
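The worked example above can be checked in a few lines of code. This is a minimal sketch; the function name is our own, not from the article.

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: A = MTTF / (MTTF + MTTR) = MTTF / MTBF."""
    mtbf = mttf_hours + mttr_hours  # MTBF = MTTF + MTTR
    return mttf_hours / mtbf

# The worked example from the text: MTTF = 1000 h, MTTR = 2 h.
a = availability(1000, 2)  # 1000 / 1002 ≈ 0.998
```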


Software reliability is measured in terms of the failure rate (λ).

λ = 1 / MTTF
R(t) = e^(−λt)

Reliability for software is a number between 0 and 1. Reliability increases as errors or bugs are removed from the program.[7] There are many software reliability growth models (SRGMs), including logarithmic, polynomial, exponential, power, and S-shaped models.
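The failure rate and reliability function above can be sketched directly from the MTTF; this assumes the simple exponential model the formulas describe, with an illustrative MTTF value of our own choosing.

```python
import math

def failure_rate(mttf_hours):
    """λ = 1 / MTTF."""
    return 1.0 / mttf_hours

def reliability(t_hours, mttf_hours):
    """R(t) = e^(−λt): probability of running t hours without failure."""
    return math.exp(-failure_rate(mttf_hours) * t_hours)

# With MTTF = 1000 h, the chance of 100 failure-free hours is e^(−0.1) ≈ 0.905.
r = reliability(100, 1000)
```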

Objectives of reliability testing

The main objective of reliability testing is to test software performance under given conditions, without any type of corrective measure, using known fixed procedures, and taking the software's specifications into account.

Secondary objectives

The secondary objectives of reliability testing are:

  1. To find the perceptual structure of repeating failures.
  2. To find the number of failures occurring in a specified amount of time.
  3. To find the mean life of the software.
  4. To discover the main cause of failure.
  5. Checking the performance of different units of software after taking preventive actions.

Points for defining objectives

Some restrictions on creating objectives include:

  1. Behaviour of the software should be defined in given conditions.
  2. The objective should be feasible.
  3. Time constraints should be provided.[8]

Importance of reliability testing

The application of computer software has crossed into many different fields, with software being an essential part of industrial, commercial and military systems. Because of its many applications in safety-critical systems, software reliability is now an important research area. Although software engineering is becoming the fastest developing technology of the last century, there is no complete, scientific, quantitative measure to assess it. Software reliability testing is being used as a tool to help assess these software engineering technologies.[9]

To improve the performance of a software product and the software development process, a thorough assessment of reliability is required. Testing software reliability is important because it is of great use to software managers and practitioners.[10]

To verify the reliability of the software via testing:

  1. A sufficient number of test cases should be executed for a sufficient amount of time to get a reasonable estimate of how long the software will execute without failure. Long duration tests are needed to identify defects (such as memory leakage and buffer overflows) that take time to cause a fault or failure to occur.
  2. The distribution of test cases should match the actual or planned operational profile of the software. The more often a function or subset of the software is executed, the greater the percentage of test cases that should be allocated to that function or subset.
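The second guideline — allocating test cases in proportion to the operational profile — can be sketched as follows. The profile shown is hypothetical, purely to illustrate the proportional allocation.

```python
def allocate_test_cases(total_cases, operational_profile):
    """Allocate test cases to each function in proportion to how often it
    is executed in the operational profile (probabilities summing to 1)."""
    return {op: round(total_cases * p) for op, p in operational_profile.items()}

# Hypothetical profile: 'login' accounts for 60% of field usage, etc.
profile = {"login": 0.6, "search": 0.3, "report": 0.1}
plan = allocate_test_cases(1000, profile)
```

Functions exercised most often in operation thus receive the largest share of the test budget, matching the text's guidance.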

Types of reliability testing

Software reliability testing includes feature testing, load testing, and regression testing.[11]

Feature test

Feature testing checks the features provided by the software and is conducted in the following steps:

  • Each operation in the software is executed once.
  • Interaction between two operations is reduced, and
  • Each operation is checked for its proper execution.

The feature test is followed by the load test.[11]

Load test

This test is conducted to check the performance of the software under maximum workload. Any software performs well up to some amount of workload, after which its response time starts degrading. For example, a web site can be tested to see how many simultaneous users it can support without performance degradation. Load testing mainly applies to databases and application servers, and it also requires software performance testing, which checks how well the software performs under workload.[11]

Regression test

Regression testing is used to check if any new bugs have been introduced through previous bug fixes. Regression testing is conducted after every change or update in the software features. This testing is periodic, depending on the length and features of the software.[11]

Test planning

Reliability testing is more costly than other types of testing, so proper management and planning are required. The test plan includes the testing process to be implemented, data about the test environment, the test schedule, test points, etc.

Problems in designing test cases

Some common problems that occur when designing test cases include:

  • Test cases can be designed simply by selecting only valid input values for each field in the software. When changes are made to a particular module, however, the previous values may no longer test the new features introduced after the older version of the software.
  • There may be some critical runs in the software which are not handled by any existing test case. Therefore, it is necessary to ensure that all possible types of test cases are considered through careful test case selection.[11]

Reliability enhancement through testing

Studies during the development and design of software help improve the reliability of a product. Reliability testing is essentially performed to eliminate the failure modes of the software. Life testing of the product should always be done after the design is finished, or at least after the complete design is finalized.[12] Failure analysis and design improvement are achieved through testing.

Reliability growth testing

This testing is used to check new prototypes of the software, which are initially expected to fail frequently.[12] The causes of failure are detected and actions are taken to reduce defects. Suppose T is the total accumulated test time for the prototype and n(T) is the number of failures from the start to time T. The plot of ln[n(T)/T] against ln(T) is a straight line, called a Duane plot. From it, one can estimate how much reliability will be gained after further cycles of test and fix.

ln[n(T)/T] = −α ln(T) + b .....Eq. 1

Solving Eq. 1 for n(T),

n(T) = K·T^(1−α) .....Eq. 2

where K = e^b. If α = 0, the failure intensity n(T)/T stays constant and reliability cannot be improved, no matter how long testing continues. For α greater than zero, the failure intensity decreases as the cumulative test time T increases, so reliability grows with continued testing.
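Given observed failure counts, α and b can be estimated by a least-squares fit of the straight-line Duane plot from Eq. 1. A minimal sketch, using synthetic data of our own so the fit is exact:

```python
import math

def fit_duane(times, cum_failures):
    """Fit ln[n(T)/T] = -alpha*ln(T) + b by ordinary least squares on the
    log-log data (the straight Duane plot). Returns (alpha, b)."""
    xs = [math.log(T) for T in times]
    ys = [math.log(n / T) for T, n in zip(times, cum_failures)]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    b = my - slope * mx
    return -slope, b  # alpha is the negated slope

# Synthetic data generated from Eq. 2 with alpha = 0.4 and K = 5,
# i.e. n(T) = 5 * T^0.6, so the fit should recover alpha ≈ 0.4.
times = [10, 50, 100, 500, 1000]
failures = [5 * T ** 0.6 for T in times]
alpha, b = fit_duane(times, failures)
```

A positive fitted α indicates that the failure intensity is falling and further test-and-fix cycles will keep improving reliability.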

Designing test cases for the current release

If we are adding new features to the current version of the software, then test cases for those operations are written differently.


  • First plan how many new test cases are to be written for current version.
  • If the new feature is part of any existing feature, then share the test cases of new and existing features among them.
  • Finally combine all test cases from current version and previous one and record all the results.[11]

There is a predefined rule to calculate the count of new test cases for the software: if N is the probability of occurrence of new operations in the new release of the software, R is the probability of occurrence of used operations in the current release, and T is the number of all previously used test cases, then the number of new test cases is (N / R) × T.

Reliability evaluation based on operational testing

The method of operational testing is used to test the reliability of software by checking how the software works in its relevant operational environment. The main problem with this type of evaluation is constructing such an operational environment. This kind of simulation is practiced in some industries, such as the nuclear and aircraft industries. Predicting future reliability is part of reliability evaluation.

There are two techniques used for operational testing to test the reliability of software:

Steady state reliability estimation
In this case, we use feedback from delivered software products. Depending on those results, we can predict the future reliability for the next version of product. This is similar to sample testing for physical products.
Reliability growth based prediction
This method uses documentation of the testing procedure. For example, consider a developed software and that we are creating different new versions of that software. We consider data on the testing of each version and based on the observed trend, we predict the reliability of the new version of software.[13]

Reliability growth assessment and prediction

In the assessment and prediction of software reliability, we use a reliability growth model. During operation of the software, any data about its failures is stored in statistical form and given as input to the reliability growth model, which can then evaluate the reliability of the software. A great deal of data about reliability growth models is available, with many probability models claiming to represent the failure process, but there is no model that is best suited for all conditions. Therefore, a model must be chosen based on the appropriate conditions.
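Since no single growth model suits all conditions, the following is only one concrete example: the Goel–Okumoto exponential model, a well-known SRGM. The parameter values are hypothetical, standing in for values that would come from fitting failure data.

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto mean value function mu(t) = a * (1 - e^(-b*t)):
    expected cumulative number of failures observed by time t,
    where a is the total expected fault count and b the detection rate."""
    return a * (1.0 - math.exp(-b * t))

def go_reliability(x, t, a, b):
    """Probability of no failure in the next x hours, given testing up
    to time t: R(x | t) = exp(-[mu(t + x) - mu(t)])."""
    return math.exp(-(go_mean_failures(t + x, a, b) - go_mean_failures(t, a, b)))

# Hypothetical fitted parameters: a = 120 expected faults, b = 0.01 per hour.
# Predicted chance of 10 failure-free hours after 500 hours of testing:
r_next_10h = go_reliability(10, 500, 120, 0.01)
```

As testing time t grows, μ(t) flattens toward a and the predicted reliability of the next interval rises, which is exactly the "growth" the model captures.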

Reliability estimation based on failure-free working


In this case, the reliability of the software is estimated with assumptions like the following:

  • If a defect is found, then it is going to be fixed by someone.
  • Fixing the defect will not have any effect on the reliability of the software.
  • Each fix in the software is accurate.[13]

References

  1. ^ Pham, Hoang. Software Reliability.
  2. ^ a b Lewis, E.E. Introduction to Reliability Engineering.
  3. ^ "MTTF".
  4. ^ IEEE Recommended Practice on Software Reliability, IEEE, doi:10.1109/ieeestd.2017.7827907, ISBN 978-1-5044-3648-9.
  5. ^ Pressman, Roger. Software Engineering: A Practitioner's Approach. McGraw-Hill.
  6. ^ "Approaches to Reliability Testing & Setting of Reliability Test Objectives".
  7. ^ Mathur, Aditya P. Foundations of Software Testing. Pearson.
  8. ^ Kececioglu, Dimitri. Reliability and Life Testing Handbook.
  9. ^ Xie, M. A Statistical Basis for Software Reliability Assessment.
  10. ^ Xie, M. Software Reliability Modelling.
  11. ^ a b c d e f Musa, John D. (2004). Software Reliability Engineering: More Reliable Software, Faster and Cheaper. McGraw-Hill. ISBN 0-07-060319-7.
  12. ^ a b Lewis, E.E. (1995-11-15). Introduction to Reliability Engineering. ISBN 0-471-01833-3.
  13. ^ a b "Problem of Assessing Reliability". CiteSeerX 10.1.1.104.9831.

Retrieved from 'https://en.wikipedia.org/w/index.php?title=Software_reliability_testing&oldid=910397255'
IEEE software life cycle
  • SQA – Software quality assurance (IEEE 730)
  • SCM – Software configuration management (IEEE 828)
  • STD – Software test documentation (IEEE 829)
  • SRS – Software requirements specification (IEEE 830)
  • V&V – Software verification and validation (IEEE 1012)
  • SDD – Software design description (IEEE 1016)
  • SPM – Software project management (IEEE 1058)
  • SUD – Software user documentation (IEEE 1063)

A software requirements specification (SRS) is a description of a software system to be developed. It is modeled after the business requirements specification (CONOPS), also known as a stakeholder requirements specification (StRS).[citation needed] The software requirements specification lays out functional and non-functional requirements, and it may include a set of use cases that describe the user interactions that the software must provide to the user for perfect interaction.

Software requirements specification establishes the basis for an agreement between customers and contractors or suppliers on how the software product should function (in a market-driven project, these roles may be played by the marketing and development divisions). Software requirements specification is a rigorous assessment of requirements before the more specific system design stages, and its goal is to reduce later redesign. It should also provide a realistic basis for estimating product costs, risks, and schedules.[1] Used appropriately, software requirements specifications can help prevent software project failure.[2]

The software requirements specification document lists sufficient and necessary requirements for the project development.[3] To derive the requirements, the developer needs to have clear and thorough understanding of the products under development. This is achieved through detailed and continuous communications with the project team and customer throughout the software development process.

The SRS may be one of a contract's deliverable data item descriptions[4] or have other forms of organizationally mandated content.

Structure

An example organization of an SRS is as follows:[5]


  1. Purpose
    1. Background
    2. System overview
  2. Overall description
    1. Product perspective
      1. Communication Interfaces
    2. Design constraints
      1. Site Adaptation Requirements
    3. Product functions
    4. User characteristics
    5. Constraints, assumptions and dependencies
  3. Specific requirements
    1. External interface requirements
    2. Logical database requirement
    3. Software system attributes
      1. Portability
    4. Functional requirements
    5. Environment characteristics
    6. Others

Goals

The Software Requirements Specification (SRS) is a communication tool between users and software designers. The specific goals of the SRS are as follows:

  • Facilitating reviews
  • Describing the scope of work
  • Providing a reference to software designers (i.e. navigation aids, document structure)
  • Providing a framework for testing primary and secondary use cases
  • Linking features to customer requirements
  • Providing a platform for ongoing refinement (via incomplete specs or questions)

Requirements smell

Following the idea of code smells, the notion of requirements smell has been proposed to describe issues in a requirements specification where the requirement is not necessarily wrong but could be problematic.[6] Examples of requirements smells are subjective language, ambiguous adverbs and adjectives, superlatives, and negative statements.[6]

See also

  • Software Engineering Body of Knowledge (SWEBOK)


References

  1. ^ Bourque, P.; Fairley, R.E. (2014). "Guide to the Software Engineering Body of Knowledge (SWEBOK)". IEEE Computer Society. Retrieved 17 July 2014.
  2. ^ "Software requirements specification helps to protect IT projects from failure". Retrieved 19 December 2016.
  3. ^ Pressman, Roger (2010). Software Engineering: A Practitioner's Approach. Boston: McGraw-Hill. p. 123. ISBN 9780073375977.
  4. ^ "DI-IPSC-81433A, Data Item Description: Software Requirements Specification (SRS)". everyspec.com. 1999-12-15. Retrieved 2013-04-04.
  5. ^ Stellman, Andrew & Greene, Jennifer (2005). Applied Software Project Management. O'Reilly Media, Inc. p. 308. ISBN 978-0596009489.
  6. ^ a b Femmer, Henning; Méndez Fernández, Daniel; Wagner, Stefan; Eder, Sebastian (2017). "Rapid quality assurance with Requirements Smells". Journal of Systems and Software. 123: 190–213. arXiv:1611.08847. doi:10.1016/j.jss.2016.02.047.

External links

  • 830-1984 — IEEE Guide to Software Requirements Specifications. 1984. doi:10.1109/IEEESTD.1984.119205. ISBN 978-0-7381-4418-4.
  • 830-1993 — IEEE Recommended Practice for Software Requirements Specifications. 1994. doi:10.1109/IEEESTD.1994.121431. ISBN 978-0-7381-4723-9.
  • 830-1998 — IEEE Recommended Practice for Software Requirements Specifications. 1998. doi:10.1109/IEEESTD.1998.88286. ISBN 978-0-7381-0332-7.
  • 29148-2011 — Systems and software engineering — Life cycle processes — Requirements engineering. ISO/IEC/IEEE 29148:2011(E). 2011. pp. 1–94. doi:10.1109/IEEESTD.2011.6146379. ISBN 978-0-7381-6591-2. (This standard replaces IEEE 830-1998, IEEE 1233-1998 and IEEE 1362-1998: http://standards.ieee.org/findstds/standard/29148-2011.html)
  • Leffingwell, Dean; Widrig, Don (2003). Managing Software Requirements: A Use Case Approach (2nd ed.). Addison-Wesley. ISBN 978-0321122476.
  • Gottesdiener, Ellen (2009). The Software Requirements Memory Jogger: A Desktop Guide to Help Business and Technical Teams Develop and Manage Requirements. Addison-Wesley. ISBN 978-1576811146.
  • Wiegers, Karl; Beatty, Joy (2013). Software Requirements, Third Edition. Microsoft Press. ISBN 9780735679665.
  • "IEEE SRS Template - rick4470/IEEE-SRS-Tempate". Retrieved 27 Dec 2017.


Retrieved from 'https://en.wikipedia.org/w/index.php?title=Software_requirements_specification&oldid=917932146'