FIT4004 Lecture 11: Week 11 Notes

Week 11 - Metrics
In order to make sure software meets the needs of users, we need to measure different
aspects of the software and of the process. We would like to:
predict aspects of the product we deliver before we have completed that part.
assess the product or process to make decisions about what to do next
Types of quality metrics [Refer to week 1]
Functionality
Reliability
Usability
Efficiency
Maintainability
Portability
In many cases there is a tradeoff between the validity of a metric and the feasibility of collecting it
Eg. in an ongoing project, maintainability can be assessed by keeping statistics on
the effort (in terms of programmer time, or clock time) required to fix bugs or
add features; but when we're choosing the architecture of the system, or some
aspect of the system, we can't collect those statistics, because the software
doesn't exist yet
Solution: use proxy metrics - things that we can measure that correlate with the thing
we would actually like to be able to measure
Testing metrics
There is a correlation between simple coverage metrics (statement + branch
coverage) and the effectiveness of test suites
A smaller correlation remains even if you factor out test suite size:
if you have two test suites with 50 tests each, the one with
higher coverage is likely to detect more faults
Code coverage is quick to compute, but there is no guarantee that x% coverage is
sufficient to give you y reliability
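As an illustration (not from the lecture), statement and branch coverage can be collected with the coverage.py library; the module calc and the single assertion below are hypothetical stand-ins for a real test suite.

# Minimal sketch, assuming a hypothetical module calc.py under test.
# branch=True records branch coverage as well as statement coverage.
import coverage

cov = coverage.Coverage(branch=True)
cov.start()

import calc                      # hypothetical module under test
assert calc.add(2, 3) == 5       # stand-in for running the real test suite

cov.stop()
cov.save()
cov.report(show_missing=True)    # per-file statement/branch coverage plus missing lines

For a whole project the same numbers would normally be collected from the command line (coverage run -m pytest, then coverage report) rather than via the API.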
Mutation analysis
Mutation analysis involves automatically seeding a change into your source code,
running the modified code against the test suite, and seeing whether the test suite reports
a fault. You repeat this many times with different changes to different parts of the
code, and eventually get a mutation score from 0 to 1 (the fraction of seeded faults the suite detects).
- Mutation tools are not widely used; running a test suite many times over can be
expensive
+ While not perfect, it gives a better indication of whether your test suite is any good
than coverage alone does
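In practice a mutation tool (e.g. mutmut for Python or PIT for Java) automates this; the toy sketch below only shows the idea, with a hypothetical calc.py and a hard-coded list of operator swaps.

# Toy sketch of the mutation-analysis loop, not a real tool: seed one change
# at a time, re-run the tests, and count detected ("killed") mutants.
import pathlib
import subprocess

SOURCE = pathlib.Path("calc.py")                      # hypothetical module under test
MUTATIONS = [("+", "-"), ("<", "<="), ("==", "!=")]   # operator swaps to seed

original = SOURCE.read_text()
killed = total = 0
try:
    for old, new in MUTATIONS:
        if old not in original:
            continue
        total += 1
        SOURCE.write_text(original.replace(old, new, 1))  # seed one mutant
        # A non-zero exit code means the suite reported a fault, i.e. the mutant was killed.
        if subprocess.run(["python", "-m", "pytest", "-q"]).returncode != 0:
            killed += 1
finally:
    SOURCE.write_text(original)                       # always restore the original code

print(f"mutation score: {killed}/{total}" if total else "no mutants seeded")

Note how the cost grows: every mutant means another full run of the test suite, which is exactly why mutation analysis is expensive.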
Design and Code Quality Metrics - Proxy metrics
Code size - LOC (Lines of code)
There are different definitions of how to count lines of code
Whether to include comments/blank lines; their use is style-dependent across programmers
General rules of thumb exist to stop programmers from, say, writing an if statement spread
over 100 lines
Generally, larger methods are more error-prone
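To make the "different definitions" point concrete, here is a small sketch (the file name calc.py is hypothetical) that computes three different LOC figures for the same file:

# Sketch: three different "lines of code" counts for the same file.
import pathlib

lines = pathlib.Path("calc.py").read_text().splitlines()    # hypothetical file

physical = len(lines)                                       # every physical line
non_blank = sum(1 for l in lines if l.strip())              # ignore blank lines
code_only = sum(1 for l in lines
                if l.strip() and not l.strip().startswith("#"))  # also ignore comment lines

print(physical, non_blank, code_only)

Depending on which of these you report, the "size" of the same program differs, which is why LOC comparisons need a stated counting rule.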
Halstead's Software Science metrics
Operators = the things that do stuff to variables
{arithmetic('+'), equality/inequality('<'), assignment('+='), shift ('>>'), logical ('&&'),
and unary ('*') operators. Reserved words for specifying control points ('while')
and control infrastructure ('else'), type ('double'), and storage ('extern'). Function
calls, array references}
Operands = the containers which hold information
identifiers, literals, labels, and function names. Each literal is treated as a distinct
operand
Note: in Python, operators are everything that gets indented/dedented and operands are
everything else (including 'while' statements), whereas in C/C++ (using the above definition)
'while' is considered an operator
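As a rough sketch of how operator/operand counts turn into Halstead numbers, the following uses Python's standard tokenize module; the operator/operand classification here is a simplifying assumption (real tools differ on details such as keywords), and the input string is just a toy example.

# Rough Halstead counts from Python tokens; the operator/operand split here
# is a simplified assumption, not a definitive classification.
import io
import keyword
import math
import tokenize

src = "while x < 10:\n    x += 1\n"     # toy source text

operators, operands = [], []
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    if tok.type == tokenize.OP or (tok.type == tokenize.NAME and keyword.iskeyword(tok.string)):
        operators.append(tok.string)    # '<', '+=', ':', 'while', ...
    elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
        operands.append(tok.string)     # identifiers and literals

n1, n2 = len(set(operators)), len(set(operands))   # distinct operators / operands
N1, N2 = len(operators), len(operands)             # total occurrences
vocabulary, length = n1 + n2, N1 + N2
volume = length * math.log2(vocabulary) if vocabulary else 0.0   # Halstead volume N * log2(n)

print(f"n1={n1} n2={n2} N1={N1} N2={N2} volume={volume:.1f}")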