unit testing - What can be alternative metrics to code coverage?


Code coverage is probably the most controversial code metric. Some say you have to reach 80% code coverage, others say it is superficial and does not say anything about the quality of your tests.

People measure everything; they need comparisons, benchmarks, and so on. Project teams want an indicator of how well they test.

So what are the alternatives to code coverage? What would be a better metric than saying "I touched this much of that code"?

If you are looking for some useful metrics that tell you about the quality (or lack thereof) of your code, you should look at the following metrics:

  1. Cyclomatic Complexity
    • This is a measure of the complexity of a method.
    • Generally 10 or less is good, 11-25 is bad, and higher is terrible.
  2. Nesting Depth
    • This is a measure of how deeply nested the scopes within a method are.
    • Generally 4 or less is good, 5-8 is bad, and higher is terrible.
  3. Relational Cohesion
    • This is a measure of how related the types within a package or assembly are.
    • Relational cohesion is somewhat of a relative metric, but it is useful nonetheless.
    • The acceptable level depends on the formula.
    • N: number of types within the package/assembly
    • R: number of relationships between those types
    • Formula: H = (R + 1) / N
  4. Lack of Cohesion of Methods (LCOM)
    • This is a measure of how cohesive a class is.
    • The cohesion of a class is a measure of how many of its methods access its instance fields.
    • Formula: LCOM = 1 - (sum(MF) / (M * F))
      • This is a good indicator of whether your class follows the Single Responsibility Principle.
      • M: number of methods in the class
      • F: number of instance fields in the class
      • MF: number of methods of the class that access a particular instance field
      • sum(MF): the sum of MF over all instance fields
    • A class that is completely cohesive will have an LCOM of 0.
    • A class that is completely non-cohesive will have an LCOM of 1.
    • Lowering LCOM can help you make your classes more cohesive and maintainable.
    • A small worked example of this formula and of relational cohesion is sketched right after this list.
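
Below is a minimal Python sketch (not NDepend's implementation; the function names and example numbers are my own) showing how the relational cohesion and LCOM formulas above can be evaluated once a static-analysis pass has gathered the raw counts:

    def relational_cohesion(num_types, num_relationships):
        """H = (R + 1) / N for a package/assembly with N types and
        R relationships between those types."""
        return (num_relationships + 1) / num_types


    def lcom(num_methods, num_fields, methods_per_field):
        """LCOM = 1 - (sum(MF) / (M * F)); methods_per_field[i] is MF for
        field i, i.e. how many of the class's methods access that field."""
        return 1 - (sum(methods_per_field) / (num_methods * num_fields))


    # A class with 4 methods and 2 fields where every method touches every
    # field is perfectly cohesive:
    print(lcom(4, 2, [4, 4]))           # 0.0
    # If each field is touched by only one method, cohesion drops sharply:
    print(lcom(4, 2, [1, 1]))           # 0.75
    # An assembly with 10 types and 25 relationships between them:
    print(relational_cohesion(10, 25))  # 2.6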

These are just some of the key metrics that NDepend, a .NET metrics and dependency-mapping utility, can provide for you. I've worked a lot with code metrics recently, and these 4 are the key metrics that have proven most useful to us. NDepend offers several other useful metrics as well, including Efferent & Afferent Coupling and Abstractness & Instability, which give you a good way to keep your code maintainable (and tell you whether your assemblies are drifting into the zone of pain or the zone of uselessness).
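
As an aside not spelled out above: Efferent/Afferent Coupling and Abstractness are usually combined via Robert C. Martin's instability and "distance from the main sequence" formulas, which is what the zone-of-pain / zone-of-uselessness chart plots. A minimal Python sketch with hypothetical numbers:

    def instability(efferent_coupling, afferent_coupling):
        """I = Ce / (Ca + Ce): 0 means maximally stable, 1 maximally unstable."""
        return efferent_coupling / (afferent_coupling + efferent_coupling)


    def distance_from_main_sequence(abstractness, inst):
        """D = |A + I - 1|. Concrete and stable (A and I both near 0) falls in
        the zone of pain; abstract and unstable (both near 1) falls in the
        zone of uselessness."""
        return abs(abstractness + inst - 1)


    # A mostly concrete assembly (A = 0.1) that many others depend on
    # (Ca = 20) and that itself depends on few (Ce = 2):
    i = instability(2, 20)                      # ~0.09
    print(distance_from_main_sequence(0.1, i))  # ~0.81, drifting toward the zone of pain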

Even if you are not working on the .NET platform, I recommend taking a look at NDepend's metrics documentation. It has a lot of useful information that may help you calculate these metrics on whatever platform you develop for.

