
This is a follow-up to "Is it a good practice to run functional tests with coverage?", which left me a little confused.

I was under the impression that a test run with coverage enabled typically does not affect the functionality under test in any way. The only implication of enabling coverage reports is that it slows things down.

For example, to make things more specific: coverage.py in Python installs special read-only "tracers" to collect information about program execution:

At the heart of the execution phase is a Python trace function. This is a function that the Python interpreter invokes for each line executed in a program. Coverage.py implements a trace function that records each file and line number as it is executed.
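To make the quoted mechanism concrete, here is a minimal sketch of a tracer in the same spirit (my own toy example, not coverage.py's actual implementation): `sys.settrace` installs a callback, and returning the callback from a "call" event opts that frame in to per-line "line" events, each of which records a (filename, line number) pair.

```python
import sys

executed = set()  # (filename, line_number) pairs observed during tracing

def tracer(frame, event, arg):
    # The interpreter invokes this on "call" events for each new frame;
    # returning it again enables "line" events within that frame.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)
result = demo(5)
sys.settrace(None)  # always uninstall, or every later frame pays the cost

print(result)             # 10
print(len(executed) > 0)  # True: line events were recorded
```

Note that the tracer only reads frame attributes; a well-behaved coverage tracer observes execution without mutating any program state.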

Can enabling coverage potentially have an effect on an application-under-test?

If the answer depends on the underlying tool, have you ever encountered a situation where a coverage tracer had an impact on the functionality under test?

asked Jul 7, 2017 at 15:22

2 Answers


Running coverage will use up some CPU, RAM, and other resources that will then not be available to the tested application. But that should not matter: you are (or should be) running coverage on your QA instance, not real production, and a QA instance is unlikely to see the same load as a production instance, so the small additional load from logging coverage should not have any measurable effect. If the extra load does have an effect, you have just detected a load/performance bug before deploying the new release to production. :-)
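The size of that extra load is easy to see for yourself. This sketch (my own, using only the standard library) times the same function with and without a deliberately do-nothing trace function installed; even a tracer that does no work forces a Python-level callback on every executed line, so the traced run is noticeably slower while the computed result is unchanged.

```python
import sys
import timeit

def noop_tracer(frame, event, arg):
    # Does nothing, but the interpreter still calls it for every
    # call/line event in every traced frame.
    return noop_tracer

def work():
    return sum(i * i for i in range(1000))

plain = timeit.timeit(work, number=200)

sys.settrace(noop_tracer)
traced = timeit.timeit(work, number=200)
sys.settrace(None)

print(work())          # 332833500 -- the result is unaffected by tracing
print(traced > plain)  # True: tracing adds measurable overhead
```

coverage.py's real tracer is written in C precisely to shrink this per-line cost, but the overhead never disappears entirely.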

In the very unlikely event you detected any side effect, you likely found a bug in coverage.py.

In the comments, @ernie mentioned another tiny corner case: the overhead of coverage.py might change the relative timing of calls, exposing (or masking) a race condition.
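To illustrate the kind of code that corner case applies to, here is a sketch (my own example, not from the thread) of an unsynchronized read-modify-write shared between threads. Whether updates are actually lost depends on thread scheduling, which is exactly what a tracer's per-line overhead can shift.

```python
import threading

counter = 0  # shared state updated without a lock

def bump(n):
    global counter
    for _ in range(n):
        # "counter += 1" is a read, an add, and a write; a thread
        # switch in between those steps loses an update
        counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With atomic updates this would always be 400000; under contention it
# can come out lower, and tracer overhead changes how often that happens.
print(counter <= 400_000)  # True
```

Such a bug exists with or without coverage enabled; the tracer merely changes how likely the test run is to surface it.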

To answer your second question: I am not aware of any such impact of coverage.py on a tested system, but we are so swamped with other tasks that, unfortunately, checking coverage is no longer on our daily task list.

answered Jul 7, 2017 at 16:14
  • I would think there might be a tiny corner case where coverage could change some relative timings enough to mask or expose a race condition. More of an issue if there's a lot of async, multi-threading, etc. Commented Jul 10, 2017 at 1:00
  • @ernie - valid point, added to answer Commented Jul 10, 2017 at 14:25

Running tests with or without coverage is the same in terms of the application functionality being tested; there should be no side effects.

Coverage measurement is usually separate from the test logic itself: the tool records which lines of the test and application code execute, then analyses that data alongside the code.

The only side effects I would anticipate are slightly longer run times, though probably only a few seconds at most. If the application gets complicated enough, it is conceivable the extra bookkeeping could cause an out-of-memory condition, but that is pretty speculative, and memory is plentiful these days.

answered Jul 7, 2017 at 15:43
