[Python-checkins] CVS: python/dist/src/Lib/test README,1.1,1.2
Skip Montanaro
python-dev@python.org
19 Jul 2000 10:19:51 -0700
Update of /cvsroot/python/python/dist/src/Lib/test
In directory slayer.i.sourceforge.net:/tmp/cvs-serv17819
Modified Files:
README
Log Message:
restructured a bit and added some more content...
Index: README
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/test/README,v
retrieving revision 1.1
retrieving revision 1.2
diff -C2 -r1.1 -r1.2
*** README 2000/06/30 06:08:35 1.1
--- README 2000/07/19 17:19:49 1.2
***************
*** 1,40 ****
! Writing Python Test Cases
! -------------------------
Skip Montanaro
If you add a new module to Python or modify the functionality of an existing
! module, it is your responsibility to write one or more test cases to test
! that new functionality. The mechanics of the test system are fairly
! straightforward. If you are writing test cases for module zyzzyx, you need
! to create a file in .../Lib/test named test_zyzzyx.py and an expected output
! file in .../Lib/test/output named test_zyzzyx ("..." represents the
! top-level directory in the Python source tree, the directory containing the
! configure script). Generate the initial version of the test output file by
! executing:
! cd .../Lib/test
! python regrtest.py -g test_zyzzyx.py
! Any time you modify test_zyzzyx.py you need to generate a new expected
output file. Don't forget to desk check the generated output to make sure
it's really what you expected to find! To run a single test after modifying
a module, simply run regrtest.py without the -g flag:
! cd .../Lib/test
! python regrtest.py test_zyzzyx.py
To run the entire test suite, make the "test" target at the top level:
- cd ...
make test
- Test cases generate output based upon computed values and branches taken in
- the code. When executed, regrtest.py compares the actual output generated
- by executing the test case with the expected output and reports success or
- failure. It stands to reason that if the actual and expected outputs are to
- match, they must not contain any machine dependencies. This means
- your test cases should not print out absolute machine addresses or floating
- point numbers with large numbers of significant digits.
Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document. Many books have been written on the subject.
--- 1,73 ----
! Writing Python Regression Tests
! -------------------------------
Skip Montanaro
+ (skip@mojam.com)
+
+ Introduction
+
If you add a new module to Python or modify the functionality of an existing
! module, you should write one or more test cases to exercise that new
! functionality. The mechanics of how the test system operates are fairly
! straightforward. When a test case is run, the output is compared with the
! expected output that is stored in .../Lib/test/output. If the test runs to
! completion and the actual and expected outputs match, the test succeeds; if
! not, it fails. If an ImportError is raised, the test is not run.
!
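+ As a concrete illustration, consider a hypothetical module spam with a
+ function eggs() that is assumed here to double its argument. A minimal
+ test module for it might look like this:
+
+     # Lib/test/test_spam.py (hypothetical example)
+     import spam                     # the module under test
+
+     print 'eggs(2) =', spam.eggs(2)
+     print 'eggs(0) =', spam.eggs(0)
+
+ The matching expected output file, Lib/test/output/test_spam (generated
+ with regrtest.py -g as described below), contains the test's name
+ followed by everything the test prints:
+
+     test_spam
+     eggs(2) = 4
+     eggs(0) = 0
+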
! You will be writing unit tests (isolated tests of functions and objects
! defined by the module) using white box techniques. Unlike black box
! testing, where you only have the external interfaces to guide your test case
! writing, in white box testing you can see the code being tested and tailor
! your test cases to exercise it more completely. In particular, you will be
! able to refer to the C and Python code in the CVS repository when writing
! your regression test cases.
!
!
! Executing Test Cases
!
! If you are writing test cases for module spam, you need to create a file
! in .../Lib/test named test_spam.py and an expected output file in
! .../Lib/test/output named test_spam ("..." represents the top-level
! directory in the Python source tree, the directory containing the configure
! script). From the top-level directory, generate the initial version of the
! test output file by executing:
! ./python Lib/test/regrtest.py -g test_spam.py
! Any time you modify test_spam.py you need to generate a new expected
output file. Don't forget to desk check the generated output to make sure
it's really what you expected to find! To run a single test after modifying
a module, simply run regrtest.py without the -g flag:
+
+ ./python Lib/test/regrtest.py test_spam.py
+
+ While debugging a regression test, you can of course execute it
+ independently of the regression testing framework and see what it prints:
! ./python Lib/test/test_spam.py
To run the entire test suite, make the "test" target at the top level:
make test
+
+ On non-Unix platforms where make may not be available, you can simply
+ execute the two runs of regrtest (optimized and non-optimized) directly:
+
+ ./python Lib/test/regrtest.py
+ ./python -O Lib/test/regrtest.py
+
+
+ Test cases generate output based upon values computed by the test code.
+ When executed, regrtest.py compares the actual output generated by executing
+ the test case with the expected output and reports success or failure. It
+ stands to reason that if the actual and expected outputs are to match, they
+ must not contain any machine dependencies. This means your test cases
+ should not print out absolute machine addresses (e.g. the return value of
+ the id() builtin function) or floating point numbers with large numbers of
+ significant digits (unless you understand what you are doing!).
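+
+ For instance, a test computing 1/3 might print a rounded value instead
+ of the raw result (a hypothetical fragment):
+
+     x = 1.0 / 3.0
+     print round(x, 6)           # portable: few significant digits
+     # print x                   # risky: trailing digits can vary
+     # print id(x)               # never: a machine-dependent address
+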
+ Test Case Writing Tips
+
Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document. Many books have been written on the subject.
***************
*** 47,77 ****
or find it used (around $20), I strongly urge you to pick up a copy.
- As an author of at least part of a module, you will be writing unit tests
- (isolated tests of functions and objects defined by the module) using white
- box techniques. (Unlike black box testing, where you only have the external
- interfaces to guide your test case writing, in white box testing you can see
- the code being tested and tailor your test cases to exercise it more
- completely).
-
The most important goal when writing test cases is to break things. A test
! case that doesn't uncover a bug is less valuable than one that does. In
! designing test cases you should pay attention to the following:
! 1. Your test cases should exercise all the functions and objects defined
! in the module, not just the ones meant to be called by users of your
! module. This may require you to write test code that uses the module
! in ways you don't expect (explicitly calling internal functions, for
! example - see test_atexit.py).
!
! 2. You should consider any boundary values that may tickle exceptional
! conditions (e.g. if you were testing a division module you might well
! want to generate tests with numerators and denominators at the limits
! of floating point and integer numbers on the machine performing the
! tests as well as a denominator of zero).
!
! 3. You should exercise as many paths through the code as possible. This
! may not always be possible, but is a goal to strive for. In
! particular, when considering if statements (or their equivalent), you
! want to create test cases that exercise both the true and false
! branches. For while and for statements, you should create test cases
! that exercise the loop zero, one and multiple times.
--- 80,166 ----
or find it used (around $20), I strongly urge you to pick up a copy.
The most important goal when writing test cases is to break things. A test
! case that doesn't uncover a bug is much less valuable than one that does.
! In designing test cases you should pay attention to the following:
!
! * Your test cases should exercise all the functions and objects defined
! in the module, not just the ones meant to be called by users of your
! module. This may require you to write test code that uses the module
! in ways you don't expect (explicitly calling internal functions, for
! example - see test_atexit.py).
!
! * You should consider any boundary values that may tickle exceptional
! conditions (e.g. if you were writing regression tests for division,
! you might well want to generate tests with numerators and denominators
! at the limits of floating point and integer numbers on the machine
! performing the tests, as well as a denominator of zero - see the
! sketch after this list).
!
! * You should exercise as many paths through the code as possible. This
! may not always be possible, but is a goal to strive for. In
! particular, when considering if statements (or their equivalent), you
! want to create test cases that exercise both the true and false
! branches. For loops, you should create test cases that exercise the
! loop zero, one and multiple times.
!
! * You should test with obviously invalid input. If you know that a
! function requires an integer input, try calling it with other types of
! objects to see how it responds.
!
! * You should test with obviously out-of-range input. If the domain of a
! function is only defined for positive integers, try calling it with a
! negative integer.
!
! * If you are going to fix a bug that wasn't uncovered by an existing
! test, try to write a test case that exposes the bug (preferably before
! fixing it).
!
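! For instance, the hypothetical division tests mentioned above might
! include boundary checks like these:
!
!     from test_support import TestFailed
!
!     try:
!         1 / 0                       # denominator of zero
!     except ZeroDivisionError:
!         pass
!     else:
!         raise TestFailed, '1 / 0 did not raise ZeroDivisionError'
!
!     print 2147483647 / -1           # numerator at the 32-bit int limit
!     print 1e-300 / 1e300            # float underflow; prints 0.0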
!
! Regression Test Writing Rules
!
! Each test case is different. There is no "standard" form for a Python
! regression test case, though there are some general rules (a skeleton
! illustrating them follows this list):
!
! * If your test case detects a failure, raise TestFailed (found in
! test_support).
!
! * Import everything you'll need as early as possible.
!
! * If you'll be importing objects from a module that is at least
! partially platform-dependent, only import those objects you need for
! the current test case to avoid spurious ImportError exceptions that
! prevent the test from running to completion.
!
! * Print all your test case results using the print statement. For
! non-fatal errors, print an error message (or omit a successful
! completion print) to indicate the failure, but proceed instead of
! raising TestFailed.
!
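! Putting these rules together, a skeleton test case (all module and
! function names hypothetical) might look like this:
!
!     from test_support import verbose, TestFailed
!     import spam                     # import everything early
!
!     # a fatal error: raise TestFailed
!     if spam.eggs(2) != 4:
!         raise TestFailed, 'spam.eggs(2)'
!
!     # a non-fatal error: print a message and proceed
!     if spam.ham() is None:
!         print 'spam.ham() unexpectedly returned None'
!     else:
!         print 'spam.ham() ok'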
!
! Miscellaneous
!
! There is a test_support module you can import from your test case. It
! provides the following useful objects (a usage sketch follows the list):
!
! * TestFailed - raise this exception when your regression test detects a
! failure.
!
! * findfile(file) - you can call this function to locate a file somewhere
! along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
! an example of its use.
!
! * verbose - you can use this variable to control print output. Many
! modules use it. Search for "verbose" in the test_*.py files to see
! lots of examples.
!
! * fcmp(x,y) - you can call this function to compare two floating point
! numbers when you expect them to be only approximately equal within a
! fuzz factor (test_support.FUZZ, which defaults to 1e-6).
!
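! A short fragment showing these helpers in use (assuming fcmp's
! cmp()-style return value, where 0 means approximately equal):
!
!     from test_support import verbose, findfile, fcmp, TestFailed
!
!     if verbose:
!         print 'printing extra diagnostics'
!
!     fn = findfile('audiotest.au')   # locate a test data file
!
!     if fcmp(0.1 + 0.2, 0.3):        # non-zero: not even approximately
!         raise TestFailed, 'fcmp(0.1 + 0.2, 0.3)'
!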
! Python and C statement coverage results are currently available at
!
! http://www.musi-cal.com/~skip/python/Python/dist/src/
! As of this writing (July 2000) these results are being generated nightly.
! You can refer to the summaries and the test coverage output files to see
! where coverage is adequate or lacking and write test cases to beef up the
! coverage.