classification
Title: test_tools takes longer than 5 minutes on some buildbots
Type: performance
Stage: resolved
Components: Tests
Versions: Python 3.7

process
Status: closed
Resolution: out of date
Dependencies:
Superseder:
Assigned To:
Nosy List: serhiy.storchaka, vstinner
Priority: normal
Keywords: buildbot

Created on 2017-04-26 14:43 by vstinner, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Messages (4)
msg292355 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-04-26 14:43
On the x86-64 El Capitan 3.x buildbot, test_tools took longer than 5 minutes, whereas the whole test suite took 31 minutes. Is something wrong in test_tools?

http://buildbot.python.org/all/builders/x86-64%20El%20Capitan%203.x/builds/92/steps/test/logs/stdio

Run tests in parallel using 2 child processes
...
10 slowest tests:
- test_tools: 5 min 3 sec
- test_tokenize: 4 min 52 sec
- test_multiprocessing_spawn: 4 min 4 sec
- test_datetime: 3 min 32 sec
- test_lib2to3: 3 min 14 sec
- test_mmap: 2 min 43 sec
- test_multiprocessing_forkserver: 2 min 11 sec
- test_multiprocessing_fork: 1 min 58 sec
- test_io: 1 min 54 sec
- test_subprocess: 1 min 11 sec
...
Total duration: 31 min 20 sec
...
msg292378 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-04-26 17:30
This is expected. test_tools just runs a large number of subprocesses, and some of its subtests process a large number of Python source files, just as test_tokenize and test_lib2to3 do.

Maybe it would be better to run and report the tests for the different tools as separate tests (test_tools.test_pindent, test_tools.test_unparse, etc.), while still allowing the group to be run as a whole. Does unittest or regrtest support this?
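
For reference, unittest's load_tests protocol supports this kind of grouping. A minimal sketch, assuming test_tools is split into a package with one submodule per tool (the file names here are illustrative, not the actual layout):

import os

def load_tests(loader, standard_tests, pattern):
    # Package-level load_tests: discover every test_*.py submodule in this
    # directory, so that running "test_tools" still executes the whole group
    # while "test_tools.test_unparse" runs a single tool's tests.
    this_dir = os.path.dirname(__file__)
    standard_tests.addTests(loader.discover(start_dir=this_dir,
                                            pattern='test_*.py'))
    return standard_tests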
msg293038 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-05-05 01:57
I modified regrtest to log the system load: bpo-30263.
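
The load figures below are 1-minute load averages. A minimal sketch of how such a value can be read from Python (not the actual bpo-30263 change, which lives in regrtest):

import os

def format_load_avg():
    # os.getloadavg() returns the 1-, 5- and 15-minute load averages;
    # it only exists on Unix, hence the fallback.
    try:
        load_1min = os.getloadavg()[0]
    except (AttributeError, OSError):
        return ''
    return 'load avg: %.2f' % load_1min

print(format_load_avg())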

> On the x86-64 El Capitan 3.x buildbot, test_tools took longer than 5 minutes, whereas the whole test suite took 31 minutes. Is something wrong in test_tools?

On this buildbot, which has 2 CPUs, the system load starts at 3.33 but goes up to 35.40:

0:04:48 load avg: 35.40 [ 41/405] test_tokenize passed (257 sec) -- running: test_lib2to3 (171 sec)

The system load is between 3.88 and 17.47 while test_tools is running.

running: test_multiprocessing_spawn (158 sec), test_tools (30 sec)
running: test_multiprocessing_spawn (188 sec), test_tools (60 sec)
running: test_multiprocessing_spawn (218 sec), test_tools (90 sec)
0:27:38 load avg: 4.04 [380/405] test_multiprocessing_spawn passed (247 sec) -- running: test_tools (120 sec)
0:27:39 load avg: 4.04 [381/405] test_telnetlib passed -- running: test_tools (121 sec)
0:27:43 load avg: 3.88 [382/405] test_dis passed -- running: test_tools (125 sec)
0:27:44 load avg: 3.88 [383/405] test_script_helper passed -- running: test_tools (126 sec)
0:27:45 load avg: 3.88 [384/405] test_glob passed -- running: test_tools (127 sec)
0:27:46 load avg: 4.21 [385/405] test_ensurepip passed -- running: test_tools (128 sec)
0:27:51 load avg: 4.59 [386/405] test_descr passed -- running: test_tools (133 sec)
0:27:52 load avg: 4.59 [387/405] test_sax passed -- running: test_tools (134 sec)
0:27:53 load avg: 4.59 [388/405] test_unicode_file_functions passed -- running: test_tools (135 sec)
0:27:54 load avg: 4.59 [389/405] test_xml_dom_minicompat passed -- running: test_tools (136 sec)
0:27:57 load avg: 4.54 [390/405] test___all__ passed -- running: test_tools (139 sec)
0:27:58 load avg: 4.54 [391/405] test_grammar passed -- running: test_tools (140 sec)
0:27:59 load avg: 4.54 [392/405] test_unpack passed -- running: test_tools (141 sec)
0:28:17 load avg: 4.48 [393/405] test_weakref passed -- running: test_tools (159 sec)
0:28:19 load avg: 4.48 [394/405] test_parser passed -- running: test_tools (160 sec)
0:28:25 load avg: 4.52 [395/405] test_difflib passed -- running: test_tools (167 sec)
0:28:26 load avg: 4.52 [396/405] test_compare passed -- running: test_tools (167 sec)
0:28:47 load avg: 9.68 [397/405] test_long passed -- running: test_tools (189 sec)
0:28:48 load avg: 9.68 [398/405] test_mailcap passed -- running: test_tools (190 sec)
0:28:49 load avg: 9.68 [399/405] test_rlcompleter passed -- running: test_tools (191 sec)
running: test_pickle (30 sec), test_tools (221 sec)
0:29:38 load avg: 17.47 [400/405] test_pickle passed (49 sec) -- running: test_tools (240 sec)
0:29:49 load avg: 14.94 [401/405] test_nntplib passed -- running: test_tools (251 sec)
0:29:50 load avg: 14.94 [402/405] test_with passed -- running: test_tools (252 sec)
0:29:52 load avg: 14.22 [403/405] test_enum passed -- running: test_tools (254 sec)
0:29:55 load avg: 14.22 [404/405] test_source_encoding passed -- running: test_tools (257 sec)
running: test_tools (287 sec)
running: test_tools (317 sec)
0:30:58 load avg: 5.59 [405/405] test_tools passed (319 sec)


Is it ok to have such a system load? Is there something wrong on this buildbot?
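
For scale, the load average can be weighed against the number of CPUs: a ratio well above 1.0 means the machine is oversubscribed. A minimal sketch (hypothetical helper, not part of regrtest):

import os

def load_per_cpu():
    # Compare the 1-minute load average against the CPU count; a value far
    # above 1.0 means more runnable tasks than cores.
    ncpus = os.cpu_count() or 1
    return os.getloadavg()[0] / ncpus

print('load per CPU: %.1f' % load_per_cpu())

On this buildbot, a peak load of 17.47 with 2 CPUs gives a ratio of about 8.7.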
msg297089 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-06-28 00:53
I failed to take a decision or any concrete action on this issue, so I am just closing it ;-)
History
Date                 User              Action  Args
2022-04-11 14:58:45  admin             set     github: 74358
2017-06-28 00:53:54  vstinner          set     status: open -> closed; resolution: out of date; messages: + msg297089; stage: resolved
2017-05-05 01:57:48  vstinner          set     messages: + msg293038
2017-04-26 17:30:07  serhiy.storchaka  set     messages: + msg292378
2017-04-26 14:43:35  vstinner          create