[PATCH 00/18] buildman: Improve test coverage for builder.py
From: Simon Glass <simon.glass@canonical.com>

This series improves the test coverage for builder.py from 11% to 97% by
adding unit tests for various functions. It extracts several functions to
make them more testable:

- _print_build_summary() from build_boards()
- _check_output_for_loop() from make()

It also fixes a bug in _calc_config_changes() where value changes in
existing config options are not detected.

New unit tests cover:

- Function size detail output (bloat-o-meter style)
- Thread preparation and working space setup
- Output space management and cleanup
- Build loop detection in make output
- The make() method execution flow
- Build summary message formatting
- Detection of boards not built due to missing toolchains

Functional tests are added for -K, -E, -U flags, IDE mode output, branch
summary display, and toolchain error reporting.

Simon Glass (18):
  buildman: Add missing return documentation in builder.py
  buildman: Add tests for builder helper functions
  buildman: Add test for branch summary display
  buildman: Add unit tests for print_func_size_detail()
  buildman: Add test for -K flag (show config changes)
  buildman: Fix config value change detection
  buildman: Add a test for -E flag (warnings as errors)
  buildman: Add test for -U flag (show environment changes)
  buildman: Add test for IDE mode output
  buildman: Add unit tests for _prepare_thread()
  buildman: Add unit tests for _prepare_working_space()
  buildman: Add unit tests for _prepare_output_space()
  buildman: Test _show_not_built() and toolchain errors
  buildman: Detect toolchain errors in _show_not_built()
  buildman: Add unit tests for _check_output_for_loop()
  buildman: Add unit tests for make()
  buildman: Extract _print_build_summary() from build_boards()
  buildman: Add unit tests for _print_build_summary()

 tools/buildman/builder.py      | 158 ++++--
 tools/buildman/func_test.py    | 257 +++++++++-
 tools/buildman/main.py         |  12 +-
 tools/buildman/test.py         | 185 +++++++
 tools/buildman/test_builder.py | 854 +++++++++++++++++++++++++++++++++
 5 files changed, 1411 insertions(+), 55 deletions(-)
 create mode 100644 tools/buildman/test_builder.py

-- 
2.43.0

base-commit: 83f0a1987b538f0d5ea527489e62d159798fcf8b
branch: bmp
From: Simon Glass <simon.glass@canonical.com>

Add return type documentation to functions that were missing it, fixing
pylint W9011/W9012 warnings. The affected functions are:

- make(): Returns CommandResult
- check_output() (nested): Returns bool
- get_output_dir(): Returns str path
- get_done_file(): Returns str path
- get_sizes_file(): Returns str path
- get_func_sizes_file(): Returns str path
- get_objdump_file(): Returns str path
- get_err_file(): Returns str path
- get_thread_dir(): Returns str path

Co-developed-by: Claude <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/builder.py | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py
index b7343b57c0d..4bcde84adca 100644
--- a/tools/buildman/builder.py
+++ b/tools/buildman/builder.py
@@ -592,9 +592,20 @@ class Builder:
             cwd (str): Directory where make should be run
             args: Arguments to pass to make
             kwargs: Arguments to pass to command.run_one()
+
+        Returns:
+            CommandResult: Result of the make operation
         """
         def check_output(_stream, data):
+            """Check output for config restart loops
+
+            Args:
+                data (bytes): Output data to check
+
+            Returns:
+                bool: True to terminate the command, False to continue
+            """
             if b'Restart config' in data:
                 self._restarting_config = True
@@ -690,6 +701,9 @@ class Builder:
 
         Args:
             commit_upto (int): Commit number to use (0..self.count-1)
+
+        Returns:
+            str: Path to the output directory
         """
         if self.work_in_output:
             return self._working_dir
@@ -729,6 +743,9 @@ class Builder:
         Args:
             commit_upto (int): Commit number to use (0..self.count-1)
             target (str): Target name
+
+        Returns:
+            str: Path to the done file
         """
         return os.path.join(self.get_build_dir(commit_upto, target),
                             'done')
@@ -738,6 +755,9 @@ class Builder:
         Args:
             commit_upto (int): Commit number to use (0..self.count-1)
             target (str): Target name
+
+        Returns:
+            str: Path to the sizes file
         """
         return os.path.join(self.get_build_dir(commit_upto, target),
                             'sizes')
@@ -748,6 +768,9 @@ class Builder:
             commit_upto (int): Commit number to use (0..self.count-1)
             target (str): Target name
             elf_fname (str): Filename of elf image
+
+        Returns:
+            str: Path to the funcsizes file
         """
         return os.path.join(self.get_build_dir(commit_upto, target),
                             f"{elf_fname.replace('/', '-')}.sizes")
@@ -759,6 +782,9 @@ class Builder:
             commit_upto (int): Commit number to use (0..self.count-1)
             target (str): Target name
            elf_fname (str): Filename of elf image
+
+        Returns:
+            str: Path to the objdump file
         """
         return os.path.join(self.get_build_dir(commit_upto, target),
                             f"{elf_fname.replace('/', '-')}.objdump")
@@ -769,6 +795,9 @@ class Builder:
         Args:
             commit_upto (int): Commit number to use (0..self.count-1)
             target (str): Target name
+
+        Returns:
+            str: Path to the err file
         """
         output_dir = self.get_build_dir(commit_upto, target)
         return os.path.join(output_dir, 'err')
@@ -1983,6 +2012,9 @@ class Builder:
         Args:
             thread_num (int): Number of thread to check (-1 for main process,
                 which is treated as 0)
+
+        Returns:
+            str: Path to the thread's working directory
         """
         if self.work_in_output:
             return self._working_dir
-- 
2.43.0
From: Simon Glass <simon.glass@canonical.com>

Add TestBuilderFuncs class with 9 new tests to improve coverage of
builder.py methods that were previously uncovered:

- Parsing nm output for function sizes
- Combining static function symbols with the same name
- Parsing .config style defconfig files
- Parsing autoconf.h header files
- Handling missing config files gracefully
- Squashing y-values in config parsing
- Parsing uboot.env environment files
- Handling missing environment files
- Handling malformed environment entries

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/main.py |   3 +-
 tools/buildman/test.py | 185 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 187 insertions(+), 1 deletion(-)

diff --git a/tools/buildman/main.py b/tools/buildman/main.py
index d85e8dd428d..04993dcd87b 100755
--- a/tools/buildman/main.py
+++ b/tools/buildman/main.py
@@ -49,7 +49,8 @@ def run_tests(skip_net_tests, debug, verbose, args):
     result = test_util.run_test_suites(
         'buildman', debug, verbose, False, False, args.threads, test_name, [],
         [test.TestBuildOutput, test.TestBuildBoards, test.TestBuild,
-         test.TestBuildConfig, test.TestBuildMisc, func_test.TestFunctional,
+         test.TestBuildConfig, test.TestBuildMisc, test.TestBuilderFuncs,
+         func_test.TestFunctional,
          test_boards.TestBoards, test_bsettings.TestBsettings,
          'buildman.toolchain'])
 
diff --git a/tools/buildman/test.py b/tools/buildman/test.py
index fb5dcd09555..828d05781cc 100644
--- a/tools/buildman/test.py
+++ b/tools/buildman/test.py
@@ -1269,5 +1269,190 @@ class TestBuildMisc(TestBuildBase):
                                               'other_board'))
 
 
+class TestBuilderFuncs(TestBuildBase):
+    """Tests for individual Builder methods"""
+
+    def test_read_func_sizes(self):
+        """Test read_func_sizes() function"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        # Create test data simulating 'nm' output
+        # NM_SYMBOL_TYPES = 'tTdDbBr' - text, data, bss, rodata
+        nm_output = '''00000100 T func_one
+00000200 t func_two
+00000050 T func_three
+00000030 d data_var
+00000010 W weak_func
+'''
+        with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
+            tmp.write(nm_output)
+            tmp_name = tmp.name
+
+        try:
+            with open(tmp_name, 'r', encoding='utf-8') as fhandle:
+                sizes = build.read_func_sizes(tmp_name, fhandle)
+
+            # T, t, d are in NM_SYMBOL_TYPES, W is not
+            self.assertEqual(0x100, sizes['func_one'])
+            self.assertEqual(0x200, sizes['func_two'])
+            self.assertEqual(0x50, sizes['func_three'])
+            self.assertEqual(0x30, sizes['data_var'])
+            self.assertNotIn('weak_func', sizes)
+        finally:
+            os.unlink(tmp_name)
+
+    def test_read_func_sizes_static(self):
+        """Test read_func_sizes() with static function symbols"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        # Test static functions (have . in name after first char)
+        nm_output = '''00000100 t func.1234
+00000050 t func.5678
+'''
+        with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
+            tmp.write(nm_output)
+            tmp_name = tmp.name
+
+        try:
+            with open(tmp_name, 'r', encoding='utf-8') as fhandle:
+                sizes = build.read_func_sizes(tmp_name, fhandle)
+
+            # Static functions should be combined under 'static.func'
+            self.assertEqual(0x100 + 0x50, sizes['static.func'])
+        finally:
+            os.unlink(tmp_name)
+
+    def test_process_config_defconfig(self):
+        """Test _process_config() with .config style file"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        config_data = '''# This is a comment
+CONFIG_OPTION_A=y
+CONFIG_OPTION_B="string"
+CONFIG_OPTION_C=123
+# CONFIG_OPTION_D is not set
+'''
+        with tempfile.NamedTemporaryFile(mode='w', delete=False,
+                                         suffix='.config') as tmp:
+            tmp.write(config_data)
+            tmp_name = tmp.name
+
+        try:
+            config = build._process_config(tmp_name)
+
+            self.assertEqual('y', config['CONFIG_OPTION_A'])
+            self.assertEqual('"string"', config['CONFIG_OPTION_B'])
+            self.assertEqual('123', config['CONFIG_OPTION_C'])
+            # Comments should be ignored
+            self.assertNotIn('CONFIG_OPTION_D', config)
+        finally:
+            os.unlink(tmp_name)
+
+    def test_process_config_autoconf_h(self):
+        """Test _process_config() with autoconf.h style file"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        config_data = '''/* Auto-generated header */
+#define CONFIG_OPTION_A 1
+#define CONFIG_OPTION_B "value"
+#define CONFIG_OPTION_C
+#define NOT_CONFIG 1
+'''
+        with tempfile.NamedTemporaryFile(mode='w', delete=False,
+                                         suffix='.h') as tmp:
+            tmp.write(config_data)
+            tmp_name = tmp.name
+
+        try:
+            config = build._process_config(tmp_name)
+
+            self.assertEqual('1', config['CONFIG_OPTION_A'])
+            self.assertEqual('"value"', config['CONFIG_OPTION_B'])
+            # #define without value gets empty string (squash_config_y=False)
+            self.assertEqual('', config['CONFIG_OPTION_C'])
+            # Non-CONFIG_ defines should be ignored
+            self.assertNotIn('NOT_CONFIG', config)
+        finally:
+            os.unlink(tmp_name)
+
+    def test_process_config_nonexistent(self):
+        """Test _process_config() with non-existent file"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        config = build._process_config('/nonexistent/path/config')
+        self.assertEqual({}, config)
+
+    def test_process_config_squash_y(self):
+        """Test _process_config() with squash_config_y enabled"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+        build.squash_config_y = True
+
+        config_data = '''CONFIG_OPTION_A=y
+CONFIG_OPTION_B=n
+#define CONFIG_OPTION_C
+'''
+        with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
+            tmp.write(config_data)
+            tmp_name = tmp.name
+
+        try:
+            config = build._process_config(tmp_name)
+
+            # y should be squashed to 1
+            self.assertEqual('1', config['CONFIG_OPTION_A'])
+            # n should remain n
+            self.assertEqual('n', config['CONFIG_OPTION_B'])
+            # Empty #define should get '1' when squash_config_y is True
+            self.assertEqual('1', config['CONFIG_OPTION_C'])
+        finally:
+            os.unlink(tmp_name)
+
+    def test_process_environment(self):
+        """Test _process_environment() function"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        # Environment file uses null-terminated strings
+        env_data = 'bootcmd=run bootm\x00bootdelay=3\x00console=ttyS0\x00'
+        with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
+            tmp.write(env_data)
+            tmp_name = tmp.name
+
+        try:
+            env = build._process_environment(tmp_name)
+
+            self.assertEqual('run bootm', env['bootcmd'])
+            self.assertEqual('3', env['bootdelay'])
+            self.assertEqual('ttyS0', env['console'])
+        finally:
+            os.unlink(tmp_name)
+
+    def test_process_environment_nonexistent(self):
+        """Test _process_environment() with non-existent file"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        env = build._process_environment('/nonexistent/path/uboot.env')
+        self.assertEqual({}, env)
+
+    def test_process_environment_invalid_lines(self):
+        """Test _process_environment() handles invalid lines gracefully"""
+        build = builder.Builder(self.toolchains, self.base_dir, None, 0, 2)
+
+        # Include lines without '=' which should be ignored
+        env_data = 'valid=value\x00invalid_no_equals\x00another=good\x00'
+        with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
+            tmp.write(env_data)
+            tmp_name = tmp.name
+
+        try:
+            env = build._process_environment(tmp_name)
+
+            self.assertEqual('value', env['valid'])
+            self.assertEqual('good', env['another'])
+            # Invalid line should be silently ignored
+            self.assertNotIn('invalid_no_equals', env)
+        finally:
+            os.unlink(tmp_name)
+
+
 if __name__ == "__main__":
     unittest.main()
-- 
2.43.0
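[Reviewer note] The environment tests above rely on uboot.env storing
null-terminated name=value records. The parsing they exercise through
_process_environment() can be sketched standalone like this (an
illustration only, not the buildman implementation; parse_env is a
hypothetical name):

```python
def parse_env(data):
    """Parse null-terminated 'name=value' records into a dict.

    Records without '=' (malformed entries) are silently ignored,
    matching the behaviour the tests above assert.
    """
    env = {}
    for record in data.split('\x00'):
        if '=' not in record:
            continue  # malformed record: skip it
        name, value = record.split('=', 1)
        if name:
            env[name] = value
    return env

print(parse_env('bootcmd=run bootm\x00bootdelay=3\x00invalid_no_equals\x00'))
```

Splitting on the NUL terminator first, then on the first '=' only, keeps
values containing '=' intact.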
From: Simon Glass <simon.glass@canonical.com>

Add testBranchSummary() to test the -s, -S, and -B flags which show a
summary of build results without triggering any new builds.

The test first builds a branch, then:

- Runs with -s to verify the summary output
- Runs with -sS to test size display
- Runs with -sSB to test bloat (function size changes) display

Update the command handlers to return realistic output:

- _HandleCommandSize() returns size output that varies between calls
- _HandleCommandObjdump() returns output with a .rodata section
- _HandleCommandNm() returns nm output with function sizes that vary

This improves coverage of show_summary(), print_result_summary(),
print_size_detail(), print_func_size_detail() and related code paths in
builder.py and builderthread.py.

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/func_test.py | 75 +++++++++++++++++++++++++++++++++++--
 1 file changed, 72 insertions(+), 3 deletions(-)

diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py
index 3ecbd22613f..69f6b25fa5a 100644
--- a/tools/buildman/func_test.py
+++ b/tools/buildman/func_test.py
@@ -367,16 +367,46 @@ class TestFunctional(unittest.TestCase):
             sys.exit(1)
 
     def _HandleCommandNm(self, args):
-        return command.CommandResult(return_code=0)
+        # Return nm --size-sort output with function sizes that vary between
+        # calls to simulate changes between commits
+        self._nm_calls = getattr(self, '_nm_calls', 0) + 1
+        base = self._nm_calls * 0x10
+        stdout = f'''{0x100 + base:08x} T main
+{0x80 + base:08x} T board_init
+{0x50 + base:08x} t local_func
+{0x30:08x} d data_var
+'''
+        return command.CommandResult(return_code=0, stdout=stdout)
 
     def _HandleCommandObjdump(self, args):
-        return command.CommandResult(return_code=0)
+        # Return objdump -h output with .rodata section
+        stdout = '''
+u-boot:     file format elf32-littlearm
+
+Sections:
+Idx Name          Size      VMA       LMA       File off  Algn
+  0 .text         00010000  00000000  00000000  00001000  2**2
+  1 .rodata       00001000  00010000  00010000  00011000  2**2
+  2 .data         00000100  00011000  00011000  00012000  2**2
+'''
+        return command.CommandResult(return_code=0, stdout=stdout)
 
     def _HandleCommandObjcopy(self, args):
         return command.CommandResult(return_code=0)
 
     def _HandleCommandSize(self, args):
-        return command.CommandResult(return_code=0)
+        # Return size output - vary the size based on call count to simulate
+        # changes between commits
+        self._size_calls = getattr(self, '_size_calls', 0) + 1
+        text = 10000 + self._size_calls * 100
+        data = 1000
+        bss = 500
+        total = text + data + bss
+        fname = args[-1] if args else 'u-boot'
+        stdout = f'''   text    data     bss     dec     hex filename
+  {text}    {data}     {bss}   {total}    {total:x} {fname}
+'''
+        return command.CommandResult(return_code=0, stdout=stdout)
 
     def _HandleCommandCpp(self, args):
         # args ['-nostdinc', '-P', '-I', '/tmp/tmp7f17xk_o/src', '-undef',
@@ -549,6 +579,45 @@ Some images are invalid'''
         self.assertEqual(self._builder.count, self._total_builds)
         self.assertEqual(self._builder.fail, 0)
 
+    def testBranchSummary(self):
+        """Test building a branch and then showing a summary"""
+        self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir)
+        self.assertEqual(self._builder.count, self._total_builds)
+        self.assertEqual(self._builder.fail, 0)
+
+        # Now run with -s to show summary
+        self._make_calls = 0
+        self._RunControl('-b', TEST_BRANCH, '-s', '-o', self._output_dir,
+                         clean_dir=False)
+        # Summary should not trigger any builds
+        self.assertEqual(self._make_calls, 0)
+        lines = terminal.get_print_test_lines()
+        self.assertIn('(no errors to report)', lines[-1].text)
+
+        # Now run with -S to show sizes as well
+        self._make_calls = 0
+        self._RunControl('-b', TEST_BRANCH, '-sS', '-o', self._output_dir,
+                         clean_dir=False)
+        self.assertEqual(self._make_calls, 0)
+        lines = terminal.get_print_test_lines()
+        # Check that size information is displayed (arch names with size
+        # deltas)
+        text = '\n'.join(line.text for line in lines)
+        self.assertIn('arm', text)
+        self.assertIn('sandbox', text)
+        self.assertIn('(no errors to report)', lines[-1].text)
+
+        # Now run with -B to show bloat (function size changes)
+        self._make_calls = 0
+        self._RunControl('-b', TEST_BRANCH, '-sSB', '-o', self._output_dir,
+                         clean_dir=False)
+        self.assertEqual(self._make_calls, 0)
+        lines = terminal.get_print_test_lines()
+        text = '\n'.join(line.text for line in lines)
+        # Check function names appear in the bloat output
+        self.assertIn('main', text)
+        self.assertIn('board_init', text)
+        self.assertIn('(no errors to report)', lines[-1].text)
+
     def testCount(self):
         """Test building a specific number of commitst"""
         self._RunControl('-b', TEST_BRANCH, '-c2', '-o', self._output_dir)
-- 
2.43.0
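[Reviewer note] For readers unfamiliar with the stub output above:
`nm --size-sort` emits `<size> <type> <name>` lines, and buildman keeps
only symbols whose type letter is in its NM_SYMBOL_TYPES set. A minimal
sketch of that filtering (illustration only; parse_nm is a hypothetical
name, not the buildman API):

```python
def parse_nm(output, types='tTdDbBr'):
    """Parse 'nm --size-sort' style output into {name: size}.

    Only symbols whose one-letter type is in `types` are kept, so e.g.
    weak symbols ('W') are dropped, as the tests in this series assert.
    """
    sizes = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1] in types:
            sizes[parts[2]] = int(parts[0], 16)  # size field is hex
    return sizes

print(parse_nm('00000100 T main\n00000010 W weak_func\n'))
```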
From: Simon Glass <simon.glass@canonical.com>

Add a new test_builder.py file with unit tests for the
print_func_size_detail() method in Builder class. This method displays
detailed function size changes between builds.

Tests cover:

- No size changes (no output expected)
- Function grows in size
- Function shrinks in size
- New function added
- Function removed
- Mixed changes (add, remove, grow, shrink)
- Empty dictionaries

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/main.py         |   2 +
 tools/buildman/test_builder.py | 144 +++++++++++++++++++++++++++++++++
 2 files changed, 146 insertions(+)
 create mode 100644 tools/buildman/test_builder.py

diff --git a/tools/buildman/main.py b/tools/buildman/main.py
index 04993dcd87b..70547b2991d 100755
--- a/tools/buildman/main.py
+++ b/tools/buildman/main.py
@@ -39,6 +39,7 @@ def run_tests(skip_net_tests, debug, verbose, args):
     from buildman import test
     from buildman import test_boards
     from buildman import test_bsettings
+    from buildman import test_builder
 
     test_name = args.terms and args.terms[0] or None
     if skip_net_tests:
@@ -52,6 +53,7 @@ def run_tests(skip_net_tests, debug, verbose, args):
          test.TestBuildConfig, test.TestBuildMisc, test.TestBuilderFuncs,
          func_test.TestFunctional,
          test_boards.TestBoards, test_bsettings.TestBsettings,
+         test_builder.TestPrintFuncSizeDetail,
          'buildman.toolchain'])
 
     return (0 if result.wasSuccessful() else 1)
diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py
new file mode 100644
index 00000000000..5d003de1966
--- /dev/null
+++ b/tools/buildman/test_builder.py
@@ -0,0 +1,144 @@
+# SPDX-License-Identifier: GPL-2.0+
+# Copyright 2025 Google LLC
+# Written by Simon Glass <sjg@chromium.org>
+
+"""Unit tests for builder.py"""
+
+import unittest
+
+from buildman import builder
+from u_boot_pylib import terminal
+
+
+class TestPrintFuncSizeDetail(unittest.TestCase):
+    """Tests for Builder.print_func_size_detail()"""
+
+    def setUp(self):
+        """Set up test fixtures"""
+        # Create a minimal Builder for testing
+        self.builder = builder.Builder(
+            toolchains=None, base_dir='/tmp', git_dir=None, num_threads=0,
+            num_jobs=1)
+        terminal.set_print_test_mode()
+
+    def tearDown(self):
+        """Clean up after tests"""
+        terminal.set_print_test_mode(False)
+
+    def test_no_change(self):
+        """Test with no size changes"""
+        old = {'func_a': 100, 'func_b': 200}
+        new = {'func_a': 100, 'func_b': 200}
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', old, new)
+        lines = terminal.get_print_test_lines()
+
+        # No output when there are no changes
+        self.assertEqual(len(lines), 0)
+
+    def test_function_grows(self):
+        """Test when a function grows in size"""
+        old = {'func_a': 100}
+        new = {'func_a': 150}
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', old, new)
+        lines = terminal.get_print_test_lines()
+
+        text = '\n'.join(line.text for line in lines)
+        self.assertIn('u-boot', text)
+        self.assertIn('func_a', text)
+        self.assertIn('grow:', text)
+        # Check old, new and delta values appear
+        self.assertIn('100', text)
+        self.assertIn('150', text)
+        self.assertIn('+50', text)
+
+    def test_function_shrinks(self):
+        """Test when a function shrinks in size"""
+        old = {'func_a': 200}
+        new = {'func_a': 150}
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', old, new)
+        lines = terminal.get_print_test_lines()
+
+        text = '\n'.join(line.text for line in lines)
+        self.assertIn('func_a', text)
+        self.assertIn('-50', text)
+
+    def test_function_added(self):
+        """Test when a new function is added"""
+        old = {'func_a': 100}
+        new = {'func_a': 100, 'func_b': 200}
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', old, new)
+        lines = terminal.get_print_test_lines()
+
+        text = '\n'.join(line.text for line in lines)
+        self.assertIn('func_b', text)
+        self.assertIn('add:', text)
+        # New function shows '-' for old value
+        self.assertIn('-', text)
+        self.assertIn('200', text)
+
+    def test_function_removed(self):
+        """Test when a function is removed"""
+        old = {'func_a': 100, 'func_b': 200}
+        new = {'func_a': 100}
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', old, new)
+        lines = terminal.get_print_test_lines()
+
+        text = '\n'.join(line.text for line in lines)
+        self.assertIn('func_b', text)
+        # Removed function shows '-' for new value
+        self.assertIn('-200', text)
+
+    def test_mixed_changes(self):
+        """Test with a mix of added, removed, grown and shrunk functions"""
+        old = {
+            'unchanged': 100,
+            'grown': 200,
+            'shrunk': 300,
+            'removed': 400,
+        }
+        new = {
+            'unchanged': 100,
+            'grown': 250,
+            'shrunk': 250,
+            'added': 150,
+        }
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', old, new)
+        lines = terminal.get_print_test_lines()
+
+        text = '\n'.join(line.text for line in lines)
+        # Check all changed functions appear
+        self.assertIn('grown', text)
+        self.assertIn('shrunk', text)
+        self.assertIn('removed', text)
+        self.assertIn('added', text)
+        # unchanged should not appear (no delta)
+        # Check the header line appears
+        self.assertIn('function', text)
+        self.assertIn('old', text)
+        self.assertIn('new', text)
+        self.assertIn('delta', text)
+
+    def test_empty_dicts(self):
+        """Test with empty dictionaries"""
+        terminal.get_print_test_lines()  # Clear
+        self.builder.print_func_size_detail('u-boot', {}, {})
+        lines = terminal.get_print_test_lines()
+
+        # No output when both dicts are empty
+        self.assertEqual(len(lines), 0)
+
+
+if __name__ == '__main__':
+    unittest.main()
-- 
2.43.0
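[Reviewer note] The core comparison that print_func_size_detail() performs
(bloat-o-meter style) is: take old and new {function: size} dicts and
report per-function deltas, treating entries missing on one side as
additions or removals. A hedged standalone sketch, not the buildman code
itself (func_size_deltas is a hypothetical name):

```python
def func_size_deltas(old, new):
    """Return (name, old_size, new_size, delta) for every changed function.

    Functions missing from `old` are additions, missing from `new` are
    removals; unchanged functions produce no entry, matching the
    "no output when there are no changes" cases tested above.
    """
    deltas = []
    for name in sorted(set(old) | set(new)):
        before = old.get(name)
        after = new.get(name)
        if before == after:
            continue  # unchanged: skip, like test_no_change expects
        deltas.append((name, before, after, (after or 0) - (before or 0)))
    return deltas

print(func_size_deltas({'grown': 200, 'removed': 400},
                       {'grown': 250, 'added': 150}))
```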
From: Simon Glass <simon.glass@canonical.com>

Extend testBranchSummary() to test the -K flag, which shows configuration
changes between commits. Create u-boot.cfg files with varying content in
each commit's output directory to simulate config changes.

This significantly improves coverage of the config change display code in
builder.py, including _show_config_changes(), _calc_config_changes() and
_print_arch_config_summary().

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/func_test.py | 39 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py
index 69f6b25fa5a..817cd6e7a7f 100644
--- a/tools/buildman/func_test.py
+++ b/tools/buildman/func_test.py
@@ -490,7 +490,18 @@ Idx Name          Size      VMA       LMA       File off  Algn
             return command.CommandResult(return_code=0)
         elif stage == 'config':
             fname = os.path.join(cwd or '', out_dir, '.config')
-            tools.write_file(fname, b'CONFIG_SOMETHING=1')
+            # Vary config based on commit to simulate config changes
+            seq = commit.sequence if hasattr(commit, 'sequence') else 0
+            config = f'CONFIG_SOMETHING={seq + 1}\n'
+            if seq > 0:
+                config += 'CONFIG_NEW_OPTION=y\n'
+            tools.write_file(fname, config.encode('utf-8'))
+            # Also create u-boot.cfg which buildman reads for -K flag
+            cfg_fname = os.path.join(cwd or '', out_dir, 'u-boot.cfg')
+            cfg_content = f'#define CONFIG_VALUE {seq + 100}\n'
+            if seq > 0:
+                cfg_content += '#define CONFIG_EXTRA 1\n'
+            tools.write_file(cfg_fname, cfg_content.encode('utf-8'))
             return command.CommandResult(return_code=0,
                                          combined='Test configuration complete')
         elif stage == 'oldconfig':
@@ -618,6 +629,32 @@ Some images are invalid'''
         self.assertIn('board_init', text)
         self.assertIn('(no errors to report)', lines[-1].text)
 
+        # Now run with -K to show config changes
+        # First, create config files in the output directory to simulate
+        # varying configs between commits. Use the builder to get correct
+        # paths.
+        for commit_num in range(self._commits):
+            for brd in BOARDS:
+                target = brd[6]  # target name is 7th element
+                board_dir = self._builder.get_build_dir(commit_num, target)
+                cfg_fname = os.path.join(board_dir, 'u-boot.cfg')
+                cfg_content = f'#define CONFIG_VALUE {commit_num + 100}\n'
+                if commit_num == 0:
+                    # Add a config that will be removed in later commits
+                    cfg_content += '#define CONFIG_OLD_OPTION 1\n'
+                if commit_num > 0:
+                    cfg_content += '#define CONFIG_NEW_OPTION 1\n'
+                tools.write_file(cfg_fname, cfg_content.encode('utf-8'))
+
+        self._make_calls = 0
+        self._RunControl('-b', TEST_BRANCH, '-sK', '-o', self._output_dir,
+                         clean_dir=False)
+        self.assertEqual(self._make_calls, 0)
+        lines = terminal.get_print_test_lines()
+        text = '\n'.join(line.text for line in lines)
+        # Check config options appear in the output
+        self.assertIn('CONFIG_', text)
+        self.assertIn('(no errors to report)', lines[-1].text)
+
     def testCount(self):
         """Test building a specific number of commitst"""
         self._RunControl('-b', TEST_BRANCH, '-c2', '-o', self._output_dir)
-- 
2.43.0
From: Simon Glass <simon.glass@canonical.com>

Config value changes are never detected because tconfig.config.get(key)
looks up the config key in the filename-keyed dict instead of the config
dict for the current file.

Change to tconfig.config[name].get(key) to correctly look up the config
value in the current file's config dict.

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/builder.py   | 2 +-
 tools/buildman/func_test.py | 6 ++++--
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py
index 4bcde84adca..3cb14af279f 100644
--- a/tools/buildman/builder.py
+++ b/tools/buildman/builder.py
@@ -1624,7 +1624,7 @@ class Builder:
                     config_minus[key] = value
                     all_config_minus[key] = value
             for key, value in base.items():
-                new_value = tconfig.config.get(key)
+                new_value = tconfig.config[name].get(key)
                 if new_value and value != new_value:
                     desc = f'{value} -> {new_value}'
                     config_change[key] = desc
diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py
index 817cd6e7a7f..b80779017b4 100644
--- a/tools/buildman/func_test.py
+++ b/tools/buildman/func_test.py
@@ -651,8 +651,10 @@ Some images are invalid'''
         self.assertEqual(self._make_calls, 0)
         lines = terminal.get_print_test_lines()
         text = '\n'.join(line.text for line in lines)
-        # Check config options appear in the output
-        self.assertIn('CONFIG_', text)
+        # Check config options appear in the output - both additions and
+        # value changes should be detected
+        self.assertIn('CONFIG_NEW_OPTION', text)  # Addition
+        self.assertIn('CONFIG_VALUE', text)  # Value change
         self.assertIn('(no errors to report)', lines[-1].text)
 
     def testCount(self):
-- 
2.43.0
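[Reviewer note] The bug fixed above is the classic nested-dict lookup
mistake: tconfig.config is keyed by filename, with each value being that
file's {option: value} dict, so a top-level .get(option) always returns
None and value changes are never reported. A minimal illustration using
hypothetical data (not the actual buildman structures):

```python
# tconfig.config-style structure: filename -> {option: value}
config = {
    'u-boot.cfg': {'CONFIG_VALUE': '101'},
}
name = 'u-boot.cfg'
key = 'CONFIG_VALUE'

# The broken lookup searches the filename level for an option key,
# so it always misses and change detection silently does nothing.
broken = config.get(key)

# The fix descends into the current file's dict first.
fixed = config[name].get(key)

print(broken, fixed)
```

Because `new_value and value != new_value` guards the comparison, a None
from the broken lookup meant the value-change branch could never fire.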
From: Simon Glass <simon.glass@canonical.com>

Add testWarningsAsErrors() to verify the -E flag adds KCFLAGS=-Werror and
HOSTCFLAGS=-Werror to make arguments. Also add infrastructure to capture
make arguments for inspection by tests.

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/func_test.py | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py
index b80779017b4..f224a80a3f9 100644
--- a/tools/buildman/func_test.py
+++ b/tools/buildman/func_test.py
@@ -482,6 +482,9 @@ Idx Name          Size      VMA       LMA       File off  Algn
             kwargs: Arguments to pass to command.run_one()
         """
         self._make_calls += 1
+        # Capture args for tests that need to inspect them
+        if hasattr(self, '_captured_make_args') and stage == 'build':
+            self._captured_make_args.append(args)
         out_dir = ''
         for arg in args:
             if arg.startswith('O='):
@@ -657,6 +660,21 @@ Some images are invalid'''
         self.assertIn('CONFIG_VALUE', text)  # Value change
         self.assertIn('(no errors to report)', lines[-1].text)
 
+    def testWarningsAsErrors(self):
+        """Test the -E flag adds -Werror to make arguments"""
+        self._captured_make_args = []
+        self._RunControl('-o', self._output_dir, '-E')
+
+        # Check that at least one build had -Werror flags
+        found_werror = False
+        for args in self._captured_make_args:
+            args_str = ' '.join(args)
+            if 'KCFLAGS=-Werror' in args_str:
+                found_werror = True
+                self.assertIn('HOSTCFLAGS=-Werror', args_str)
+                break
+        self.assertTrue(found_werror, 'KCFLAGS=-Werror not found in make args')
+
     def testCount(self):
         """Test building a specific number of commitst"""
         self._RunControl('-b', TEST_BRANCH, '-c2', '-o', self._output_dir)
-- 
2.43.0
From: Simon Glass <simon.glass@canonical.com>

Extend testBranchSummary() to test the -U flag, which shows environment
variable changes between commits. Create uboot.env files with varying
content in each commit's output directory to simulate environment changes.

This improves coverage of the environment change display code in
builder.py, including _show_environment_changes() and related functions.

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/func_test.py | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py
index f224a80a3f9..0ee52d16703 100644
--- a/tools/buildman/func_test.py
+++ b/tools/buildman/func_test.py
@@ -660,6 +660,32 @@ Some images are invalid'''
         self.assertIn('CONFIG_VALUE', text)  # Value change
         self.assertIn('(no errors to report)', lines[-1].text)
 
+        # Now run with -U to show environment changes
+        # Create uboot.env files with varying content between commits
+        for commit_num in range(self._commits):
+            for brd in BOARDS:
+                target = brd[6]  # target name is 7th element
+                board_dir = self._builder.get_build_dir(commit_num, target)
+                env_fname = os.path.join(board_dir, 'uboot.env')
+                # Environment uses null-terminated strings
+                env_content = f'bootdelay={commit_num + 1}\x00'
+                if commit_num == 0:
+                    # Add a variable that will be removed in later commits
+                    env_content += 'oldvar=removed\x00'
+                if commit_num > 0:
+                    env_content += 'newvar=value\x00'
+                tools.write_file(env_fname, env_content.encode('utf-8'))
+
+        self._make_calls = 0
+        self._RunControl('-b', TEST_BRANCH, '-sU', '-o', self._output_dir,
+                         clean_dir=False)
+        self.assertEqual(self._make_calls, 0)
+        lines = terminal.get_print_test_lines()
+        text = '\n'.join(line.text for line in lines)
+        # Check environment variables appear in the output
+        self.assertIn('bootdelay', text)
+        self.assertIn('(no errors to report)', lines[-1].text)
+
     def testWarningsAsErrors(self):
         """Test the -E flag adds -Werror to make arguments"""
         self._captured_make_args = []
-- 
2.43.0
From: Simon Glass <simon.glass@canonical.com> Add testCurrentSourceIde() to verify that IDE mode (-I flag) correctly outputs build errors to stderr and suppresses progress output to stdout. Also update _HandleMake() to support error injection for current source builds where the commit parameter is the string 'current'. Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Simon Glass <simon.glass@canonical.com> --- tools/buildman/func_test.py | 74 +++++++++++++++++++++++++++++++++++++ 1 file changed, 74 insertions(+) diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py index 0ee52d16703..efd66796a65 100644 --- a/tools/buildman/func_test.py +++ b/tools/buildman/func_test.py @@ -525,6 +525,9 @@ Some images are invalid''' stderr = "binman: Filename 'fsp.bin' not found in input path" elif type(commit) is not str: stderr = self._error.get((brd.target, commit.sequence)) + else: + # For current source builds, commit is 'current' + stderr = self._error.get((brd.target, commit)) if stderr: return command.CommandResult(return_code=2, stderr=stderr) @@ -593,6 +596,77 @@ Some images are invalid''' self.assertEqual(self._builder.count, self._total_builds) self.assertEqual(self._builder.fail, 0) + def testCurrentSourceIde(self): + """Test building current source with IDE mode enabled + + This tests that: + - Build errors are written to stderr + - Progress output does not go to stdout in IDE mode + - Summary mode (-s) shows output again + """ + # Set up a build error for sandbox board (board4) which has toolchain + # For current source builds, commit is 'current' string + error_msg = 'test_error_msg.c:123: error: test failure\n' + self._error['board4', 'current'] = error_msg + + # Capture stderr during the build + captured_stderr = io.StringIO() + old_stderr = sys.stderr + try: + sys.stderr = captured_stderr + terminal.get_print_test_lines() # Clear any previous output + self._RunControl('-o', self._output_dir, '-I') + finally: + sys.stderr = 
old_stderr + + # Verify there is a build failure + self.assertEqual(self._builder.fail, 1) + + # Check stderr has exactly the expected error + self.assertEqual(captured_stderr.getvalue(), + 'test_error_msg.c:123: error: test failure\n') + + # In IDE mode, there should be no stdout output at all + self.assertEqual(terminal.get_print_test_lines(), []) + + # Now run with -s to show summary - output should appear again + terminal.get_print_test_lines() # Clear + self._RunControl('-o', self._output_dir, '-s', clean_dir=False) + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 2) + self.assertIn('Summary of', lines[0].text) + self.assertIn('board4', lines[1].text) + + def testBranchIde(self): + """Test building a branch with IDE mode and summary + + This tests _print_ide_output() which outputs errors to stderr during + the summary phase for branch builds. + """ + # Set up error for commit 1 on sandbox board + error_msg = 'branch_error.c:456: error: branch failure\n' + self._error['board4', 1] = error_msg + + # Build branch normally first (writes results to disk) + terminal.get_print_test_lines() + self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir) + self.assertEqual(self._builder.fail, 1) + + # Run summary with IDE mode - errors go to stderr via _print_ide_output + captured_stderr = io.StringIO() + old_stderr = sys.stderr + try: + sys.stderr = captured_stderr + terminal.get_print_test_lines() + self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir, '-sI', + clean_dir=False) + finally: + sys.stderr = old_stderr + + # Check stderr has the error from _print_ide_output + self.assertEqual(captured_stderr.getvalue(), + 'branch_error.c:456: error: branch failure\n') + def testBranchSummary(self): """Test building a branch and then showing a summary""" self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir) -- 2.43.0
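The stderr capture in these tests uses a manual save/restore of sys.stderr; contextlib.redirect_stderr from the standard library gives the same effect with less boilerplate (a sketch of the equivalent pattern, not a suggested change to the patch):

```python
import contextlib
import io
import sys

captured = io.StringIO()
# Everything written to sys.stderr inside the block lands in 'captured'
with contextlib.redirect_stderr(captured):
    print('test_error_msg.c:123: error: test failure', file=sys.stderr)
assert captured.getvalue() == 'test_error_msg.c:123: error: test failure\n'
```

The try/finally form used in the patch has the advantage of restoring sys.stderr even if _RunControl() replaces it internally.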
From: Simon Glass <simon.glass@canonical.com> Add TestPrepareThread class to test_builder.py with tests covering: - No git setup (setup_git=None) - Existing clone (fetches updates) - Existing worktree (no action needed) - Invalid git_dir (neither file nor directory) - Creating new worktree - Creating new clone - Clone with setup_git=True - Invalid setup_git value Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Simon Glass <simon.glass@canonical.com> --- tools/buildman/main.py | 1 + tools/buildman/test_builder.py | 118 +++++++++++++++++++++++++++++++++ 2 files changed, 119 insertions(+) diff --git a/tools/buildman/main.py b/tools/buildman/main.py index 70547b2991d..6e780b9e07e 100755 --- a/tools/buildman/main.py +++ b/tools/buildman/main.py @@ -54,6 +54,7 @@ def run_tests(skip_net_tests, debug, verbose, args): func_test.TestFunctional, test_boards.TestBoards, test_bsettings.TestBsettings, test_builder.TestPrintFuncSizeDetail, + test_builder.TestPrepareThread, 'buildman.toolchain']) return (0 if result.wasSuccessful() else 1) diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py index 5d003de1966..042dacc2f34 100644 --- a/tools/buildman/test_builder.py +++ b/tools/buildman/test_builder.py @@ -4,9 +4,13 @@ """Unit tests for builder.py""" +import os import unittest +from unittest import mock from buildman import builder +from buildman import builderthread +from u_boot_pylib import gitutil from u_boot_pylib import terminal @@ -140,5 +144,119 @@ class TestPrintFuncSizeDetail(unittest.TestCase): self.assertEqual(len(lines), 0) +class TestPrepareThread(unittest.TestCase): + """Tests for Builder._prepare_thread()""" + + def setUp(self): + """Set up test fixtures""" + self.builder = builder.Builder( + toolchains=None, base_dir='/tmp/test', git_dir='/src/repo', + num_threads=4, num_jobs=1) + terminal.set_print_test_mode() + + def tearDown(self): + """Clean up after tests""" + terminal.set_print_test_mode(False) + + 
@mock.patch.object(builderthread, 'mkdir') + def test_no_setup_git(self, mock_mkdir): + """Test with setup_git=None (no git setup needed)""" + self.builder._prepare_thread(0, None) + mock_mkdir.assert_called_once() + + @mock.patch.object(gitutil, 'fetch') + @mock.patch.object(os.path, 'isdir', return_value=True) + @mock.patch.object(builderthread, 'mkdir') + def test_existing_clone(self, mock_mkdir, mock_isdir, mock_fetch): + """Test with existing git clone (fetches updates)""" + terminal.get_print_test_lines() # Clear + self.builder._prepare_thread(0, 'clone') + + mock_fetch.assert_called_once() + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 1) + self.assertIn('Fetching repo', lines[0].text) + + @mock.patch.object(os.path, 'isfile', return_value=True) + @mock.patch.object(os.path, 'isdir', return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_existing_worktree(self, mock_mkdir, mock_isdir, mock_isfile): + """Test with existing worktree (no action needed)""" + terminal.get_print_test_lines() # Clear + self.builder._prepare_thread(0, 'worktree') + + # No git operations should be called + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 0) + + @mock.patch.object(os.path, 'exists', return_value=True) + @mock.patch.object(os.path, 'isfile', return_value=False) + @mock.patch.object(os.path, 'isdir', return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_invalid_git_dir(self, mock_mkdir, mock_isdir, mock_isfile, + mock_exists): + """Test with git_dir that exists but is neither file nor directory""" + with self.assertRaises(ValueError) as ctx: + self.builder._prepare_thread(0, 'clone') + self.assertIn('exists, but is not a file or a directory', + str(ctx.exception)) + + @mock.patch.object(gitutil, 'add_worktree') + @mock.patch.object(os.path, 'exists', return_value=False) + @mock.patch.object(os.path, 'isfile', return_value=False) + @mock.patch.object(os.path, 'isdir', 
return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_create_worktree(self, mock_mkdir, mock_isdir, mock_isfile, + mock_exists, mock_add_worktree): + """Test creating a new worktree""" + terminal.get_print_test_lines() # Clear + self.builder._prepare_thread(0, 'worktree') + + mock_add_worktree.assert_called_once() + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 1) + self.assertIn('Checking out worktree', lines[0].text) + + @mock.patch.object(gitutil, 'clone') + @mock.patch.object(os.path, 'exists', return_value=False) + @mock.patch.object(os.path, 'isfile', return_value=False) + @mock.patch.object(os.path, 'isdir', return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_create_clone(self, mock_mkdir, mock_isdir, mock_isfile, + mock_exists, mock_clone): + """Test creating a new clone""" + terminal.get_print_test_lines() # Clear + self.builder._prepare_thread(0, 'clone') + + mock_clone.assert_called_once() + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 1) + self.assertIn('Cloning repo', lines[0].text) + + @mock.patch.object(gitutil, 'clone') + @mock.patch.object(os.path, 'exists', return_value=False) + @mock.patch.object(os.path, 'isfile', return_value=False) + @mock.patch.object(os.path, 'isdir', return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_create_clone_with_true(self, mock_mkdir, mock_isdir, mock_isfile, + mock_exists, mock_clone): + """Test creating a clone when setup_git=True""" + terminal.get_print_test_lines() # Clear + self.builder._prepare_thread(0, True) + + mock_clone.assert_called_once() + + @mock.patch.object(os.path, 'exists', return_value=False) + @mock.patch.object(os.path, 'isfile', return_value=False) + @mock.patch.object(os.path, 'isdir', return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_invalid_setup_git(self, mock_mkdir, mock_isdir, mock_isfile, + mock_exists): + """Test with invalid setup_git 
value""" + with self.assertRaises(ValueError) as ctx: + self.builder._prepare_thread(0, 'invalid') + self.assertIn("Can't setup git repo", str(ctx.exception)) + + if __name__ == '__main__': unittest.main() -- 2.43.0
From: Simon Glass <simon.glass@canonical.com> Add TestPrepareWorkingSpace class to test_builder.py with tests covering: - No git setup (setup_git=False) - Worktree available (uses worktree and prunes) - Worktree not available (falls back to clone) - Zero threads (should still prepare 1 thread) - No git_dir set (skips git operations) Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Simon Glass <simon.glass@canonical.com> --- tools/buildman/main.py | 1 + tools/buildman/test_builder.py | 80 ++++++++++++++++++++++++++++++++++ 2 files changed, 81 insertions(+) diff --git a/tools/buildman/main.py b/tools/buildman/main.py index 6e780b9e07e..74da226c2cf 100755 --- a/tools/buildman/main.py +++ b/tools/buildman/main.py @@ -55,6 +55,7 @@ def run_tests(skip_net_tests, debug, verbose, args): test_boards.TestBoards, test_bsettings.TestBsettings, test_builder.TestPrintFuncSizeDetail, test_builder.TestPrepareThread, + test_builder.TestPrepareWorkingSpace, 'buildman.toolchain']) return (0 if result.wasSuccessful() else 1) diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py index 042dacc2f34..d7359477ef2 100644 --- a/tools/buildman/test_builder.py +++ b/tools/buildman/test_builder.py @@ -258,5 +258,85 @@ class TestPrepareThread(unittest.TestCase): self.assertIn("Can't setup git repo", str(ctx.exception)) +class TestPrepareWorkingSpace(unittest.TestCase): + """Tests for Builder._prepare_working_space()""" + + def setUp(self): + """Set up test fixtures""" + self.builder = builder.Builder( + toolchains=None, base_dir='/tmp/test', git_dir='/src/repo', + num_threads=4, num_jobs=1) + terminal.set_print_test_mode() + + def tearDown(self): + """Clean up after tests""" + terminal.set_print_test_mode(False) + + @mock.patch.object(builder.Builder, '_prepare_thread') + @mock.patch.object(builderthread, 'mkdir') + def test_no_setup_git(self, mock_mkdir, mock_prepare_thread): + """Test with setup_git=False""" + self.builder._prepare_working_space(2, 
False) + + mock_mkdir.assert_called_once() + # Should prepare 2 threads with setup_git=False + self.assertEqual(mock_prepare_thread.call_count, 2) + mock_prepare_thread.assert_any_call(0, False) + mock_prepare_thread.assert_any_call(1, False) + + @mock.patch.object(builder.Builder, '_prepare_thread') + @mock.patch.object(gitutil, 'prune_worktrees') + @mock.patch.object(gitutil, 'check_worktree_is_available', return_value=True) + @mock.patch.object(builderthread, 'mkdir') + def test_worktree_available(self, mock_mkdir, mock_check_worktree, + mock_prune, mock_prepare_thread): + """Test when worktree is available""" + self.builder._prepare_working_space(3, True) + + mock_check_worktree.assert_called_once() + mock_prune.assert_called_once() + # Should prepare 3 threads with setup_git='worktree' + self.assertEqual(mock_prepare_thread.call_count, 3) + mock_prepare_thread.assert_any_call(0, 'worktree') + mock_prepare_thread.assert_any_call(1, 'worktree') + mock_prepare_thread.assert_any_call(2, 'worktree') + + @mock.patch.object(builder.Builder, '_prepare_thread') + @mock.patch.object(gitutil, 'check_worktree_is_available', return_value=False) + @mock.patch.object(builderthread, 'mkdir') + def test_worktree_not_available(self, mock_mkdir, mock_check_worktree, + mock_prepare_thread): + """Test when worktree is not available (falls back to clone)""" + self.builder._prepare_working_space(2, True) + + mock_check_worktree.assert_called_once() + # Should prepare 2 threads with setup_git='clone' + self.assertEqual(mock_prepare_thread.call_count, 2) + mock_prepare_thread.assert_any_call(0, 'clone') + mock_prepare_thread.assert_any_call(1, 'clone') + + @mock.patch.object(builder.Builder, '_prepare_thread') + @mock.patch.object(builderthread, 'mkdir') + def test_zero_threads(self, mock_mkdir, mock_prepare_thread): + """Test with max_threads=0 (should still prepare 1 thread)""" + self.builder._prepare_working_space(0, False) + + # Should prepare at least 1 thread + 
self.assertEqual(mock_prepare_thread.call_count, 1) + mock_prepare_thread.assert_called_with(0, False) + + @mock.patch.object(builder.Builder, '_prepare_thread') + @mock.patch.object(builderthread, 'mkdir') + def test_no_git_dir(self, mock_mkdir, mock_prepare_thread): + """Test with no git_dir set""" + self.builder.git_dir = None + self.builder._prepare_working_space(2, True) + + # setup_git should remain True but git operations skipped + self.assertEqual(mock_prepare_thread.call_count, 2) + mock_prepare_thread.assert_any_call(0, True) + mock_prepare_thread.assert_any_call(1, True) + + if __name__ == '__main__': unittest.main() -- 2.43.0
From: Simon Glass <simon.glass@canonical.com> Add TestPrepareOutputSpace class to test_builder.py with tests: - _get_output_space_removals() with no commits - _get_output_space_removals() with no old directories - _get_output_space_removals() identifying old directories by pattern - _prepare_output_space() with nothing to remove - _prepare_output_space() removing directories and printing message The test verifies that the 'Removing' message is printed with newline=False so it can be overwritten by subsequent output. Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Simon Glass <simon.glass@canonical.com> --- tools/buildman/main.py | 1 + tools/buildman/test_builder.py | 84 ++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+) diff --git a/tools/buildman/main.py b/tools/buildman/main.py index 74da226c2cf..449263b48e8 100755 --- a/tools/buildman/main.py +++ b/tools/buildman/main.py @@ -56,6 +56,7 @@ def run_tests(skip_net_tests, debug, verbose, args): test_builder.TestPrintFuncSizeDetail, test_builder.TestPrepareThread, test_builder.TestPrepareWorkingSpace, + test_builder.TestPrepareOutputSpace, 'buildman.toolchain']) return (0 if result.wasSuccessful() else 1) diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py index d7359477ef2..b17d7942e1d 100644 --- a/tools/buildman/test_builder.py +++ b/tools/buildman/test_builder.py @@ -5,6 +5,7 @@ """Unit tests for builder.py""" import os +import shutil import unittest from unittest import mock @@ -338,5 +339,88 @@ class TestPrepareWorkingSpace(unittest.TestCase): mock_prepare_thread.assert_any_call(1, True) +class TestPrepareOutputSpace(unittest.TestCase): + """Tests for Builder._prepare_output_space() and _get_output_space_removals()""" + + def setUp(self): + """Set up test fixtures""" + self.builder = builder.Builder( + toolchains=None, base_dir='/tmp/test', git_dir='/src/repo', + num_threads=4, num_jobs=1) + terminal.set_print_test_mode() + + def tearDown(self): 
+ """Clean up after tests""" + terminal.set_print_test_mode(False) + + def test_get_removals_no_commits(self): + """Test _get_output_space_removals with no commits""" + self.builder.commits = None + result = self.builder._get_output_space_removals() + self.assertEqual(result, []) + + @mock.patch.object(builder.Builder, 'get_output_dir') + @mock.patch('glob.glob') + def test_get_removals_no_old_dirs(self, mock_glob, mock_get_output_dir): + """Test _get_output_space_removals with no old directories""" + self.builder.commits = [mock.Mock()] # Non-empty to trigger logic + self.builder.commit_count = 1 + mock_get_output_dir.return_value = '/tmp/test/01_gabcdef1_test' + mock_glob.return_value = [] + + result = self.builder._get_output_space_removals() + self.assertEqual(result, []) + + @mock.patch.object(builder.Builder, 'get_output_dir') + @mock.patch('glob.glob') + def test_get_removals_with_old_dirs(self, mock_glob, mock_get_output_dir): + """Test _get_output_space_removals identifies old directories""" + self.builder.commits = [mock.Mock()] # Non-empty to trigger logic + self.builder.commit_count = 1 + mock_get_output_dir.return_value = '/tmp/test/01_gabcdef1_current' + # Simulate old directories with buildman naming pattern + mock_glob.return_value = [ + '/tmp/test/01_gabcdef1_current', # Current - should not remove + '/tmp/test/02_g1234567_old', # Old - should remove + '/tmp/test/random_dir', # Not matching pattern - keep + ] + + result = self.builder._get_output_space_removals() + self.assertEqual(result, ['/tmp/test/02_g1234567_old']) + + @mock.patch.object(builder.Builder, '_get_output_space_removals') + def test_prepare_output_space_nothing_to_remove(self, mock_get_removals): + """Test _prepare_output_space with nothing to remove""" + mock_get_removals.return_value = [] + terminal.get_print_test_lines() # Clear + + self.builder._prepare_output_space() + + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 0) + + @mock.patch.object(shutil, 
'rmtree') + @mock.patch.object(builder.Builder, '_get_output_space_removals') + def test_prepare_output_space_removes_dirs(self, mock_get_removals, + mock_rmtree): + """Test _prepare_output_space removes old directories""" + mock_get_removals.return_value = ['/tmp/test/old1', '/tmp/test/old2'] + terminal.get_print_test_lines() # Clear + + self.builder._prepare_output_space() + + # Check rmtree was called for each directory + self.assertEqual(mock_rmtree.call_count, 2) + mock_rmtree.assert_any_call('/tmp/test/old1') + mock_rmtree.assert_any_call('/tmp/test/old2') + + # Check 'Removing' message was printed + lines = terminal.get_print_test_lines() + self.assertEqual(len(lines), 1) + self.assertIn('Removing 2 old build directories', lines[0].text) + # Check newline=False was used (message should be overwritten) + self.assertFalse(lines[0].newline) + + if __name__ == '__main__': unittest.main() -- 2.43.0
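The old-directory detection that test_get_removals_with_old_dirs exercises amounts to a glob-plus-pattern filter. The `NN_g<hash>_<subject>` pattern below is inferred from the test fixtures, not taken from builder.py, and `find_removals` is an illustrative stand-in for the real method:

```python
import os
import re

# Inferred buildman output-directory pattern: sequence number, then
# 'g' plus an abbreviated commit hash, then the commit subject
DIR_RE = re.compile(r'^\d+_g[0-9a-f]+_')

def find_removals(existing_dirs, keep_dirs):
    """Return directories matching the build-dir pattern that are not
    in the set of directories to keep."""
    return [path for path in existing_dirs
            if DIR_RE.match(os.path.basename(path))
            and path not in keep_dirs]

dirs = ['/tmp/test/01_gabcdef1_current',
        '/tmp/test/02_g1234567_old',
        '/tmp/test/random_dir']
assert find_removals(dirs, {'/tmp/test/01_gabcdef1_current'}) == \
    ['/tmp/test/02_g1234567_old']
```

Requiring the pattern match is what protects unrelated directories (the 'random_dir' case in the test) from being deleted.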
From: Simon Glass <simon.glass@canonical.com> Add testToolchainErrors() functional test to verify that boards with missing toolchains are reported in the summary display, grouped by architecture. Add TestShowNotBuilt unit test class with tests for: - All boards built (no output) - Some boards not built (reports missing boards) - No boards built (reports all boards) Update the docstring for _show_not_built() to clarify that it reports boards missing from board_dict. Note that in practice this is unlikely to trigger since get_result_summary() creates an outcome for every board. Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Simon Glass <simon.glass@canonical.com> --- tools/buildman/builder.py | 5 ++++ tools/buildman/func_test.py | 25 ++++++++++++++++ tools/buildman/main.py | 1 + tools/buildman/test_builder.py | 53 ++++++++++++++++++++++++++++++++++ 4 files changed, 84 insertions(+) diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py index 3cb14af279f..99aac80d95e 100644 --- a/tools/buildman/builder.py +++ b/tools/buildman/builder.py @@ -1938,6 +1938,11 @@ class Builder: def _show_not_built(board_selected, board_dict): """Show boards that were not built + This reports boards that are in board_selected but have no outcome in + board_dict. In practice this is unlikely to happen since + get_result_summary() creates an outcome for every board, even if just + OUTCOME_UNKNOWN. 
+ Args: board_selected (dict): Dict of selected boards, keyed by target board_dict (dict): Dict of boards that were built, keyed by target diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py index efd66796a65..21c700aa073 100644 --- a/tools/buildman/func_test.py +++ b/tools/buildman/func_test.py @@ -590,6 +590,31 @@ Some images are invalid''' f"No tool chain found for arch '{brd.arch}'"]) fd.close() + def testToolchainErrors(self): + """Test that toolchain errors are reported in the summary + + When toolchains are missing, boards fail to build. The summary + should report which boards had errors, grouped by architecture. + """ + self.setupToolchains() + # Build with missing toolchains - only sandbox will succeed + self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir) + + # Now show summary - should report boards with errors + terminal.get_print_test_lines() # Clear + self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir, '-s', + clean_dir=False) + lines = terminal.get_print_test_lines() + text = '\n'.join(line.text for line in lines) + + # Check that boards with missing toolchains are shown with errors + # The '+' prefix indicates new errors for these boards + self.assertIn('arm:', text) + self.assertIn('board0', text) + self.assertIn('board1', text) + self.assertIn('powerpc:', text) + self.assertIn('board2', text) + def testBranch(self): """Test building a branch with all toolchains present""" self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir) diff --git a/tools/buildman/main.py b/tools/buildman/main.py index 449263b48e8..18809d843c6 100755 --- a/tools/buildman/main.py +++ b/tools/buildman/main.py @@ -56,6 +56,7 @@ def run_tests(skip_net_tests, debug, verbose, args): test_builder.TestPrintFuncSizeDetail, test_builder.TestPrepareThread, test_builder.TestPrepareWorkingSpace, + test_builder.TestShowNotBuilt, test_builder.TestPrepareOutputSpace, 'buildman.toolchain']) diff --git a/tools/buildman/test_builder.py 
b/tools/buildman/test_builder.py index b17d7942e1d..d31c0080863 100644 --- a/tools/buildman/test_builder.py +++ b/tools/buildman/test_builder.py @@ -339,6 +339,59 @@ class TestPrepareWorkingSpace(unittest.TestCase): mock_prepare_thread.assert_any_call(1, True) +class TestShowNotBuilt(unittest.TestCase): + """Tests for Builder._show_not_built()""" + + def setUp(self): + """Set up test fixtures""" + terminal.set_print_test_mode() + + def tearDown(self): + """Clean up after tests""" + terminal.set_print_test_mode(False) + + def test_all_boards_built(self): + """Test when all selected boards were built""" + board_selected = {'board1': None, 'board2': None} + board_dict = {'board1': None, 'board2': None} + + terminal.get_print_test_lines() # Clear + builder.Builder._show_not_built(board_selected, board_dict) + lines = terminal.get_print_test_lines() + + # No output when all boards were built + self.assertEqual(len(lines), 0) + + def test_some_boards_not_built(self): + """Test when some boards were not built""" + board_selected = {'board1': None, 'board2': None, 'board3': None} + board_dict = {'board1': None} # Only board1 was built + + terminal.get_print_test_lines() # Clear + builder.Builder._show_not_built(board_selected, board_dict) + lines = terminal.get_print_test_lines() + + self.assertEqual(len(lines), 1) + self.assertIn('Boards not built', lines[0].text) + self.assertIn('2', lines[0].text) # Count of not-built boards + self.assertIn('board2', lines[0].text) + self.assertIn('board3', lines[0].text) + + def test_no_boards_built(self): + """Test when no boards were built""" + board_selected = {'board1': None, 'board2': None} + board_dict = {} # No boards built + + terminal.get_print_test_lines() # Clear + builder.Builder._show_not_built(board_selected, board_dict) + lines = terminal.get_print_test_lines() + + self.assertEqual(len(lines), 1) + self.assertIn('Boards not built', lines[0].text) + self.assertIn('board1', lines[0].text) + self.assertIn('board2', 
lines[0].text) + + class TestPrepareOutputSpace(unittest.TestCase): """Tests for Builder._prepare_output_space() and _get_output_space_removals()""" -- 2.43.0
From: Simon Glass <simon.glass@canonical.com> The _show_not_built() function does not actually show boards that fail to build due to missing toolchains. This is because toolchain errors create a done file with return_code=10, resulting in OUTCOME_ERROR rather than OUTCOME_UNKNOWN. Update the function to detect toolchain errors by checking for "Tool chain error" in the error lines of OUTCOME_ERROR results, in addition to checking for OUTCOME_UNKNOWN. Update the unit tests to use mock outcomes with err_lines, and add tests for the new toolchain error detection. Update the functional test to verify the "Boards not built" message appears in the summary. Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com> Signed-off-by: Simon Glass <simon.glass@canonical.com> --- tools/buildman/builder.py | 23 ++++++--- tools/buildman/func_test.py | 12 ++--- tools/buildman/test_builder.py | 89 +++++++++++++++++++++++++++++++--- 3 files changed, 102 insertions(+), 22 deletions(-) diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py index 99aac80d95e..307249b5e13 100644 --- a/tools/buildman/builder.py +++ b/tools/buildman/builder.py @@ -1938,19 +1938,28 @@ class Builder: def _show_not_built(board_selected, board_dict): """Show boards that were not built - This reports boards that are in board_selected but have no outcome in - board_dict. In practice this is unlikely to happen since - get_result_summary() creates an outcome for every board, even if just - OUTCOME_UNKNOWN. + This reports boards that couldn't be built due to toolchain issues. + These have OUTCOME_UNKNOWN (no result file) or OUTCOME_ERROR with + "Tool chain error" in the error lines. 
Args: board_selected (dict): Dict of selected boards, keyed by target board_dict (dict): Dict of boards that were built, keyed by target """ not_built = [] - for brd in board_selected: - if brd not in board_dict: - not_built.append(brd) + for target in board_selected: + if target not in board_dict: + not_built.append(target) + else: + outcome = board_dict[target] + if outcome.rc == OUTCOME_UNKNOWN: + not_built.append(target) + elif outcome.rc == OUTCOME_ERROR: + # Check for toolchain error in the error lines + for line in outcome.err_lines: + if 'Tool chain error' in line: + not_built.append(target) + break if not_built: tprint(f"Boards not built ({len(not_built)}): " f"{', '.join(not_built)}") diff --git a/tools/buildman/func_test.py b/tools/buildman/func_test.py index 21c700aa073..fa946c55645 100644 --- a/tools/buildman/func_test.py +++ b/tools/buildman/func_test.py @@ -593,26 +593,24 @@ Some images are invalid''' def testToolchainErrors(self): """Test that toolchain errors are reported in the summary - When toolchains are missing, boards fail to build. The summary - should report which boards had errors, grouped by architecture. + When toolchains are missing, boards cannot be built. The summary + should report which boards were not built. 
""" self.setupToolchains() # Build with missing toolchains - only sandbox will succeed self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir) - # Now show summary - should report boards with errors + # Now show summary - should report boards not built terminal.get_print_test_lines() # Clear self._RunControl('-b', TEST_BRANCH, '-o', self._output_dir, '-s', clean_dir=False) lines = terminal.get_print_test_lines() text = '\n'.join(line.text for line in lines) - # Check that boards with missing toolchains are shown with errors - # The '+' prefix indicates new errors for these boards - self.assertIn('arm:', text) + # Check that boards with missing toolchains are reported as not built + self.assertIn('Boards not built', text) self.assertIn('board0', text) self.assertIn('board1', text) - self.assertIn('powerpc:', text) self.assertIn('board2', text) def testBranch(self): diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py index d31c0080863..78c80aa6c43 100644 --- a/tools/buildman/test_builder.py +++ b/tools/buildman/test_builder.py @@ -350,10 +350,20 @@ class TestShowNotBuilt(unittest.TestCase): """Clean up after tests""" terminal.set_print_test_mode(False) + def _make_outcome(self, rc, err_lines=None): + """Create a mock outcome with a given return code""" + outcome = mock.Mock() + outcome.rc = rc + outcome.err_lines = err_lines if err_lines else [] + return outcome + def test_all_boards_built(self): - """Test when all selected boards were built""" + """Test when all selected boards were built successfully""" board_selected = {'board1': None, 'board2': None} - board_dict = {'board1': None, 'board2': None} + board_dict = { + 'board1': self._make_outcome(builder.OUTCOME_OK), + 'board2': self._make_outcome(builder.OUTCOME_OK), + } terminal.get_print_test_lines() # Clear builder.Builder._show_not_built(board_selected, board_dict) @@ -362,10 +372,14 @@ class TestShowNotBuilt(unittest.TestCase): # No output when all boards were built 
self.assertEqual(len(lines), 0) - def test_some_boards_not_built(self): - """Test when some boards were not built""" + def test_some_boards_unknown(self): + """Test when some boards have OUTCOME_UNKNOWN (e.g. missing toolchain)""" board_selected = {'board1': None, 'board2': None, 'board3': None} - board_dict = {'board1': None} # Only board1 was built + board_dict = { + 'board1': self._make_outcome(builder.OUTCOME_OK), + 'board2': self._make_outcome(builder.OUTCOME_UNKNOWN), + 'board3': self._make_outcome(builder.OUTCOME_UNKNOWN), + } terminal.get_print_test_lines() # Clear builder.Builder._show_not_built(board_selected, board_dict) @@ -377,10 +391,13 @@ class TestShowNotBuilt(unittest.TestCase): self.assertIn('board2', lines[0].text) self.assertIn('board3', lines[0].text) - def test_no_boards_built(self): - """Test when no boards were built""" + def test_all_boards_unknown(self): + """Test when all boards have OUTCOME_UNKNOWN""" board_selected = {'board1': None, 'board2': None} - board_dict = {} # No boards built + board_dict = { + 'board1': self._make_outcome(builder.OUTCOME_UNKNOWN), + 'board2': self._make_outcome(builder.OUTCOME_UNKNOWN), + } terminal.get_print_test_lines() # Clear builder.Builder._show_not_built(board_selected, board_dict) @@ -391,6 +408,62 @@ class TestShowNotBuilt(unittest.TestCase): self.assertIn('board1', lines[0].text) self.assertIn('board2', lines[0].text) + def test_build_error_not_counted(self): + """Test that build errors (not toolchain) are not counted as 'not built'""" + board_selected = {'board1': None, 'board2': None} + board_dict = { + 'board1': self._make_outcome(builder.OUTCOME_OK), + 'board2': self._make_outcome(builder.OUTCOME_ERROR, + ['error: some build error']), + } + + terminal.get_print_test_lines() # Clear + builder.Builder._show_not_built(board_selected, board_dict) + lines = terminal.get_print_test_lines() + + # Build errors are still "built", just with errors + self.assertEqual(len(lines), 0) + + def 
test_toolchain_error_counted(self): + """Test that toolchain errors are counted as 'not built'""" + board_selected = {'board1': None, 'board2': None, 'board3': None} + board_dict = { + 'board1': self._make_outcome(builder.OUTCOME_OK), + 'board2': self._make_outcome(builder.OUTCOME_ERROR, + ['Tool chain error for arm: not found']), + 'board3': self._make_outcome(builder.OUTCOME_ERROR, + ['error: some build error']), + } + + terminal.get_print_test_lines() # Clear + builder.Builder._show_not_built(board_selected, board_dict) + lines = terminal.get_print_test_lines() + + # Only toolchain errors count as "not built" + self.assertEqual(len(lines), 1) + self.assertIn('Boards not built', lines[0].text) + self.assertIn('1', lines[0].text) + self.assertIn('board2', lines[0].text) + self.assertNotIn('board3', lines[0].text) + + def test_board_not_in_dict(self): + """Test that boards missing from board_dict are counted as 'not built'""" + board_selected = {'board1': None, 'board2': None, 'board3': None} + board_dict = { + 'board1': self._make_outcome(builder.OUTCOME_OK), + # board2 and board3 are not in board_dict + } + + terminal.get_print_test_lines() # Clear + builder.Builder._show_not_built(board_selected, board_dict) + lines = terminal.get_print_test_lines() + + self.assertEqual(len(lines), 1) + self.assertIn('Boards not built', lines[0].text) + self.assertIn('2', lines[0].text) + self.assertIn('board2', lines[0].text) + self.assertIn('board3', lines[0].text) + class TestPrepareOutputSpace(unittest.TestCase): """Tests for Builder._prepare_output_space() and _get_output_space_removals()""" -- 2.43.0
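The classification these tests pin down can be summarised in a few lines. The sketch below is an illustration, not the actual `_show_not_built()` code: the outcome constants and the `'Tool chain error'` substring match are assumptions inferred from the test strings above, and the real logic in `tools/buildman/builder.py` may differ in detail.

```python
from types import SimpleNamespace

# Assumed outcome codes; the real constants live in tools/buildman/builder.py
OUTCOME_OK, OUTCOME_WARNING, OUTCOME_ERROR, OUTCOME_UNKNOWN = range(4)

def not_built(outcome):
    """Classify an outcome the way the tests above expect: a board counts
    as 'not built' if it is missing from board_dict, has an unknown
    outcome, or failed with a toolchain error (not an ordinary build
    error)."""
    if outcome is None or outcome.rc == OUTCOME_UNKNOWN:
        return True
    return (outcome.rc == OUTCOME_ERROR and
            any('Tool chain error' in line for line in outcome.err_lines))

ok = SimpleNamespace(rc=OUTCOME_OK, err_lines=[])
tc_err = SimpleNamespace(rc=OUTCOME_ERROR,
                         err_lines=['Tool chain error for arm: not found'])
build_err = SimpleNamespace(rc=OUTCOME_ERROR,
                            err_lines=['error: some build error'])
```

With these stand-ins, `not_built(tc_err)` and `not_built(None)` are true while `not_built(ok)` and `not_built(build_err)` are false, matching the four test cases.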
From: Simon Glass <simon.glass@canonical.com>

Extract the check_output() inner function from make() as a separate
method _check_output_for_loop() so it can be unit-tested. This function
detects Kconfig restart loops caused by missing defaults.

Add unit tests covering:
- Normal output (no restart message)
- Restart message sets the flag
- Single NEW item after restart (no loop)
- Different NEW items (no loop)
- Duplicate items trigger loop detection
- Duplicates without restart flag (no loop)
- Multiple items with one duplicate

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/builder.py      | 64 ++++++++++++++-------------
 tools/buildman/main.py         |  1 +
 tools/buildman/test_builder.py | 79 ++++++++++++++++++++++++++++++++++
 3 files changed, 114 insertions(+), 30 deletions(-)

diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py
index 307249b5e13..d24fad9a550 100644
--- a/tools/buildman/builder.py
+++ b/tools/buildman/builder.py
@@ -585,50 +585,54 @@ class Builder:
        if checkout and self.checkout:
            gitutil.checkout(commit.hash)
-    def make(self, _commit, _brd, _stage, cwd, *args, **kwargs):
-        """Run make
+    def _check_output_for_loop(self, data):
+        """Check output for config restart loops
+
+        This detects when Kconfig enters a restart loop due to missing
+        defaults. It looks for 'Restart config' followed by multiple
+        occurrences of the same Kconfig item with no default.
        Args:
-            cwd (str): Directory where make should be run
-            args: Arguments to pass to make
-            kwargs: Arguments to pass to command.run_one()
+            data (bytes): Output data to check
        Returns:
-            CommandResult: Result of the make operation
+            bool: True to terminate the command, False to continue
        """
+        if b'Restart config' in data:
+            self._restarting_config = True
-        def check_output(_stream, data):
-            """Check output for config restart loops
+        # If we see 'Restart config' followed by multiple errors
+        if self._restarting_config:
+            matches = RE_NO_DEFAULT.findall(data)
-            Args:
-                data (bytes): Output data to check
-
-            Returns:
-                bool: True to terminate the command, False to continue
-            """
-            if b'Restart config' in data:
-                self._restarting_config = True
+            # Number of occurrences of each Kconfig item
+            multiple = [matches.count(val) for val in set(matches)]
-            # If we see 'Restart config' following by multiple errors
-            if self._restarting_config:
-                m = RE_NO_DEFAULT.findall(data)
+            # If any of them occur more than once, we have a loop
+            if [val for val in multiple if val > 1]:
+                self._terminated = True
+                return True
+        return False
-                # Number of occurences of each Kconfig item
+    def make(self, _commit, _brd, _stage, cwd, *args, **kwargs):
-                multiple = [m.count(val) for val in set(m)]
+        """Run make
-                # If any of them occur more than once, we have a loop
-                if [val for val in multiple if val > 1]:
-                    self._terminated = True
-                    return True
-            return False
+        Args:
+            cwd (str): Directory where make should be run
+            args: Arguments to pass to make
+            kwargs: Arguments to pass to command.run_one()
+        Returns:
+            CommandResult: Result of the make operation
+        """
        self._restarting_config = False
        self._terminated = False
        cmd = [self.gnu_make] + list(args)
-        result = command.run_one(*cmd, capture=True, capture_stderr=True,
-                                 cwd=cwd, raise_on_error=False,
-                                 infile='/dev/null', output_func=check_output,
-                                 **kwargs)
+        result = command.run_one(
+            *cmd, capture=True, capture_stderr=True, cwd=cwd,
+            raise_on_error=False, infile='/dev/null',
+            output_func=lambda stream, data: self._check_output_for_loop(data),
+            **kwargs)
        if self._terminated:
            # Try to be helpful
diff --git a/tools/buildman/main.py b/tools/buildman/main.py
index 18809d843c6..dadfd629506 100755
--- a/tools/buildman/main.py
+++ b/tools/buildman/main.py
@@ -58,6 +58,7 @@ def run_tests(skip_net_tests, debug, verbose, args):
        test_builder.TestPrepareWorkingSpace,
        test_builder.TestShowNotBuilt,
        test_builder.TestPrepareOutputSpace,
+        test_builder.TestCheckOutputForLoop,
        'buildman.toolchain'])
    return (0 if result.wasSuccessful() else 1)
diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py
index 78c80aa6c43..09809a07706 100644
--- a/tools/buildman/test_builder.py
+++ b/tools/buildman/test_builder.py
@@ -548,5 +548,84 @@ class TestPrepareOutputSpace(unittest.TestCase):
        self.assertFalse(lines[0].newline)

+class TestCheckOutputForLoop(unittest.TestCase):
+    """Tests for Builder._check_output_for_loop()"""
+
+    def setUp(self):
+        """Set up test fixtures"""
+        self.builder = builder.Builder(
+            toolchains=None, base_dir='/tmp/test', git_dir='/src/repo',
+            num_threads=4, num_jobs=1)
+        # Reset state before each test
+        self.builder._restarting_config = False
+        self.builder._terminated = False
+
+    def test_no_restart_message(self):
+        """Test that normal output does not trigger termination"""
+        result = self.builder._check_output_for_loop(b'Building target...')
+
+        self.assertFalse(result)
+        self.assertFalse(self.builder._restarting_config)
+        self.assertFalse(self.builder._terminated)
+
+    def test_restart_message_sets_flag(self):
+        """Test that 'Restart config' sets the restarting flag"""
+        result = self.builder._check_output_for_loop(b'Restart config...')
+
+        self.assertFalse(result)  # No loop detected yet
+        self.assertTrue(self.builder._restarting_config)
+        self.assertFalse(self.builder._terminated)
+
+    def test_single_new_item_no_loop(self):
+        """Test that a single NEW item after restart is not a loop"""
+        self.builder._restarting_config = True
+
+        result = self.builder._check_output_for_loop(
+            b'(CONFIG_ITEM) [] (NEW)')
+
+        self.assertFalse(result)
+        self.assertFalse(self.builder._terminated)
+
+    def test_different_new_items_no_loop(self):
+        """Test that different NEW items do not trigger a loop"""
+        self.builder._restarting_config = True
+
+        result = self.builder._check_output_for_loop(
+            b'(CONFIG_A) [] (NEW)\n(CONFIG_B) [] (NEW)')
+
+        self.assertFalse(result)
+        self.assertFalse(self.builder._terminated)
+
+    def test_duplicate_items_triggers_loop(self):
+        """Test that duplicate NEW items trigger loop detection"""
+        self.builder._restarting_config = True
+
+        result = self.builder._check_output_for_loop(
+            b'(CONFIG_ITEM) [] (NEW)\n(CONFIG_ITEM) [] (NEW)')
+
+        self.assertTrue(result)
+        self.assertTrue(self.builder._terminated)
+
+    def test_no_loop_without_restart(self):
+        """Test that duplicates without restart flag do not trigger loop"""
+        # _restarting_config is False by default
+
+        result = self.builder._check_output_for_loop(
+            b'(CONFIG_ITEM) [] (NEW)\n(CONFIG_ITEM) [] (NEW)')
+
+        self.assertFalse(result)
+        self.assertFalse(self.builder._terminated)
+
+    def test_multiple_items_one_duplicate(self):
+        """Test loop detection with multiple items, one duplicated"""
+        self.builder._restarting_config = True
+
+        result = self.builder._check_output_for_loop(
+            b'(CONFIG_A) [] (NEW)\n(CONFIG_B) [] (NEW)\n(CONFIG_A) [] (NEW)')
+
+        self.assertTrue(result)
+        self.assertTrue(self.builder._terminated)
+
+
 if __name__ == '__main__':
    unittest.main()
--
2.43.0
From: Simon Glass <simon.glass@canonical.com>

Add unit tests for the make() method covering:
- Basic make execution with correct arguments
- Loop detection appends helpful message to stderr
- Verbose build mode prepends command to output
- State flags are reset at the start of each call

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/main.py         |  1 +
 tools/buildman/test_builder.py | 92 ++++++++++++++++++++++++++++++++++
 2 files changed, 93 insertions(+)

diff --git a/tools/buildman/main.py b/tools/buildman/main.py
index dadfd629506..044f7a32ebb 100755
--- a/tools/buildman/main.py
+++ b/tools/buildman/main.py
@@ -59,6 +59,7 @@ def run_tests(skip_net_tests, debug, verbose, args):
        test_builder.TestShowNotBuilt,
        test_builder.TestPrepareOutputSpace,
        test_builder.TestCheckOutputForLoop,
+        test_builder.TestMake,
        'buildman.toolchain'])
    return (0 if result.wasSuccessful() else 1)
diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py
index 09809a07706..1ec371e7821 100644
--- a/tools/buildman/test_builder.py
+++ b/tools/buildman/test_builder.py
@@ -627,5 +627,97 @@ class TestCheckOutputForLoop(unittest.TestCase):
        self.assertTrue(self.builder._terminated)

+class TestMake(unittest.TestCase):
+    """Tests for Builder.make()"""
+
+    def setUp(self):
+        """Set up test fixtures"""
+        self.builder = builder.Builder(
+            toolchains=None, base_dir='/tmp/test', git_dir='/src/repo',
+            num_threads=4, num_jobs=1)
+
+    @mock.patch('buildman.builder.command.run_one')
+    def test_make_basic(self, mock_run_one):
+        """Test basic make execution"""
+        mock_result = mock.Mock()
+        mock_result.stdout = 'build output'
+        mock_result.stderr = ''
+        mock_result.combined = 'build output'
+        mock_run_one.return_value = mock_result
+
+        result = self.builder.make(None, None, None, '/tmp/build', 'all')
+
+        self.assertEqual(result, mock_result)
+        mock_run_one.assert_called_once()
+        # Check make was called with correct args
+        call_args = mock_run_one.call_args
+        self.assertEqual(call_args[0][0], 'make')
+        self.assertEqual(call_args[0][1], 'all')
+        self.assertEqual(call_args[1]['cwd'], '/tmp/build')
+
+    @mock.patch('buildman.builder.command.run_one')
+    def test_make_with_loop_detection(self, mock_run_one):
+        """Test make adds helpful message when loop is detected"""
+        mock_result = mock.Mock()
+        mock_result.stdout = ''
+        mock_result.stderr = 'config error'
+        mock_result.combined = 'config error'
+        mock_run_one.return_value = mock_result
+
+        # Simulate loop detection by setting _terminated during the call
+        def side_effect(*args, **kwargs):
+            # Simulate output_func being called with loop data
+            output_func = kwargs.get('output_func')
+            if output_func:
+                self.builder._restarting_config = True
+                output_func(None, b'(CONFIG_X) [] (NEW)\n(CONFIG_X) [] (NEW)')
+            return mock_result
+
+        mock_run_one.side_effect = side_effect
+
+        result = self.builder.make(None, None, None, '/tmp/build', 'defconfig')
+
+        # Check helpful message was appended
+        self.assertIn('did you define an int/hex Kconfig', result.stderr)
+
+    @mock.patch('buildman.builder.command.run_one')
+    def test_make_verbose_build(self, mock_run_one):
+        """Test make prepends command in verbose mode"""
+        mock_result = mock.Mock()
+        mock_result.stdout = 'output'
+        mock_result.stderr = ''
+        mock_result.combined = 'output'
+        mock_run_one.return_value = mock_result
+
+        self.builder.verbose_build = True
+
+        result = self.builder.make(None, None, None, '/tmp/build', 'all', '-j4')
+
+        # Check command was prepended to stdout and combined
+        self.assertIn('make all -j4', result.stdout)
+        self.assertIn('make all -j4', result.combined)
+
+    @mock.patch('buildman.builder.command.run_one')
+    def test_make_resets_state(self, mock_run_one):
+        """Test make resets _restarting_config and _terminated flags"""
+        mock_result = mock.Mock()
+        mock_result.stdout = ''
+        mock_result.stderr = ''
+        mock_result.combined = ''
+        mock_run_one.return_value = mock_result
+
+        # Set flags to non-default values
+        self.builder._restarting_config = True
+        self.builder._terminated = True
+
+        self.builder.make(None, None, None, '/tmp/build', 'all')
+
+        # Flags should be reset at the start of make()
+        # (they may be set again by output_func, but start fresh)
+        # Since mock doesn't call output_func, they stay False
+        self.assertFalse(self.builder._restarting_config)
+        self.assertFalse(self.builder._terminated)
+
+
 if __name__ == '__main__':
    unittest.main()
--
2.43.0
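The key testing trick in `test_make_with_loop_detection` above is using a mock `side_effect` to drive the `output_func` callback while the mocked runner is "executing". Here is a self-contained sketch of that pattern with toy names (`run_make`, `runner` are illustrative, not buildman code):

```python
from unittest import mock

def run_make(runner):
    """Toy version of make(): hands an output_func callback to the
    command runner and reports whether the callback asked to stop."""
    state = {'terminated': False}

    def output_func(_stream, data):
        # Pretend any '(NEW)' prompt in the output means a config loop
        if b'(NEW)' in data:
            state['terminated'] = True
        return state['terminated']

    runner('make', output_func=output_func)
    return state['terminated']

runner = mock.Mock()
# side_effect runs during the mocked call, so the test can feed bytes to
# the callback exactly as the real command runner would
runner.side_effect = lambda *args, **kwargs: kwargs['output_func'](
    None, b'(CONFIG_X) [] (NEW)')
terminated = run_make(runner)
```

Because `side_effect` executes inside the mocked `runner(...)` call, the callback fires at the same point in the control flow as it would in a real build, so the state it mutates is visible to the code under test afterwards.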
From: Simon Glass <simon.glass@canonical.com>

Move the summary printing code from build_boards() into a separate
_print_build_summary() method. This improves readability and makes the
summary logic easier to test independently.

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/builder.py | 56 ++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 24 deletions(-)

diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py
index d24fad9a550..3c0af07d624 100644
--- a/tools/buildman/builder.py
+++ b/tools/buildman/builder.py
@@ -2214,29 +2214,37 @@ class Builder:
        # Wait until we have processed all output
        self.out_queue.join()
        if not self._ide:
-            tprint()
-
-            msg = f'Completed: {self.count} total built'
-            if self.already_done or self.kconfig_reconfig:
-                parts = []
-                if self.already_done:
-                    parts.append(f'{self.already_done} previously')
-                if self.already_done != self.count:
-                    parts.append(f'{self.count - self.already_done} newly')
-                if self.kconfig_reconfig:
-                    parts.append(f'{self.kconfig_reconfig} reconfig')
-                msg += ' (' + ', '.join(parts) + ')'
-            duration = datetime.now() - self._start_time
-            if duration > timedelta(microseconds=1000000):
-                if duration.microseconds >= 500000:
-                    duration = duration + timedelta(seconds=1)
-                duration -= timedelta(microseconds=duration.microseconds)
-                rate = float(self.count) / duration.total_seconds()
-                msg += f', duration {duration}, rate {rate:1.2f}'
-            tprint(msg)
-            if self.thread_exceptions:
-                tprint(
-                    f'Failed: {len(self.thread_exceptions)} thread exceptions',
-                    colour=self.col.RED)
+            self._print_build_summary()
        return (self.fail, self.warned, self.thread_exceptions)
+
+    def _print_build_summary(self):
+        """Print a summary of the build results
+
+        Show the number of boards built, how many were already done, duration
+        and build rate. Also show any thread exceptions that occurred.
+        """
+        tprint()
+
+        msg = f'Completed: {self.count} total built'
+        if self.already_done or self.kconfig_reconfig:
+            parts = []
+            if self.already_done:
+                parts.append(f'{self.already_done} previously')
+            if self.already_done != self.count:
+                parts.append(f'{self.count - self.already_done} newly')
+            if self.kconfig_reconfig:
+                parts.append(f'{self.kconfig_reconfig} reconfig')
+            msg += ' (' + ', '.join(parts) + ')'
+        duration = datetime.now() - self._start_time
+        if duration > timedelta(microseconds=1000000):
+            if duration.microseconds >= 500000:
+                duration = duration + timedelta(seconds=1)
+            duration -= timedelta(microseconds=duration.microseconds)
+            rate = float(self.count) / duration.total_seconds()
+            msg += f', duration {duration}, rate {rate:1.2f}'
+        tprint(msg)
+        if self.thread_exceptions:
+            tprint(
+                f'Failed: {len(self.thread_exceptions)} thread exceptions',
+                colour=self.col.RED)
--
2.43.0
From: Simon Glass <simon.glass@canonical.com>

Add unit tests for the _print_build_summary() method covering:
- Count-only messages when duration is too short to show rate
- Distinguishing previously-done builds from newly-built ones
- Kconfig reconfiguration count display
- Thread exception warnings
- Duration and build-rate display for long builds
- Rounding of duration when microseconds >= 500000

Co-developed-by: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Simon Glass <simon.glass@canonical.com>
---
 tools/buildman/main.py         |   1 +
 tools/buildman/test_builder.py | 131 +++++++++++++++++++++++++++++++++
 2 files changed, 132 insertions(+)

diff --git a/tools/buildman/main.py b/tools/buildman/main.py
index 044f7a32ebb..5831dbf1222 100755
--- a/tools/buildman/main.py
+++ b/tools/buildman/main.py
@@ -60,6 +60,7 @@ def run_tests(skip_net_tests, debug, verbose, args):
        test_builder.TestPrepareOutputSpace,
        test_builder.TestCheckOutputForLoop,
        test_builder.TestMake,
+        test_builder.TestPrintBuildSummary,
        'buildman.toolchain'])
    return (0 if result.wasSuccessful() else 1)
diff --git a/tools/buildman/test_builder.py b/tools/buildman/test_builder.py
index 1ec371e7821..fd60767bca0 100644
--- a/tools/buildman/test_builder.py
+++ b/tools/buildman/test_builder.py
@@ -4,6 +4,7 @@

 """Unit tests for builder.py"""

+from datetime import datetime
 import os
 import shutil
 import unittest
@@ -719,5 +720,135 @@ class TestMake(unittest.TestCase):
        self.assertFalse(self.builder._terminated)

+class TestPrintBuildSummary(unittest.TestCase):
+    """Tests for Builder._print_build_summary()"""
+
+    def setUp(self):
+        """Set up test fixtures"""
+        self.builder = builder.Builder(
+            toolchains=None, base_dir='/tmp/test', git_dir='/src/repo',
+            num_threads=4, num_jobs=1)
+        # Set a start time in the past (less than 1 second ago to avoid
+        # duration output)
+        self.builder._start_time = datetime.now()
+        self.builder.thread_exceptions = []
+        terminal.set_print_test_mode()
+
+    def tearDown(self):
+        """Clean up after tests"""
+        terminal.set_print_test_mode(False)
+
+    def test_basic_count(self):
+        """Test basic completed message with just count"""
+        self.builder.count = 10
+        self.builder.already_done = 0
+        self.builder.kconfig_reconfig = 0
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        # First line is blank, second is the message
+        self.assertEqual(len(lines), 2)
+        self.assertEqual(lines[0].text, '')
+        self.assertIn('Completed: 10 total built', lines[1].text)
+        self.assertNotIn('previously', lines[1].text)
+
+    def test_all_previously_done(self):
+        """Test message when all builds were already done"""
+        self.builder.count = 5
+        self.builder.already_done = 5
+        self.builder.kconfig_reconfig = 0
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        self.assertIn('5 previously', lines[1].text)
+        self.assertNotIn('newly', lines[1].text)
+
+    def test_some_newly_built(self):
+        """Test message with some previously done and some new"""
+        self.builder.count = 10
+        self.builder.already_done = 6
+        self.builder.kconfig_reconfig = 0
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        self.assertIn('6 previously', lines[1].text)
+        self.assertIn('4 newly', lines[1].text)
+
+    def test_with_kconfig_reconfig(self):
+        """Test message with kconfig reconfigurations"""
+        self.builder.count = 8
+        self.builder.already_done = 0
+        self.builder.kconfig_reconfig = 3
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        self.assertIn('3 reconfig', lines[1].text)
+
+    def test_thread_exceptions(self):
+        """Test message with thread exceptions"""
+        self.builder.count = 5
+        self.builder.already_done = 0
+        self.builder.kconfig_reconfig = 0
+        self.builder.thread_exceptions = [Exception('err1'), Exception('err2')]
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        self.assertEqual(len(lines), 3)
+        self.assertIn('Failed: 2 thread exceptions', lines[2].text)
+
+    @mock.patch('buildman.builder.datetime')
+    def test_duration_and_rate(self, mock_datetime):
+        """Test message includes duration and rate for long builds"""
+        self.builder.count = 100
+        self.builder.already_done = 0
+        self.builder.kconfig_reconfig = 0
+
+        # Mock datetime to simulate a 10 second build
+        start_time = datetime(2024, 1, 1, 12, 0, 0)
+        end_time = datetime(2024, 1, 1, 12, 0, 10)
+        self.builder._start_time = start_time
+        mock_datetime.now.return_value = end_time
+        mock_datetime.side_effect = lambda *args, **kwargs: datetime(*args, **kwargs)
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        self.assertIn('duration', lines[1].text)
+        self.assertIn('rate', lines[1].text)
+        self.assertIn('10.00', lines[1].text)  # 100 boards / 10 seconds
+
+    @mock.patch('buildman.builder.datetime')
+    def test_duration_rounds_up(self, mock_datetime):
+        """Test duration rounds up when microseconds >= 500000"""
+        self.builder.count = 100
+        self.builder.already_done = 0
+        self.builder.kconfig_reconfig = 0
+
+        # Mock datetime to simulate a 10.6 second build (should round to 11)
+        start_time = datetime(2024, 1, 1, 12, 0, 0)
+        end_time = datetime(2024, 1, 1, 12, 0, 10, 600000)  # 10.6 seconds
+        self.builder._start_time = start_time
+        mock_datetime.now.return_value = end_time
+        mock_datetime.side_effect = lambda *args, **kwargs: datetime(*args, **kwargs)
+
+        terminal.get_print_test_lines()  # Clear
+        self.builder._print_build_summary()
+        lines = terminal.get_print_test_lines()
+
+        # Duration should be rounded up to 11 seconds
+        self.assertIn('0:00:11', lines[1].text)
+
+
 if __name__ == '__main__':
    unittest.main()
--
2.43.0