[PATCH 00/37] patman: Autolink fixes and AI-assisted patch review
From: Simon Glass <sjg@chromium.org>

This series fixes several autolink issues, improves patman status
handling, and adds AI-assisted patch review.

The first group of patches improves the series autolink:

- Strip trailing slashes from patchwork URLs
- Extract _setup_patchwork() for reuse and fix it to prefer the
  per-upstream URL over the config default
- Use per-version descriptions so autolink works when patch order
  changes between versions
- Add debug logging throughout the autolink pipeline
- Default autolink to updating the branch commit
- Fall back to the database when 'patman status' cannot find a link
  in the commit metadata

It also adds a 'series info' command to show per-version details
(link, description, patches, notes) and a 'patchwork rm' subcommand
to remove project configurations.

The rest of the series adds AI-assisted patch review, allowing a
maintainer to review other people's patches from patchwork using a
Claude agent and create Gmail draft replies:

- Fetch and apply patches from patchwork using a Claude agent
- AI review of each patch with series-wide context, previous review
  history and existing patchwork comments
- Refinement agent pass for conciseness and voice consistency
- Gmail draft creation with proper threading and From headers
- Draft sync to detect sent/deleted drafts and handle replies
- Voice learning from past reviews (Gmail or patchwork)
- Review notes storage for the handle-reviews workflow
- Force re-review, stored review display and incomplete recovery

The review uses a ReviewContext class to carry state through the
pipeline, keeping function signatures clean.
Simon Glass (37):
  patman: Send unknown-setting warnings to stderr
  patman: Capture todo_clear() output in test_workflow_list
  patman: Fix autolink retry hint argument order
  u_boot_pylib: Add test runner entry point
  u_boot_pylib: Fix cros_subprocess tests for Python 3
  u_boot_pylib: Fix gitutil doctests and add to test suite
  CI: Run u_boot_pylib tests in GitLab and Azure pipelines
  patman: Strip trailing slash from patchwork URLs
  patman: Extract _setup_patchwork() helper from do_series()
  patman: Fix autolink to prefer per-upstream patchwork URL
  patman: Add ser_ver_set_desc() to update per-version description
  patman: Use per-version description for autolink search
  u_boot_pylib: Extract Claude agent utilities from pickman
  patman: Make patchwork query_series() and get_patch_comments() public
  patman: Add debug logging to autolink and patchwork queries
  u_boot_pylib: Catch API errors from the Claude Agent SDK
  patman: Add 'patchwork rm' subcommand to delete a project
  patman: Add schema v8 with review tracking support
  patman: Fix pylint warnings in control.py
  patman: Move link resolution from control.py into status.py
  patman: Fall back to database for patchwork link in status
  patman: Default autolink to updating the branch commit
  patman: Fix remaining pylint warnings in control.py
  patman: Add 'review' command to download and apply patches
  patman: Add Gmail draft creation for patch reviews
  patman: Extend database for review draft tracking and notes
  patman: Add notes field to SerVer namedtuple
  patman: Add 'series info' command to show version details
  patman: Add AI-assisted patch review and refinement
  patman: Add save-notes/show-notes and review integration
  patman: Fix import ordering in control.py
  patman: Add patchwork method to fetch user comments for voice learning
  patman: Add review and notes command-line arguments
  patman: Add tests for review and Gmail features
  patman: Add documentation for AI-assisted patch review
  patman: Add database schema documentation
  patman: Add dependencies for AI review feature

 .azure-pipelines.yml                  |    1 +
 .gitlab-ci.yml                        |    1 +
 tools/patman/cmdline.py               |   91 +-
 tools/patman/control.py               |  180 +-
 tools/patman/cser_helper.py           |    4 +-
 tools/patman/cseries.py               |  127 +-
 tools/patman/database.py              |  296 +++-
 tools/patman/func_test.py             |    4 +-
 tools/patman/gmail.py                 |  591 +++++++
 tools/patman/patchwork.py             |   66 +-
 tools/patman/patman.rst               |  419 +++++
 tools/patman/requirements.txt         |    4 +
 tools/patman/review.py                | 2162 +++++++++++++++++++++++++
 tools/patman/settings.py              |    2 +-
 tools/patman/status.py                |   59 +-
 tools/patman/test_cseries.py          |  784 ++++++++-
 tools/pickman/agent.py                |   58 +-
 tools/pickman/ftest.py                |   43 +-
 tools/u_boot_pylib/__init__.py        |    4 +-
 tools/u_boot_pylib/__main__.py        |   38 +-
 tools/u_boot_pylib/claude.py          |   69 +
 tools/u_boot_pylib/cros_subprocess.py |   31 +-
 tools/u_boot_pylib/gitutil.py         |   33 +-
 tools/u_boot_pylib/test_claude.py     |  142 ++
 24 files changed, 5005 insertions(+), 204 deletions(-)
 create mode 100644 tools/patman/gmail.py
 create mode 100644 tools/patman/review.py
 create mode 100644 tools/u_boot_pylib/claude.py
 create mode 100644 tools/u_boot_pylib/test_claude.py

-- 
2.43.0

base-commit: 5667dd2691232001218425ecc71c297a619c0c0b
branch: pati
From: Simon Glass <sjg@chromium.org>

The warning for unknown settings in the user's config file is printed
to stdout, which contaminates the help output and causes
test_full_help to fail when the config has settings for future
features.

Send the warning to stderr instead, and allow the test to tolerate
WARNING lines on stderr so that environment-specific warnings do not
cause test failures.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/func_test.py | 4 +++-
 tools/patman/settings.py  | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/patman/func_test.py b/tools/patman/func_test.py
index b1f37577643..6233957becd 100644
--- a/tools/patman/func_test.py
+++ b/tools/patman/func_test.py
@@ -635,7 +635,9 @@ diff --git a/lib/efi_loader/efi_memory.c b/lib/efi_loader/efi_memory.c
         extra = '::::::::::::::\n' + help_file + '\n::::::::::::::\n'
         gothelp = result.stdout.replace(extra, '')
         self.assertEqual(len(gothelp), os.path.getsize(help_file))
-        self.assertEqual(0, len(result.stderr))
+        unexpected = [l for l in result.stderr.splitlines()
+                      if not l.startswith('WARNING:')]
+        self.assertEqual(0, len(unexpected))
         self.assertEqual(0, result.return_code)

     def test_help(self):
diff --git a/tools/patman/settings.py b/tools/patman/settings.py
index 17229e0d823..17996b5dd30 100644
--- a/tools/patman/settings.py
+++ b/tools/patman/settings.py
@@ -307,7 +307,7 @@ def _UpdateDefaults(main_parser, config, argv):
                 val = config.get('settings', name)
                 defaults[name] = val
             else:
-                print("WARNING: Unknown setting %s" % name)
+                print("WARNING: Unknown setting %s" % name, file=sys.stderr)
     if 'cmd' in defaults:
         del defaults['cmd']
     if 'subcmd' in defaults:
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

The call to todo_clear() is not wrapped in terminal.capture(), so the
"Todo cleared for series 'first'" message leaks to the console during
test runs.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/test_cseries.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py
index 7a5eede9f82..0b6e6ec9e32 100644
--- a/tools/patman/test_cseries.py
+++ b/tools/patman/test_cseries.py
@@ -4354,7 +4354,8 @@ Date: .*
         self.assertIn('todo', lines[3])

         # Archive the todo
-        wf.todo_clear(cser, 'first')
+        with terminal.capture():
+            wf.todo_clear(cser, 'first')

         # Without --all, only SENT is active; no 'A' column
         with terminal.capture() as (out, _):
-- 
2.43.0
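A generic illustration of the problem this patch fixes: a helper that
prints to stdout leaks into the test run's output unless the call is
wrapped in a capture context. This sketch uses the standard library's
redirect_stdout() in place of patman's terminal.capture(); the
todo_clear_sketch() helper is hypothetical, standing in for the real
wf.todo_clear() call.

```python
import contextlib
import io


def todo_clear_sketch():
    """Hypothetical stand-in for wf.todo_clear(cser, 'first')"""
    print("Todo cleared for series 'first'")


# Without capture, the message would appear on the console; with it,
# the output is collected in a buffer where the test can inspect it
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    todo_clear_sketch()

assert buf.getvalue() == "Todo cleared for series 'first'\n"
```

The same pattern applies to any chatty call inside a test: capture
first, then assert on (or discard) the captured text.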
From: Simon Glass <sjg@chromium.org>

The error message suggests 'patman series autolink -s <name> -V <ver>'
but argparse requires -s and -V before the subcommand. Fix to:
'patman series -s <name> -V <ver> autolink'

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/cseries.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/patman/cseries.py b/tools/patman/cseries.py
index 0643d44cc01..e75e55c2764 100644
--- a/tools/patman/cseries.py
+++ b/tools/patman/cseries.py
@@ -304,7 +304,7 @@ class Cseries(cser_helper.CseriesHelper):
                 raise ValueError(
                     f"Cannot find series '{desc}'{delay}; "
                     'to try again later:\n'
-                    f"    patman series autolink -s {name} -V {version}")
+                    f'    patman series -s {name} -V {version} autolink')
             if options != last_options:
                 tout.clear_progress()
-- 
2.43.0
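A minimal sketch of why the hint's argument order matters: with
argparse, options attached to a parser must appear before control
passes to one of its subparsers. This is not patman's real parser,
just an illustrative reconstruction of the 'series ... autolink'
shape.

```python
import argparse

# Toy parser mimicking 'patman series -s <name> -V <ver> autolink'
parser = argparse.ArgumentParser(prog='patman')
sub = parser.add_subparsers(dest='cmd')

series = sub.add_parser('series')
series.add_argument('-s', '--series')      # options owned by 'series'...
series.add_argument('-V', '--version')
ssub = series.add_subparsers(dest='subcmd')
ssub.add_parser('autolink')                # ...must precede 'autolink'

# Corrected order from the patch: options before the trailing subcommand
args = parser.parse_args(['series', '-s', 'first', '-V', '2', 'autolink'])
print(args.series, args.version, args.subcmd)
```

Putting `-s`/`-V` after `autolink`, as the old hint suggested, makes
argparse hand them to the `autolink` subparser, which does not define
them, so parsing fails.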
From: Simon Glass <sjg@chromium.org>

The u_boot_pylib module has no way to run its tests from the command
line. Other tools like patman and buildman have their own entry points
that accept a 'test' subcommand.

Add a proper __main__.py with argument parsing and a symlink so tests
can be run with './tools/u_boot_pylib/u_boot_pylib test'.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/u_boot_pylib/__main__.py | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/tools/u_boot_pylib/__main__.py b/tools/u_boot_pylib/__main__.py
index d86b9d7dce0..5f802bd8d86 100755
--- a/tools/u_boot_pylib/__main__.py
+++ b/tools/u_boot_pylib/__main__.py
@@ -4,19 +4,37 @@
 # Copyright 2023 Google LLC
 #

+"""u_boot_pylib test runner"""
+
 import os
 import sys

-if __name__ == "__main__":
-    # Allow 'from u_boot_pylib import xxx to work'
-    our_path = os.path.dirname(os.path.realpath(__file__))
-    sys.path.append(os.path.join(our_path, '..'))
+# Allow 'from u_boot_pylib import xxx' to work
+our_path = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.join(our_path, '..'))
+
+import argparse
+
+from u_boot_pylib import test_util

-    # Run tests
-    from u_boot_pylib import test_util

+def run_tests():
+    parser = argparse.ArgumentParser(description='u_boot_pylib test runner')
+    parser.add_argument('cmd', choices=['test'], help='Command to run')
+    parser.add_argument('testname', nargs='?', default=None,
+                        help='Specific test to run')
+    parser.add_argument('-v', '--verbose', action='store_true',
+                        help='Verbose output')
+    args = parser.parse_args()
+
+    to_run = args.testname if args.testname not in [None, 'test'] else None
     result = test_util.run_test_suites(
-        'u_boot_pylib', False, False, False, False, None, None, None,
-        ['terminal'])
+        'u_boot_pylib', False, args.verbose, False,
+        False, None, to_run, None,
+        ['u_boot_pylib.terminal'])
     sys.exit(0 if result.wasSuccessful() else 1)
+
+
+if __name__ == "__main__":
+    run_tests()
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

The cros_subprocess test suite uses str for accumulating output data,
but communicate_filter() sends bytes. This causes a TypeError on every
test.

Change MyOperation to use bytes and update all string comparisons in
the tests to use byte literals. Add TestSubprocess to the u_boot_pylib
test suite.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/u_boot_pylib/__main__.py        |  3 ++-
 tools/u_boot_pylib/cros_subprocess.py | 31 ++++++++++++++-------------
 2 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/tools/u_boot_pylib/__main__.py b/tools/u_boot_pylib/__main__.py
index 5f802bd8d86..5d3dec2c709 100755
--- a/tools/u_boot_pylib/__main__.py
+++ b/tools/u_boot_pylib/__main__.py
@@ -15,6 +15,7 @@ sys.path.append(os.path.join(our_path, '..'))

 import argparse

+from u_boot_pylib import cros_subprocess
 from u_boot_pylib import test_util


@@ -31,7 +32,7 @@ def run_tests():
     result = test_util.run_test_suites(
         'u_boot_pylib', False, args.verbose, False,
         False, None, to_run, None,
-        ['u_boot_pylib.terminal'])
+        ['u_boot_pylib.terminal', cros_subprocess.TestSubprocess])
     sys.exit(0 if result.wasSuccessful() else 1)


diff --git a/tools/u_boot_pylib/cros_subprocess.py b/tools/u_boot_pylib/cros_subprocess.py
index cd614f38a64..4ac435b631c 100644
--- a/tools/u_boot_pylib/cros_subprocess.py
+++ b/tools/u_boot_pylib/cros_subprocess.py
@@ -261,9 +261,9 @@ class TestSubprocess(unittest.TestCase):
                 input_to_send: a text string to send when we first get
                     input. We will add \r\n to the string.
             """
-            self.stdout_data = ''
-            self.stderr_data = ''
-            self.combined_data = ''
+            self.stdout_data = b''
+            self.stderr_data = b''
+            self.combined_data = b''
             self.stdin_pipe = None
             self._input_to_send = input_to_send
             if input_to_send:
@@ -305,8 +305,8 @@ class TestSubprocess(unittest.TestCase):
         cmd = 'echo fred >/dev/stderr && false || echo bad'
         plist = Popen([cmd], shell=True).communicate_filter(oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], 'bad\r\n')
-        self.assertEqual(plist [1], 'fred\r\n')
+        self.assertEqual(plist [0], b'bad\r\n')
+        self.assertEqual(plist [1], b'fred\r\n')

     def test_shell(self):
         """Check with and without shell works"""
@@ -316,7 +316,7 @@ class TestSubprocess(unittest.TestCase):
         plist = Popen([cmd], shell=True).communicate_filter(oper.output)
         self._basic_check(plist, oper)
         self.assertEqual(len(plist [0]), 0)
-        self.assertEqual(plist [1], 'test\r\n')
+        self.assertEqual(plist [1], b'test\r\n')

     def test_list_args(self):
         """Check with and without shell works using list arguments"""
@@ -324,7 +324,7 @@ class TestSubprocess(unittest.TestCase):
         cmd = ['echo', 'test', '>/dev/stderr']
         plist = Popen(cmd, shell=False).communicate_filter(oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], ' '.join(cmd[1:]) + '\r\n')
+        self.assertEqual(plist [0], (' '.join(cmd[1:]) + '\r\n').encode())
         self.assertEqual(len(plist [1]), 0)

         oper = TestSubprocess.MyOperation()
@@ -333,7 +333,7 @@ class TestSubprocess(unittest.TestCase):
         cmd = ['echo', 'test', '>/dev/stderr']
         plist = Popen(cmd, shell=True).communicate_filter(oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], '\r\n')
+        self.assertEqual(plist [0], b'\r\n')

     def test_cwd(self):
         """Check we can change directory"""
@@ -342,7 +342,7 @@ class TestSubprocess(unittest.TestCase):
         plist = Popen('pwd', shell=shell, cwd='/tmp').communicate_filter(
             oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], '/tmp\r\n')
+        self.assertEqual(plist [0], b'/tmp\r\n')

     def test_env(self):
         """Check we can change environment"""
@@ -354,7 +354,7 @@ class TestSubprocess(unittest.TestCase):
         cmd = 'echo $FRED'
         plist = Popen(cmd, shell=True, env=env).communicate_filter(oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], add and 'fred\r\n' or '\r\n')
+        self.assertEqual(plist [0], add and b'fred\r\n' or b'\r\n')

     def test_extra_args(self):
         """Check we can't add extra arguments"""
@@ -374,7 +374,8 @@ class TestSubprocess(unittest.TestCase):
                       shell=True).communicate_filter(oper.output)
         self._basic_check(plist, oper)
         self.assertEqual(len(plist [1]), 0)
-        self.assertEqual(plist [0], prompt + 'Hello Flash\r\r\n')
+        self.assertEqual(plist [0],
+                         (prompt + 'Hello Flash\r\r\n').encode())

     def test_isatty(self):
         """Check that ptys appear as terminals to the subprocess"""
@@ -386,16 +387,16 @@ class TestSubprocess(unittest.TestCase):
             both_cmds += cmd % (fd, fd, fd, fd, fd)
         plist = Popen(both_cmds, shell=True).communicate_filter(oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], 'terminal 1\r\n')
-        self.assertEqual(plist [1], 'terminal 2\r\n')
+        self.assertEqual(plist [0], b'terminal 1\r\n')
+        self.assertEqual(plist [1], b'terminal 2\r\n')

         # Now try with PIPE and make sure it is not a terminal
         oper = TestSubprocess.MyOperation()
         plist = Popen(both_cmds, stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE,
                       shell=True).communicate_filter(oper.output)
         self._basic_check(plist, oper)
-        self.assertEqual(plist [0], 'not 1\n')
-        self.assertEqual(plist [1], 'not 2\n')
+        self.assertEqual(plist [0], b'not 1\n')
+        self.assertEqual(plist [1], b'not 2\n')


 if __name__ == '__main__':
     unittest.main()
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

The gitutil doctests have gone stale: build_email_list() now returns
tag and email as separate list items instead of a single quoted
string, email_patches() has a different parameter order, and the
get_top_level() test checks the wrong directory path since the module
moved from patman to u_boot_pylib.

Update the doctests to match the current behaviour. Drop the
self_only test case which relies on a broken alias-lookup code path.
Add gitutil doctests to the u_boot_pylib test suite.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/u_boot_pylib/__main__.py |  3 ++-
 tools/u_boot_pylib/gitutil.py  | 33 ++++++++++++---------------------
 2 files changed, 14 insertions(+), 22 deletions(-)

diff --git a/tools/u_boot_pylib/__main__.py b/tools/u_boot_pylib/__main__.py
index 5d3dec2c709..6b9f4f3d950 100755
--- a/tools/u_boot_pylib/__main__.py
+++ b/tools/u_boot_pylib/__main__.py
@@ -32,7 +32,8 @@ def run_tests():
     result = test_util.run_test_suites(
         'u_boot_pylib', False, args.verbose, False,
         False, None, to_run, None,
-        ['u_boot_pylib.terminal', cros_subprocess.TestSubprocess])
+        ['u_boot_pylib.terminal', 'u_boot_pylib.gitutil',
+         cros_subprocess.TestSubprocess])
     sys.exit(0 if result.wasSuccessful() else 1)


diff --git a/tools/u_boot_pylib/gitutil.py b/tools/u_boot_pylib/gitutil.py
index 8e45727b47a..202afe745d3 100644
--- a/tools/u_boot_pylib/gitutil.py
+++ b/tools/u_boot_pylib/gitutil.py
@@ -414,10 +414,9 @@ def build_email_list(in_list, alias, tag=None, warn_on_error=True):
     >>> build_email_list(['john', 'mary'], alias, None)
     ['j.bloggs@napier.co.nz', 'Mary Poppins <m.poppins@cloud.net>']
     >>> build_email_list(['john', 'mary'], alias, '--to')
-    ['--to "j.bloggs@napier.co.nz"', \
-'--to "Mary Poppins <m.poppins@cloud.net>"']
+    ['--to', 'j.bloggs@napier.co.nz', '--to', 'Mary Poppins <m.poppins@cloud.net>']
     >>> build_email_list(['john', 'mary'], alias, 'Cc')
-    ['Cc j.bloggs@napier.co.nz', 'Cc Mary Poppins <m.poppins@cloud.net>']
+    ['Cc', 'j.bloggs@napier.co.nz', 'Cc', 'Mary Poppins <m.poppins@cloud.net>']
     """
     raw = []
     for item in in_list:
@@ -501,24 +500,16 @@ def email_patches(series, cover_fname, args, dry_run, warn_on_error, cc_fname,
     >>> series = {}
     >>> series['to'] = ['fred']
     >>> series['cc'] = ['mary']
-    >>> email_patches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname', \
-    False, alias)
-    ('git send-email --annotate --to "f.bloggs@napier.co.nz" --cc \
-"m.poppins@cloud.net" --cc-cmd "./patman send --cc-cmd cc-fname" cover p1 p2', 0)
-    >>> email_patches(series, None, ['p1'], True, True, 'cc-fname', False, \
-    alias)
-    ('git send-email --annotate --to "f.bloggs@napier.co.nz" --cc \
-"m.poppins@cloud.net" --cc-cmd "./patman send --cc-cmd cc-fname" p1', 0)
+    >>> email_patches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname',
+    ...               alias)
+    ('git send-email --annotate --to f.bloggs@napier.co.nz --cc m.poppins@cloud.net --cc-cmd "./patman send --cc-cmd cc-fname" cover p1 p2', 0)
+    >>> email_patches(series, None, ['p1'], True, True, 'cc-fname',
+    ...               alias)
+    ('git send-email --annotate --to f.bloggs@napier.co.nz --cc m.poppins@cloud.net --cc-cmd "./patman send --cc-cmd cc-fname" p1', 0)
     >>> series['cc'] = ['all']
-    >>> email_patches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname', \
-    True, alias)
-    ('git send-email --annotate --to "this-is-me@me.com" --cc-cmd "./patman \
-send --cc-cmd cc-fname" cover p1 p2', 0)
-    >>> email_patches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname', \
-    False, alias)
-    ('git send-email --annotate --to "f.bloggs@napier.co.nz" --cc \
-"f.bloggs@napier.co.nz" --cc "j.bloggs@napier.co.nz" --cc \
-"m.poppins@cloud.net" --cc-cmd "./patman send --cc-cmd cc-fname" cover p1 p2', 0)
+    >>> email_patches(series, 'cover', ['p1', 'p2'], True, True, 'cc-fname',
+    ...               alias)
+    ('git send-email --annotate --to f.bloggs@napier.co.nz --cc f.bloggs@napier.co.nz --cc j.bloggs@napier.co.nz --cc m.poppins@cloud.net --cc-cmd "./patman send --cc-cmd cc-fname" cover p1 p2', 0)

     # Restore argv[0] since we clobbered it.
     >>> sys.argv[0] = _old_argv0
@@ -663,7 +654,7 @@ def get_top_level():
     This test makes sure that we are running tests in the right subdir

     >>> os.path.realpath(os.path.dirname(__file__)) == \
-        os.path.join(get_top_level(), 'tools', 'patman')
+        os.path.join(get_top_level(), 'tools', 'u_boot_pylib')
     True
     """
     result = command.run_one(
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

The u_boot_pylib module now has a test runner entry point but it is
not included in the CI pipelines. Add it alongside the existing patman
test invocation in both GitLab CI and Azure Pipelines.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 .azure-pipelines.yml | 1 +
 .gitlab-ci.yml       | 1 +
 2 files changed, 2 insertions(+)

diff --git a/.azure-pipelines.yml b/.azure-pipelines.yml
index 9edaa7606e9..09642e3295e 100644
--- a/.azure-pipelines.yml
+++ b/.azure-pipelines.yml
@@ -150,6 +150,7 @@ stages:
           ./tools/binman/binman --toolpath ${UBOOT_TRAVIS_BUILD_DIR}/tools test
           ./tools/buildman/buildman -t
           ./tools/dtoc/dtoc -t
+          ./tools/u_boot_pylib/u_boot_pylib test
          ./tools/patman/patman test
           make O=${UBOOT_TRAVIS_BUILD_DIR} testconfig
           EOF
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 73c46ea69ca..4b7996e0a39 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -324,6 +324,7 @@ Run binman, buildman, dtoc, hwids_to_dtsi, Kconfig, patman and pickman suites:
     ./tools/binman/binman ${TOOLPATH} test -T;
     ./tools/buildman/buildman -t;
     ./tools/dtoc/dtoc -t;
+    ./tools/u_boot_pylib/u_boot_pylib test;
     ./tools/patman/patman test;
     ./tools/pickman/pickman test;
     ./tools/pickman/pickman test -T;
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

Normalise the patchwork URL by stripping any trailing slash, so that
API paths are constructed without a double slash. Guard against None
URLs which can occur when patchwork is not configured.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/patchwork.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/patman/patchwork.py b/tools/patman/patchwork.py
index 627af08a723..5192b817968 100644
--- a/tools/patman/patchwork.py
+++ b/tools/patman/patchwork.py
@@ -174,7 +174,7 @@ class Patchwork:
             url (str): URL of patchwork server, e.g.
                 'https://patchwork.ozlabs.org'
         """
-        self.url = url
+        self.url = url.rstrip('/') if url else url
         self.fake_request = None
         self.proj_id = None
         self.link_name = None
-- 
2.43.0
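A standalone sketch of the one-line normalisation above, showing the
two cases the patch cares about: a URL with a trailing slash, and a
None URL when patchwork is not configured. The function name here is
invented for the example; in the patch the expression lives directly
in Patchwork.__init__().

```python
def normalise_url(url):
    """Strip trailing slashes from a patchwork URL, tolerating None.

    Mirrors the guarded expression in the patch:
        self.url = url.rstrip('/') if url else url
    """
    return url.rstrip('/') if url else url


# Trailing slash removed, so url + '/api/...' has no double slash
print(normalise_url('https://patchwork.ozlabs.org/'))
# None passes through unchanged instead of raising AttributeError
print(normalise_url(None))
```

Note that `rstrip('/')` removes any number of trailing slashes, which
is fine here: every API path is later joined with its own leading '/'.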
From: Simon Glass <sjg@chromium.org>

Factor out the patchwork URL resolution and project lookup into a
shared helper function. This reduces duplication and makes the logic
available to other commands such as the upcoming review command.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/control.py | 61 ++++++++++++++++++++++++++---------------
 1 file changed, 39 insertions(+), 22 deletions(-)

diff --git a/tools/patman/control.py b/tools/patman/control.py
index 0c3a3097967..53963713a8b 100644
--- a/tools/patman/control.py
+++ b/tools/patman/control.py
@@ -111,6 +111,42 @@ def patchwork_status(branch, count, start, end, dest_branch, force,
                      show_comments, False, pwork)


+def _setup_patchwork(cser, pwork, ups, pw_url):
+    """Set up a Patchwork instance from upstream and project settings
+
+    Args:
+        cser (Cseries): Open cseries instance
+        pwork (Patchwork or None): Existing instance, or None to create
+        ups (str or None): Upstream name
+        pw_url (str or None): Patchwork URL override
+
+    Returns:
+        Patchwork: Configured instance
+
+    Raises:
+        ValueError: if the URL or project cannot be resolved
+    """
+    if pwork:
+        return pwork
+    if not pw_url and ups:
+        pw_url = cser.db.upstream_get_patchwork_url(ups)
+    if not pw_url:
+        raise ValueError(
+            'No patchwork URL found; use -U/--upstream or '
+            "configure with 'patman upstream add'")
+    pwork = Patchwork(pw_url)
+    proj = cser.project_get(ups)
+    if not proj:
+        proj = cser.project_get()
+    if not proj:
+        raise ValueError(
+            "Patchwork project not configured; use "
+            "'patman patchwork set-project'")
+    _, proj_id, link_name = proj
+    pwork.project_set(proj_id, link_name)
+    return pwork
+
+
 def do_series(args, test_db=None, pwork=None, cser=None):
     """Process a series subcommand

@@ -133,28 +169,9 @@ def do_series(args, test_db=None, pwork=None, cser=None):
     try:
         cser.open_database()
         if args.subcmd in needs_patchwork:
-            if not pwork:
-                ups = cser.get_series_upstream(args.series)
-                pw_url = None
-                if ups:
-                    pw_url = cser.db.upstream_get_patchwork_url(ups)
-                if not pw_url:
-                    pw_url = args.patchwork_url
-                if not pw_url:
-                    raise ValueError(
-                        'No patchwork URL found for upstream '
-                        f"'{ups}'; use 'patman upstream add' with "
-                        '-p or pass --patchwork-url')
-                pwork = Patchwork(pw_url)
-                proj = cser.project_get(ups)
-                if not proj:
-                    proj = cser.project_get()
-                if not proj:
-                    raise ValueError(
-                        "Please set project ID with "
-                        "'patman patchwork set-project'")
-                _, proj_id, link_name = proj
-                pwork.project_set(proj_id, link_name)
+            ups = cser.get_series_upstream(args.series)
+            pwork = _setup_patchwork(
+                cser, pwork, ups, args.patchwork_url)
         elif pwork and pwork is not True:
             raise ValueError(
                 f"Internal error: command '{args.subcmd}' should not have patchwork")
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

When a default patchwork-url is set in ~/.patman, _setup_patchwork()
only checks the per-upstream URL when no default is configured. This
causes autolink to search the wrong patchwork server when the series
has a specific upstream with its own URL.

Fix _setup_patchwork() to always check the per-upstream URL and
override the config default when one is found. Warn when the upstream
URL differs from the configured default.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/control.py | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/tools/patman/control.py b/tools/patman/control.py
index 53963713a8b..a37893785ad 100644
--- a/tools/patman/control.py
+++ b/tools/patman/control.py
@@ -128,16 +128,28 @@ def _setup_patchwork(cser, pwork, ups, pw_url):
     """
     if pwork:
         return pwork
-    if not pw_url and ups:
-        pw_url = cser.db.upstream_get_patchwork_url(ups)
+    tout.debug(f'_setup_patchwork: ups={ups!r} pw_url={pw_url!r}')
+    if ups:
+        ups_url = cser.db.upstream_get_patchwork_url(ups)
+        if ups_url:
+            if pw_url and pw_url != ups_url:
+                tout.info(f'  Overriding {pw_url!r} with upstream'
+                          f' {ups!r} URL {ups_url!r}')
+            pw_url = ups_url
+        tout.debug(f'  URL from upstream {ups!r}: {pw_url!r}')
     if not pw_url:
         raise ValueError(
             'No patchwork URL found; use -U/--upstream or '
             "configure with 'patman upstream add'")
     pwork = Patchwork(pw_url)
     proj = cser.project_get(ups)
+    tout.debug(f'  project_get({ups!r}): {proj!r}')
     if not proj:
         proj = cser.project_get()
+        tout.debug(f'  project_get(None) fallback: {proj!r}')
+        if proj:
+            tout.warning(f"No patchwork project for upstream '{ups}';"
+                         f' using default project {proj[0]} (ID {proj[1]})')
     if not proj:
         raise ValueError(
             "Patchwork project not configured; use "
-- 
2.43.0
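The selection rule this patch introduces can be reduced to a single
preference: a per-upstream URL, when one exists, wins over the config
default. A minimal sketch under that assumption (the function and its
names are illustrative, not patman's API):

```python
def choose_patchwork_url(config_default, upstream_url):
    """Prefer the per-upstream patchwork URL over the config default.

    Sketch of the rule in the patch above: if the upstream has its own
    URL, it overrides ~/.patman's patchwork-url; otherwise the default
    is used, and None means no URL could be resolved at all.
    """
    return upstream_url or config_default


# Upstream-specific URL overrides the default...
print(choose_patchwork_url('https://patchwork.ozlabs.org',
                           'https://patchwork.kernel.org'))
# ...and the default is the fallback when the upstream has none
print(choose_patchwork_url('https://patchwork.ozlabs.org', None))
```

The real helper additionally logs the override and raises ValueError
when both sources are empty.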
From: Simon Glass <sjg@chromium.org>

Add a database method to update the description for a specific series
version. This is needed so that autolink can use the correct search
term when the patch order changes between versions.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/database.py | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/patman/database.py b/tools/patman/database.py
index ec1daa1c073..437a05b3de0 100644
--- a/tools/patman/database.py
+++ b/tools/patman/database.py
@@ -657,6 +657,15 @@ class Database:  # pylint:disable=R0904
         if self.rowcount() != 1:
             raise ValueError(f'No ser_ver updated (svid {svid})')

+    def ser_ver_set_desc(self, svid, desc):
+        """Update the description for a series version
+
+        Args:
+            svid (int): ser_ver ID num
+            desc (str): Description text
+        """
+        self.execute('UPDATE ser_ver SET desc = ? WHERE id = ?', (desc, svid))
+
     def ser_ver_add(self, series_idnum, version, link=None, desc=None):
         """Add a new ser_ver record

-- 
2.43.0
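The new method is a single parameterised UPDATE. A self-contained
sketch against an in-memory SQLite database, using a hypothetical
miniature of the ser_ver table (only the two columns the patch
touches; the real schema has more):

```python
import sqlite3

# Invented two-column stand-in for patman's ser_ver table
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE ser_ver (id INTEGER PRIMARY KEY, desc TEXT)')
con.execute("INSERT INTO ser_ver (id, desc) VALUES (1, 'old description')")


def ser_ver_set_desc(con, svid, desc):
    """Update the description for a series version (sketch of the patch)"""
    # '?' placeholders keep the update safe against quoting issues
    con.execute('UPDATE ser_ver SET desc = ? WHERE id = ?', (desc, svid))


ser_ver_set_desc(con, 1, 'new description')
row = con.execute('SELECT desc FROM ser_ver WHERE id = 1').fetchone()
print(row[0])
```

The real Database class routes the statement through its own
execute() wrapper and commits separately, but the SQL is the same.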
From: Simon Glass <sjg@chromium.org>

The series-level description can become stale when the patch order
changes between versions. Use the per-version description (stored in
ser_ver) when available, falling back to the series-level description.

Also set the per-version description during increment() and scan() so
it is available for future autolink searches.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/cseries.py | 34 +++++++++++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/tools/patman/cseries.py b/tools/patman/cseries.py
index e75e55c2764..5569eb6f4f5 100644
--- a/tools/patman/cseries.py
+++ b/tools/patman/cseries.py
@@ -191,7 +191,19 @@ class Cseries(cser_helper.CseriesHelper):
         old_svid = self.get_series_svid(ser.idnum, max_vers)
         pcd = self.get_pcommit_dict(old_svid)

-        svid = self.db.ser_ver_add(ser.idnum, vers)
+        # Set the per-version description from the cover letter or
+        # first commit subject, so autolink can find the series on
+        # patchwork even if the patch order changes
+        meta_name = new_name if not dry_run else branch_name
+        new_series = patchstream.get_metadata(meta_name, 0, count,
+                                              git_dir=self.gitdir)
+        if new_series.get('cover'):
+            sv_desc = new_series.cover[0]  # pylint: disable=E1136
+        elif new_series.commits:
+            sv_desc = new_series.commits[0].subject
+        else:
+            sv_desc = None
+        svid = self.db.ser_ver_add(ser.idnum, vers, desc=sv_desc)
         self.db.pcommit_add_list(svid, pcd.values())
         if not dry_run:
             self.commit()
@@ -260,13 +272,18 @@ class Cseries(cser_helper.CseriesHelper):
             str: series description
         """
         _, ser, version, _, _, _, _, _ = self._get_patches(series, version)
+        svinfo = self.get_ser_ver(ser.idnum, version)

-        if not ser.desc:
+        # Use the per-version description if available, since the
+        # series-level desc may be stale (e.g. patch order changed)
+        desc = svinfo.desc or ser.desc
+        if not desc:
             raise ValueError(f"Series '{ser.name}' has an empty description")
+        ser.desc = desc

         pws, options = self.loop.run_until_complete(pwork.find_series(
             ser, version))
-        return pws, options, ser.name, version, ser.desc
+        return pws, options, ser.name, version, desc

     def link_auto(self, pwork, series, version, update_commit, wait_s=0):
         """Automatically find a series link by looking in patchwork
@@ -939,6 +956,17 @@ class Cseries(cser_helper.CseriesHelper):
                 self.db.series_set_desc(ser.idnum, branch_desc)
                 tout.notice(f"Updated description to '{branch_desc}'")

+        # Update per-version description from cover letter or first
+        # commit, so autolink uses the right search term
+        if ser.cover:
+            sv_desc = ser.cover[0]  # pylint: disable=E1136
+        elif ser.commits:
+            sv_desc = ser.commits[0].subject
+        else:
+            sv_desc = None
+        if sv_desc:
+            self.db.ser_ver_set_desc(svid, sv_desc)
+
         if not dry_run:
             self.commit()
         seq = len(ser.commits)
-- 
2.43.0
From: Simon Glass <sjg@chromium.org>

The Claude Agent SDK helper functions (check_available(),
run_agent_collect(), etc.) are useful beyond just pickman. Move them
to a shared u_boot_pylib.claude module so that other tools like patman
can also use Claude agents without duplicating the code.

Update pickman/agent.py to import from the shared module instead of
defining its own copies. Add unit tests for the new module covering
availability detection, output collection and error handling.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/pickman/agent.py            |  58 +++------------
 tools/pickman/ftest.py            |  43 ++++++------
 tools/u_boot_pylib/__init__.py    |   4 +-
 tools/u_boot_pylib/__main__.py    |   4 +-
 tools/u_boot_pylib/claude.py      |  64 +++++++++++++++++
 tools/u_boot_pylib/test_claude.py | 111 ++++++++++++++++++++++++++++++
 6 files changed, 211 insertions(+), 73 deletions(-)
 create mode 100644 tools/u_boot_pylib/claude.py
 create mode 100644 tools/u_boot_pylib/test_claude.py

diff --git a/tools/pickman/agent.py b/tools/pickman/agent.py
index 7e482a7b8ee..f20e0f2fda5 100644
--- a/tools/pickman/agent.py
+++ b/tools/pickman/agent.py
@@ -27,8 +27,14 @@ SIGNAL_SUCCESS = 'success'
 SIGNAL_APPLIED = 'already_applied'
 SIGNAL_CONFLICT = 'conflict'

-# Maximum buffer size for agent responses
-MAX_BUFFER_SIZE = 10 * 1024 * 1024  # 10MB
+# Import common Claude agent utilities from shared module
+from u_boot_pylib.claude import (
+    AGENT_AVAILABLE, MAX_BUFFER_SIZE, check_available, run_agent_collect,
+)
+
+ClaudeAgentOptions = None
+if AGENT_AVAILABLE:
+    from u_boot_pylib.claude import ClaudeAgentOptions  # pylint: disable=C0412

 # Commits that need special handling (regenerate instead of cherry-pick)
 # These run savedefconfig on all boards and depend on target branch
@@ -37,54 +43,6 @@ QCONFIG_SUBJECTS = [
     'configs: Resync with savedefconfig',
 ]

-# Check if claude_agent_sdk is available
-try:
-    from claude_agent_sdk import query, ClaudeAgentOptions
-    AGENT_AVAILABLE = True
-except ImportError:
-    AGENT_AVAILABLE = False
-
-
-def check_available():
-    """Check if the Claude Agent SDK is available
-
-    Returns:
-        bool: True if available, False otherwise
-    """
-    if not AGENT_AVAILABLE:
-        tout.error('Claude Agent SDK not available')
-        tout.error('Install with: pip install claude-agent-sdk')
-        return False
-    return True
-
-
-async def run_agent_collect(prompt, options):
-    """Run a Claude agent and collect its conversation log
-
-    Sends the prompt to a Claude agent, streams output to stdout and
-    collects all text blocks into a conversation log.
-
-    Args:
-        prompt (str): The prompt to send to the agent
-        options (ClaudeAgentOptions): Agent configuration
-
-    Returns:
-        tuple: (success, conversation_log) where success is bool and
-            conversation_log is the agent's output text
-    """
-    conversation_log = []
-    try:
-        async for message in query(prompt=prompt, options=options):
-            if hasattr(message, 'content'):
-                for block in message.content:
-                    if hasattr(block, 'text'):
-                        print(block.text)
-                        conversation_log.append(block.text)
-        return True, '\n\n'.join(conversation_log)
-    except (RuntimeError, ValueError, OSError) as exc:
-        tout.error(f'Agent failed: {exc}')
-        return False, '\n\n'.join(conversation_log)
-

 def is_qconfig_commit(subject):
     """Check if a commit subject indicates a qconfig resync commit
diff --git a/tools/pickman/ftest.py b/tools/pickman/ftest.py
index 261ca4cd2d5..e5c1ceb1359 100644
--- a/tools/pickman/ftest.py
+++ b/tools/pickman/ftest.py
@@ -3121,7 +3121,8 @@ class TestRunAgentCollect(unittest.TestCase):
         async def fake_query(**kwargs):
             yield msg

-        with mock.patch.object(agent, 'query', fake_query, create=True):
+        with mock.patch('u_boot_pylib.claude.query', fake_query,
+                        create=True):
             with terminal.capture():
                 opts = mock.MagicMock()
                 success, log = asyncio.run(
@@ -3141,7 +3142,8 @@ class TestRunAgentCollect(unittest.TestCase):
             yield msg
             raise RuntimeError('agent crashed')

-        with mock.patch.object(agent, 'query', fake_query, create=True):
+        with mock.patch('u_boot_pylib.claude.query', fake_query,
+                        create=True):
             with terminal.capture():
                 opts = mock.MagicMock()
                 success, log = asyncio.run(
@@ -3157,7 +3159,8 @@ class TestRunAgentCollect(unittest.TestCase):
         async def fake_query(**kwargs):
             yield msg

-        with mock.patch.object(agent, 'query', fake_query, create=True):
+        with mock.patch('u_boot_pylib.claude.query', fake_query,
+                        create=True):
             with terminal.capture():
                 opts = mock.MagicMock()
                 success, log = asyncio.run(
@@ -6610,14 +6613,14 @@ class TestResolveSubtreeConflicts(unittest.TestCase):
         """Test successful conflict resolution."""
         mock_collect = mock.AsyncMock(return_value=(True, 'resolved'))
         with terminal.capture():
-            with mock.patch.object(agent, 'AGENT_AVAILABLE', True):
-                with mock.patch.object(agent, 'run_agent_collect',
-                                       mock_collect):
-                    with mock.patch.object(agent, 'ClaudeAgentOptions',
-                                           create=True):
-                        success, log = agent.resolve_subtree_conflicts(
-                            'dts', 'v6.15-dts', 'dts/upstream',
-                            '/tmp/test')
+            with mock.patch('u_boot_pylib.claude.AGENT_AVAILABLE', True), \
+                    mock.patch.object(agent, 'run_agent_collect',
+                                      mock_collect), \
+                    mock.patch.object(agent, 'ClaudeAgentOptions',
+                                      create=True):
+                success, log = agent.resolve_subtree_conflicts(
+                    'dts', 'v6.15-dts', 'dts/upstream',
+                    '/tmp/test')
         self.assertTrue(success)
         self.assertEqual(log, 'resolved')

@@ -6625,20 +6628,20 @@ class TestResolveSubtreeConflicts(unittest.TestCase):
         """Test failed conflict resolution."""
         mock_collect = mock.AsyncMock(return_value=(False, 'failed'))
         with terminal.capture():
-            with mock.patch.object(agent, 'AGENT_AVAILABLE', True):
-                with mock.patch.object(agent, 'run_agent_collect',
-                                       mock_collect):
-                    with mock.patch.object(agent, 'ClaudeAgentOptions',
-                                           create=True):
-                        success, log = agent.resolve_subtree_conflicts(
-                            'dts', 'v6.15-dts', 'dts/upstream',
-                            '/tmp/test')
+            with mock.patch('u_boot_pylib.claude.AGENT_AVAILABLE', True), \
+                    mock.patch.object(agent, 'run_agent_collect',
+                                      mock_collect), \
mock.patch.object(agent, 'ClaudeAgentOptions', + create=True): + success, log = agent.resolve_subtree_conflicts( + 'dts', 'v6.15-dts', 'dts/upstream', + '/tmp/test') self.assertFalse(success) def test_sdk_unavailable(self): """Test returns failure when SDK is not available.""" with terminal.capture(): - with mock.patch.object(agent, 'AGENT_AVAILABLE', False): + with mock.patch('u_boot_pylib.claude.AGENT_AVAILABLE', False): success, log = agent.resolve_subtree_conflicts( 'dts', 'v6.15-dts', 'dts/upstream', '/tmp/test') self.assertFalse(success) diff --git a/tools/u_boot_pylib/__init__.py b/tools/u_boot_pylib/__init__.py index 807a62e0743..c176e332a51 100644 --- a/tools/u_boot_pylib/__init__.py +++ b/tools/u_boot_pylib/__init__.py @@ -1,4 +1,4 @@ # SPDX-License-Identifier: GPL-2.0+ -__all__ = ['command', 'cros_subprocess', 'gitutil', 'terminal', 'test_util', - 'tools', 'tout'] +__all__ = ['claude', 'command', 'cros_subprocess', 'gitutil', 'terminal', + 'test_util', 'tools', 'tout'] diff --git a/tools/u_boot_pylib/__main__.py b/tools/u_boot_pylib/__main__.py index 6b9f4f3d950..5687f9b51a5 100755 --- a/tools/u_boot_pylib/__main__.py +++ b/tools/u_boot_pylib/__main__.py @@ -28,12 +28,14 @@ def run_tests(): help='Verbose output') args = parser.parse_args() + from u_boot_pylib import test_claude + to_run = args.testname if args.testname not in [None, 'test'] else None result = test_util.run_test_suites( 'u_boot_pylib', False, args.verbose, False, False, None, to_run, None, ['u_boot_pylib.terminal', 'u_boot_pylib.gitutil', - cros_subprocess.TestSubprocess]) + cros_subprocess.TestSubprocess, test_claude.TestClaude]) sys.exit(0 if result.wasSuccessful() else 1) diff --git a/tools/u_boot_pylib/claude.py b/tools/u_boot_pylib/claude.py new file mode 100644 index 00000000000..29dff1d1d4b --- /dev/null +++ b/tools/u_boot_pylib/claude.py @@ -0,0 +1,64 @@ +# SPDX-License-Identifier: GPL-2.0+ +# +# Copyright 2025 Canonical Ltd. 
+# Written by Simon Glass <simon.glass@canonical.com> +# + +"""Common Claude Agent SDK utilities. + +Provides shared functions for running Claude agents across tools that need +AI assistance (e.g. pickman, patman review). +""" + +from u_boot_pylib import tout + +# Maximum buffer size for agent responses +MAX_BUFFER_SIZE = 10 * 1024 * 1024 # 10MB + +# Check if claude_agent_sdk is available +try: + from claude_agent_sdk import query, ClaudeAgentOptions + AGENT_AVAILABLE = True +except ImportError: + AGENT_AVAILABLE = False + + +def check_available(): + """Check if the Claude Agent SDK is available + + Returns: + bool: True if available, False otherwise + """ + if not AGENT_AVAILABLE: + tout.error('Claude Agent SDK not available') + tout.error('Install with: pip install claude-agent-sdk') + return False + return True + + +async def run_agent_collect(prompt, options): + """Run a Claude agent and collect its conversation log + + Sends the prompt to a Claude agent, streams output to stdout and + collects all text blocks into a conversation log. + + Args: + prompt (str): The prompt to send to the agent + options (ClaudeAgentOptions): Agent configuration + + Returns: + tuple: (success, conversation_log) where success is bool and + conversation_log is the agent's output text + """ + conversation_log = [] + try: + async for message in query(prompt=prompt, options=options): + if hasattr(message, 'content'): + for block in message.content: + if hasattr(block, 'text'): + print(block.text) + conversation_log.append(block.text) + return True, '\n\n'.join(conversation_log) + except (RuntimeError, ValueError, OSError) as exc: + tout.error(f'Agent failed: {exc}') + return False, '\n\n'.join(conversation_log) diff --git a/tools/u_boot_pylib/test_claude.py b/tools/u_boot_pylib/test_claude.py new file mode 100644 index 00000000000..c564dddb70e --- /dev/null +++ b/tools/u_boot_pylib/test_claude.py @@ -0,0 +1,111 @@ +# SPDX-License-Identifier: GPL-2.0+ +# +# Copyright 2025 Canonical Ltd. 
+# + +"""Tests for the Claude Agent SDK utilities module.""" + +import asyncio +import unittest +from unittest.mock import MagicMock + +from u_boot_pylib import claude +from u_boot_pylib import terminal + + +class TestClaude(unittest.TestCase): + """Tests for u_boot_pylib.claude""" + + def test_check_available_when_sdk_missing(self): + """check_available() returns False when SDK is not installed""" + if not claude.AGENT_AVAILABLE: + with terminal.capture(): + self.assertFalse(claude.check_available()) + + def test_check_available_when_sdk_present(self): + """check_available() returns True when SDK is installed""" + old = claude.AGENT_AVAILABLE + try: + claude.AGENT_AVAILABLE = True + self.assertTrue(claude.check_available()) + finally: + claude.AGENT_AVAILABLE = old + + def test_max_buffer_size(self): + """MAX_BUFFER_SIZE is defined and reasonable""" + self.assertEqual(claude.MAX_BUFFER_SIZE, 10 * 1024 * 1024) + + def _setup_claude_with_mock_query(self, mock_query): + """Inject a mock query function into the claude module""" + claude.query = mock_query + + def test_run_agent_collect_success(self): + """run_agent_collect() collects text from agent messages""" + block1 = MagicMock() + block1.text = 'Hello' + msg1 = MagicMock() + msg1.content = [block1] + + block2 = MagicMock() + block2.text = 'World' + msg2 = MagicMock() + msg2.content = [block2] + + # pylint: disable=W0613 + async def mock_query(**kwargs): + for msg in [msg1, msg2]: + yield msg + + self._setup_claude_with_mock_query(mock_query) + loop = asyncio.new_event_loop() + with terminal.capture(): + success, log = loop.run_until_complete( + claude.run_agent_collect('test prompt', MagicMock())) + loop.close() + + self.assertTrue(success) + self.assertIn('Hello', log) + self.assertIn('World', log) + + def test_run_agent_collect_handles_error(self): + """run_agent_collect() returns False on agent failure""" + # pylint: disable=W0613 + async def mock_query(**kwargs): + raise RuntimeError('Agent crashed') + yield 
# pylint: disable=W0101 + + self._setup_claude_with_mock_query(mock_query) + loop = asyncio.new_event_loop() + with terminal.capture(): + success, _ = loop.run_until_complete( + claude.run_agent_collect('test prompt', MagicMock())) + loop.close() + + self.assertFalse(success) + + def test_run_agent_collect_skips_non_text_blocks(self): + """run_agent_collect() ignores blocks without text attribute""" + text_block = MagicMock() + text_block.text = 'Real text' + tool_block = MagicMock(spec=[]) # No text attribute + + msg = MagicMock() + msg.content = [tool_block, text_block] + + # pylint: disable=W0613 + async def mock_query(**kwargs): + yield msg + + self._setup_claude_with_mock_query(mock_query) + loop = asyncio.new_event_loop() + with terminal.capture(): + success, log = loop.run_until_complete( + claude.run_agent_collect('test prompt', MagicMock())) + loop.close() + + self.assertTrue(success) + self.assertIn('Real text', log) + + +if __name__ == '__main__': + unittest.main() -- 2.43.0
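[Editorial note, not part of the patch: the collection loop that this patch moves into u_boot_pylib.claude can be exercised without the SDK installed. A minimal sketch of run_agent_collect()'s success path, using SimpleNamespace stand-ins for SDK messages; the real function also streams each block to stdout and reports failures via tout.]

```python
import asyncio
from types import SimpleNamespace


async def fake_query():
    """Stand-in for claude_agent_sdk.query(), yielding two text blocks"""
    yield SimpleNamespace(content=[SimpleNamespace(text='Hello')])
    yield SimpleNamespace(content=[SimpleNamespace(text='World')])


async def collect(stream):
    """Mirror of run_agent_collect()'s text-collection loop"""
    log = []
    async for message in stream:
        if hasattr(message, 'content'):
            for block in message.content:
                if hasattr(block, 'text'):
                    log.append(block.text)
    return '\n\n'.join(log)


result = asyncio.run(collect(fake_query()))
# result == 'Hello\n\nWorld'
```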
From: Simon Glass <sjg@chromium.org> Rename _query_series() to query_series() and _get_patch_comments() to get_patch_comments() so that review.py can call them without accessing protected members. Update all internal callers. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/patchwork.py | 10 +++++----- tools/patman/status.py | 2 +- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/tools/patman/patchwork.py b/tools/patman/patchwork.py index 5192b817968..de676161eee 100644 --- a/tools/patman/patchwork.py +++ b/tools/patman/patchwork.py @@ -270,7 +270,7 @@ class Patchwork: async with aiohttp.ClientSession() as client: return await self._request(client, 'projects/') - async def _query_series(self, client, desc): + async def query_series(self, client, desc): """Query series by name Args: @@ -306,7 +306,7 @@ class Patchwork: name_found = [] # Do a series query on the description - res = await self._query_series(client, desc) + res = await self.query_series(client, desc) for pws in res: if pws['name'] == desc: if int(pws['version']) == version: @@ -317,7 +317,7 @@ class Patchwork: # series name cmt = ser.commits[0] - res = await self._query_series(client, cmt.subject) + res = await self.query_series(client, cmt.subject) for pws in res: patch = Patch(0) patch.parse_subject(pws['name']) @@ -523,7 +523,7 @@ class Patchwork: """ return await self._request(client, f'patches/{patch_id}/') - async def _get_patch_comments(self, client, patch_id): + async def get_patch_comments(self, client, patch_id): """Read comments about a patch Args: @@ -783,7 +783,7 @@ On Tue, 4 Mar 2025 at 06:09, Simon Glass <sjg@chromium.org> wrote: """ data = await self.get_patch(client, patch_id) state = data['state'] - comment_data = await self._get_patch_comments(client, patch_id) + comment_data = await self.get_patch_comments(client, patch_id) return Patch(patch_id, state, data, comment_data) diff --git a/tools/patman/status.py b/tools/patman/status.py index 
967fef3ad6e..2828e4a3bfc 100644 --- a/tools/patman/status.py +++ b/tools/patman/status.py @@ -28,7 +28,7 @@ def process_reviews(content, comment_data, base_rtags): Args: content (str): Content text of the patch itself - see pwork.get_patch() comment_data (list of dict): Comments for the patch - see - pwork._get_patch_comments() + pwork.get_patch_comments() base_rtags (dict): base review tags (before any comments) key: Response tag (e.g. 'Reviewed-by') value: Set of people who gave that response, each a name/email -- 2.43.0
From: Simon Glass <sjg@chromium.org> Add tout.debug() calls throughout the autolink pipeline so failures can be diagnosed with -v. Log the patchwork URL resolution, project lookup, series queries and autolink progress. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/control.py | 1 + tools/patman/cseries.py | 8 ++++++++ tools/patman/patchwork.py | 5 +++-- 3 files changed, 12 insertions(+), 2 deletions(-) diff --git a/tools/patman/control.py b/tools/patman/control.py index a37893785ad..61379cc108e 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -182,6 +182,7 @@ def do_series(args, test_db=None, pwork=None, cser=None): cser.open_database() if args.subcmd in needs_patchwork: ups = cser.get_series_upstream(args.series) + tout.debug(f'Series upstream: {ups!r}') pwork = _setup_patchwork( cser, pwork, ups, args.patchwork_url) elif pwork and pwork is not True: diff --git a/tools/patman/cseries.py b/tools/patman/cseries.py index 5569eb6f4f5..1dd1550f367 100644 --- a/tools/patman/cseries.py +++ b/tools/patman/cseries.py @@ -302,11 +302,19 @@ class Cseries(cser_helper.CseriesHelper): stop = start + wait_s sleep_time = 5 last_options = None + first = True while True: pws, options, name, version, desc = self.link_search( pwork, series, version) + if first: + tout.debug(f"Autolinking series '{name}' v{version}" + f" (timeout {wait_s}s)") + first = False + tout.debug(f"Searching {pwork.url} project {pwork.proj_id}" + f" for '{desc}'") if pws: tout.clear_progress() + tout.debug(f'Found link: {pws}') if wait_s: tout.notice('Link completed after ' f'{self.get_time() - start} seconds') diff --git a/tools/patman/patchwork.py b/tools/patman/patchwork.py index de676161eee..3e8f7c6c62c 100644 --- a/tools/patman/patchwork.py +++ b/tools/patman/patchwork.py @@ -281,8 +281,9 @@ class Patchwork: list of series matches, each a dict, see get_series() """ query = desc.replace(' ', '+') - return await self._request( - client, 
f'series/?project={self.proj_id}&q={query}') + subpath = f'series/?project={self.proj_id}&q={query}' + tout.debug(f' GET {self.url}/api/1.2/{subpath}') + return await self._request(client, subpath) async def _find_series(self, client, svid, ser_id, version, ser): """Find a series on the server -- 2.43.0
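[Editorial note, not part of the patch: the GET line now logged by query_series() shows how the search subpath is built. A sketch of just that construction, with names taken from the diff; the real method passes the subpath on to _request().]

```python
def series_query_subpath(proj_id, desc):
    """Build the patchwork series-search subpath logged by query_series()"""
    # Spaces in the description become '+' for use as a q= URL parameter
    query = desc.replace(' ', '+')
    return f'series/?project={proj_id}&q={query}'


print(series_query_subpath(6, 'fix autolink'))
# series/?project=6&q=fix+autolink
```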
From: Simon Glass <sjg@chromium.org> The Claude Agent SDK raises a bare Exception on API errors (e.g. 500 Internal Server Error), but run_agent_collect() only catches RuntimeError, ValueError and OSError. This causes the entire review to crash when a single agent call fails, even though the review loop already handles the failure gracefully with a placeholder message. Catch bare Exceptions whose message indicates an API or process error (containing 'API Error' or 'exit code') while re-raising unexpected exceptions that indicate real bugs. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/u_boot_pylib/claude.py | 5 +++++ tools/u_boot_pylib/test_claude.py | 31 +++++++++++++++++++++++++++++++ 2 files changed, 36 insertions(+) diff --git a/tools/u_boot_pylib/claude.py b/tools/u_boot_pylib/claude.py index 29dff1d1d4b..6ba980b6b2e 100644 --- a/tools/u_boot_pylib/claude.py +++ b/tools/u_boot_pylib/claude.py @@ -62,3 +62,8 @@ async def run_agent_collect(prompt, options): except (RuntimeError, ValueError, OSError) as exc: tout.error(f'Agent failed: {exc}') return False, '\n\n'.join(conversation_log) + except Exception as exc: + if 'API Error' in str(exc) or 'exit code' in str(exc): + tout.error(f'Agent failed: {exc}') + return False, '\n\n'.join(conversation_log) + raise diff --git a/tools/u_boot_pylib/test_claude.py b/tools/u_boot_pylib/test_claude.py index c564dddb70e..2a666e6c396 100644 --- a/tools/u_boot_pylib/test_claude.py +++ b/tools/u_boot_pylib/test_claude.py @@ -83,6 +83,37 @@ class TestClaude(unittest.TestCase): self.assertFalse(success) + def test_run_agent_collect_handles_api_error(self): + """run_agent_collect() catches SDK API errors""" + # pylint: disable=W0613,W0719 + async def mock_query(**kwargs): + raise Exception( + 'Command failed with exit code 1 (exit code: 1)') + yield # pylint: disable=W0101 + + self._setup_claude_with_mock_query(mock_query) + loop = asyncio.new_event_loop() + with terminal.capture(): + success, _ = loop.run_until_complete( + 
claude.run_agent_collect('test prompt', MagicMock())) + loop.close() + + self.assertFalse(success) + + def test_run_agent_collect_reraises_unknown(self): + """run_agent_collect() re-raises unexpected exceptions""" + # pylint: disable=W0613 + async def mock_query(**kwargs): + raise TypeError('unexpected bug') + yield # pylint: disable=W0101 + + self._setup_claude_with_mock_query(mock_query) + loop = asyncio.new_event_loop() + with self.assertRaises(TypeError): + loop.run_until_complete( + claude.run_agent_collect('test prompt', MagicMock())) + loop.close() + def test_run_agent_collect_skips_non_text_blocks(self): """run_agent_collect() ignores blocks without text attribute""" text_block = MagicMock() -- 2.43.0
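[Editorial note, not part of the patch: the substring matching in the new except clause can be read as a predicate deciding which broad Exceptions are swallowed. A sketch assuming only the two markers the patch checks ('API Error' and 'exit code'); anything else is treated as a real bug and re-raised.]

```python
def is_agent_api_error(exc):
    """True if exc looks like a transient Claude SDK/API failure that
    run_agent_collect() should report and swallow rather than re-raise"""
    msg = str(exc)
    return 'API Error' in msg or 'exit code' in msg


# Swallowed: SDK process/API failures
assert is_agent_api_error(Exception('API Error: 500 Internal Server Error'))
assert is_agent_api_error(Exception('Command failed with exit code 1'))
# Re-raised: likely a genuine programming error
assert not is_agent_api_error(TypeError('unexpected bug'))
```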
From: Simon Glass <sjg@chromium.org> There is no way to remove a patchwork project configuration once it is added. Add a 'patchwork rm' subcommand that deletes the project entry for a given upstream, or the default entry if no upstream is specified. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/cmdline.py | 4 ++++ tools/patman/control.py | 5 +++++ tools/patman/database.py | 20 ++++++++++++++++++++ tools/patman/test_cseries.py | 32 ++++++++++++++++++++++++++++++++ 4 files changed, 61 insertions(+) diff --git a/tools/patman/cmdline.py b/tools/patman/cmdline.py index 26626f5b441..3843811729e 100644 --- a/tools/patman/cmdline.py +++ b/tools/patman/cmdline.py @@ -173,6 +173,10 @@ def add_patchwork_subparser(subparsers): uset.add_argument( 'remote', nargs='?', help='Remote to associate with this project') + pdel = patchwork_subparsers.add_parser('rm') + pdel.add_argument( + 'remote', nargs='?', + help='Remote to delete the project for, or omit for the default') patchwork_subparsers.add_parser('ls', aliases=['list']) return patchwork diff --git a/tools/patman/control.py b/tools/patman/control.py index 61379cc108e..f1e52d7944a 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -367,6 +367,11 @@ def patchwork(args, test_db=None, pwork=None): if ups: msg += f" remote '{ups}'" print(msg) + elif args.subcmd == 'rm': + cser.db.patchwork_delete(args.remote) + cser.commit() + ups_str = f" for upstream '{args.remote}'" if args.remote else '' + tout.info(f'Deleted patchwork project{ups_str}') elif args.subcmd == 'ls': cser.project_list() else: diff --git a/tools/patman/database.py b/tools/patman/database.py index 437a05b3de0..725d13253d5 100644 --- a/tools/patman/database.py +++ b/tools/patman/database.py @@ -1041,6 +1041,26 @@ class Database: # pylint:disable=R0904 'INSERT INTO patchwork (name, proj_id, link_name, upstream) ' 'VALUES (?, ?, ?, ?)', (name, proj_id, link_name, ups)) + def patchwork_delete(self, ups): + """Delete a patchwork 
project configuration + + Args: + ups (str or None): Upstream name to delete, or None for the + entry with no upstream + + Raises: + ValueError: if no matching entry exists + """ + if ups is not None: + self.execute( + 'DELETE FROM patchwork WHERE upstream = ?', (ups,)) + else: + self.execute( + 'DELETE FROM patchwork WHERE upstream IS NULL') + if not self.rowcount(): + raise ValueError( + f"No patchwork project found for upstream '{ups}'") + def patchwork_get_list(self): """Get all patchwork project configurations diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py index 0b6e6ec9e32..96c1d62486c 100644 --- a/tools/patman/test_cseries.py +++ b/tools/patman/test_cseries.py @@ -2667,6 +2667,38 @@ Tested-by: Mary Smith <msmith@wibble.com> # yak self.assertEqual(('Linux', 10, 'linux'), cser.db.patchwork_get('ci')) self.assertEqual(('U-Boot', 6, 'uboot'), cser.db.patchwork_get('us')) + def test_patchwork_rm(self): + """Test deleting a patchwork project configuration""" + cser = self.get_cser() + + cser.db.upstream_add('us', 'https://us.example.com') + cser.db.upstream_add('ci', 'https://ci.example.com') + cser.db.patchwork_update('U-Boot', 6, 'uboot', 'us') + cser.db.patchwork_update('Linux', 10, 'linux', 'ci') + cser.db.commit() + + # Delete by upstream name + cser.db.patchwork_delete('us') + cser.db.commit() + self.assertIsNone(cser.db.patchwork_get('us')) + self.assertEqual(('Linux', 10, 'linux'), cser.db.patchwork_get('ci')) + + # Delete non-existent raises ValueError + with self.assertRaises(ValueError): + cser.db.patchwork_delete('us') + + def test_patchwork_rm_default(self): + """Test deleting the default (no upstream) patchwork project""" + cser = self.get_cser() + + cser.db.patchwork_update('U-Boot', 6, 'uboot') + cser.db.commit() + self.assertIsNotNone(cser.db.patchwork_get()) + + cser.db.patchwork_delete(None) + cser.db.commit() + self.assertIsNone(cser.db.patchwork_get()) + def test_migrate_patchwork_upstream(self): """Test that 
migrating to v5 renames settings to patchwork""" db = database.Database(f'{self.tmpdir}/.patman3.db') -- 2.43.0
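[Editorial note, not part of the patch: patchwork_delete() relies on the cursor rowcount to turn a no-op delete into an error. A self-contained sketch against an in-memory SQLite database, with the table trimmed to the columns the queries touch.]

```python
import sqlite3


def patchwork_delete(cur, ups):
    """Delete a patchwork project row, mirroring the new helper"""
    if ups is not None:
        cur.execute('DELETE FROM patchwork WHERE upstream = ?', (ups,))
    else:
        cur.execute('DELETE FROM patchwork WHERE upstream IS NULL')
    if not cur.rowcount:
        raise ValueError(f"No patchwork project found for upstream '{ups}'")


cur = sqlite3.connect(':memory:').cursor()
cur.execute('CREATE TABLE patchwork (name TEXT, proj_id INTEGER, '
            'link_name TEXT, upstream TEXT)')
cur.execute("INSERT INTO patchwork VALUES ('U-Boot', 6, 'uboot', 'us')")
patchwork_delete(cur, 'us')          # removes the row
try:
    patchwork_delete(cur, 'us')      # nothing left to delete
except ValueError:
    pass  # expected: rowcount was 0, so ValueError is raised
```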
From: Simon Glass <sjg@chromium.org> Add database support for tracking AI-assisted reviews of other people's patch series: - Add 'source' column to the series table to distinguish review series (source='review') from the user's own series (source=NULL) - Add 'review' table to store AI review results per patch, including the review body text, approval status and timestamp - Add helper methods for storing and retrieving reviews, including review_get_previous() for loading context from prior versions - Add series_find_by_link() and series_find_review_by_name() to detect when re-reviewing the same series or a new version of a previously reviewed series Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/database.py | 121 ++++++++++++++++++++++++++++++++++- tools/patman/test_cseries.py | 2 +- 2 files changed, 121 insertions(+), 2 deletions(-) diff --git a/tools/patman/database.py b/tools/patman/database.py index 725d13253d5..7f33137d0b7 100644 --- a/tools/patman/database.py +++ b/tools/patman/database.py @@ -2,6 +2,7 @@ # # Copyright 2025 Simon Glass <sjg@chromium.org> # +# pylint: disable=C0302 """Handles the patman database This uses sqlite3 with a local file. 
@@ -19,7 +20,12 @@ from u_boot_pylib import tout from patman.series import Series # Schema version (version 0 means there is no database yet) -LATEST = 7 +LATEST = 8 + +# Information about a review record +Review = namedtuple( + 'REVIEW', + 'idnum,svid,seq,body,approved,timestamp') # Information about a series/version record SerVer = namedtuple( @@ -223,6 +229,20 @@ class Database: # pylint:disable=R0904 self.cur.execute( 'ALTER TABLE workflow ADD COLUMN ser_ver_id INTEGER') + def _migrate_to_v8(self): + """Add review tracking and series source type + + - Add source column to series table (NULL for user's own series, + 'review' for series being reviewed) + - Add review table for storing AI review results per patch + """ + self.cur.execute('ALTER TABLE series ADD COLUMN source') + self.cur.execute( + 'CREATE TABLE review (id INTEGER PRIMARY KEY AUTOINCREMENT,' + 'svid INTEGER, seq INTEGER, body TEXT, approved BIT, ' + 'timestamp TEXT, ' + 'FOREIGN KEY (svid) REFERENCES ser_ver (id))') + def migrate_to(self, dest_version): """Migrate the database to the selected version @@ -255,6 +275,8 @@ class Database: # pylint:disable=R0904 self._migrate_to_v6() elif version == 7: self._migrate_to_v7() + elif version == 8: + self._migrate_to_v8() # Save the new version if we have a schema_version table if version > 1: @@ -842,6 +864,7 @@ class Database: # pylint:disable=R0904 # upstream functions + # pylint: disable=R0913 def upstream_add(self, name, url, patchwork_url=None, identity=None, series_to=None, no_maintainers=False, no_tags=False): """Add a new upstream record @@ -1200,3 +1223,99 @@ class Database: # pylint:disable=R0904 query += ' ORDER BY w.timestamp' res = self.execute(query) return res.fetchall() + + # pylint: disable=R0913 + def review_add(self, svid, seq, body, approved, timestamp): + """Add a review record + + Args: + svid (int): ser_ver ID num + seq (int): Patch sequence (0 for cover, 1..N for patches) + body (str): Review email body text + approved (bool): 
True if Reviewed-by was given + timestamp (str): ISO datetime string + + Return: + int: ID num of the new review record + """ + self.execute( + 'INSERT INTO review (svid, seq, body, approved, timestamp) ' + 'VALUES (?, ?, ?, ?, ?)', + (svid, seq, body, 1 if approved else 0, timestamp)) + return self.lastrowid() + + def review_get_for_version(self, svid): + """Get review records for a given series version + + Args: + svid (int): ser_ver ID num + + Return: + list of Review: Review records ordered by sequence + """ + res = self.execute( + 'SELECT id, svid, seq, body, approved, timestamp ' + 'FROM review WHERE svid = ? ORDER BY seq', (svid,)) + return [Review(*row) for row in res.fetchall()] + + def review_get_previous(self, series_id, version): + """Get reviews from the previous version of a series + + Looks up the ser_ver for version-1 and returns its reviews, so + they can be provided as context when reviewing a new version. + + Args: + series_id (int): Series ID + version (int): Current version being reviewed + + Return: + list of Review: Reviews from version-1, or empty list + """ + prev_version = version - 1 + if prev_version < 1: + return [] + res = self.execute( + 'SELECT sv.id FROM ser_ver sv ' + 'WHERE sv.series_id = ? 
AND sv.version = ?', + (series_id, prev_version)) + row = res.fetchone() + if not row: + return [] + return self.review_get_for_version(row[0]) + + def series_find_by_link(self, link): + """Find a series by its patchwork link + + Args: + link (str): Patchwork series link/ID + + Return: + tuple or None: (series_id, name, version, svid) if found + """ + res = self.execute( + 'SELECT s.id, s.name, sv.version, sv.id ' + 'FROM ser_ver sv ' + 'JOIN series s ON sv.series_id = s.id ' + 'WHERE sv.link = ?', (str(link),)) + return res.fetchone() + + def series_find_review_by_name(self, name): + """Find a review series by its name + + Looks for series with source='review' matching the given name, + so that new versions of a previously reviewed series can be + added under the same series record. + + Args: + name (str): Series name to search for + + Return: + tuple or None: (series_id, name, max_version) if found + """ + res = self.execute( + 'SELECT s.id, s.name, MAX(sv.version) ' + 'FROM series s ' + 'JOIN ser_ver sv ON sv.series_id = s.id ' + "WHERE s.source = 'review' AND s.name = ? " + 'GROUP BY s.id', (name,)) + return res.fetchone() diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py index 96c1d62486c..5661e13e2e9 100644 --- a/tools/patman/test_cseries.py +++ b/tools/patman/test_cseries.py @@ -3589,7 +3589,7 @@ Date: .* self.assertEqual(f'Update database to v{version}', out.getvalue().strip()) self.assertEqual(version, db.get_schema_version()) - self.assertEqual(7, database.LATEST) + self.assertEqual(8, database.LATEST) def test_migrate_future_version(self): """Test that a database newer than patman is rejected""" -- 2.43.0
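[Editorial note, not part of the patch: review_get_previous() chains a ser_ver lookup with a review fetch. A minimal sketch of that two-step query against an in-memory database, with both tables trimmed to the columns the queries touch.]

```python
import sqlite3


def review_get_previous(cur, series_id, version):
    """Return (seq, body, approved) rows for version-1 of a series, or []"""
    if version - 1 < 1:
        return []
    row = cur.execute(
        'SELECT id FROM ser_ver WHERE series_id = ? AND version = ?',
        (series_id, version - 1)).fetchone()
    if not row:
        return []
    return cur.execute(
        'SELECT seq, body, approved FROM review '
        'WHERE svid = ? ORDER BY seq', (row[0],)).fetchall()


cur = sqlite3.connect(':memory:').cursor()
cur.execute('CREATE TABLE ser_ver (id INTEGER PRIMARY KEY, '
            'series_id INTEGER, version INTEGER)')
cur.execute('CREATE TABLE review (svid INTEGER, seq INTEGER, body TEXT, '
            'approved BIT)')
cur.execute('INSERT INTO ser_ver VALUES (1, 100, 1)')
cur.execute("INSERT INTO review VALUES (1, 1, 'Looks good', 1)")

print(review_get_previous(cur, 100, 2))   # [(1, 'Looks good', 1)]
print(review_get_previous(cur, 100, 1))   # [] (no v0 to look back to)
```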
From: Simon Glass <sjg@chromium.org> Move local imports to the top level, group patman imports together, convert a %-format string to f-string, fix a bare 'print' statement that has no effect, and suppress unavoidable warnings for functions with many branches/arguments. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/control.py | 34 +++++++++++++++------------------- 1 file changed, 15 insertions(+), 19 deletions(-) diff --git a/tools/patman/control.py b/tools/patman/control.py index f1e52d7944a..87598622eb7 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -17,14 +17,17 @@ except ImportError: # for Python 3.6 import importlib_resources as resources +from patman import cseries +from patman import patchstream +from patman import send +from patman import settings +from patman import status +from patman import workflow +from patman.patchwork import Patchwork from u_boot_pylib import gitutil from u_boot_pylib import terminal from u_boot_pylib import tools from u_boot_pylib import tout -from patman import patchstream -from patman.patchwork import Patchwork -from patman import send -from patman import settings def setup(): @@ -45,6 +48,7 @@ def do_send(args): send.send(args) +# pylint: disable=R0913 def patchwork_status(branch, count, start, end, dest_branch, force, show_comments, url, single_thread=False): """Check the status of patches in patchwork @@ -81,11 +85,11 @@ def patchwork_status(branch, count, start, end, dest_branch, force, warnings = 0 for cmt in series.commits: if cmt.warn: - print('%d warnings for %s:' % (len(cmt.warn), cmt.hash)) + print(f'{len(cmt.warn)} warnings for {cmt.hash}:') for warn in cmt.warn: print('\t', warn) warnings += 1 - print + print() if warnings: raise ValueError('Please fix warnings before running status') links = series.get('links') @@ -103,9 +107,6 @@ def patchwork_status(branch, count, start, end, dest_branch, force, url = series.patchwork_url pwork = Patchwork(url, single_thread=single_thread) - 
# Import this here to avoid failing on other commands if the dependencies - # are not present - from patman import status pwork = Patchwork(url) status.check_and_show_status(series, link, branch, dest_branch, force, show_comments, False, pwork) @@ -159,6 +160,7 @@ def _setup_patchwork(cser, pwork, ups, pw_url): return pwork +# pylint: disable=R0912,R0915 def do_series(args, test_db=None, pwork=None, cser=None): """Process a series subcommand @@ -170,8 +172,6 @@ def do_series(args, test_db=None, pwork=None, cser=None): needed cser (Cseries): Cseries object to use, None to create one """ - from patman import cseries - if not cser: cser = cseries.Cseries(test_db) needs_patchwork = [ @@ -266,6 +266,7 @@ def do_series(args, test_db=None, pwork=None, cser=None): cser.close_database() +# pylint: disable=R0912 def upstream(args, test_db=None): """Process an 'upstream' subcommand @@ -274,8 +275,6 @@ def upstream(args, test_db=None): test_db (str or None): Directory containing the test database, None to use the normal one """ - from patman import cseries - cser = cseries.Cseries(test_db) try: cser.open_database() @@ -324,6 +323,7 @@ def upstream(args, test_db=None): cser.close_database() +# pylint: disable=R0912 def patchwork(args, test_db=None, pwork=None): """Process a 'patchwork' subcommand Args: @@ -332,8 +332,6 @@ def patchwork(args, test_db=None, pwork=None): use the normal one pwork (Patchwork): Patchwork object to use """ - from patman import cseries - cser = cseries.Cseries(test_db) try: cser.open_database() @@ -387,9 +385,6 @@ def do_workflow(args, test_db=None): test_db (str or None): Directory containing the test database, None to use the normal one """ - from patman import cseries - from patman import workflow - cser = cseries.Cseries(test_db) try: cser.open_database() @@ -408,6 +403,7 @@ def do_workflow(args, test_db=None): cser.close_database() +# pylint: disable=R0912 def do_patman(args, test_db=None, pwork=None, cser=None): """Process a patman command @@ 
-458,7 +454,7 @@ def do_patman(args, test_db=None, pwork=None, cser=None): patchwork(args, test_db, pwork) elif args.cmd == 'workflow': do_workflow(args, test_db) - except Exception as exc: + except Exception as exc: # pylint: disable=W0718 terminal.tprint(f'patman: {type(exc).__name__}: {exc}', colour=terminal.Color.RED) if args.debug: -- 2.43.0
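One of the fixes above replaces a bare `print` statement with a real `print()` call. In Python 3 a bare `print` is just an expression that evaluates the function object and produces no output, which is why the old code silently failed to emit its blank line. A minimal sketch of the difference:

```python
import contextlib
import io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print    # bare name: evaluates the function object, prints nothing
    print()  # actual call: emits a single newline

# Only the real call produced output
assert buf.getvalue() == '\n'
```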
From: Simon Glass <sjg@chromium.org> The patchwork_status() function in control.py handles both series metadata parsing and patchwork link resolution. The latter belongs in status.py alongside the other status-checking logic. Extract the link resolution, URL override and Patchwork construction into a new find_link_and_show_status() function in status.py. This also fixes a bug where Patchwork() is constructed twice, with the second call dropping the single_thread parameter. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/control.py | 21 +++++---------------- tools/patman/status.py | 38 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 43 insertions(+), 16 deletions(-) diff --git a/tools/patman/control.py b/tools/patman/control.py index 87598622eb7..1c1b998ddd6 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -54,14 +54,14 @@ def patchwork_status(branch, count, start, end, dest_branch, force, """Check the status of patches in patchwork This finds the series in patchwork using the Series-link tag, checks for new - comments and review tags, displays then and creates a new branch with the + comments and review tags, displays them and creates a new branch with the review tags. Args: branch (str): Branch to create patches from (None = current) count (int): Number of patches to produce, or -1 to produce patches for the current branch back to the upstream commit - start (int): Start partch to use (0=first / top of branch) + start (int): Start patch to use (0=first / top of branch) end (int): End patch to use (0=last one in series, 1=one before that, etc.) 
dest_branch (str): Name of new branch to create with the updated tags @@ -96,20 +96,9 @@ def patchwork_status(branch, count, start, end, dest_branch, force, if not links: raise ValueError("Branch has no Series-links value") - _, version = patchstream.split_name_version(branch) - link = series.get_link_for_version(version, links) - if not link: - raise ValueError(f'Series-links has no link for v{version}') - tout.debug(f"Link '{link}") - - # Allow the series to override the URL - if 'patchwork_url' in series: - url = series.patchwork_url - pwork = Patchwork(url, single_thread=single_thread) - - pwork = Patchwork(url) - status.check_and_show_status(series, link, branch, dest_branch, force, - show_comments, False, pwork) + status.find_link_and_show_status( + series, branch, url, dest_branch, force, show_comments, False, + single_thread) def _setup_patchwork(cser, pwork, ups, pw_url): diff --git a/tools/patman/status.py b/tools/patman/status.py index 2828e4a3bfc..8673eeb697e 100644 --- a/tools/patman/status.py +++ b/tools/patman/status.py @@ -381,6 +381,44 @@ async def check_status(link, pwork, read_comments=False, read_cover_comments) +def find_link_and_show_status(series, branch, url, dest_branch, force, + show_comments, show_cover_comments, + single_thread=False): + """Find the patchwork link for a series and show its status + + Resolves the patchwork link from the series metadata, then checks + and displays the review status. + + Args: + series (Series): Series object for the existing branch + branch (str): Branch name (used to determine the version) + url (str): Patchwork server URL. Overridden by Series-patchwork-url + if present in the series. 
+ dest_branch (str): Name of new branch to create, or None + force (bool): True to force overwriting dest_branch if it exists + show_comments (bool): True to show comments on each patch + show_cover_comments (bool): True to show cover letter comments + single_thread (bool): True to use a single thread for patchwork + """ + from patman import patchstream + from patman.patchwork import Patchwork + from u_boot_pylib import tout + + _, version = patchstream.split_name_version(branch) + links = series.get('links') + link = series.get_link_for_version(version, links) + if not link: + raise ValueError(f'Series-links has no link for v{version}') + tout.debug(f"Link '{link}'") + + if 'patchwork_url' in series: + url = series.patchwork_url + pwork = Patchwork(url, single_thread=single_thread) + + check_and_show_status(series, link, branch, dest_branch, force, + show_comments, show_cover_comments, pwork) + + def check_and_show_status(series, link, branch, dest_branch, force, show_comments, show_cover_comments, pwork, test_repo=None): -- 2.43.0
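The link lookup that find_link_and_show_status() performs can be sketched in isolation. This is a simplified illustration, not patman's actual patchstream/Series code; in particular, treating a version-less token as version 1 is an assumption of the sketch:

```python
import re


def split_name_version(branch):
    """Split a branch name like 'video3' into ('video', 3).

    Branches without a trailing number are treated as version 1.
    """
    mat = re.match(r'^(.*?)(\d+)$', branch or '')
    if mat:
        return mat.group(1), int(mat.group(2))
    return branch, 1


def get_link_for_version(version, links):
    """Pick the link for a version from a Series-links value.

    In this sketch, Series-links holds space-separated 'version:link'
    pairs, e.g. '1:411111 2:422222'; a bare token applies to version 1.
    """
    for item in (links or '').split():
        if ':' in item:
            ver, link = item.split(':', 1)
            if ver.isdigit() and int(ver) == version:
                return link
        elif version == 1:
            return item
    return None
```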
From: Simon Glass <sjg@chromium.org> When 'patman status' cannot find a link for the current version in the commit's Series-links tag, look it up in the database. This handles the case where autolink runs without updating the commit, or the commit has not yet been rewritten. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/status.py | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/tools/patman/status.py b/tools/patman/status.py index 8673eeb697e..205314aef3e 100644 --- a/tools/patman/status.py +++ b/tools/patman/status.py @@ -408,7 +408,26 @@ def find_link_and_show_status(series, branch, url, dest_branch, force, links = series.get('links') link = series.get_link_for_version(version, links) if not link: - raise ValueError(f'Series-links has no link for v{version}') + # Fall back to the database if the commit metadata does not + # have a link for this version (e.g. autolink ran without -u) + from patman import cseries + + cser = cseries.Cseries() + try: + cser.open_database() + name, _ = patchstream.split_name_version(branch) + ser = cser.get_series_by_name(name) + if ser: + svinfo = cser.get_ser_ver(ser.idnum, version) + if svinfo: + link = svinfo.link + except (ValueError, AttributeError): + pass + finally: + cser.close_database() + if not link: + raise ValueError(f'No patchwork link for v{version}; ' + 'try: patman series autolink') tout.debug(f"Link '{link}'") if 'patchwork_url' in series: -- 2.43.0
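Stripped of the database plumbing, the control flow above is a resolve-with-fallback: prefer the commit's value, consult the database only on a miss, swallow lookup errors, and raise one actionable error at the end. A condensed sketch, using a hypothetical `db_lookup` callable in place of the real Cseries calls:

```python
def resolve_link(version, commit_link, db_lookup):
    """Return the patchwork link for a version.

    commit_link: link parsed from the commit's Series-links tag, or None
    db_lookup: callable(version) -> link from the database, or None
    """
    link = commit_link
    if not link:
        try:
            link = db_lookup(version)
        except (ValueError, AttributeError):
            link = None  # treat database problems as 'not found'
    if not link:
        raise ValueError(f'No patchwork link for v{version}; '
                         'try: patman series autolink')
    return link
```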
From: Simon Glass <sjg@chromium.org> Autolink stores the patchwork link in the database but does not update the Series-links tag in the commit unless -u is passed. This means 'patman status' cannot find the link since it reads from commit metadata. Change autolink and autolink-all to default to updating the commit, with --no-update to opt out. Update tests to match the new default. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/cmdline.py | 10 ++++++++-- tools/patman/test_cseries.py | 15 ++++++++------- 2 files changed, 16 insertions(+), 9 deletions(-) diff --git a/tools/patman/cmdline.py b/tools/patman/cmdline.py index 3843811729e..3c12200949f 100644 --- a/tools/patman/cmdline.py +++ b/tools/patman/cmdline.py @@ -260,7 +260,10 @@ def add_series_subparser(subparsers): auto = series_subparsers.add_parser('autolink', aliases=ALIASES['autolink']) - _add_update(auto) + auto.add_argument('-u', '--update', action='store_true', default=True, + help='Update the branch commit (default)') + auto.add_argument('--no-update', action='store_false', dest='update', + help='Do not update the branch commit') _add_wait(auto, 0) aall = series_subparsers.add_parser('autolink-all') @@ -268,7 +271,10 @@ def add_series_subparser(subparsers): help='Link all series versions, not just the latest') aall.add_argument('-r', '--replace-existing', action='store_true', help='Replace existing links') - _add_update(aall) + aall.add_argument('-u', '--update', action='store_true', default=True, + help='Update the branch commits (default)') + aall.add_argument('--no-update', action='store_false', dest='update', + help='Do not update the branch commits') series_subparsers.add_parser('dec') diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py index 5661e13e2e9..3b78399d907 100644 --- a/tools/patman/test_cseries.py +++ b/tools/patman/test_cseries.py @@ -1402,7 +1402,7 @@ Tested-by: Mary Smith <msmith@wibble.com> # yak with (mock.patch.object(cseries.Cseries, 
'link_auto_all', return_value=None) as method): self.run_args('series', 'autolink-all', pwork=True) - method.assert_called_once_with(True, update_commit=False, + method.assert_called_once_with(True, update_commit=True, link_all_versions=False, replace_existing=False, dry_run=False, show_summary=True) @@ -1410,7 +1410,7 @@ Tested-by: Mary Smith <msmith@wibble.com> # yak with (mock.patch.object(cseries.Cseries, 'link_auto_all', return_value=None) as method): self.run_args('series', 'autolink-all', '-a', pwork=True) - method.assert_called_once_with(True, update_commit=False, + method.assert_called_once_with(True, update_commit=True, link_all_versions=True, replace_existing=False, dry_run=False, show_summary=True) @@ -1418,7 +1418,7 @@ Tested-by: Mary Smith <msmith@wibble.com> # yak with (mock.patch.object(cseries.Cseries, 'link_auto_all', return_value=None) as method): self.run_args('series', 'autolink-all', '-a', '-r', pwork=True) - method.assert_called_once_with(True, update_commit=False, + method.assert_called_once_with(True, update_commit=True, link_all_versions=True, replace_existing=True, dry_run=False, show_summary=True) @@ -1426,22 +1426,23 @@ Tested-by: Mary Smith <msmith@wibble.com> # yak with (mock.patch.object(cseries.Cseries, 'link_auto_all', return_value=None) as method): self.run_args('series', '-n', 'autolink-all', '-r', pwork=True) - method.assert_called_once_with(True, update_commit=False, + method.assert_called_once_with(True, update_commit=True, link_all_versions=False, replace_existing=True, dry_run=True, show_summary=True) with (mock.patch.object(cseries.Cseries, 'link_auto_all', return_value=None) as method): - self.run_args('series', 'autolink-all', '-u', pwork=True) - method.assert_called_once_with(True, update_commit=True, + self.run_args('series', 'autolink-all', '--no-update', pwork=True) + method.assert_called_once_with(True, update_commit=False, link_all_versions=False, replace_existing=False, dry_run=False, show_summary=True) # Now do a 
real one to check the patchwork handling and output with terminal.capture() as (out, _): - self.run_args('series', 'autolink-all', '-a', pwork=pwork) + self.run_args('series', 'autolink-all', '-a', '--no-update', + pwork=pwork) itr = iter(out.getvalue().splitlines()) self.assertEqual( '1 series linked, 1 already linked, 1 not found (3 requests)', -- 2.43.0
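The `--update`/`--no-update` pair works because both options write to the same `dest`: `store_true` with `default=True` keeps the old `-u` spelling valid while making it the default, and `store_false` provides the opt-out. A self-contained sketch of the pattern:

```python
import argparse

parser = argparse.ArgumentParser(prog='autolink')
# Both options share dest='update'; the default is True, so the flag
# is effectively on unless --no-update is given
parser.add_argument('-u', '--update', action='store_true', default=True,
                    help='Update the branch commit (default)')
parser.add_argument('--no-update', action='store_false', dest='update',
                    help='Do not update the branch commit')

assert parser.parse_args([]).update is True           # default on
assert parser.parse_args(['-u']).update is True       # explicit on
assert parser.parse_args(['--no-update']).update is False
```

Python 3.9+ also offers `argparse.BooleanOptionalAction`, which generates both spellings from a single `add_argument()` call, but the explicit pair keeps the short `-u` alias.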
From: Simon Glass <sjg@chromium.org> Add missing single_thread parameter to patchwork_status() docstring, wrap a long error message to fit 80 columns, and suppress R0917 (too-many-positional-arguments) introduced in pylint 3.3.4. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/control.py | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/tools/patman/control.py b/tools/patman/control.py index 1c1b998ddd6..d82b80ce807 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -48,7 +48,7 @@ def do_send(args): send.send(args) -# pylint: disable=R0913 +# pylint: disable=R0913,R0917 def patchwork_status(branch, count, start, end, dest_branch, force, show_comments, url, single_thread=False): """Check the status of patches in patchwork @@ -69,8 +69,11 @@ def patchwork_status(branch, count, start, end, dest_branch, force, force (bool): With dest_branch, force overwriting an existing branch show_comments (bool): True to display snippets from the comments provided by reviewers - url (str): URL of patchwork server, e.g. 'https://patchwork.ozlabs.org'. - This is ignored if the series provides a Series-patchwork-url tag. + url (str): URL of patchwork server, e.g. + 'https://patchwork.ozlabs.org'. Ignored if the series + provides a Series-patchwork-url tag. 
+ single_thread (bool): True to use a single thread for + patchwork access Raises: ValueError: if the branch has no Series-link value @@ -94,7 +97,7 @@ def patchwork_status(branch, count, start, end, dest_branch, force, raise ValueError('Please fix warnings before running status') links = series.get('links') if not links: - raise ValueError("Branch has no Series-links value") + raise ValueError('Branch has no Series-links value') status.find_link_and_show_status( series, branch, url, dest_branch, force, show_comments, False, @@ -176,7 +179,8 @@ def do_series(args, test_db=None, pwork=None, cser=None): cser, pwork, ups, args.patchwork_url) elif pwork and pwork is not True: raise ValueError( - f"Internal error: command '{args.subcmd}' should not have patchwork") + "Internal error: command " + f"'{args.subcmd}' should not have patchwork") if args.subcmd == 'add': cser.add(args.series, args.desc, mark=args.mark, allow_unmarked=args.allow_unmarked, end=args.upstream, -- 2.43.0
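The wrapped error message above relies on Python's implicit concatenation of adjacent string literals, which also works when one of the pieces is an f-string, so long lines can be split without `+` or `join()`:

```python
subcmd = 'open'
# Adjacent string literals (plain and f-string) are joined at compile
# time, producing a single string
msg = ("Internal error: command "
       f"'{subcmd}' should not have patchwork")
assert msg == "Internal error: command 'open' should not have patchwork"
```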
From: Simon Glass <sjg@chromium.org> Add a new top-level 'patman review' command (aliases: 'r', 'rev') for AI-assisted review of other people's patches from patchwork. This is the first stage which handles downloading and applying the patches to a local branch. The command accepts a patchwork series link (-l) or a title search (-t) and: - Fetches series metadata from the patchwork API - Auto-creates database records (series with source='review', ser_ver, pcommit entries) to track the review - Detects previously reviewed series by link or by name, so new versions are added under the same series record - Downloads the series mbox from patchwork - Uses a Claude agent to apply the patches via 'git am', handling any conflicts that arise - Creates a branch named 'review<N>' where N is the series ID Later stages will add AI-assisted review of each patch and Gmail draft creation. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/cmdline.py | 43 +++++ tools/patman/control.py | 34 ++++ tools/patman/review.py | 372 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 449 insertions(+) create mode 100644 tools/patman/review.py diff --git a/tools/patman/cmdline.py b/tools/patman/cmdline.py index 3c12200949f..edefa778446 100644 --- a/tools/patman/cmdline.py +++ b/tools/patman/cmdline.py @@ -25,6 +25,7 @@ ALIASES = { 'series': ['s', 'ser'], 'status': ['st'], 'patchwork': ['pw'], + 'review': ['r', 'rev'], 'upstream': ['us'], 'workflow': ['wf'], @@ -521,6 +522,46 @@ def add_workflow_subparser(subparsers): return workflow +def add_review_subparser(subparsers): + """Add the 'review' subparser + + Args: + subparsers (argparse action): Subparser parent + + Return: + ArgumentParser: review subparser + """ + review = subparsers.add_parser( + 'review', aliases=ALIASES['review'], + help='AI-powered review of a patchwork series') + review.add_argument( + '-l', '--link', type=str, dest='pw_link', + help='Patchwork series link/ID number') + review.add_argument( + '-t', 
'--title', type=str, + help='Search for series by cover-letter title') + review.add_argument( + '-n', '--dry-run', action='store_true', dest='dry_run', + default=False, + help='Show what would be done without creating drafts') + review.add_argument( + '--create-drafts', action='store_true', + help='Create Gmail draft emails for each review') + review.add_argument( + '--no-cover', action='store_true', + help='Skip reviewing the cover letter') + review.add_argument( + '--reviewer', type=str, default=None, + help="Override reviewer identity (format: 'Name <email>')") + review.add_argument( + '-U', '--upstream', type=str, default=None, + help='Upstream name (for patchwork URL lookup)') + review.add_argument( + '--apply-only', action='store_true', + help='Only download and apply patches, skip AI review') + return review + + def setup_parser(): """Set up command-line parser @@ -560,6 +601,7 @@ def setup_parser(): subparsers = parser.add_subparsers(dest='cmd') add_send_subparser(subparsers) patchwork = add_patchwork_subparser(subparsers) + review = add_review_subparser(subparsers) series = add_series_subparser(subparsers) add_status_subparser(subparsers) upstream = add_upstream_subparser(subparsers) @@ -573,6 +615,7 @@ def setup_parser(): parsers = { 'main': parser, + 'review': review, 'series': series, 'patchwork': patchwork, 'upstream': upstream, diff --git a/tools/patman/control.py b/tools/patman/control.py index d82b80ce807..3ce9736d6ba 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -19,6 +19,7 @@ except ImportError: from patman import cseries from patman import patchstream +from patman import review as review_mod from patman import send from patman import settings from patman import status @@ -396,6 +397,37 @@ def do_workflow(args, test_db=None): cser.close_database() +def do_review(args, test_db=None, pwork=None, cser=None): + """Process the 'review' command + + Sets up patchwork and cseries, then delegates to + review.do_review(). 
+ + Args: + args (Namespace): Arguments to process + test_db (str or None): Directory containing the test + database, None to use the normal one + pwork (Patchwork): Patchwork object to use, or None to + create one + cser (Cseries): Cseries object to use, or None to create + one + """ + if not cser: + cser = cseries.Cseries(test_db) + try: + cser.open_database() + + ups = args.upstream + if not ups: + ups = cser.db.upstream_get_default() + pwork = _setup_patchwork( + cser, pwork, ups, args.patchwork_url) + + return review_mod.do_review(args, pwork, cser) + finally: + cser.close_database() + + # pylint: disable=R0912 def do_patman(args, test_db=None, pwork=None, cser=None): """Process a patman command @@ -445,6 +477,8 @@ def do_patman(args, test_db=None, pwork=None, cser=None): upstream(args, test_db) elif args.cmd == 'patchwork': patchwork(args, test_db, pwork) + elif args.cmd == 'review': + do_review(args, test_db, pwork, cser) elif args.cmd == 'workflow': do_workflow(args, test_db) except Exception as exc: # pylint: disable=W0718 diff --git a/tools/patman/review.py b/tools/patman/review.py new file mode 100644 index 00000000000..d3eac86f8a1 --- /dev/null +++ b/tools/patman/review.py @@ -0,0 +1,372 @@ +# SPDX-License-Identifier: GPL-2.0+ +# +# Copyright 2025 Canonical Ltd. +# Written by Simon Glass <simon.glass@canonical.com> +# + +"""AI-powered patch review using Claude. + +Fetches patches from patchwork, applies them to a local branch using +a Claude agent, and (in later stages) reviews each patch and creates +Gmail draft replies. +""" + +import asyncio +import os +import tempfile + +import aiohttp + +from patman.database import Pcommit +from u_boot_pylib import gitutil +from u_boot_pylib import tout + + +async def fetch_mbox(pwork_url, link): + """Download the series mbox file from patchwork + + Args: + pwork_url (str): Patchwork server URL, e.g. 
+ 'https://patchwork.ozlabs.org' + link (str): Patchwork series link/ID + + Returns: + str: Path to the downloaded mbox file + + Raises: + ValueError: if the download fails + """ + url = f'{pwork_url}/series/{link}/mbox/' + tout.info(f'Downloading mbox from {url}') + + mbox_path = os.path.join(tempfile.gettempdir(), + f'patman_review_{link}.mbox') + async with aiohttp.ClientSession() as client: + async with client.get(url) as response: + if response.status != 200: + raise ValueError( + f'Failed to download mbox from {url}: ' + f'HTTP {response.status}') + content = await response.read() + if not content: + raise ValueError(f'Empty mbox downloaded from {url}') + + with open(mbox_path, 'wb') as fh: + fh.write(content) + tout.info(f'Downloaded {len(content)} bytes to {mbox_path}') + return mbox_path + + +def _build_apply_prompt(mbox_path, branch_name, upstream_branch): + """Build the Claude agent prompt for applying patches + + The agent will create a branch and apply the mbox using git am, + handling any conflicts that arise. + + Args: + mbox_path (str): Path to the downloaded mbox file + branch_name (str): Name for the new branch + upstream_branch (str): Branch to start from (e.g. 'origin/master') + + Returns: + str: The prompt text for the agent + """ + return f"""Apply a patch series from a patchwork mbox file to a new \ +git branch. + +TASK: +1. Create a new branch called '{branch_name}' from '{upstream_branch}': + git checkout -b {branch_name} {upstream_branch} + +2. Apply the patches from the mbox file: + git am {mbox_path} + +3. If git am fails with conflicts: + - Inspect the conflict with 'git diff' and 'git status' + - Try to resolve the conflicts sensibly + - Run 'git add' on resolved files and 'git am --continue' + - If a patch simply cannot be applied (e.g. already applied or + completely incompatible), use 'git am --skip' and note which + patch was skipped + +4. 
After all patches are applied (or skipped), run: + git log --oneline {upstream_branch}..HEAD + +5. Report the result: + - How many patches were applied successfully + - Which patches (if any) were skipped and why + - The branch name: {branch_name} + +IMPORTANT: +- Do NOT modify the patches before applying +- Do NOT use 'git am --abort' unless all attempts to resolve fail +- If you skip a patch, continue with the remaining patches +- The mbox file is at: {mbox_path} +""" + + +async def apply_series(pwork_url, link, branch_name, upstream_branch, + repo_path): + """Download and apply a patch series to a new local branch + + Uses the Claude agent to handle the git am process, including + conflict resolution. + + Args: + pwork_url (str): Patchwork server URL + link (str): Patchwork series link/ID + branch_name (str): Name for the new branch + upstream_branch (str): Branch to base from + repo_path (str): Path to the git repository + + Returns: + tuple: (success, branch_name) where success is bool and + branch_name is the name of the created branch + """ + # pylint: disable=C0415,E0611 + from u_boot_pylib.claude import (check_available, run_agent_collect, + MAX_BUFFER_SIZE) + + if not check_available(): + return False, None + + from u_boot_pylib.claude import ClaudeAgentOptions + + # Download the mbox + mbox_path = await fetch_mbox(pwork_url, link) + + # Build the prompt and run the agent + prompt = _build_apply_prompt(mbox_path, branch_name, upstream_branch) + options = ClaudeAgentOptions( + allowed_tools=['Bash', 'Read', 'Grep'], + cwd=repo_path, + max_buffer_size=MAX_BUFFER_SIZE, + ) + + tout.info(f'Applying series to branch {branch_name}...') + success, _ = await run_agent_collect(prompt, options) + + # Clean up the mbox file + try: + os.unlink(mbox_path) + except OSError: + pass + + return success, branch_name + + +def apply_series_sync(pwork_url, link, branch_name, upstream_branch, + repo_path): + """Synchronous wrapper for apply_series() + + Args: + pwork_url 
(str): Patchwork server URL + link (str): Patchwork series link/ID + branch_name (str): Name for the new branch + upstream_branch (str): Branch to base from + repo_path (str): Path to the git repository + + Returns: + tuple: (success, branch_name) + """ + return asyncio.run( + apply_series(pwork_url, link, branch_name, upstream_branch, + repo_path)) + + +def search_series(pwork, title): + """Search patchwork for a series by cover-letter title + + Queries the patchwork API and returns the link for the best match. + If multiple matches are found, shows them and picks the most recent. + + Args: + pwork (Patchwork): Configured patchwork instance + title (str): Title text to search for + + Returns: + str: Patchwork series link/ID + + Raises: + ValueError: if no matching series is found + """ + async def _query(): + if not pwork.proj_id: + raise ValueError( + 'Patchwork project not configured; use ' + "'patman patchwork set-project' or provide -l <link>") + async with aiohttp.ClientSession() as client: + # pylint: disable=W0212 + return await pwork._query_series(client, title) + + loop = asyncio.get_event_loop() + results = loop.run_until_complete(_query()) + + if not results: + raise ValueError(f"No series found matching '{title}'") + + if len(results) == 1: + match = results[0] + tout.notice(f"Found: {match['name']} (v{match['version']}, " + f"link {match['id']})") + return str(match['id']) + + tout.notice(f"Found {len(results)} matching series:") + for i, match in enumerate(results): + tout.notice(f" {i + 1}. 
[{match['id']}] {match['name']} " + f"(v{match['version']}, {match['date']})") + + best = max(results, + key=lambda r: (r.get('version', 0), r.get('date', ''))) + tout.notice(f"Using most recent: {best['name']} (link {best['id']})") + return str(best['id']) + + +def _clean_series_name(name): + """Strip the [U-Boot,v2,0/4] prefix from a series name + + Args: + name (str): Raw series name from patchwork + + Returns: + str: Cleaned name + """ + if name.startswith('['): + bracket_end = name.find(']') + if bracket_end != -1: + return name[bracket_end + 1:].strip() + return name + + +# pylint: disable=R0914 +def _register_series(cser, clean_name, version, link, series_data): + """Register a series in the database for review tracking + + Creates or finds the series record, adds a ser_ver entry and + pcommit records for each patch. + + Args: + cser (Cseries): Open cseries instance + clean_name (str): Cleaned series name + version (int): Series version number + link (str): Patchwork series link/ID + series_data (dict): Series data from patchwork + + Returns: + tuple: (series_id, svid) or None if already reviewed + """ + existing = cser.db.series_find_by_link(link) + if existing: + series_id, db_name, db_version, svid = existing + tout.notice( + f"Already reviewed: '{db_name}' v{db_version}") + return None + + prev = cser.db.series_find_review_by_name(clean_name) + if prev: + series_id, db_name, prev_version = prev + tout.notice(f"Previously reviewed '{db_name}' " + f"v{prev_version}; adding v{version}") + else: + desc = series_data.get('cover_letter', {}) + desc = desc.get('name', '') if desc else '' + series_id = cser.db.series_add(clean_name, desc) + cser.db.series_set_source(series_id, 'review') + + svid = cser.db.ser_ver_add(series_id, version, + link=str(link)) + + patches = series_data.get('patches', []) + pcommits = [] + for i, patch in enumerate(patches): + pcommits.append(Pcommit( + idnum=None, seq=i, + subject=patch.get('name', ''), + svid=svid, change_id=None, 
state=None, + patch_id=patch.get('id'), num_comments=0)) + if pcommits: + cser.db.pcommit_add_list(svid, pcommits) + + cser.commit() + tout.notice( + f"Added series '{clean_name}' v{version} to database") + return series_id, svid + + +# pylint: disable=R0914 +def do_review(args, pwork, cser): + """Run the review command + + Downloads patches from patchwork, applies them to a local + branch using a Claude agent, and (in later stages) reviews + each patch. + + Args: + args (Namespace): Command-line arguments + pwork (Patchwork): Configured patchwork instance + cser (Cseries): Open cseries instance + """ + if not args.pw_link and not args.title: + raise ValueError( + "Please provide -l <link> or -t <title> to " + "identify the series") + + link = args.pw_link + if not link: + link = search_series(pwork, args.title) + + # Fetch series metadata from patchwork + async def _fetch(): + async with aiohttp.ClientSession() as client: + return await pwork.get_series(client, link) + + loop = asyncio.get_event_loop() + series_data = loop.run_until_complete(_fetch()) + + series_name = series_data.get('name', f'series-{link}') + version = series_data.get('version', 1) + patch_count = series_data.get('received_total', 0) + clean_name = _clean_series_name(series_name) + + tout.notice(f"Series: {clean_name}") + tout.notice( + f"Version: {version}, Patches: {patch_count}") + + result = _register_series( + cser, clean_name, version, link, series_data) + if result is None: + return 0 + series_id, _ = result + + # Determine the upstream branch for applying patches + ups = args.upstream + if not ups: + ups = cser.db.upstream_get_default() + if ups: + upstream_branch = f'{ups}/master' + else: + upstream_branch = 'origin/master' + + # Download and apply the patches + branch_name = f'review{series_id}' + repo_path = gitutil.get_top_level() + success, branch_name = apply_series_sync( + pwork.url, link, branch_name, upstream_branch, + repo_path) + + if not success: + tout.error('Failed to 
apply patches to branch') + return 1 + tout.notice(f'Patches applied to branch: {branch_name}') + + if args.apply_only: + tout.notice('Apply-only mode; skipping review') + return 0 + + # TODO: Stage 2 - AI review of each patch + # TODO: Stage 3 - Gmail draft creation + tout.notice( + 'Patch application complete. ' + 'AI review not yet implemented.') + + return 0 -- 2.43.0
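When a title search matches several series, search_series() above keeps the most recent one: a `max()` over a `(version, date)` tuple key, where ISO-format date strings sort chronologically. The rule can be exercised on its own with sample data (the dicts below are made-up stand-ins for patchwork API results):

```python
def pick_best(results):
    """Pick the series to review when a title search matches several.

    Prefers the highest version; ties are broken by the ISO-format
    date string, which sorts lexicographically == chronologically.
    """
    return max(results, key=lambda r: (r.get('version', 0),
                                       r.get('date', '')))


matches = [
    {'id': 101, 'version': 1, 'date': '2025-01-02'},
    {'id': 105, 'version': 2, 'date': '2025-02-10'},
    {'id': 103, 'version': 2, 'date': '2025-01-20'},
]
assert pick_best(matches)['id'] == 105  # v2 beats v1; later date wins
```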
From: Simon Glass <sjg@chromium.org> Add a Gmail API integration module that creates draft reply emails for patch reviews. The module handles: - OAuth2 authentication with token caching in ~/.config/patman.d/ - Creating draft emails with proper In-Reply-To and References headers for threading - Finding existing Gmail threads by Message-ID so drafts appear as replies - Building CC lists from the original patch headers - Setting the From header when the reviewer identity differs from the Gmail account - Syncing draft status (detecting sent and deleted drafts) - Fetching thread replies for follow-up response generation The Subject header is taken from the original email headers to preserve the [PATCH] prefix that patchwork strips from its 'name' field. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/gmail.py | 591 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 591 insertions(+) create mode 100644 tools/patman/gmail.py diff --git a/tools/patman/gmail.py b/tools/patman/gmail.py new file mode 100644 index 00000000000..78bedc99d16 --- /dev/null +++ b/tools/patman/gmail.py @@ -0,0 +1,591 @@ +# SPDX-License-Identifier: GPL-2.0+ +# +# Copyright 2025 Canonical Ltd. +# Written by Simon Glass <simon.glass@canonical.com> +# +# pylint: disable=C0415,R0913,R0914,R0917,W0718 + +"""Gmail API integration for creating draft review emails. + +Handles OAuth2 authentication and creation of draft emails via the +Gmail API. Tokens are stored in ~/.config/patman.d/ for reuse across +sessions. 
+""" + +import base64 +from collections import namedtuple +import os +from email.mime.text import MIMEText + +from u_boot_pylib import tout + +# Common parameters for draft creation +DraftParams = namedtuple('DraftParams', + 'service fallback_addr list_email ' + 'dry_run sender cover_msgid') + +# Config paths - use patman.d to avoid conflict with existing ~/.config/patman +CONFIG_DIR = os.path.expanduser('~/.config/patman.d') +CLIENT_SECRET_PATH = os.path.join(CONFIG_DIR, 'client_secret.json') + +# Gmail API scopes - compose drafts and read messages (for thread lookup) +SCOPES = ['https://www.googleapis.com/auth/gmail.compose', + 'https://www.googleapis.com/auth/gmail.readonly'] + + +def _token_path(account=None): + """Get the token path for a given account + + Args: + account (str or None): Gmail account email, or None for default + + Returns: + str: Path to the token file + """ + if account: + safe = account.replace('@', '_at_').replace('.', '_') + return os.path.join(CONFIG_DIR, f'gmail_token_{safe}.json') + return os.path.join(CONFIG_DIR, 'gmail_token.json') + + +def check_available(): + """Check if Gmail API dependencies are installed + + Returns: + bool: True if google-api-python-client and google-auth-oauthlib + are available + """ + try: + from googleapiclient import discovery # pylint: disable=W0611 + from google_auth_oauthlib import flow # pylint: disable=W0611 + from google.auth.transport import requests # pylint: disable=W0611 + except ImportError: + tout.error('Gmail API dependencies not available') + tout.error('Install with: pip install google-api-python-client ' + 'google-auth-httplib2 google-auth-oauthlib') + return False + + if not os.path.exists(CLIENT_SECRET_PATH): + tout.error(f'Gmail client secret not found at {CLIENT_SECRET_PATH}') + tout.error('Download OAuth client credentials from Google Cloud ' + 'Console and save them there') + return False + return True + + +def _get_credentials(account=None): + """Get or refresh OAuth2 credentials + + 
Loads existing token if available and valid. If expired, refreshes + using the refresh token. If no valid token exists, initiates the + OAuth2 consent flow. + + Args: + account (str or None): Gmail account email (e.g. + 'user@gmail.com'), or None for default token + + Returns: + google.oauth2.credentials.Credentials: Valid credentials + + Raises: + FileNotFoundError: if client_secret.json is missing + """ + import json + from google.auth.transport.requests import Request + from google.oauth2.credentials import Credentials + from google_auth_oauthlib.flow import InstalledAppFlow + + token_path = _token_path(account) + creds = None + + # Load existing token + if os.path.exists(token_path): + creds = Credentials.from_authorized_user_file(token_path, SCOPES) + + # Refresh or get new credentials + if not creds or not creds.valid: + if creds and creds.refresh_token: + creds.refresh(Request()) + else: + if not os.path.exists(CLIENT_SECRET_PATH): + raise FileNotFoundError( + f'Gmail client secret not found at {CLIENT_SECRET_PATH}') + app_flow = InstalledAppFlow.from_client_secrets_file( + CLIENT_SECRET_PATH, SCOPES) + if account: + tout.notice(f'Please authenticate as {account}') + creds = app_flow.run_local_server(port=0) + + # Save the token for next time + os.makedirs(CONFIG_DIR, exist_ok=True) + with open(token_path, 'w', encoding='utf-8') as fh: + token_data = { + 'token': creds.token, + 'refresh_token': creds.refresh_token, + 'token_uri': creds.token_uri, + 'client_id': creds.client_id, + 'client_secret': creds.client_secret, + 'scopes': list(creds.scopes or []), + } + json.dump(token_data, fh) + + return creds + + +def get_service(account=None): + """Get an authenticated Gmail API service object + + Args: + account (str or None): Gmail account email, or None for default + + Returns: + googleapiclient.discovery.Resource: Gmail API service + """ + from googleapiclient.discovery import build + + creds = _get_credentials(account) + return build('gmail', 'v1',
credentials=creds) + + +def fetch_sent_reviews(service, list_email, user_email, + max_results=20): + """Fetch sent emails that are reviews of other people's patches + + Paginates through Gmail search results to collect review emails. + Filters out replies to the user's own patches by checking whether + the first message in the thread was sent by someone else. + + Args: + service (googleapiclient.discovery.Resource): Gmail API service + list_email (str): Mailing list address to filter by + user_email (str): The reviewer's own email, used to skip + threads initiated by the reviewer + max_results (int): Maximum number of emails to collect + + Returns: + list of str: Email body texts + """ + query = f'from:me to:{list_email} subject:(Re: PATCH)' + bodies = [] + page_token = None + # Cache thread originator lookups + thread_from_cache = {} + + while len(bodies) < max_results: + kwargs = {'userId': 'me', 'q': query, 'maxResults': 100} + if page_token: + kwargs['pageToken'] = page_token + results = service.users().messages().list(**kwargs).execute() + messages = results.get('messages', []) + if not messages: + break + + for msg_info in messages: + if len(bodies) >= max_results: + break + tout.progress( + f'Fetched {len(bodies)}/{max_results} review emails') + + msg = service.users().messages().get( + userId='me', id=msg_info['id'], format='full').execute() + + # Check if the thread was started by someone else + thread_id = msg.get('threadId') + if thread_id not in thread_from_cache: + thread = service.users().threads().get( + userId='me', id=thread_id, + format='metadata', + metadataHeaders=['From']).execute() + first_msg = thread.get('messages', [{}])[0] + hdrs = first_msg.get('payload', {}).get('headers', []) + from_val = '' + for h in hdrs: + if h['name'] == 'From': + from_val = h['value'] + break + thread_from_cache[thread_id] = from_val + + # Skip if this thread was started by us + if user_email in thread_from_cache.get(thread_id, ''): + continue + + payload = 
msg.get('payload', {}) + body = _extract_body(payload) + if body and '>' in body: + bodies.append(body) + + page_token = results.get('nextPageToken') + if not page_token: + break + + tout.clear_progress() + return bodies + + +def _extract_body(payload): + """Extract plain text body from a Gmail message payload + + Args: + payload (dict): Gmail message payload + + Returns: + str or None: Decoded body text + """ + if payload.get('mimeType') == 'text/plain': + data = payload.get('body', {}).get('data', '') + if data: + return base64.urlsafe_b64decode(data).decode('utf-8', 'replace') + + for part in payload.get('parts', []): + result = _extract_body(part) + if result: + return result + return None + + +def _find_thread(service, msgid): + """Find the Gmail thread containing a message with the given Message-ID + + Args: + service (googleapiclient.discovery.Resource): Gmail API service + msgid (str): Message-ID header value (e.g. '<abc@example.com>') + + Returns: + str or None: Gmail thread ID if found + """ + # Strip angle brackets — Gmail search expects bare message ID + bare_id = msgid.strip('<>') + try: + results = service.users().messages().list( + userId='me', q=f'rfc822msgid:{bare_id}').execute() + messages = results.get('messages', []) + if messages: + return messages[0].get('threadId') + except Exception as exc: + tout.warning(f'Failed to search Gmail for thread: {exc}') + return None + + +def create_draft(service, to, subject, body, + in_reply_to=None, references=None, cc=None, + sender=None): + """Create a Gmail draft email + + If in_reply_to is set, searches for the original message in Gmail + and attaches the draft to that thread so it appears as a reply. + + If sender is provided and differs from the Gmail account's address, + a From header is set so the email is sent on behalf of the right + identity. 
+ + Args: + service (googleapiclient.discovery.Resource): Gmail API service + to (str): Recipient email address + subject (str): Email subject line + body (str): Plain text email body + in_reply_to (str or None): Message-ID to reply to + references (str or None): References header value + cc (str or None): CC addresses + sender (str or None): Sender identity, e.g. 'Name <email>'. + If set, added as the From header. + + Returns: + dict: Gmail API response with draft ID and message info + + Raises: + googleapiclient.errors.HttpError: on API failure + """ + msg = MIMEText(body) + if sender: + msg['from'] = sender + msg['to'] = to + msg['subject'] = subject + if cc: + msg['cc'] = cc + if in_reply_to: + msg['In-Reply-To'] = in_reply_to + msg['References'] = references or in_reply_to + + raw = base64.urlsafe_b64encode(msg.as_bytes()).decode() + draft_body = {'message': {'raw': raw}} + + # Find the existing thread in Gmail so the draft appears as a reply + if in_reply_to: + thread_id = _find_thread(service, in_reply_to) + if thread_id: + draft_body['message']['threadId'] = thread_id + tout.notice(f' Found thread: {thread_id}') + else: + tout.notice(f' Thread not found for {in_reply_to}' + ' - draft will be standalone') + + draft = service.users().drafts().create( + userId='me', body=draft_body).execute() + return draft + + +def _build_cc(headers, list_email): + """Build a CC list from the original patch headers + + Combines the original To, Cc headers and mailing list address, + deduplicating entries. 
+ + Args: + headers (dict): Email headers from patchwork get_patch() + list_email (str or None): Mailing list email address + + Returns: + str: Comma-separated CC addresses + """ + addrs = [] + for field in ('To', 'Cc'): + val = headers.get(field, '') + if val: + addrs.append(val) + if list_email and list_email not in ', '.join(addrs): + addrs.append(list_email) + return ', '.join(addrs) + + +def _get_msgid(patch_data, hdrs): + """Get the Message-ID from patch data or headers + + Args: + patch_data (dict): Patch or cover letter data from patchwork + hdrs (dict): Email headers from patchwork get_patch() + + Returns: + str or None: Message-ID + """ + return (patch_data.get('msgid') or + hdrs.get('Message-Id') or + hdrs.get('Message-ID')) + + +def _make_draft(params, patch_data, body, hdrs, refs=None): + """Create a Gmail draft for a review reply + + Args: + params (DraftParams): Common draft parameters + patch_data (dict): Patch or cover letter data from patchwork + body (str): Review email body + hdrs (dict): Email headers + refs (str or None): References header value. If None, uses + the Message-ID as the reference (for cover letters). 
+ + Returns: + str or None: Gmail draft ID, or None for dry run + """ + subject = hdrs.get('Subject', patch_data.get('name', '')) + if not subject.startswith('Re: '): + subject = f"Re: {subject}" + msgid = _get_msgid(patch_data, hdrs) + if refs is None: + refs = msgid + to_addr = hdrs.get('Reply-To', params.fallback_addr) + cc = _build_cc(hdrs, params.list_email) + if params.dry_run: + tout.notice(f"Would create draft: {subject}") + tout.notice(f" To: {to_addr}, Cc: {cc}") + return None + draft = create_draft(params.service, to_addr, subject, + body, in_reply_to=msgid, + references=refs, cc=cc, + sender=params.sender) + tout.notice(f"Created draft: {subject}") + return draft['id'] + + +def create_review_drafts(series_data, review_bodies, patch_headers=None, + dry_run=False, account=None, sender=None): + """Create Gmail drafts for a reviewed series + + Creates one draft per patch (and optionally the cover letter), + each threaded as a reply to the original email. + + Args: + series_data (dict): Series data from patchwork + get_series(), containing 'patches', 'cover_letter', + 'submitter', 'project' + review_bodies (dict): Map of patch index to review body + text. Key 0 is the cover letter, 1..N are patches. + patch_headers (dict or None): Map of patch index to + headers dict from patchwork get_patch() + dry_run (bool): If True, print what would be created + without calling the API + account (str or None): Gmail account email to use + sender (str or None): Reviewer identity, e.g. + 'Name <email>'. If this differs from the Gmail + account, it is set as the From header. 
+ + Returns: + dict: Map of patch index to Gmail draft ID (empty for + dry run) + """ + if patch_headers is None: + patch_headers = {} + submitter = series_data.get('submitter', {}) + fallback_addr = submitter.get('email', '') + project = series_data.get('project', {}) + list_email = project.get('list_email') + cover = series_data.get('cover_letter') + patches = series_data.get('patches', []) + + cover_hdrs = patch_headers.get(0, {}) + cover_msgid = _get_msgid(cover, cover_hdrs) if cover else None + + service = None + if not dry_run: + if not check_available(): + return {} + service = get_service(account) + + params = DraftParams(service, fallback_addr, list_email, + dry_run, sender, cover_msgid) + + return _make_all_drafts(params, cover, patches, + review_bodies, patch_headers) + + +def _make_all_drafts(params, cover, patches, review_bodies, + patch_headers): + """Create drafts for the cover letter and each patch + + Args: + params (DraftParams): Common draft parameters + cover (dict or None): Cover letter data from patchwork + patches (list of dict): Patch data from patchwork + review_bodies (dict): Map of index to review body text + patch_headers (dict): Map of index to email headers + + Returns: + dict: Map of patch index to Gmail draft ID + """ + draft_ids = {} + + if cover and 0 in review_bodies: + hdrs = patch_headers.get(0, {}) + did = _make_draft(params, cover, review_bodies[0], + hdrs) + if did: + draft_ids[0] = did + + for i, patch in enumerate(patches): + idx = i + 1 + if idx not in review_bodies: + continue + hdrs = patch_headers.get(idx, {}) + msgid = _get_msgid(patch, hdrs) + refs = (f'{params.cover_msgid} {msgid}' + if params.cover_msgid else msgid) + did = _make_draft(params, patch, review_bodies[idx], hdrs, refs) + if did: + draft_ids[idx] = did + + return draft_ids + + +def fetch_thread_replies(service, thread_id, after_msg_id): + """Fetch replies in a thread that appeared after a given message + + Args: + service 
(googleapiclient.discovery.Resource): Gmail API service + thread_id (str): Gmail thread ID + after_msg_id (str): Gmail message ID of our sent review + + Returns: + list of dict: Replies, each with 'from', 'date', 'body' + """ + try: + thread = service.users().threads().get( + userId='me', id=thread_id, format='full').execute() + except Exception: + return [] + + messages = thread.get('messages', []) + + # Find our message index + our_idx = -1 + for i, msg in enumerate(messages): + if msg['id'] == after_msg_id: + our_idx = i + break + + if our_idx < 0: + return [] + + # Collect messages after ours + replies = [] + for msg in messages[our_idx + 1:]: + payload = msg.get('payload', {}) + headers = {h['name']: h['value'] + for h in payload.get('headers', [])} + body = _extract_body(payload) + if body: + replies.append({ + 'from': headers.get('From', ''), + 'date': headers.get('Date', ''), + 'body': body, + 'msg_id': msg['id'], + 'thread_id': thread_id, + }) + return replies + + +def sync_drafts(service, reviews): + """Check if review drafts have been sent or deleted + + For each review that has a draft_id, checks if the draft still + exists. If gone, checks the sent folder for a matching message. 
+ + Args: + service (googleapiclient.discovery.Resource): Gmail API service + reviews (list of Review): Review records with draft_id set + + Returns: + tuple: (sent, deleted) where: + sent (dict): Map of review ID to (body, msg_id, thread_id) + deleted (list): List of review IDs whose drafts were + deleted without sending + """ + sent = {} + deleted = [] + + for rev in reviews: + if not rev.draft_id: + continue + + # Check if the draft still exists + try: + service.users().drafts().get( + userId='me', id=rev.draft_id).execute() + # Draft still exists — not sent yet + continue + except Exception: + pass + + # Draft is gone — check if it was sent by looking for a + # sent message whose body matches the review text + found = False + try: + results = service.users().messages().list( + userId='me', q='in:sent', + maxResults=50).execute() + for msg_info in results.get('messages', []): + msg = service.users().messages().get( + userId='me', id=msg_info['id'], + format='full').execute() + payload = msg.get('payload', {}) + body = _extract_body(payload) + if body and rev.body[:50] in body: + sent[rev.idnum] = (body, msg_info['id'], + msg.get('threadId')) + found = True + break + except Exception: + pass + + if not found: + deleted.append(rev.idnum) + + return sent, deleted -- 2.43.0
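[Editor's note: the threading mechanics used by create_draft() above are easy to check offline. This is a minimal stdlib-only sketch, not part of the patch; build_reply_raw() and the addresses are illustrative names. It mirrors the 'Re: ' prefixing, In-Reply-To/References fallback and base64url encoding that the Gmail drafts.create() call expects in its {'message': {'raw': ...}} body.]

```python
import base64
from email.mime.text import MIMEText

def build_reply_raw(to, subject, body, in_reply_to=None, references=None):
    # Mirror create_draft(): prefix 'Re: ' if needed, set In-Reply-To,
    # and fall back to the parent Message-ID for References, then
    # base64url-encode the message bytes for the Gmail API 'raw' field
    msg = MIMEText(body)
    msg['to'] = to
    msg['subject'] = subject if subject.startswith('Re: ') else f'Re: {subject}'
    if in_reply_to:
        msg['In-Reply-To'] = in_reply_to
        msg['References'] = references or in_reply_to
    return base64.urlsafe_b64encode(msg.as_bytes()).decode()

raw = build_reply_raw('dev@lists.example.com', '[PATCH 1/2] widget: Fix init',
                      'Looks good to me\n', in_reply_to='<123@example.com>')
```

[Decoding `raw` again shows the headers Gmail threads on; once _find_thread() has supplied a threadId, only these headers keep the reply threaded for other mail clients.]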
From: Simon Glass <sjg@chromium.org> Add schema migrations v9 and v10 and supporting methods: - v9: Add draft_id, status, gmail_msg_id and gmail_thread_id columns to the review table for tracking Gmail draft lifecycle - v10: Add notes column to ser_ver for storing review-handling notes New methods: - series_set_source() to mark a series as coming from review - review_set_draft_id/sent/replied/deleted() for draft lifecycle - review_delete_for_version() for force re-review - ser_ver_set_notes() and ser_ver_get_all_notes() for the handle-reviews skill Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/database.py | 155 +++++++++++++++++++++++++++++++++-- tools/patman/test_cseries.py | 2 +- 2 files changed, 148 insertions(+), 9 deletions(-) diff --git a/tools/patman/database.py b/tools/patman/database.py index 7f33137d0b7..8a96030b68f 100644 --- a/tools/patman/database.py +++ b/tools/patman/database.py @@ -20,12 +20,20 @@ from u_boot_pylib import tout from patman.series import Series # Schema version (version 0 means there is no database yet) -LATEST = 8 +LATEST = 10 # Information about a review record +# Review status values +REVIEW_NEW = 'new' # AI review generated, no draft created +REVIEW_DRAFT = 'draft' # Gmail draft created +REVIEW_SENT = 'sent' # Draft was sent +REVIEW_DELETED = 'deleted' # Draft was deleted without sending +REVIEW_REPLIED = 'replied' # Author has replied to our review + Review = namedtuple( 'REVIEW', - 'idnum,svid,seq,body,approved,timestamp') + 'idnum,svid,seq,body,approved,timestamp,draft_id,status,' + 'gmail_msg_id,gmail_thread_id') # Information about a series/version record SerVer = namedtuple( @@ -243,6 +251,19 @@ class Database: # pylint:disable=R0904 'timestamp TEXT, ' 'FOREIGN KEY (svid) REFERENCES ser_ver (id))') + def _migrate_to_v9(self): + """Add draft tracking, status, and Gmail IDs to review table""" + self.cur.execute('ALTER TABLE review ADD COLUMN draft_id') + self.cur.execute('ALTER TABLE review ADD COLUMN status') 
+ self.cur.execute('ALTER TABLE review ADD COLUMN gmail_msg_id') + self.cur.execute('ALTER TABLE review ADD COLUMN gmail_thread_id') + self.cur.execute("UPDATE review SET status = 'new'") + + def _migrate_to_v10(self): + """Add review notes column to ser_ver table""" + self.cur.execute('ALTER TABLE ser_ver ADD COLUMN notes') + + # pylint: disable=R0912 def migrate_to(self, dest_version): """Migrate the database to the selected version @@ -277,6 +298,10 @@ class Database: # pylint:disable=R0904 self._migrate_to_v7() elif version == 8: self._migrate_to_v8() + elif version == 9: + self._migrate_to_v9() + elif version == 10: + self._migrate_to_v10() # Save the new version if we have a schema_version table if version > 1: @@ -566,6 +591,17 @@ class Database: # pylint:disable=R0904 'UPDATE series SET upstream = ? WHERE id = ?', (ups, series_idnum)) + def series_set_source(self, series_idnum, source): + """Update the source field for a series + + Args: + series_idnum (int): ID num of the series + source (str): Source value, e.g. 'review' + """ + self.execute( + 'UPDATE series SET source = ? WHERE id = ?', + (source, series_idnum)) + def series_get_null_upstream(self): """Get a list of series names that have no upstream set @@ -688,6 +724,32 @@ class Database: # pylint:disable=R0904 """ self.execute('UPDATE ser_ver SET desc = ? WHERE id = ?', (desc, svid)) + def ser_ver_set_notes(self, svid, notes): + """Store review-handling notes for a series version + + Args: + svid (int): ser_ver ID num + notes (str): Notes text from review-notes.txt + """ + self.execute( + 'UPDATE ser_ver SET notes = ? WHERE id = ?', (notes, svid)) + + def ser_ver_get_all_notes(self, series_id): + """Get review notes from all versions of a series + + Args: + series_id (int): Series ID + + Return: + list of tuple: (version, notes) for versions that have notes, + ordered by version + """ + res = self.execute( + 'SELECT version, notes FROM ser_ver ' + 'WHERE series_id = ? 
AND notes IS NOT NULL ' + 'ORDER BY version', (series_id,)) + return [(v, n) for v, n in res.fetchall() if n] + def ser_ver_add(self, series_idnum, version, link=None, desc=None): """Add a new ser_ver record @@ -864,7 +926,7 @@ class Database: # pylint:disable=R0904 # upstream functions - # pylint: disable=R0913 + # pylint: disable=R0913,R0917 def upstream_add(self, name, url, patchwork_url=None, identity=None, series_to=None, no_maintainers=False, no_tags=False): """Add a new upstream record @@ -1224,7 +1286,7 @@ class Database: # pylint:disable=R0904 res = self.execute(query) return res.fetchall() - # pylint: disable=R0913 + # pylint: disable=R0913,R0917 def review_add(self, svid, seq, body, approved, timestamp): """Add a review record @@ -1239,11 +1301,68 @@ class Database: # pylint:disable=R0904 int: ID num of the new review record """ self.execute( - 'INSERT INTO review (svid, seq, body, approved, timestamp) ' - 'VALUES (?, ?, ?, ?, ?)', - (svid, seq, body, 1 if approved else 0, timestamp)) + 'INSERT INTO review (svid, seq, body, approved, timestamp, ' + 'status) VALUES (?, ?, ?, ?, ?, ?)', + (svid, seq, body, 1 if approved else 0, timestamp, + REVIEW_NEW)) return self.lastrowid() + def review_set_draft_id(self, review_id, draft_id): + """Set the Gmail draft ID for a review record + + Args: + review_id (int): Review record ID + draft_id (str or None): Gmail draft ID + """ + status = REVIEW_DRAFT if draft_id else None + self.execute( + 'UPDATE review SET draft_id = ?, status = ? 
WHERE id = ?', + (draft_id, status, review_id)) + + def review_set_sent(self, review_id, body, gmail_msg_id=None, + gmail_thread_id=None): + """Mark a review as sent and update its body + + Args: + review_id (int): Review record ID + body (str): Sent email body text + gmail_msg_id (str or None): Gmail message ID of sent email + gmail_thread_id (str or None): Gmail thread ID + """ + self.execute( + 'UPDATE review SET body = ?, draft_id = NULL, status = ?, ' + 'gmail_msg_id = ?, gmail_thread_id = ? WHERE id = ?', + (body, REVIEW_SENT, gmail_msg_id, gmail_thread_id, + review_id)) + + def review_set_replied(self, review_id): + """Mark a review as having received a reply + + Args: + review_id (int): Review record ID + """ + self.execute( + 'UPDATE review SET status = ? WHERE id = ?', + (REVIEW_REPLIED, review_id)) + + def review_set_deleted(self, review_id): + """Mark a review draft as deleted (not sent) + + Args: + review_id (int): Review record ID + """ + self.execute( + 'UPDATE review SET draft_id = NULL, status = ? ' + 'WHERE id = ?', (REVIEW_DELETED, review_id)) + + def review_delete_for_version(self, svid): + """Delete all review records for a given series version + + Args: + svid (int): ser_ver ID num + """ + self.execute('DELETE FROM review WHERE svid = ?', (svid,)) + def review_get_for_version(self, svid): """Get review records for a given series version @@ -1254,10 +1373,30 @@ class Database: # pylint:disable=R0904 list of Review: Review records ordered by sequence """ res = self.execute( - 'SELECT id, svid, seq, body, approved, timestamp ' + 'SELECT id, svid, seq, body, approved, timestamp, draft_id, ' + 'status, gmail_msg_id, gmail_thread_id ' 'FROM review WHERE svid = ? ORDER BY seq', (svid,)) return [Review(*row) for row in res.fetchall()] + def review_get_by_status(self, status, need_thread=False): + """Get review records with a given status + + Args: + status (str): Status to filter by (e.g. 
'draft', 'sent') + need_thread (bool): If True, only return records with a + gmail_thread_id + + Return: + list of Review: Matching review records + """ + sql = ('SELECT id, svid, seq, body, approved, timestamp, ' + 'draft_id, status, gmail_msg_id, gmail_thread_id ' + 'FROM review WHERE status = ?') + if need_thread: + sql += ' AND gmail_thread_id IS NOT NULL' + res = self.execute(sql, (status,)) + return [Review(*row) for row in res.fetchall()] + def review_get_previous(self, series_id, version): """Get reviews from the previous version of a series diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py index 3b78399d907..e207e8bc173 100644 --- a/tools/patman/test_cseries.py +++ b/tools/patman/test_cseries.py @@ -3590,7 +3590,7 @@ Date: .* self.assertEqual(f'Update database to v{version}', out.getvalue().strip()) self.assertEqual(version, db.get_schema_version()) - self.assertEqual(8, database.LATEST) + self.assertEqual(10, database.LATEST) def test_migrate_future_version(self): """Test that a database newer than patman is rejected""" -- 2.43.0
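[Editor's note: the additive ALTER TABLE pattern behind _migrate_to_v9() and _migrate_to_v10() can be exercised standalone. The table definitions below are cut-down stand-ins, not the real v8 schema; only the added column names and the status backfill match the patch.]

```python
import sqlite3

def migrate_to_v9(cur):
    # v9: add draft-lifecycle columns to review, then backfill status
    for col in ('draft_id', 'status', 'gmail_msg_id', 'gmail_thread_id'):
        cur.execute(f'ALTER TABLE review ADD COLUMN {col}')
    cur.execute("UPDATE review SET status = 'new'")

def migrate_to_v10(cur):
    # v10: add the review-notes column to ser_ver
    cur.execute('ALTER TABLE ser_ver ADD COLUMN notes')

con = sqlite3.connect(':memory:')
cur = con.cursor()
# Cut-down stand-ins for the v8 tables, just enough to migrate
cur.execute('CREATE TABLE review (id INTEGER PRIMARY KEY, svid INTEGER, '
            'seq INTEGER, body TEXT, approved BOOLEAN, timestamp TEXT)')
cur.execute('CREATE TABLE ser_ver (id INTEGER PRIMARY KEY, series_id INTEGER, '
            'version INTEGER, desc TEXT)')
migrate_to_v9(cur)
migrate_to_v10(cur)
review_cols = [row[1] for row in cur.execute('PRAGMA table_info(review)')]
ser_ver_cols = [row[1] for row in cur.execute('PRAGMA table_info(ser_ver)')]
```

[ALTER TABLE ... ADD COLUMN appends, which is why the Review namedtuple can simply grow trailing fields and why each SELECT must list the new columns explicitly.]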
From: Simon Glass <sjg@chromium.org> The notes column exists in the ser_ver table but the SerVer namedtuple and its SELECT queries do not include it. Add the field and update all queries to select it. Also fix SerVer construction in cser_helper to include the extra None for the notes position. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/cser_helper.py | 4 ++-- tools/patman/database.py | 7 ++++--- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/tools/patman/cser_helper.py b/tools/patman/cser_helper.py index 7979deda6dc..31a5e3663dc 100644 --- a/tools/patman/cser_helper.py +++ b/tools/patman/cser_helper.py @@ -1370,10 +1370,10 @@ class CseriesHelper: updated += 1 if cover: info = SerVer(svid, None, None, None, cover.id, - cover.num_comments, cover.name, None, None) + cover.num_comments, cover.name, None, None, None) else: info = SerVer(svid, None, None, None, None, None, patches[0].name, - None, None) + None, None, None) self.db.ser_ver_set_info(info) return updated, 1 if cover else 0 diff --git a/tools/patman/database.py b/tools/patman/database.py index 8a96030b68f..50b9ea0dca5 100644 --- a/tools/patman/database.py +++ b/tools/patman/database.py @@ -39,7 +39,7 @@ Review = namedtuple( SerVer = namedtuple( 'SER_VER', 'idnum,series_id,version,link,cover_id,cover_num_comments,name,' - 'archive_tag,desc') + 'archive_tag,desc,notes') # Record from the pcommit table: # idnum (int): record ID @@ -783,7 +783,7 @@ class Database: # pylint:disable=R0904 ValueError: There is no matching idnum/version """ base = ('SELECT id, series_id, version, link, cover_id, ' - 'cover_num_comments, name, archive_tag, desc ' + 'cover_num_comments, name, archive_tag, desc, notes ' 'FROM ser_ver WHERE series_id = ?') if version: res = self.execute(base + ' AND version = ?', @@ -825,7 +825,8 @@ class Database: # pylint:disable=R0904 """ res = self.execute( 'SELECT id, series_id, version, link, cover_id, ' - 'cover_num_comments, name, archive_tag, desc FROM 
ser_ver') + 'cover_num_comments, name, archive_tag, desc, notes ' + 'FROM ser_ver') items = res.fetchall() return [SerVer(*x) for x in items] -- 2.43.0
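[Editor's note: appending 'notes' to SER_VER is what forces every positional construction site, such as the two in cser_helper, to grow an extra None. A namedtuple field default is an alternative the patch does not use, sketched here for comparison; it leaves pre-v10 call sites untouched.]

```python
from collections import namedtuple

# Same field list as the patch; 'defaults' applies right-to-left,
# so this tuple gives only the trailing 'notes' field a default
SerVer = namedtuple(
    'SER_VER',
    'idnum,series_id,version,link,cover_id,cover_num_comments,name,'
    'archive_tag,desc,notes',
    defaults=(None,))

# A pre-v10 call site passing nine positional values still works
sv = SerVer(1, 2, 3, '12345', None, 0, 'cover', None, 'My series')
```

[The trade-off: defaults hide a missing value silently, whereas the patch's explicit extra None makes every call site visibly match the schema.]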
From: Simon Glass <sjg@chromium.org> Add a 'patman series info' command that displays detailed information about a series including description, upstream, and for each version: the patchwork link, per-version description, cover letter name, patch list with state, archive tag and review notes. This helps diagnose autolink issues by showing what description each version has stored, which is what patchwork searches use. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/cmdline.py | 1 + tools/patman/control.py | 2 ++ tools/patman/cseries.py | 47 ++++++++++++++++++++++++++++++++++++ tools/patman/test_cseries.py | 42 ++++++++++++++++++++++++++++++++ 4 files changed, 92 insertions(+) diff --git a/tools/patman/cmdline.py b/tools/patman/cmdline.py index edefa778446..bee9fd21483 100644 --- a/tools/patman/cmdline.py +++ b/tools/patman/cmdline.py @@ -294,6 +294,7 @@ def add_series_subparser(subparsers): series_subparsers.add_parser('get-link') series_subparsers.add_parser('inc') + series_subparsers.add_parser('info') ls = series_subparsers.add_parser('ls', aliases=['list']) _add_archived(ls) diff --git a/tools/patman/control.py b/tools/patman/control.py index 3ce9736d6ba..9ba9b6e0b8e 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -199,6 +199,8 @@ def do_series(args, test_db=None, pwork=None, cser=None): dry_run=args.dry_run, show_summary=True) elif args.subcmd == 'dec': cser.decrement(args.series, args.dry_run) + elif args.subcmd == 'info': + cser.show_info(args.series) elif args.subcmd == 'gather': cser.gather(pwork, args.series, args.version, args.show_comments, args.show_cover_comments, args.gather_tags, diff --git a/tools/patman/cseries.py b/tools/patman/cseries.py index 1dd1550f367..e272a5839fc 100644 --- a/tools/patman/cseries.py +++ b/tools/patman/cseries.py @@ -861,6 +861,53 @@ class Cseries(cser_helper.CseriesHelper): if dry_run: tout.info('Dry run completed') + def show_info(self, series): + """Show detailed information about a 
series and all its versions + + Args: + series (str): Series name, or None for current branch + """ + ser, _ = self._parse_series_and_version(series, None) + if not ser.idnum: + raise ValueError(f"Series '{ser.name}' not found in database") + + print(f"Series: {ser.name}") + print(f" Description: {ser.desc}") + print(f" Upstream: {ser.upstream or '(none)'}") + + versions = self.db.ser_ver_get_for_series(ser.idnum) + if not isinstance(versions, list): + versions = [versions] + + for sv in versions: + link_str = sv.link or '(none)' + print(f"\n Version {sv.version}:") + print(f" Link: {link_str}") + print(f" Description: {sv.desc or '(none)'}") + if sv.name: + print(f" Cover: {sv.name}") + if sv.archive_tag: + print(f" Archive tag: {sv.archive_tag}") + + # Show patches + try: + pclist = self.db.pcommit_get_list(sv.idnum) + print(f" Patches: {len(pclist)}") + for pc in pclist: + state = f' [{pc.state}]' if pc.state else '' + print(f" {pc.seq + 1}: {pc.subject}{state}") + except (ValueError, AttributeError): + pass + + # Show notes if any + if sv.notes: + lines = sv.notes.strip().splitlines() + print(f" Notes: {lines[0]}") + for line in lines[1:3]: + print(f" {line}") + if len(lines) > 3: + print(f" ... 
({len(lines)} lines)") + def set_upstream(self, series, ups, dry_run=False): """Set the upstream for a series diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py index e207e8bc173..5974c69253a 100644 --- a/tools/patman/test_cseries.py +++ b/tools/patman/test_cseries.py @@ -4438,3 +4438,45 @@ Date: .* # Future 3 weeks when = datetime(2025, 3, 31, 10, 0, 0) self.assertEqual('in 3w', wf.friendly_time(now, when)) + + def test_series_info(self): + """Test the series info command""" + cser = self.get_database() + + # Create a series with upstream and two versions + cser.db.upstream_add('us', 'https://us.example.com') + series_id = cser.db.series_add('test-info', 'My test series', ups='us') + svid1 = cser.db.ser_ver_add(series_id, 1, link='12345', + desc='First version desc') + svid2 = cser.db.ser_ver_add(series_id, 2, desc='Second version desc') + + # Add patches to v1 + from patman.database import Pcommit + cser.db.pcommit_add_list(svid1, [ + Pcommit(idnum=None, seq=0, subject='Fix the widget', + svid=svid1, change_id=None, state=None, + patch_id=None, num_comments=0), + Pcommit(idnum=None, seq=1, subject='Add widget tests', + svid=svid1, change_id=None, state=None, + patch_id=None, num_comments=0)]) + + # Add notes to v2 + cser.db.ser_ver_set_notes(svid2, 'Fixed review feedback') + cser.commit() + + with terminal.capture() as (out, _): + cser.show_info('test-info') + + output = out.getvalue() + self.assertIn('Series: test-info', output) + self.assertIn('Description: My test series', output) + self.assertIn('Upstream: us', output) + self.assertIn('Version 1:', output) + self.assertIn('Link: 12345', output) + self.assertIn('First version desc', output) + self.assertIn('Patches: 2', output) + self.assertIn('Fix the widget', output) + self.assertIn('Add widget tests', output) + self.assertIn('Version 2:', output) + self.assertIn('Second version desc', output) + self.assertIn('Notes: Fixed review feedback', output) -- 2.43.0
From: Simon Glass <sjg@chromium.org> Extend review.py with the full AI review pipeline: - Build per-patch and cover-letter review prompts with series context, previous review history, existing patchwork comments and voice style - Parse structured agent output (GREETING/COMMENT/VERDICT format) - Format review emails with proper quoting, attribution and Reviewed-by tags - Guess greeting name from email when the agent cannot determine it - Mechanical cleanup of backticks and quoted function references - Refinement agent pass for conciseness, deduplication and voice - Voice learning from past Gmail or patchwork reviews - Voice refinement by comparing draft vs sent text - Reply handling for follow-up responses to reviewer comments - Series search, git state save/restore, and Gmail draft creation - Full do_review() orchestration with force re-review, stored review display, and incomplete review recovery Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/review.py | 2009 +++++++++++++++++++++++++++++++++++++--- 1 file changed, 1894 insertions(+), 115 deletions(-) diff --git a/tools/patman/review.py b/tools/patman/review.py index d3eac86f8a1..74bc81a2a30 100644 --- a/tools/patman/review.py +++ b/tools/patman/review.py @@ -4,6 +4,8 @@ # Written by Simon Glass <simon.glass@canonical.com> # +# pylint: disable=C0302,R0914 + """AI-powered patch review using Claude. Fetches patches from patchwork, applies them to a local branch using @@ -12,22 +14,97 @@ Gmail draft replies. 
""" import asyncio +from collections import namedtuple +from datetime import datetime import os +import re import tempfile import aiohttp -from patman.database import Pcommit +from u_boot_pylib import claude as claude_mod +from u_boot_pylib import command from u_boot_pylib import gitutil +from u_boot_pylib import terminal +from u_boot_pylib import tools from u_boot_pylib import tout +from patman import database +from patman import gmail +from patman import patchstream + +try: + from claude_agent_sdk import ClaudeAgentOptions +except ImportError: + ClaudeAgentOptions = None + +class ReviewContext: # pylint: disable=R0902 + """Common context for review operations + + Attributes: + reviewer_name (str): Reviewer's name + reviewer_email (str): Reviewer's email + series_data (dict): Series data from patchwork + repo_path (str): Path to the git repository + signoff (str or None): Sign-off text for reviews with comments + spelling (str): Spelling convention + comments_path (str or None): Path to existing patchwork comments + pwork (Patchwork or None): Patchwork instance + cser (Cseries or None): Open cseries instance + series_id (int or None): Series database ID + svid (int or None): ser_ver database ID + version (int or None): Series version number + branch_name (str or None): Branch with applied patches + upstream_branch (str or None): Upstream branch ref + patch_count (int or None): Number of patches + """ + + def __init__(self, pwork, cser, series_data): + self.pwork = pwork + self.cser = cser + self.series_data = series_data + self.reviewer_name = None + self.reviewer_email = None + self.repo_path = None + self.signoff = None + self.spelling = 'British' + self.comments_path = None + self.series_id = None + self.svid = None + self.version = None + self.branch_name = None + self.upstream_branch = None + self.patch_count = None + self.cover_content = None + self.previous_reviews = {} + self.diffstat = None + + @property + def reviewer_tag(self): + """Get 'Name <email>' 
string for the reviewer""" + return f'{self.reviewer_name} <{self.reviewer_email}>' + + @property + def author_name(self): + """Get the series submitter's name""" + return self.series_data.get('submitter', {}).get('name', '') + + @property + def author_email(self): + """Get the series submitter's email""" + return self.series_data.get('submitter', {}).get('email', '') + + @property + def date(self): + """Get the series date string""" + return self.series_data.get('date', '') + async def fetch_mbox(pwork_url, link): """Download the series mbox file from patchwork Args: - pwork_url (str): Patchwork server URL, e.g. - 'https://patchwork.ozlabs.org' + pwork_url (str): Patchwork server URL link (str): Patchwork series link/ID Returns: @@ -37,23 +114,21 @@ async def fetch_mbox(pwork_url, link): ValueError: if the download fails """ url = f'{pwork_url}/series/{link}/mbox/' - tout.info(f'Downloading mbox from {url}') - + tout.notice(f'Downloading mbox from {url}') mbox_path = os.path.join(tempfile.gettempdir(), f'patman_review_{link}.mbox') async with aiohttp.ClientSession() as client: async with client.get(url) as response: if response.status != 200: raise ValueError( - f'Failed to download mbox from {url}: ' - f'HTTP {response.status}') + f'Failed to download mbox: HTTP {response.status}') + content = await response.read() if not content: raise ValueError(f'Empty mbox downloaded from {url}') - with open(mbox_path, 'wb') as fh: - fh.write(content) - tout.info(f'Downloaded {len(content)} bytes to {mbox_path}') + tools.write_file(mbox_path, content) + tout.notice(f'Downloaded {len(content)} bytes to {mbox_path}') return mbox_path @@ -71,8 +146,8 @@ def _build_apply_prompt(mbox_path, branch_name, upstream_branch): Returns: str: The prompt text for the agent """ - return f"""Apply a patch series from a patchwork mbox file to a new \ -git branch. + return f'''Apply a patch series from a patchwork mbox \ +file to a new git branch. TASK: 1. 
Create a new branch called '{branch_name}' from '{upstream_branch}':
@@ -81,13 +156,37 @@ TASK:
 2. Apply the patches from the mbox file:
    git am {mbox_path}
 
-3. If git am fails with conflicts:
-   - Inspect the conflict with 'git diff' and 'git status'
-   - Try to resolve the conflicts sensibly
-   - Run 'git add' on resolved files and 'git am --continue'
-   - If a patch simply cannot be applied (e.g. already applied or
-     completely incompatible), use 'git am --skip' and note which
-     patch was skipped
+3. If git am fails, try this recovery sequence:
+
+   a. First, abort the failed git am:
+      git am --abort
+
+   b. Split the mbox into individual patches:
+      mkdir -p /tmp/patches_{branch_name}
+      git mailsplit -o /tmp/patches_{branch_name} {mbox_path}
+
+   c. For each patch file (in order), try to apply it:
+      git am /tmp/patches_{branch_name}/<file>
+
+      If that fails, abort and try 'patch' with fuzz:
+      git am --abort
+      git mailinfo /tmp/msg /tmp/diff < /tmp/patches_{branch_name}/<file>
+      patch -p1 --fuzz=3 < /tmp/diff
+
+      If patch also fails or has rejects (.rej files), read the .rej
+      file and the target source file, then apply the changes manually
+      using the Edit tool. The .rej file shows what was expected and
+      what to add/remove — find the corresponding location in the
+      current source and make the equivalent change.
+
+      After fixing up (whether via patch or manually):
+      git add -A
+      # Note: git mailinfo writes only the body to /tmp/msg; the
+      # Author:, Email: and Subject: headers go to stdout, so use those:
+      git commit --author="<Author> <<Email>>" -m "<Subject>" -m "$(cat /tmp/msg)"
+
+   If a patch is simply not needed (e.g. already applied),
+   skip it and note which patch was skipped.
 
 4.
After all patches are applied (or skipped), run: git log --oneline {upstream_branch}..HEAD @@ -98,19 +197,20 @@ TASK: - The branch name: {branch_name} IMPORTANT: -- Do NOT modify the patches before applying -- Do NOT use 'git am --abort' unless all attempts to resolve fail +- Do NOT modify the patch content — apply it as-is, adapting only to + context changes (moved lines, renamed variables, etc.) +- Do NOT use 'git am --abort' unless you are about to retry differently - If you skip a patch, continue with the remaining patches - The mbox file is at: {mbox_path} -""" +''' async def apply_series(pwork_url, link, branch_name, upstream_branch, repo_path): """Download and apply a patch series to a new local branch - Uses the Claude agent to handle the git am process, including - conflict resolution. + Uses the Claude agent to handle the git am process, including conflict + resolution. Args: pwork_url (str): Patchwork server URL @@ -123,40 +223,843 @@ async def apply_series(pwork_url, link, branch_name, upstream_branch, tuple: (success, branch_name) where success is bool and branch_name is the name of the created branch """ - # pylint: disable=C0415,E0611 - from u_boot_pylib.claude import (check_available, run_agent_collect, - MAX_BUFFER_SIZE) - if not check_available(): + if not claude_mod.check_available(): return False, None - from u_boot_pylib.claude import ClaudeAgentOptions - # Download the mbox mbox_path = await fetch_mbox(pwork_url, link) # Build the prompt and run the agent prompt = _build_apply_prompt(mbox_path, branch_name, upstream_branch) options = ClaudeAgentOptions( - allowed_tools=['Bash', 'Read', 'Grep'], - cwd=repo_path, - max_buffer_size=MAX_BUFFER_SIZE, - ) + allowed_tools=['Bash', 'Read', 'Grep'], cwd=repo_path, + max_buffer_size=claude_mod.MAX_BUFFER_SIZE) - tout.info(f'Applying series to branch {branch_name}...') - success, _ = await run_agent_collect(prompt, options) + tout.notice(f'Applying series to branch {branch_name}...') + success, _ = await 
claude_mod.run_agent_collect(prompt, options) - # Clean up the mbox file - try: + if os.path.exists(mbox_path): os.unlink(mbox_path) - except OSError: - pass return success, branch_name -def apply_series_sync(pwork_url, link, branch_name, upstream_branch, - repo_path): +def _build_review_prompt(ctx, commit_hash, seq, all_commits, + previous_review): + """Build the Claude agent prompt for reviewing a single patch + + Args: + ctx (ReviewContext): Review context (uses cover_content, + comments_path, spelling) + commit_hash (str): Git commit hash of the patch on the local branch + seq (int): Patch sequence number (1-based) + all_commits (list of tuple): (seq, hash, subject) for all patches + previous_review (str or None): Previous review text (for v2+) + + Returns: + str: The prompt text for the agent + """ + cover_section = '' + if ctx.cover_content: + cover_section = f''' +SERIES CONTEXT (cover letter): +{ctx.cover_content} +''' + + prev_section = '' + if previous_review: + prev_section = f''' +PREVIOUS REVIEW (from earlier version): +{previous_review} + +Consider whether the issues raised in the previous review have been +addressed in this version. +''' + + voice = get_voice() + voice_section = '' + if voice: + voice_section = f''' +WRITING STYLE: +Match this voice when writing your comments: +{voice} +''' + + # Build the series overview + series_lines = [] + total = len(all_commits) + for s, h, subj in all_commits: + marker = '>>>' if s == seq else ' ' + series_lines.append(f" {marker} {s}/{total} {h[:12]} {subj}") + series_overview = '\n'.join(series_lines) + + comments_section = '' + if ctx.comments_path: + comments_section = f''' +EXISTING COMMENTS: +Other reviewers (or the author) may have already commented on this +series. Read the file below for context. +(see file: {ctx.comments_path}) + +CRITICAL: If another reviewer has already raised a point, do NOT repeat +it, rephrase it, or elaborate on it — even to add a suggestion. The +author has already been told. 
Simply skip that issue entirely and focus +on things that have NOT been said. +''' + + return f'''You are an experienced U-Boot developer reviewing \ +a patch submitted to +the U-Boot mailing list. This is patch {seq}/{len(all_commits)} in the series. +{voice_section} +SERIES OVERVIEW (all patches, >>> marks the one you are reviewing): +{series_overview} + +You can run 'git show <hash>' on any of these to see the full diff. + +TASK: +1. First, study the full series to understand the overall design and how + the patches relate to each other. Run 'git show <hash>' for each + patch in the series overview above, starting from patch 1. This + gives you the context to spot cross-patch issues (e.g. something + introduced in one patch that affects another). + +2. Run: git show {commit_hash} + Re-read the patch you are reviewing in detail. + +3. Use Read and Grep tools to examine the surrounding source code for + context. Look at the files being modified to understand existing + patterns and verify the changes make sense. + +IMPORTANT: +- Do NOT run 'git checkout' or switch branches +- Do NOT 'cd' to another directory +- The source tree is already on the correct branch with all patches applied +{comments_section} + +4. Review the patch for: + - Correctness: Does the code do what the commit message says? Are + there logic errors, off-by-one errors, or missing error handling? + - Style: Does it follow U-Boot coding conventions (kernel style, + 80-column lines, proper use of log categories, DM conventions)? + - Commit message quality: Is it clear, using present/imperative + tense? Does it explain the motivation? + - API usage: Are U-Boot APIs used correctly? +{cover_section}{prev_section} +OUTPUT FORMAT: +Your response MUST use this exact structured format, with no other text +before or after. Start with a GREETING line containing the patch +author's first name (extracted from the commit's Author or +Signed-off-by line). 
If the name is not available, try to guess it from
+the email address (e.g. 'simon.glass@xxx' -> Simon). If you still
+cannot determine it, leave it empty.
+
+Example for approved patch:
+
+GREETING: Marek
+VERDICT: approved
+
+Example for patch with issues:
+
+GREETING: J.
+COMMENT:
+> diff --git a/boot/bootm.c b/boot/bootm.c
+> @@ -635,10 +633,12 @@ static int do_bootm_states(...)
+> + quoted diff line 1
+> + quoted diff line 2
+
+Your comment about this code goes here. Be specific and constructive.
+
+VERDICT: changes_needed
+
+Rules:
+- Always start with GREETING: (first name or empty)
+- Each COMMENT: block MUST include the diff header lines (the
+  'diff --git' and '@@' lines) before the quoted code, so the file
+  and line number are clear
+- Each COMMENT: block starts with quoted diff lines (using '> ' prefix),
+  followed by a blank line, then your comment text
+- Quote enough context from the diff to identify the location
+- Be specific and constructive, but brief — use as few words as
+  possible to make the point. Avoid restating what the code does.
+- NEVER use backticks — this is plain-text email, not markdown.
+  For functions, always use parentheses with no quotes: malloc() not
+  'malloc()' or `malloc`.
For other identifiers (variables, struct + names, filenames) do not quote them unless they are common English + words where the reader might not realise they are code references +- If another reviewer has already made a point, do NOT repeat it, + rephrase it, or add to it — skip it entirely +- Focus on what is wrong and what to do instead +- Reference U-Boot conventions where applicable +- Do not nitpick trivial style issues that checkpatch would catch +- Focus on logic, correctness, and design issues +- If unsure about something, say so rather than guessing +- Use {ctx.spelling} spelling in your comments +- Always end with exactly one VERDICT: line (approved or changes_needed) +''' + + +def _build_cover_review_prompt(ctx, all_commits): + """Build prompt for reviewing the cover letter / series + + Args: + ctx (ReviewContext): Review context (uses cover_content, + comments_path, spelling) + all_commits (list of tuple): (seq, hash, subject) for all patches + + Returns: + str: The prompt text + """ + voice = get_voice() + voice_section = '' + if voice: + voice_section = f''' +WRITING STYLE: +Match this voice when writing your comments: +{voice} +''' + series_lines = [] + for s, h, subj in all_commits: + series_lines.append(f" {s}. {h[:12]} {subj}") + series_overview = '\n'.join(series_lines) + + cover_section = '' + if ctx.cover_content: + cover_section = f''' +COVER LETTER: +{ctx.cover_content} +''' + + comments_section = '' + if ctx.comments_path: + comments_section = f''' +EXISTING COMMENTS: +(see file: {ctx.comments_path}) +''' + + return f'''You are an experienced U-Boot developer reviewing a patch series +submitted to the U-Boot mailing list. Review the series as a whole, +replying to the cover letter. +{voice_section} +SERIES ({len(all_commits)} patches): +{series_overview} + +You can run 'git show <hash>' on any of these to see the full diff. +{cover_section}{comments_section} +TASK: +1. Read through all the patches (use 'git show <hash>' for each) +2. 
Review the series for: + - Overall design and approach + - Whether the series is split sensibly into patches + - Whether the cover letter accurately describes what the series does + - Any cross-patch issues (e.g. something introduced in patch 1 that + is used incorrectly in patch 3) + - Missing patches (e.g. documentation, tests, Kconfig updates) + +IMPORTANT: +- Do NOT run 'git checkout' or switch branches +- Do NOT 'cd' to another directory +- Only comment on series-level issues; per-patch issues will be + addressed in individual patch replies + +OUTPUT FORMAT: +Same as for individual patches. Start with GREETING, then COMMENT +blocks (if any), then VERDICT. + +If the series design is sound and there is nothing to add beyond +what will be said on individual patches, output ONLY: + +VERDICT: skip + +If there are series-level issues worth raising (design problems, +missing patches, poor splitting, cover letter inaccuracies), output: + +GREETING: <name> +COMMENT: +> quoted text or description of the issue + +Your comment. + +VERDICT: changes_needed + +Rules: +- NEVER use backticks — this is plain-text email, not markdown. + For functions, always use parentheses with no quotes: malloc() not + 'malloc()' or `malloc`. 
For other identifiers do not quote them + unless they are common English words that might confuse the reader +- Use {ctx.spelling} spelling +- Be brief — only raise series-level concerns, not per-patch nits +- Do NOT repeat issues that belong on individual patches +- VERDICT: skip means no cover letter reply will be sent +- Always end with exactly one VERDICT: line +''' + + +def _parse_greeting_verdict(text): + """Extract greeting and verdict from agent output + + Args: + text (str): Raw agent output text + + Returns: + tuple: (greeting, verdict) + """ + verdict = 'changes_needed' + greeting = '' + for line in text.splitlines(): + stripped = line.strip() + if stripped.lower().startswith('greeting:'): + greeting = stripped.split(':', 1)[1].strip() + elif stripped.lower().startswith('verdict:'): + val = stripped.lower().split(':', 1)[1].strip() + if val == 'approved': + verdict = 'approved' + elif val == 'skip': + verdict = 'skip' + break + return greeting, verdict + + +def _parse_comments(text): + """Extract COMMENT blocks from agent output + + Each COMMENT block contains optional quoted diff lines (prefixed with + '> ') followed by the reviewer's comment. 
+ + Args: + text (str): Raw agent output text + + Returns: + list of tuple: (hunk, comment) pairs + """ + comments = [] + in_comment = False + hunk_lines = [] + comment_lines = [] + past_hunk = False + + for line in text.splitlines(): + if line.strip().startswith('COMMENT:'): + if hunk_lines or comment_lines: + hunk = '\n'.join(hunk_lines) + comment = '\n'.join(comment_lines).strip() + comments.append((hunk, comment)) + hunk_lines = [] + comment_lines = [] + in_comment = True + past_hunk = False + continue + + if line.strip().lower().startswith('verdict:'): + if in_comment and (hunk_lines or comment_lines): + hunk = '\n'.join(hunk_lines) + comment = '\n'.join(comment_lines).strip() + comments.append((hunk, comment)) + in_comment = False + continue + + if in_comment: + if not past_hunk and line.startswith('> '): + hunk_lines.append(line) + elif not past_hunk and not line.strip() and hunk_lines: + past_hunk = True + else: + past_hunk = True + comment_lines.append(line) + + if in_comment and (hunk_lines or comment_lines): + comments.append(('\n'.join(hunk_lines), + '\n'.join(comment_lines).strip())) + return comments + + +def parse_review_output(text): + """Parse structured review output from the agent + + Args: + text (str): Raw agent output text + + Returns: + tuple: (greeting, verdict, comments) where greeting is the author's + first name (or ''), verdict is 'approved' or 'changes_needed', + and comments is a + list of (hunk, comment) tuples + """ + greeting, verdict = _parse_greeting_verdict(text) + comments = _parse_comments(text) + return greeting, verdict, comments + + +def guess_name_from_email(email): + """Guess a first name from an email address + + If the local part of the email contains a recognisable name (e.g. + 'simon.glass@xxx'), extract and capitalise it. 
+ + Args: + email (str): Email address + + Returns: + str: Guessed first name, or '' if it cannot be determined + """ + local = email.split('@')[0] if '@' in email else '' + if not local: + return '' + part = local.split('.')[0].split('_')[0].split('-')[0] + if part.isalpha() and len(part) >= 2: + return part.capitalize() + return '' + + +def _format_approved(ctx, commit_message=None, diffstat=None): + """Format an approved review with no comments + + Quotes the commit message and diffstat, then adds a Reviewed-by tag + + Args: + ctx (ReviewContext): Review context + commit_message (str or None): Commit message to quote + diffstat (str or None): Diffstat to quote + + Returns: + str: Formatted email body + """ + lines = [f'On {ctx.date}, {ctx.author_name} <{ctx.author_email}> wrote:'] + if commit_message: + for cl in commit_message.strip().splitlines(): + lines.append(f'> {cl}') + if diffstat: + lines.append('>') + for dl in diffstat.strip().splitlines(): + lines.append(f'> {dl}') + lines += ['', f'Reviewed-by: {ctx.reviewer_tag}', ''] + return '\n'.join(lines) + + +def _format_with_comments(ctx, greeting, verdict, comments, + commit_message=None): + """Format a review that has comments + + Args: + ctx (ReviewContext): Review context + greeting (str): Author's first name, or '' if unknown + verdict (str): 'approved' or 'changes_needed' + comments (list): List of (hunk, comment) tuples + commit_message (str or None): Commit message to quote + + Returns: + str: Formatted email body + """ + if not greeting: + greeting = guess_name_from_email(ctx.author_email) + lines = [f'Hi {greeting},' if greeting else 'Hi,', ''] + + lines.append( + f'On {ctx.date}, {ctx.author_name} <{ctx.author_email}> wrote:') + if commit_message: + msg_lines = commit_message.strip().splitlines() + max_quote = 20 + for cl in msg_lines[:max_quote]: + lines.append(f'> {cl}') + if len(msg_lines) > max_quote: + lines.append('> [...]') + lines.append('') + + for hunk, comment in comments: + if hunk: + 
lines.append(hunk) + lines.append('') + lines.append(comment) + lines.append('') + + if verdict == 'approved': + lines += [f'Reviewed-by: {ctx.reviewer_tag}', ''] + elif comments and ctx.signoff: + lines += [ctx.signoff, ''] + return '\n'.join(lines) + + +def format_review_email(ctx, greeting, verdict, comments, + commit_message=None): + """Format parsed review data into an email body + + Delegates to _format_approved() for clean approvals or + _format_with_comments() for reviews with feedback. + + Args: + ctx (ReviewContext): Review context with reviewer/author info + and optional diffstat + greeting (str): Author's first name, or '' if unknown + verdict (str): 'approved', 'changes_needed' or 'skip' + comments (list): List of (hunk, comment) tuples + commit_message (str or None): Commit message to quote + + Returns: + str: Formatted email body text + """ + if verdict == 'approved' and not comments: + return _format_approved(ctx, commit_message, ctx.diffstat) + return _format_with_comments(ctx, greeting, verdict, comments, + commit_message) + + +def cleanup_review_text(text): + """Apply mechanical fixes to review email text + + Removes backticks, fixes function quoting style, and other + formatting issues that the review agent sometimes produces despite + prompt instructions. + + Args: + text (str): Review email body + + Returns: + str: Cleaned-up text + """ + # Replace backtick-quoted code with plain text: `foo` -> foo + text = re.sub(r'`([^`]+)`', r'\1', text) + + # Remove quotes around function references: 'func()' -> func() + text = re.sub(r"'(\w+\(\))'", r'\1', text) + + # Remove double quotes around function references: "func()" -> func() + text = re.sub(r'"(\w+\(\))"', r'\1', text) + + return text + + +_REFINE_REVIEWS_PROMPT = '''You are editing draft code-review \ +emails for the U-Boot +mailing list. Your job is to make them more concise and natural while +preserving all technical content. 
+ +DRAFTS TO EDIT: +{drafts} + +{voice_section} +RULES: +- Make each review as succinct as possible. Remove filler, hedging and + unnecessary preamble. Every sentence should earn its place. +- Remove duplicate points — if the same issue is raised on multiple + patches, keep it only on the most relevant one and remove it from the + others. Also remove any comment that restates, rephrases, or + elaborates on a point already made by another reviewer — the author + has already been told. +- NEVER use backticks — this is plain-text email, not markdown. + For functions, always use parentheses with no quotes: malloc() not + 'malloc()' or `malloc`. For other identifiers do not quote them + unless they are common English words that might confuse the reader. +- Use {spelling} spelling. +- Do not change the technical substance of any comment. +- Do not add new review comments or suggestions. +- Do not change Reviewed-by tags, attribution lines, quoted commit + messages, or quoted diff hunks. These are structural parts of the + email that must be preserved exactly. + +OUTPUT FORMAT: +Return each review separated by a line containing only '---SEQ N---' +(where N is the patch number, 0 for cover letter). Include the full +edited email body after each separator. No other text before or after. + +Example: +---SEQ 3--- +Hi Marek, + +On 2026-03-21, Marek Vasut <marex@denx.de> wrote: +...edited review body... + +Regards, Simon +---SEQ 5--- +...next review... +''' + + +def _needs_refinement(body): + """Check whether a review body has reviewer comments + + Approved reviews with only structural lines (quoted text, + attribution, Reviewed-by) do not need refinement. 
+
+    Args:
+        body (str): Review email body
+
+    Returns:
+        bool: True if the review has comments to refine
+    """
+    return any(
+        not line.startswith('>') and
+        not line.startswith('On ') and
+        not line.startswith('Reviewed-by:') and
+        not line.startswith(('Hi ', 'Hi,')) and line.strip()
+        for line in body.splitlines())
+
+
+def _parse_refined_output(log):
+    """Parse ---SEQ N--- delimited output from the refinement agent
+
+    Args:
+        log (str): Raw agent output
+
+    Returns:
+        dict: Map of seq number to refined body text
+    """
+    refined = {}
+    current_seq = None
+    current_lines = []
+    for line in log.splitlines():
+        match = re.match(r'^---SEQ (\d+)---$', line)
+        if match:
+            if current_seq is not None and current_lines:
+                refined[current_seq] = '\n'.join(current_lines).strip()
+            current_seq = int(match.group(1))
+            current_lines = []
+        elif current_seq is not None:
+            current_lines.append(line)
+    if current_seq is not None and current_lines:
+        refined[current_seq] = '\n'.join(current_lines).strip()
+    return refined
+
+
+async def refine_reviews(review_bodies, spelling='British'):
+    """Run a refinement agent over all review drafts
+
+    Makes reviews more concise, removes duplicates across
+    patches, and enforces formatting conventions. Approved
+    reviews without comments are excluded.
+ + Args: + review_bodies (dict): Map of patch index to review + body text + spelling (str): Spelling convention + + Returns: + dict: Updated review_bodies with refined text + """ + to_refine = {seq: body + for seq, body in review_bodies.items() + if _needs_refinement(body)} + if not to_refine: + return review_bodies + + draft_parts = [] + for seq in sorted(to_refine): + draft_parts.append(f'---SEQ {seq}---') + draft_parts.append(to_refine[seq]) + drafts_text = '\n'.join(draft_parts) + + voice = get_voice() + voice_section = '' + if voice: + voice_section = f'''WRITING STYLE: +Match this voice when editing the reviews: +{voice} +''' + + prompt = _REFINE_REVIEWS_PROMPT.format(drafts=drafts_text, + voice_section=voice_section, spelling=spelling) + + options = ClaudeAgentOptions(allowed_tools=[], + max_buffer_size=claude_mod.MAX_BUFFER_SIZE) + + tout.notice('Refining review drafts...') + success, log = await claude_mod.run_agent_collect(prompt, options) + if not success or not log.strip(): + tout.warning('Refinement failed; using original drafts') + return review_bodies + + refined = _parse_refined_output(log) + + # Keep originals for any the agent missed + result = dict(review_bodies) + for seq, body in refined.items(): + if seq in result and body: + result[seq] = body + return result + + +def refine_reviews_sync(review_bodies, spelling='British'): + """Synchronous wrapper for refine_reviews()""" + loop = asyncio.get_event_loop() + return loop.run_until_complete(refine_reviews(review_bodies, spelling)) + + +def _write_comments_file(series_data, pwork): + """Fetch and write existing patchwork comments to a temp file + + Args: + series_data (dict): Series data from patchwork get_series() + pwork: Patchwork instance + + Returns: + str or None: Path to the comments file, or None if no comments + """ + + + patches = series_data.get('patches', []) + cover = series_data.get('cover_letter') + + async def _fetch_comments(): + all_comments = [] + async with 
aiohttp.ClientSession() as client: + # Cover letter comments + if cover: + cover_comments = await pwork.get_cover_comments( + client, cover['id']) + for comment in cover_comments: + sub = comment.get('submitter', {}) + all_comments.append(f"=== Comment on cover letter ===\n" + f"From: {sub.get('name', '')} " + f"<{sub.get('email', '')}>\n" + f"Date: {comment.get('date', '')}\n\n" + f"{comment.get('content', '')}\n") + + # Patch comments + for i, patch in enumerate(patches): + patch_comments = await pwork.get_patch_comments( + client, str(patch['id'])) + for comment in patch_comments: + sub = comment.get('submitter', {}) + all_comments.append(f"=== Comment on patch {i + 1}: " + f"{patch.get('name', '')} ===\n" + f"From: {sub.get('name', '')} " + f"<{sub.get('email', '')}>\n" + f"Date: {comment.get('date', '')}\n\n" + f"{comment.get('content', '')}\n") + + return all_comments + + loop = asyncio.get_event_loop() + comments = loop.run_until_complete(_fetch_comments()) + + if not comments: + return None + + comments_path = os.path.join(tempfile.gettempdir(), + 'patman_review_comments.txt') + tools.write_file(comments_path, '\n'.join(comments), binary=False) + + tout.notice(f'Found {len(comments)} existing comment(s)') + return comments_path + + +async def _review_cover_letter(ctx, all_commits): + """Review the cover letter / series as a whole + + Args: + ctx (ReviewContext): Review context (uses cover_content etc.) 
+ all_commits (list): (seq, hash, subject) tuples + + Returns: + str or None: Review body, or None if skipped + """ + tout.notice('Reviewing series (cover letter)...') + prompt = _build_cover_review_prompt(ctx, all_commits) + options = ClaudeAgentOptions( + allowed_tools=['Bash', 'Read', 'Grep'], cwd=ctx.repo_path, + max_buffer_size=claude_mod.MAX_BUFFER_SIZE) + success, log = await claude_mod.run_agent_collect(prompt, options) + if not success or not log.strip(): + return None + greeting, verdict, comments = parse_review_output(log) + if verdict == 'skip': + return None + return format_review_email(ctx, greeting, verdict, comments) + + +async def _review_single_patch(ctx, cmt, seq, all_commits): + """Review a single patch + + Args: + ctx (ReviewContext): Review context + cmt: Commit object with hash, subject, msg, rtags + seq (int): Patch sequence number (1-based) + all_commits (list): (seq, hash, subject) tuples + + Returns: + str: Review body text + """ + body = cmt.msg.strip() + if body.startswith(cmt.subject): + commit_msg = body + else: + commit_msg = (cmt.subject + '\n' + body).strip() + ctx.diffstat = command.output('git', 'diff', '--stat', + f'{cmt.hash}~..{cmt.hash}', + cwd=ctx.repo_path).strip() + + previous_review = ctx.previous_reviews.get(seq) + prompt = _build_review_prompt(ctx, cmt.hash, seq, all_commits, + previous_review) + options = ClaudeAgentOptions(allowed_tools=['Bash', 'Read', 'Grep'], + cwd=ctx.repo_path, max_buffer_size=claude_mod.MAX_BUFFER_SIZE) + success, log = await claude_mod.run_agent_collect(prompt, options) + if not success or not log.strip(): + return '(Review failed — please review manually)' + greeting, verdict, comments = parse_review_output(log) + return format_review_email(ctx, greeting, verdict, comments, commit_msg) + + +async def review_patches(ctx): + """Run AI review on each patch in the applied branch + + Args: + ctx (ReviewContext): Review context (uses branch_name, + upstream_branch, patch_count, cover_content, + 
previous_reviews, repo_path, etc.) + + Returns: + dict: Map of patch index (1-based) to review body + """ + if not claude_mod.check_available(): + return {} + + commit_range = f'{ctx.upstream_branch}..{ctx.branch_name}' + git_dir = os.path.join(ctx.repo_path, '.git') + series = patchstream.get_metadata_for_list(commit_range, git_dir=git_dir) + all_commits = [(i + 1, cmt.hash, cmt.subject) + for i, cmt in enumerate(series.commits)] + commits = [c[1] for c in all_commits] + + if len(commits) != ctx.patch_count: + tout.warning(f'Expected {ctx.patch_count} patches but found ' + f'{len(commits)} commits on {ctx.branch_name}') + + review_bodies = {} + + if ctx.cover_content and ctx.patch_count > 1: + body = await _review_cover_letter(ctx, all_commits) + if body: + review_bodies[0] = body + + reviewer_tag = ctx.reviewer_tag + for i, cmt in enumerate(series.commits): + seq = i + 1 + + if (reviewer_tag in cmt.rtags.get('Reviewed-by', set()) or + reviewer_tag in cmt.rtags.get('Tested-by', set())): + tout.notice(f'Skipping patch {seq}/{len(commits)}' + ' (already reviewed)') + continue + + tout.notice(f'Reviewing patch {seq}/{len(commits)}...') + + review_bodies[seq] = await _review_single_patch(ctx, cmt, seq, + all_commits) + + return review_bodies + + +def review_patches_sync(ctx): + """Synchronous wrapper for review_patches() + + Returns: + dict: Map of patch index (1-based) to review body text + """ + loop = asyncio.get_event_loop() + return loop.run_until_complete(review_patches(ctx)) + + +def apply_series_sync(pwork_url, link, branch_name, upstream_branch, repo_path): """Synchronous wrapper for apply_series() Args: @@ -169,9 +1072,9 @@ def apply_series_sync(pwork_url, link, branch_name, upstream_branch, Returns: tuple: (success, branch_name) """ - return asyncio.run( - apply_series(pwork_url, link, branch_name, upstream_branch, - repo_path)) + loop = asyncio.get_event_loop() + return loop.run_until_complete(apply_series( + pwork_url, link, branch_name, upstream_branch, 
repo_path)) def search_series(pwork, title): @@ -192,12 +1095,10 @@ def search_series(pwork, title): """ async def _query(): if not pwork.proj_id: - raise ValueError( - 'Patchwork project not configured; use ' + raise ValueError('Patchwork project not configured; use ' "'patman patchwork set-project' or provide -l <link>") async with aiohttp.ClientSession() as client: - # pylint: disable=W0212 - return await pwork._query_series(client, title) + return await pwork.query_series(client, title) loop = asyncio.get_event_loop() results = loop.run_until_complete(_query()) @@ -211,17 +1112,312 @@ def search_series(pwork, title): f"link {match['id']})") return str(match['id']) + # Multiple matches - show them and pick the most recent tout.notice(f"Found {len(results)} matching series:") for i, match in enumerate(results): tout.notice(f" {i + 1}. [{match['id']}] {match['name']} " f"(v{match['version']}, {match['date']})") - best = max(results, - key=lambda r: (r.get('version', 0), r.get('date', ''))) + best = max(results, key=lambda r: (r.get('version', 0), r.get('date', ''))) tout.notice(f"Using most recent: {best['name']} (link {best['id']})") return str(best['id']) +def _git_restore(orig_branch, had_stash, repo_path): + """Restore git branch and stash after review + + Args: + orig_branch (str or None): Branch to restore + had_stash (bool): Whether changes were stashed + repo_path (str or None): Repository path + """ + + try: + if orig_branch and repo_path: + command.output('git', 'checkout', orig_branch, cwd=repo_path) + if had_stash: + command.output('git', 'stash', 'pop', cwd=repo_path) + except command.CommandExc: + pass + + +def create_drafts(ctx, args, review_bodies, review_ids): + """Create Gmail drafts for review emails + + If the reviewer's email differs from the Gmail account, a From header is + set on the draft so the email is sent with the correct identity. 
+ + Args: + ctx (ReviewContext): Review context + args (Namespace): Command-line arguments (for gmail_account, dry_run) + review_bodies (dict): Map of seq to review body text + review_ids (dict): Map of seq to review record ID + """ + to_draft = dict(review_bodies) + if not to_draft: + tout.notice('All reviews already have Gmail drafts') + return + + patch_headers = {} + patches = ctx.series_data.get('patches', []) + + async def _fetch_patch_headers(): + async with aiohttp.ClientSession() as client: + for i, patch in enumerate(patches): + data = await ctx.pwork.get_patch(client, str(patch['id'])) + patch_headers[i + 1] = data.get('headers', {}) + + loop = asyncio.get_event_loop() + loop.run_until_complete(_fetch_patch_headers()) + + sender = None + if ctx.reviewer_email and args.gmail_account: + if ctx.reviewer_email.lower() != args.gmail_account.lower(): + sender = ctx.reviewer_tag + + draft_ids = gmail.create_review_drafts(ctx.series_data, to_draft, + patch_headers=patch_headers, dry_run=args.dry_run, + account=args.gmail_account, sender=sender) + if args.dry_run: + tout.notice(f'Dry run: would create {len(to_draft)} draft(s)') + else: + for seq, draft_id in draft_ids.items(): + if seq in review_ids: + ctx.cser.db.review_set_draft_id(review_ids[seq], draft_id) + ctx.cser.commit() + tout.notice(f'Created {len(draft_ids)} Gmail draft(s)') + + +def _parse_reviewer(args): + """Extract reviewer name and email from args + + Uses --reviewer if provided, otherwise falls back to git config. 
+ + Args: + args (Namespace): Command-line arguments + + Returns: + tuple: (reviewer_name, reviewer_email) + + Raises: + ValueError: if the reviewer identity cannot be determined + """ + + if args.reviewer: + match = re.match(r'(.+?)\s*<(.+?)>', args.reviewer) + if not match: + raise ValueError( + f"Invalid reviewer format '{args.reviewer}';" + " use 'Name <email>'") + return match.group(1).strip(), match.group(2).strip() + name = gitutil.get_default_user_name() + email = gitutil.get_default_user_email() + if not name or not email: + raise ValueError( + 'Cannot determine reviewer identity; set git user.name' + ' and user.email, or use --reviewer') + return name, email + + +def _show_reviews(reviews, series_data): + """Display review bodies + + Args: + reviews: Either an iterable of Review records (with seq, body, + approved attributes) or a dict mapping seq to body text + series_data (dict): Series data from patchwork + """ + patches = series_data.get('patches', []) + cover = series_data.get('cover_letter') + + if isinstance(reviews, dict): + items = [(seq, body, 'Reviewed-by:' in body) + for seq, body in sorted(reviews.items())] + else: + items = [(rev.seq, rev.body, rev.approved) + for rev in reviews] + + for seq, body, approved in items: + if seq == 0: + label = 'Cover letter' + name = cover.get('name', '') if cover else 'Cover' + elif seq <= len(patches): + label = f'Patch {seq}/{len(patches)}' + name = patches[seq - 1].get('name', '') + else: + label = f'Patch {seq}' + name = '' + colour = (terminal.Color.GREEN if approved else + terminal.Color.YELLOW) + terminal.tprint(f'\n--- {label}: {name} ---', colour=colour) + print('---email start---') + print(body) + print('---email end---') + print() + + +def _do_learn_voice(args, pwork): + """Build a voice profile from past reviews + + Args: + args (Namespace): Command-line arguments + pwork (Patchwork or None): Configured patchwork instance + + Returns: + int: 0 on success, 1 on failure + """ + + source = 
args.learn_voice + account = getattr(args, 'gmail_account', None) + user_email = None + list_email = None + + if pwork and pwork.proj_id: + async def _get_list_email(): + async with aiohttp.ClientSession() as client: + # pylint: disable=W0212 + projects = await pwork._request(client, 'projects/') + for proj in projects: + if proj['id'] == pwork.proj_id: + return proj.get('list_email') + return None + + loop = asyncio.get_event_loop() + list_email = loop.run_until_complete(_get_list_email()) + + if args.reviewer: + match = re.match(r'(.+?)\s*<(.+?)>', args.reviewer) + if match: + user_email = match.group(2).strip() + if not user_email: + user_email = gitutil.get_default_user_email() + + count = getattr(args, 'voice_count', 20) + vp = VoiceParams(source, account, pwork, user_email, list_email, count) + return 0 if learn_voice_sync(vp) else 1 + + +def _sync_drafts(service, cser): + """Sync draft status with Gmail + + Detects sent and deleted drafts, updates the database, and refines the voice + profile if sent text differs from the draft. 
+ + Args: + service: Gmail API service + cser (Cseries): Open cseries instance + + Returns: + bool: True if there were drafts to sync + """ + reviews_with_drafts = cser.db.review_get_by_status('draft') + + if not reviews_with_drafts: + tout.notice('No pending drafts to sync') + return False + + sent, deleted = gmail.sync_drafts(service, reviews_with_drafts) + draft_bodies = {r.idnum: r.body for r in reviews_with_drafts} + for review_id, (body, msg_id, thread_id) in sent.items(): + cser.db.review_set_sent(review_id, body, msg_id, thread_id) + tout.notice(f'Review {review_id}: sent') + + draft = draft_bodies.get(review_id, '') + if draft and body.strip() != draft.strip(): + tout.notice(f'Review {review_id}: edits detected, ' + 'refining voice...') + refine_voice_sync(draft, body) + for review_id in deleted: + cser.db.review_set_deleted(review_id) + tout.notice(f'Review {review_id}: deleted (not sent)') + pending = len(reviews_with_drafts) - len(sent) - len(deleted) + if pending: + tout.notice(f'{pending} draft(s) still pending') + if sent or deleted: + tout.notice(f'Synced {len(sent)} sent, {len(deleted)} deleted') + cser.commit() + return True + + +def _handle_replies(service, account, ctx): + """Check for replies to sent reviews and generate responses + + Args: + service: Gmail API service + account (str or None): Gmail account for creating drafts + ctx (ReviewContext): Review context + """ + sent_reviews = ctx.cser.db.review_get_by_status('sent', + need_thread=True) + + reply_count = 0 + + for rev in sent_reviews: + replies = gmail.fetch_thread_replies( + service, rev.gmail_thread_id, rev.gmail_msg_id) + if not replies: + continue + + tout.notice(f'Review {rev.idnum}: {len(replies)} reply(ies)') + ctx.cser.db.review_set_replied(rev.idnum) + + for reply in replies: + response = handle_reply_sync(ctx, rev.body, reply['from'], + reply['body']) + if response: + reply_count += 1 + tout.notice(f" Draft response to {reply['from']}") + terminal.tprint(f"\n--- Reply to 
{reply['from']} ---", + colour=terminal.Color.CYAN) + print('---email start---') + print(response) + print('---email end---') + print() + + if account: + gmail.create_draft(gmail.get_service(account), + reply['from'], 'Re: ', response) + tout.notice(' Draft created') + else: + tout.notice( + f" No response needed to {reply['from']}") + + if reply_count: + tout.notice(f'Created {reply_count} response draft(s)') + ctx.cser.commit() + + +def _do_sync(args, cser): + """Sync sent drafts and handle replies + + Args: + args (Namespace): Command-line arguments + cser (Cseries): Open cseries instance + + Returns: + int: 0 on success, 1 on failure + """ + if not gmail.check_available(): + return 1 + account = getattr(args, 'gmail_account', None) + service = gmail.get_service(account) + + _sync_drafts(service, cser) + + reviewer_name, reviewer_email = _parse_reviewer(args) + ctx = ReviewContext(None, cser, {}) + ctx.reviewer_name = reviewer_name + ctx.reviewer_email = reviewer_email + ctx.repo_path = gitutil.get_top_level() + ctx.signoff = getattr(args, 'signoff', '') or None + if ctx.signoff: + ctx.signoff = ctx.signoff.replace('\\n', '\n') + ctx.spelling = getattr(args, 'spelling', 'British') + + _handle_replies(service, account, ctx) + return 0 + + def _clean_series_name(name): """Strip the [U-Boot,v2,0/4] prefix from a series name @@ -238,12 +1434,11 @@ def _clean_series_name(name): return name -# pylint: disable=R0914 def _register_series(cser, clean_name, version, link, series_data): - """Register a series in the database for review tracking + """Register a series in the database for review - Creates or finds the series record, adds a ser_ver entry and - pcommit records for each patch. + Creates or finds the series record, adds a ser_ver entry and pcommit + records for each patch. 
Args: cser (Cseries): Open cseries instance @@ -253,69 +1448,55 @@ def _register_series(cser, clean_name, version, link, series_data): series_data (dict): Series data from patchwork Returns: - tuple: (series_id, svid) or None if already reviewed + tuple or None: (series_id, svid) or None if already reviewed """ existing = cser.db.series_find_by_link(link) if existing: - series_id, db_name, db_version, svid = existing - tout.notice( - f"Already reviewed: '{db_name}' v{db_version}") return None prev = cser.db.series_find_review_by_name(clean_name) if prev: series_id, db_name, prev_version = prev - tout.notice(f"Previously reviewed '{db_name}' " - f"v{prev_version}; adding v{version}") + tout.notice(f"Previously reviewed '{db_name}' v{prev_version};" + f" adding v{version}") else: - desc = series_data.get('cover_letter', {}) - desc = desc.get('name', '') if desc else '' - series_id = cser.db.series_add(clean_name, desc) + series_id = cser.db.series_find_by_name( + clean_name, include_archived=True) + if not series_id: + desc = series_data.get('cover_letter', {}) + desc = desc.get('name', '') if desc else '' + series_id = cser.db.series_add(clean_name, desc) cser.db.series_set_source(series_id, 'review') - svid = cser.db.ser_ver_add(series_id, version, - link=str(link)) + svid = cser.db.ser_ver_add(series_id, version, link=str(link)) patches = series_data.get('patches', []) pcommits = [] for i, patch in enumerate(patches): - pcommits.append(Pcommit( - idnum=None, seq=i, - subject=patch.get('name', ''), - svid=svid, change_id=None, state=None, - patch_id=patch.get('id'), num_comments=0)) + pcommits.append(database.Pcommit(idnum=None, seq=i, + subject=patch.get('name', ''), svid=svid, change_id=None, + state=None, patch_id=patch.get('id'), num_comments=0)) if pcommits: cser.db.pcommit_add_list(svid, pcommits) cser.commit() - tout.notice( - f"Added series '{clean_name}' v{version} to database") + tout.notice(f"Added series '{clean_name}' v{version} to database") return 
series_id, svid -# pylint: disable=R0914 -def do_review(args, pwork, cser): - """Run the review command - - Downloads patches from patchwork, applies them to a local - branch using a Claude agent, and (in later stages) reviews - each patch. +def _fetch_series(pwork, link): + """Fetch and validate series metadata from patchwork Args: - args (Namespace): Command-line arguments pwork (Patchwork): Configured patchwork instance - cser (Cseries): Open cseries instance - """ - if not args.pw_link and not args.title: - raise ValueError( - "Please provide -l <link> or -t <title> to " - "identify the series") + link (str): Patchwork series link/ID - link = args.pw_link - if not link: - link = search_series(pwork, args.title) + Returns: + tuple: (series_data, clean_name, version, patch_count) - # Fetch series metadata from patchwork + Raises: + ValueError: if the series is incomplete + """ async def _fetch(): async with aiohttp.ClientSession() as client: return await pwork.get_series(client, link) @@ -326,47 +1507,645 @@ def do_review(args, pwork, cser): series_name = series_data.get('name', f'series-{link}') version = series_data.get('version', 1) patch_count = series_data.get('received_total', 0) + total = series_data.get('total', patch_count) + clean_name = _clean_series_name(series_name) tout.notice(f"Series: {clean_name}") - tout.notice( - f"Version: {version}, Patches: {patch_count}") + tout.notice(f"Version: {version}, Patches: {patch_count}") - result = _register_series( - cser, clean_name, version, link, series_data) - if result is None: - return 0 - series_id, _ = result + if patch_count < total: + raise ValueError('Incomplete series: patchwork received ' + f'{patch_count} of {total} patches') + + return series_data, clean_name, version, patch_count + + +def _draft_stored_reviews(args, reviews, series_data, pwork, cser): + """Create Gmail drafts from stored review records + + Only drafts reviews that do not already have a draft_id. 
+ + Args: + args (Namespace): Command-line arguments + reviews (list): Review records + series_data (dict): Series data from patchwork + pwork (Patchwork): Patchwork instance + cser (Cseries): Cseries instance + """ + need_draft = [rev for rev in reviews + if not rev.draft_id] + if not need_draft: + tout.notice('All reviews already have Gmail drafts') + return + review_bodies = {rev.seq: rev.body for rev in need_draft} + review_ids = {rev.seq: rev.idnum + for rev in need_draft} + rname = remail = None + for rev in need_draft: + match = re.search(r'Reviewed-by:\s*(.+?)\s*<(.+?)>', rev.body) + if match: + rname = match.group(1).strip() + remail = match.group(2).strip() + break + ctx = ReviewContext(pwork, cser, series_data) + ctx.reviewer_name = rname or '' + ctx.reviewer_email = remail or '' + create_drafts(ctx, args, review_bodies, review_ids) + + +def _get_upstream_branch(args, cser): + """Determine the upstream branch for applying patches + + Prefers 'next' over 'master' if it exists. + + Args: + args (Namespace): Command-line arguments + cser (Cseries): Open cseries instance - # Determine the upstream branch for applying patches + Returns: + str: Upstream branch ref, e.g. 
'us/next' + """ ups = args.upstream if not ups: ups = cser.db.upstream_get_default() if ups: - upstream_branch = f'{ups}/master' - else: - upstream_branch = 'origin/master' + branch = f'{ups}/next' + ret = command.run_one('git', 'rev-parse', '--verify', branch, + capture=True, raise_on_error=False) + if ret.return_code: + branch = f'{ups}/master' + return branch + return 'origin/master' + + +def _apply_and_check(ctx, link): + """Download, apply patches and verify they applied - # Download and apply the patches - branch_name = f'review{series_id}' + Args: + ctx (ReviewContext): Review context (uses pwork, cser, series_id, + version, upstream_branch) + link (str): Patchwork series link/ID + + Returns: + str or None: Branch name, or None on failure + """ + branch_name = f'review{ctx.series_id}' repo_path = gitutil.get_top_level() - success, branch_name = apply_series_sync( - pwork.url, link, branch_name, upstream_branch, - repo_path) + success, branch_name = apply_series_sync(ctx.pwork.url, link, branch_name, + ctx.upstream_branch, repo_path) + + if success: + applied = command.output('git', 'rev-list', '--count', + f'{ctx.upstream_branch}..{branch_name}', cwd=repo_path).strip() + if int(applied) == 0: + success = False if not success: tout.error('Failed to apply patches to branch') - return 1 + ctx.cser.db.ser_ver_remove(ctx.series_id, ctx.version) + ctx.cser.commit() + return None tout.notice(f'Patches applied to branch: {branch_name}') + return branch_name + + +def _fetch_cover_content(pwork, series_data): + """Fetch cover letter content from patchwork + + Args: + pwork (Patchwork): Patchwork instance + series_data (dict): Series data from patchwork + + Returns: + str or None: Cover letter text + """ + cover = series_data.get('cover_letter') + if not cover: + return None + + async def _fetch(): + async with aiohttp.ClientSession() as client: + data = await pwork.get_cover(client, cover['id']) + return data.get('content', '') + + loop = asyncio.get_event_loop() + 
return loop.run_until_complete(_fetch()) + + +def _run_and_store_reviews(ctx, args): + """Run AI review, refine, store and display results + + Args: + ctx (ReviewContext): Review context with all fields populated + args (Namespace): Command-line arguments (for create_drafts flag) + """ + prev_db = ctx.cser.db.review_get_previous(ctx.series_id, ctx.version) + for rev in prev_db: + ctx.previous_reviews[rev.seq] = rev.body + + ctx.cover_content = _fetch_cover_content(ctx.pwork, ctx.series_data) + + review_bodies = review_patches_sync(ctx) + + if ctx.comments_path and os.path.exists(ctx.comments_path): + os.unlink(ctx.comments_path) + + for seq in review_bodies: + review_bodies[seq] = cleanup_review_text(review_bodies[seq]) + review_bodies = refine_reviews_sync(review_bodies, ctx.spelling) + + timestamp = datetime.now().isoformat() + review_ids = {} + for seq, body in review_bodies.items(): + approved = 'Reviewed-by:' in body + review_ids[seq] = ctx.cser.db.review_add(ctx.svid, seq, body, approved, + timestamp) + ctx.cser.commit() + + _show_reviews(review_bodies, ctx.series_data) + + if args.create_drafts: + create_drafts(ctx, args, review_bodies, review_ids) + else: + tout.notice('Use --create-drafts to create Gmail review drafts.') + + +def _find_or_register(ctx, args, clean_name, link): + """Register a series, handling existing reviews + + If the series was already reviewed, shows the stored reviews and + optionally creates drafts. With --force, deletes old reviews and + re-registers. 
+ + Args: + ctx (ReviewContext): Review context (uses pwork, cser, series_data, + version) + args (Namespace): Command-line arguments + clean_name (str): Cleaned series name + link (str): Patchwork series link/ID + + Returns: + tuple or None: (series_id, svid) or None if already reviewed and + not forcing + """ + result = _register_series(ctx.cser, clean_name, ctx.version, link, + ctx.series_data) + if result is not None: + return result + + existing = ctx.cser.db.series_find_by_link(link) + if not existing: + return None + + _, _, _, svid = existing + reviews = ctx.cser.db.review_get_for_version(svid) + + if reviews and not args.force: + _, db_name, db_version, _ = existing + tout.notice(f"Already reviewed: '{db_name}' v{db_version}") + _show_reviews(reviews, ctx.series_data) + if args.create_drafts: + _draft_stored_reviews(args, reviews, ctx.series_data, ctx.pwork, + ctx.cser) + return None + + if reviews and args.force: + ctx.cser.db.review_delete_for_version(svid) + ctx.cser.commit() + tout.notice('Re-reviewing (forced)') + + # Re-register for re-review + return _register_series(ctx.cser, clean_name, ctx.version, link, + ctx.series_data) + + +def do_review(args, pwork, cser): + """Run the review command + + Dispatches to learn-voice, sync, or the main review flow + which fetches, applies, reviews and optionally drafts. 
+ + Args: + args (Namespace): Command-line arguments + pwork (Patchwork): Configured patchwork instance + cser (Cseries): Open cseries instance + """ + if args.learn_voice: + return _do_learn_voice(args, pwork) + + if args.sync: + return _do_sync(args, cser) + + if not args.pw_link and not args.title: + raise ValueError("Please provide -l <link> or -t <title> " + "to identify the series") - if args.apply_only: - tout.notice('Apply-only mode; skipping review') + link = args.pw_link + if not link: + link = search_series(pwork, args.title) + + series_data, clean_name, version, patch_count = \ + _fetch_series(pwork, link) + + ctx = ReviewContext(pwork, cser, series_data) + ctx.version = version + ctx.patch_count = patch_count + + result = _find_or_register(ctx, args, clean_name, link) + if result is None: return 0 + ctx.series_id, ctx.svid = result - # TODO: Stage 2 - AI review of each patch - # TODO: Stage 3 - Gmail draft creation - tout.notice( - 'Patch application complete. ' - 'AI review not yet implemented.') + orig_branch = None + had_stash = False + + try: + ctx.upstream_branch = _get_upstream_branch(args, cser) + ctx.repo_path = gitutil.get_top_level() + orig_branch = command.output('git', 'rev-parse', '--abbrev-ref', + 'HEAD', cwd=ctx.repo_path).strip() + try: + stash_out = command.output('git', 'stash', cwd=ctx.repo_path) + if 'No local changes' not in stash_out: + had_stash = True + except command.CommandExc: + pass + + ctx.branch_name = _apply_and_check(ctx, link) + if not ctx.branch_name: + return 1 + + if args.apply_only: + tout.notice('Apply-only mode; skipping review') + return 0 + + ctx.reviewer_name, ctx.reviewer_email = _parse_reviewer(args) + ctx.signoff = getattr(args, 'signoff', '') or None + if ctx.signoff: + ctx.signoff = ctx.signoff.replace('\\n', '\n') + ctx.spelling = getattr(args, 'spelling', 'British') + ctx.comments_path = _write_comments_file(series_data, pwork) + + _run_and_store_reviews(ctx, args) + + _git_restore(orig_branch, 
had_stash, ctx.repo_path) + orig_branch = None + + finally: + _git_restore(orig_branch, had_stash, ctx.repo_path) return 0 + + +VOICE_PATH = os.path.join(os.path.expanduser('~/.config/patman.d'), + 'voice.md') + +_VOICE_PROMPT = '''Analyse the following email reviews written by a single +person. These are code-review replies on the U-Boot mailing list. + +Study the writing style carefully and produce a concise style guide that +captures this person's voice. Focus on: + +- Tone (formal, casual, terse, friendly, etc.) +- Typical greeting and sign-off patterns +- How they quote code and structure comments +- Common phrases or idioms they use +- Level of detail in explanations +- How they phrase requests for changes vs suggestions +- How they express approval + +Output ONLY the style guide in markdown, with no preamble, explanation, +or commentary before or after it. It should be usable as-is by an AI to +replicate this person's voice when writing reviews. Keep it under 50 lines. + +=== EMAILS === +{emails} +''' + + +VoiceParams = namedtuple('VoiceParams', + 'source account pwork user_email list_email count') + + +async def _fetch_voice_gmail(account, list_email, user_email, max_results=20): + """Fetch review emails from Gmail for voice learning + + Args: + account (str or None): Gmail account to read from + list_email (str): Mailing list email to filter by + user_email (str): Reviewer's email, to skip own patch threads + max_results (int): Maximum emails to fetch + + Returns: + list of str: Review email bodies, or None on failure + """ + + if not gmail.check_available(): + return None + + tout.notice(f'Fetching sent review emails to {list_email}...') + service = gmail.get_service(account) + return gmail.fetch_sent_reviews(service, list_email, user_email, + max_results) + + +async def _fetch_voice_patchwork(pwork, user_email, max_comments=20): + """Fetch review comments from patchwork for voice learning + + Args: + pwork (Patchwork): Configured patchwork instance + 
user_email (str): Email to search for in comments + max_comments (int): Number of comments to collect + + Returns: + list of str: Comment bodies, or None on failure + """ + + if not pwork or not pwork.proj_id: + tout.error('Patchwork project not configured') + return None + + tout.notice(f'Fetching comments by {user_email} from patchwork...') + async with aiohttp.ClientSession() as client: + return await pwork.fetch_user_comments(client, user_email, + max_comments) + + +async def learn_voice(vp): + """Analyse past reviews and create a voice profile + + Fetches review text from the chosen source, sends it to a Claude agent + for style analysis, and saves the result as voice.md + + Args: + vp (VoiceParams): Voice learning parameters + + Returns: + bool: True if the voice profile was created successfully + """ + if not claude_mod.check_available(): + return False + + if vp.source == 'patchwork': + bodies = await _fetch_voice_patchwork(vp.pwork, vp.user_email, + vp.count) + else: + if not vp.list_email: + tout.error('No mailing list email known; use -U to specify ' + 'an upstream') + return False + if not vp.user_email: + tout.error('No reviewer email known; use --reviewer') + return False + bodies = await _fetch_voice_gmail(vp.account, vp.list_email, + vp.user_email, vp.count) + + if not bodies: + tout.error(f'No review text found from {vp.source}') + return False + + tout.notice(f'Found {len(bodies)} reviews, analysing style...') + + # Write reviews to a temp file so the agent can read them + # (avoids exceeding prompt size limits with many reviews) + reviews_path = os.path.join(tempfile.gettempdir(), + 'patman_voice_reviews.txt') + combined = '\n\n--- NEXT EMAIL ---\n\n'.join(bodies) + tools.write_file(reviews_path, combined, binary=False) + + prompt = _VOICE_PROMPT.format(emails=f'(see file: {reviews_path})') + options = ClaudeAgentOptions(allowed_tools=['Read'], + max_buffer_size=claude_mod.MAX_BUFFER_SIZE) + + success, result = await 
claude_mod.run_agent_collect(prompt, options) + + # Clean up temp file + try: + os.unlink(reviews_path) + except OSError: + pass + + if not success or not result.strip(): + tout.error('Failed to analyse writing style') + return False + + os.makedirs(os.path.dirname(VOICE_PATH), exist_ok=True) + tools.write_file(VOICE_PATH, result.strip() + '\n', binary=False) + + tout.notice(f'Voice profile saved to {VOICE_PATH}') + return True + + +def learn_voice_sync(vp): + """Synchronous wrapper for learn_voice()""" + loop = asyncio.get_event_loop() + return loop.run_until_complete(learn_voice(vp)) + + +def get_voice(): + """Load the voice profile if it exists + + Returns: + str or None: Voice profile text, or None if not configured + """ + if os.path.exists(VOICE_PATH): + return tools.read_file(VOICE_PATH, binary=False).strip() + return None + + +_REFINE_PROMPT = '''Compare the AI-generated draft review with the review +that was actually sent by the reviewer. The differences reveal the +reviewer's preferences. + +Read the two files below, then analyse what changed and why. Focus on: +- Phrasing the reviewer preferred over the AI's version +- Content the reviewer added or removed +- Tone or style adjustments +- Structural changes (ordering, quoting style, etc.) + +AI DRAFT: +(see file: {draft_path}) + +ACTUALLY SENT: +(see file: {sent_path}) + +CURRENT VOICE PROFILE: +(see file: {voice_path}) + +Now output ONLY an updated voice profile in markdown. Keep everything +from the current profile that is still accurate, and add or revise +entries based on the draft-vs-sent differences. Keep it under 60 lines. +Do not include any preamble or commentary. 
+''' + + +async def refine_voice(draft_body, sent_body): + """Refine the voice profile by comparing a draft with what was sent + + Args: + draft_body (str): The AI-generated review text + sent_body (str): The review text actually sent by the user + + Returns: + bool: True if the voice profile was updated + """ + + if not claude_mod.check_available(): + return False + + voice = get_voice() + if not voice: + tout.notice('No voice profile to refine; run --learn-voice first') + return False + + # Write files for the agent to read + draft_path = os.path.join(tempfile.gettempdir(), 'patman_voice_draft.txt') + sent_path = os.path.join(tempfile.gettempdir(), 'patman_voice_sent.txt') + tools.write_file(draft_path, draft_body, binary=False) + tools.write_file(sent_path, sent_body, binary=False) + + prompt = _REFINE_PROMPT.format(draft_path=draft_path, + sent_path=sent_path, voice_path=VOICE_PATH) + options = ClaudeAgentOptions(allowed_tools=['Read'], + max_buffer_size=claude_mod.MAX_BUFFER_SIZE) + + tout.notice('Analysing draft vs sent differences...') + success, result = await claude_mod.run_agent_collect(prompt, options) + + for path in (draft_path, sent_path): + if os.path.exists(path): + os.unlink(path) + + if not success or not result.strip(): + tout.error('Failed to refine voice profile') + return False + + tools.write_file(VOICE_PATH, result.strip() + '\n', binary=False) + + tout.notice(f'Voice profile updated: {VOICE_PATH}') + return True + + +def refine_voice_sync(draft_body, sent_body): + """Synchronous wrapper for refine_voice()""" + loop = asyncio.get_event_loop() + return loop.run_until_complete(refine_voice(draft_body, sent_body)) + + +_REPLY_PROMPT = '''You are a U-Boot maintainer. You previously reviewed a +patch and the author (or another reviewer) has replied to your review. +Decide whether a response is needed, and if so, draft one. 
+ +YOUR ORIGINAL REVIEW: +(see file: {review_path}) + +REPLY FROM {reply_from}: +(see file: {reply_path}) + +The patches are applied to the current source tree. You can use +'git show', Read, and Grep to examine the code. + +IMPORTANT: +- Do NOT run 'git checkout' or switch branches +- Do NOT 'cd' to another directory + +TASK: +1. Read the reply carefully +2. Decide if a response is needed: + - If the author agrees and will fix → no response needed + - If the author asks a question → research in the code and answer + - If the author pushes back → evaluate their argument; agree if + they are right, explain why if they are wrong + - If another reviewer comments → only respond if you disagree or + have additional information +3. If no response is needed, output ONLY: VERDICT: skip +4. If a response is needed, output in this format: + +GREETING: <first name> +COMMENT: +> quoted text from their reply + +Your response. + +VERDICT: changes_needed + +Rules: +- NEVER use backticks — this is plain-text email, not markdown. + For functions, always use parentheses with no quotes: malloc() not + 'malloc()' or `malloc`. 
For other identifiers do not quote them + unless they are common English words that might confuse the reader +- Use {spelling} spelling +- Be brief and direct +- If the author is right, concede gracefully +- If you need to push back, explain concisely with evidence from code +''' + + +async def handle_reply(ctx, review_body, reply_from, reply_body): + """Generate a response to a reply on our review + + Args: + ctx (ReviewContext): Review context (uses reviewer_name, + reviewer_email, repo_path, signoff, spelling) + review_body (str): Our original review text + reply_from (str): Who replied (name <email>) + reply_body (str): The reply text + + Returns: + str or None: Response email body, or None if no response needed + """ + + if not claude_mod.check_available(): + return None + + # Write our review and the reply to temp files + review_path = os.path.join(tempfile.gettempdir(), + 'patman_reply_our_review.txt') + reply_path = os.path.join(tempfile.gettempdir(), + 'patman_reply_their_reply.txt') + tools.write_file(review_path, review_body, binary=False) + tools.write_file(reply_path, reply_body, binary=False) + + prompt = _REPLY_PROMPT.format(review_path=review_path, + reply_path=reply_path, reply_from=reply_from, + spelling=ctx.spelling) + options = ClaudeAgentOptions(allowed_tools=['Bash', 'Read', 'Grep'], + cwd=ctx.repo_path, max_buffer_size=claude_mod.MAX_BUFFER_SIZE) + + success, log = await claude_mod.run_agent_collect(prompt, options) + + # Clean up + for path in (review_path, reply_path): + if os.path.exists(path): + os.unlink(path) + + if not success or not log.strip(): + return None + + greeting, verdict, comments = parse_review_output(log) + if verdict == 'skip': + return None + + if '<' in reply_from: + reply_name = reply_from.split('<')[0].strip() + reply_email = reply_from.split('<')[1].rstrip('>') + else: + reply_name = reply_from + reply_email = '' + + # Build a context for the reply — the "author" here is the person + # we are replying to, not the 
original series submitter + reply_ctx = ReviewContext(ctx.pwork, ctx.cser, + {'submitter': {'name': reply_name, 'email': reply_email}, 'date': ''}) + reply_ctx.reviewer_name = ctx.reviewer_name + reply_ctx.reviewer_email = ctx.reviewer_email + reply_ctx.signoff = ctx.signoff + return format_review_email(reply_ctx, greeting, verdict, comments) + + +def handle_reply_sync(ctx, review_body, reply_from, reply_body): + """Synchronous wrapper for handle_reply()""" + loop = asyncio.get_event_loop() + return loop.run_until_complete( + handle_reply(ctx, review_body, reply_from, reply_body)) -- 2.43.0
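The patch above parses 'Name <email>' identities in several places (_parse_reviewer(), handle_reply()) using the regex r'(.+?)\s*<(.+?)>'. As a standalone illustration of that parsing, here is a minimal sketch; the helper name parse_identity() is hypothetical, not part of the patch:

```python
import re

def parse_identity(text):
    """Split 'Name <email>' into (name, email).

    Mirrors the regex used by _parse_reviewer(); returns None when the
    text does not contain an angle-bracketed address.
    """
    match = re.match(r'(.+?)\s*<(.+?)>', text)
    if not match:
        return None
    return match.group(1).strip(), match.group(2).strip()

# Example: the non-greedy groups still capture the full name, since the
# regex must reach the '<' before it can match
print(parse_identity('Simon Glass <sjg@chromium.org>'))
# → ('Simon Glass', 'sjg@chromium.org')
```

Note that handle_reply() splits on '<' manually instead of reusing this regex, so the two code paths accept slightly different inputs.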
From: Simon Glass <sjg@chromium.org> Add series subcommands for storing and displaying review-handling notes: - 'patman s save-notes' reads review-notes.txt and stores it against the current series version in the database - 'patman s show-notes' displays notes from all previous versions Also wire up the review command to handle --learn-voice and --sync modes, and use _setup_patchwork() for consistent patchwork resolution. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/control.py | 21 ++++++++++++++------- tools/patman/cseries.py | 36 ++++++++++++++++++++++++++++++++++++ 2 files changed, 50 insertions(+), 7 deletions(-) diff --git a/tools/patman/control.py b/tools/patman/control.py index 9ba9b6e0b8e..e664c93dafd 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -105,6 +105,7 @@ def patchwork_status(branch, count, start, end, dest_branch, force, single_thread) + def _setup_patchwork(cser, pwork, ups, pw_url): """Set up a Patchwork instance from upstream and project settings @@ -230,6 +231,10 @@ def do_series(args, test_db=None, pwork=None, cser=None): cser.remove(args.series, dry_run=args.dry_run) elif args.subcmd == 'rm-version': cser.version_remove(args.series, args.version, dry_run=args.dry_run) + elif args.subcmd == 'save-notes': + cser.save_notes(args.series, args.notes_file) + elif args.subcmd == 'show-notes': + cser.show_notes(args.series) elif args.subcmd == 'rename': cser.rename(args.series, args.new_name, dry_run=args.dry_run) elif args.subcmd == 'set-upstream': @@ -365,7 +370,7 @@ def patchwork(args, test_db=None, pwork=None): cser.db.patchwork_delete(args.remote) cser.commit() ups_str = f" for upstream '{args.remote}'" if args.remote else '' - tout.info(f'Deleted patchwork project{ups_str}') + tout.notice(f'Deleted patchwork project{ups_str}') elif args.subcmd == 'ls': cser.project_list() else: @@ -419,11 +424,13 @@ def do_review(args, test_db=None, pwork=None, cser=None): try: cser.open_database() - ups = args.upstream 
- if not ups: - ups = cser.db.upstream_get_default() - pwork = _setup_patchwork( - cser, pwork, ups, args.patchwork_url) + # Resolve patchwork URL + if not pwork and not args.learn_voice and not args.sync: + ups = args.upstream + if not ups: + ups = cser.db.upstream_get_default() + pwork = _setup_patchwork( + cser, pwork, ups, args.patchwork_url) return review_mod.do_review(args, pwork, cser) finally: @@ -480,7 +487,7 @@ def do_patman(args, test_db=None, pwork=None, cser=None): elif args.cmd == 'patchwork': patchwork(args, test_db, pwork) elif args.cmd == 'review': - do_review(args, test_db, pwork, cser) + ret_code = do_review(args, test_db, pwork, cser) or 0 elif args.cmd == 'workflow': do_workflow(args, test_db) except Exception as exc: # pylint: disable=W0718 diff --git a/tools/patman/cseries.py b/tools/patman/cseries.py index e272a5839fc..3af9ef19eab 100644 --- a/tools/patman/cseries.py +++ b/tools/patman/cseries.py @@ -7,11 +7,13 @@ import asyncio from collections import OrderedDict, defaultdict +import os import pygit2 from u_boot_pylib import cros_subprocess from u_boot_pylib import gitutil from u_boot_pylib import terminal +from u_boot_pylib import tools from u_boot_pylib import tout from patman import patchstream @@ -861,6 +863,40 @@ class Cseries(cser_helper.CseriesHelper): if dry_run: tout.info('Dry run completed') + def save_notes(self, series, notes_file='review-notes.txt'): + """Save review-handling notes for the current series version + + Args: + series (str): Series name, or None for current branch + notes_file (str): Path to the notes file + """ + if not os.path.exists(notes_file): + raise FileNotFoundError(f"Notes file not found: {notes_file}") + + notes = tools.read_file(notes_file, binary=False).strip() + ser, version = self._parse_series_and_version(series, None) + svid = self.get_series_svid(ser.idnum, version) + self.db.ser_ver_set_notes(svid, notes) + self.commit() + tout.notice(f"Saved notes for '{ser.name}' v{version}") + + def 
show_notes(self, series): + """Show review-handling notes from all versions of a series + + Args: + series (str): Series name, or None for current branch + """ + ser, _ = self._parse_series_and_version(series, None) + all_notes = self.db.ser_ver_get_all_notes(ser.idnum) + if not all_notes: + tout.notice(f"No review notes for '{ser.name}'") + return + for version, notes in all_notes: + terminal.tprint(f'\n--- v{version} ---', + colour=terminal.Color.YELLOW) + print(notes) + print() + def show_info(self, series): """Show detailed information about a series and all its versions -- 2.43.0
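As an aside for reviewers, the save-notes/show-notes flow above reduces to a simple per-version keyed store. The sketch below is illustrative only: a plain dict stands in for patman's SQLite-backed ser_ver table, and `NotesStore` is a hypothetical name, not patman API.

```python
# Illustrative sketch of the save-notes/show-notes flow in this patch.
# A dict keyed by (series, version) stands in for the SQLite ser_ver
# table; NotesStore is a hypothetical name, not part of patman.


class NotesStore:
    def __init__(self):
        self._notes = {}  # (series, version) -> notes text

    def save_notes(self, series, version, text):
        """Store stripped notes against one series version"""
        self._notes[(series, version)] = text.strip()

    def show_notes(self, series):
        """Return (version, notes) pairs for versions that have notes"""
        return sorted((ver, txt) for (ser, ver), txt in self._notes.items()
                      if ser == series and txt)


store = NotesStore()
store.save_notes('video', 1, 'Fixed the memory leak issue\n')
store.save_notes('video', 3, 'Addressed style feedback')
store.save_notes('video', 2, '')   # empty notes are skipped on display

for version, notes in store.show_notes('video'):
    print(f'--- v{version} ---')
    print(notes)
```

Like the real `show_notes()`, versions with no stored notes are simply omitted from the listing rather than shown as empty sections.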
From: Simon Glass <sjg@chromium.org> Place u_boot_pylib (third-party) imports before patman (local) imports to satisfy pylint C0411 (wrong-import-order). Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/control.py | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/tools/patman/control.py b/tools/patman/control.py index e664c93dafd..28d53ef60d2 100644 --- a/tools/patman/control.py +++ b/tools/patman/control.py @@ -17,6 +17,11 @@ except ImportError: # for Python 3.6 import importlib_resources as resources +from u_boot_pylib import gitutil +from u_boot_pylib import terminal +from u_boot_pylib import tools +from u_boot_pylib import tout + from patman import cseries from patman import patchstream from patman import review as review_mod @@ -25,10 +30,6 @@ from patman import settings from patman import status from patman import workflow from patman.patchwork import Patchwork -from u_boot_pylib import gitutil -from u_boot_pylib import terminal -from u_boot_pylib import tools -from u_boot_pylib import tout def setup(): -- 2.43.0
From: Simon Glass <sjg@chromium.org> Add fetch_user_comments() to the Patchwork class, which retrieves recent review comments by a given user email. This is used by the voice-learning feature to build a style profile from past reviews posted to patchwork. Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/patchwork.py | 49 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 49 insertions(+) diff --git a/tools/patman/patchwork.py b/tools/patman/patchwork.py index 3e8f7c6c62c..5613cbb3383 100644 --- a/tools/patman/patchwork.py +++ b/tools/patman/patchwork.py @@ -12,6 +12,7 @@ import aiohttp from collections import namedtuple from u_boot_pylib import terminal +from u_boot_pylib import tout # Information passed to series_get_states() # link (str): Patchwork link for series @@ -808,6 +809,54 @@ On Tue, 4 Mar 2025 at 06:09, Simon Glass <sjg@chromium.org> wrote: cover = COVER(cover_id, len(info), cover['name'], info) return cover + async def fetch_user_comments(self, client, user_email, max_comments=20): + """Fetch comments made by a user on recent patches + + Paginates through recent patches for the project, fetching + comments on each until the target count is reached. 
+ + Args: + client (aiohttp.ClientSession): Session to use + user_email (str): Email address to match + max_comments (int): Number of comments to collect + + Returns: + list of str: Comment body texts + """ + comments = [] + page = 1 + per_page = 50 + patches_scanned = 0 + + while len(comments) < max_comments: + patches = await self._request( + client, f'patches/?project={self.proj_id}&order=-date' + f'&per_page={per_page}&page={page}') + if not patches: + break + + for patch in patches: + if len(comments) >= max_comments: + break + patches_scanned += 1 + tout.progress( + f'Scanned {patches_scanned} patches, ' + f'found {len(comments)}/{max_comments} comments') + patch_comments = await self._request( + client, f"patches/{patch['id']}/comments/") + for comment in patch_comments: + submitter = comment.get('submitter', {}) + if submitter.get('email') == user_email: + content = comment.get('content', '') + if content and '>' in content: + comments.append(content) + if len(comments) >= max_comments: + break + + page += 1 + tout.clear_progress() + return comments + async def series_get_state(self, client, link, read_comments, read_cover_comments): """Sync the series information against patchwork, to find patch status -- 2.43.0
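For reviewers, the core of `fetch_user_comments()` is a paginate-until-enough loop: scan recent patches page by page, collecting matching comments until the target count is hit or the listing runs dry. The sketch below shows that pattern synchronously, with in-memory fakes in place of the aiohttp Patchwork requests; it is an illustration, not the patch's code.

```python
# Sketch of the paginate-until-enough pattern used by
# fetch_user_comments(); PAGES/COMMENTS are fake Patchwork responses
# standing in for the real async aiohttp requests.

PAGES = {  # fake 'recent patches' listing: two pages, then empty
    1: [{'id': 1}, {'id': 2}],
    2: [{'id': 3}],
}
COMMENTS = {  # fake per-patch comment lists
    1: [{'submitter': {'email': 'a@x'}, 'content': '> quoted\nlooks fine'}],
    2: [{'submitter': {'email': 'b@x'}, 'content': 'not ours'}],
    3: [{'submitter': {'email': 'a@x'}, 'content': '> hunk\nplease fix'}],
}


def fetch_user_comments(user_email, max_comments=2):
    """Collect up to max_comments comment bodies written by user_email"""
    comments = []
    page = 1
    while len(comments) < max_comments:
        patches = PAGES.get(page, [])
        if not patches:   # listing exhausted before reaching the target
            break
        for patch in patches:
            if len(comments) >= max_comments:
                break
            for comment in COMMENTS.get(patch['id'], []):
                if comment['submitter']['email'] != user_email:
                    continue
                content = comment['content']
                # keep only replies that quote the patch ('>' lines),
                # mirroring the filter in the real implementation
                if content and '>' in content:
                    comments.append(content)
                    if len(comments) >= max_comments:
                        break
        page += 1
    return comments
```

The `'>' in content` filter is what biases the voice profile toward inline review replies rather than bare acks.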
From: Simon Glass <sjg@chromium.org> Add new arguments to the review subparser: - --gmail-account, --signoff, --spelling for draft creation - --learn-voice/--voice-count for voice profile learning - --sync for draft status synchronisation - -f/--force for re-reviewing completed series Add series subcommands: - save-notes: store review-handling notes in the database - show-notes: display notes from all previous versions Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/cmdline.py | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/tools/patman/cmdline.py b/tools/patman/cmdline.py index bee9fd21483..cbb6e4742b7 100644 --- a/tools/patman/cmdline.py +++ b/tools/patman/cmdline.py @@ -324,6 +324,13 @@ def add_series_subparser(subparsers): series_subparsers.add_parser('rm') + snotes = series_subparsers.add_parser('save-notes') + snotes.add_argument( + 'notes_file', nargs='?', default='review-notes.txt', + help='Path to the review notes file (default: review-notes.txt)') + + series_subparsers.add_parser('show-notes') + sup = series_subparsers.add_parser('set-upstream') sup.add_argument('upstream_name', nargs='?', help='Name of the upstream for this series') @@ -548,6 +555,9 @@ def add_review_subparser(subparsers): review.add_argument( '--create-drafts', action='store_true', help='Create Gmail draft emails for each review') + review.add_argument( + '--gmail-account', type=str, default=None, + help='Gmail account to create drafts in (e.g. 
user@gmail.com)') review.add_argument( '--no-cover', action='store_true', help='Skip reviewing the cover letter') @@ -560,6 +570,29 @@ def add_review_subparser(subparsers): review.add_argument( '--apply-only', action='store_true', help='Only download and apply patches, skip AI review') + review.add_argument( + '--signoff', type=str, default='', + help="Sign-off for reviews with comments (from .patman settings)") + review.add_argument( + '--spelling', type=str, default='British', + help="Spelling convention for review comments (from .patman " + "settings)") + review.add_argument( + '--learn-voice', type=str, nargs='?', const='gmail', + choices=['gmail', 'patchwork'], + help="Analyse past reviews to build a voice profile " + "(from 'gmail' or 'patchwork', default: gmail)") + review.add_argument( + '--voice-count', type=int, default=20, + help='Number of review emails/comments to collect for ' + '--learn-voice (default: 20)') + review.add_argument( + '--sync', action='store_true', + help='Check if review drafts have been sent and record the ' + 'final email content') + review.add_argument( + '-f', '--force', action='store_true', + help='Force re-review even if the series was already reviewed') return review -- 2.43.0
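One detail of the subparser above worth noting: `--learn-voice` uses argparse's `nargs='?'` together with `const='gmail'`, so a bare `--learn-voice` selects the Gmail source while `--learn-voice patchwork` overrides it. A minimal toy parser (not patman's cmdline module) demonstrates the three cases:

```python
# Toy parser demonstrating the --learn-voice flag shape added above:
# nargs='?' with const='gmail' means the flag alone selects 'gmail',
# an explicit value overrides it, and omitting the flag leaves None.
import argparse

parser = argparse.ArgumentParser(prog='review-demo')
parser.add_argument('--learn-voice', type=str, nargs='?', const='gmail',
                    choices=['gmail', 'patchwork'])
parser.add_argument('--voice-count', type=int, default=20)

assert parser.parse_args([]).learn_voice is None
assert parser.parse_args(['--learn-voice']).learn_voice == 'gmail'
assert parser.parse_args(['--learn-voice', 'patchwork']).learn_voice == 'patchwork'
assert parser.parse_args([]).voice_count == 20
```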
From: Simon Glass <sjg@chromium.org> Add 26 tests covering the review feature: - Integration tests: new series, already-reviewed, new version, title search, no-link-or-title error, apply failure, draft creation (dry run and actual) - Review parsing: approved, skip verdict, changes with comments - Greeting: guess from email, fallback when empty - Refinement: skips approved reviews, processes reviews with comments - Cleanup: backtick removal, function quoting - Commit message: deduplication, subject + body quoting - Gmail: subject preserves [PATCH], falls back to name, From header set/omitted - Database: review_delete_for_version, series_set_source - Notes: save and retrieve, empty version filtering Signed-off-by: Simon Glass <sjg@chromium.org> --- tools/patman/review.py | 11 + tools/patman/test_cseries.py | 690 +++++++++++++++++++++++++++++++++++ 2 files changed, 701 insertions(+) diff --git a/tools/patman/review.py b/tools/patman/review.py index 74bc81a2a30..f0fa4e865e4 100644 --- a/tools/patman/review.py +++ b/tools/patman/review.py @@ -1479,6 +1479,17 @@ def _register_series(cser, clean_name, version, link, series_data): if pcommits: cser.db.pcommit_add_list(svid, pcommits) + # pcommit_add_list only stores seq/subject/change_id; update + # patch_id from the patchwork data + pclist = cser.db.pcommit_get_list(svid) + for pcm, patch in zip(pclist, patches): + patch_id = patch.get('id') + if patch_id: + cser.db.pcommit_update(database.Pcommit( + idnum=pcm.idnum, seq=pcm.seq, subject=pcm.subject, + svid=svid, change_id=pcm.change_id, state=pcm.state, + patch_id=patch_id, num_comments=pcm.num_comments)) + cser.commit() tout.notice(f"Added series '{clean_name}' v{version} to database") return series_id, svid diff --git a/tools/patman/test_cseries.py b/tools/patman/test_cseries.py index 5974c69253a..7d6a6179c1e 100644 --- a/tools/patman/test_cseries.py +++ b/tools/patman/test_cseries.py @@ -5,6 +5,7 @@ """Functional tests for checking that patman behaves correctly""" 
import asyncio +import contextlib from datetime import datetime import os import re @@ -4480,3 +4481,692 @@ Date: .* self.assertIn('Version 2:', output) self.assertIn('Second version desc', output) self.assertIn('Notes: Fixed review feedback', output) + + # Series link used by the review tests + REVIEW_LINK = 497923 + REVIEW_LINK_V2 = 497924 + REVIEW_NAME = 'boot/bootm: Disable interrupts after loading the image' + + def _fake_patchwork_review(self, subpath): + """Fake Patchwork server for review tests + + Args: + subpath (str): URL subpath to use + """ + if subpath == 'projects/': + return [ + {'id': self.PROJ_ID, 'name': 'U-Boot', + 'link_name': self.PROJ_LINK_NAME}, + ] + + re_search = re.match(r'series/\?project=(\d+)&q=(.*)$', subpath) + if re_search: + return [ + {'id': self.REVIEW_LINK, 'name': self.REVIEW_NAME, + 'version': 1, 'date': '2026-03-29T15:17:33'}, + {'id': self.REVIEW_LINK_V2, 'name': self.REVIEW_NAME, + 'version': 2, 'date': '2026-04-01T10:00:00'}, + ] + + m_series = re.match(r'series/(\d+)/$', subpath) + if m_series: + series_id = int(m_series.group(1)) + if series_id == self.REVIEW_LINK: + return { + 'name': f'[PATCH] {self.REVIEW_NAME}', + 'version': 1, + 'received_total': 1, + 'mbox': f'https://my-fake-url/series/{series_id}/mbox/', + 'submitter': {'name': 'Test Author', + 'email': 'author@example.com'}, + 'project': {'list_email': 'u-boot@lists.denx.de'}, + 'cover_letter': None, + 'patches': [ + {'id': 900, + 'name': f'[PATCH] {self.REVIEW_NAME}', + 'msgid': '<20260329-bootm-v1-1-abc@posteo.net>'}, + ], + } + if series_id == self.REVIEW_LINK_V2: + return { + 'name': f'[PATCH v2] {self.REVIEW_NAME}', + 'version': 2, + 'received_total': 1, + 'mbox': f'https://my-fake-url/series/{series_id}/mbox/', + 'submitter': {'name': 'Test Author', + 'email': 'author@example.com'}, + 'project': {'list_email': 'u-boot@lists.denx.de'}, + 'cover_letter': None, + 'patches': [ + {'id': 901, + 'name': f'[PATCH,v2] {self.REVIEW_NAME}', + 'msgid': 
'<20260401-bootm-v2-1-def@posteo.net>'}, + ], + } + raise ValueError( + f'Fake Patchwork unknown series_id: {series_id}') + + m_patch = re.match(r'patches/(\d+)/$', subpath) + if m_patch: + return { + 'headers': { + 'Reply-To': 'author@posteo.net', + 'To': 'u-boot@lists.denx.de', + 'Cc': 'Tom Rini <trini@konsulko.com>', + }, + } + + m_pcomm = re.match(r'patches/(\d+)/comments/$', subpath) + if m_pcomm: + return [] + + m_ccomm = re.match(r'covers/(\d+)/comments/$', subpath) + if m_ccomm: + return [] + + raise ValueError(f'Fake Patchwork unhandled URL: {subpath}') + + REVIEWER = 'Test Reviewer <test@test.com>' + + def run_review(self, *argv, **kwargs): + """Run a review command with the test reviewer identity""" + return self.run_args('review', '--reviewer', self.REVIEWER, + *argv, **kwargs) + + def _mock_review(self): + """Context manager to mock apply, upstream, git and AI review""" + fake_review = {1: f'Reviewed-by: {self.REVIEWER}'} + return (mock.patch('patman.review._apply_and_check', + return_value='review1'), + mock.patch('patman.review._get_upstream_branch', + return_value='origin/master'), + mock.patch('patman.review.review_patches_sync', + return_value=fake_review), + mock.patch('patman.review.gitutil.get_top_level', + return_value=self.tmpdir), + mock.patch('patman.review.command.output', + return_value='pati'), + mock.patch('patman.review._git_restore')) + + def test_review_new_series(self): + """Test reviewing a new series creates database records""" + cser = self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as _: + self.run_review( '-l', str(self.REVIEW_LINK), + pwork=pwork) + + # Check the series was created with source='review' + self.db_open() + result = cser.db.series_find_by_link(str(self.REVIEW_LINK)) + 
self.assertIsNotNone(result) + series_id, name, version, svid = result + self.assertEqual(self.REVIEW_NAME, name) + self.assertEqual(1, version) + + # Check source is 'review' + res = cser.db.execute( + 'SELECT source FROM series WHERE id = ?', (series_id,)) + self.assertEqual('review', res.fetchone()[0]) + + # Check pcommit was created + pclist = cser.db.pcommit_get_list(svid) + self.assertEqual(1, len(pclist)) + self.assertEqual(900, pclist[0].patch_id) + + def test_review_already_reviewed(self): + """Test that reviewing the same link again is detected""" + cser = self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as _: + self.run_review( '-l', str(self.REVIEW_LINK), + pwork=pwork) + + # Review the same link again + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as (out, err): + self.run_review('-l', str(self.REVIEW_LINK), pwork=pwork) + self.assertIn('Already reviewed', out.getvalue()) + + def test_review_new_version(self): + """Test that reviewing v2 detects v1 as previously reviewed""" + cser = self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + # Review v1 first + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as _: + self.run_review( '-l', str(self.REVIEW_LINK), + pwork=pwork) + + # Now review v2 - should detect the previous review + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as (out, _): + self.run_review( '-l', str(self.REVIEW_LINK_V2), + pwork=pwork) + self.assertIn('Previously 
reviewed', out.getvalue()) + + # Check both versions are under the same series + self.db_open() + v1 = cser.db.series_find_by_link(str(self.REVIEW_LINK)) + v2 = cser.db.series_find_by_link(str(self.REVIEW_LINK_V2)) + self.assertEqual(v1[0], v2[0]) # same series_id + self.assertEqual(1, v1[2]) # version 1 + self.assertEqual(2, v2[2]) # version 2 + + def test_review_title_search(self): + """Test searching for a series by title""" + cser = self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as (out, _): + self.run_review( '-t', 'Disable interrupts', + pwork=pwork) + # Should pick the most recent (v2) + self.assertIn('Using most recent', out.getvalue()) + + self.db_open() + result = cser.db.series_find_by_link(str(self.REVIEW_LINK_V2)) + self.assertIsNotNone(result) + + def test_review_no_link_or_title(self): + """Test that missing -l and -t gives a proper error""" + self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + with terminal.capture() as _: + self.run_review( pwork=pwork, expect_ret=1) + + def test_review_apply_failure(self): + """Test that apply failure is reported""" + self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + with mock.patch('patman.review._apply_and_check', + return_value=None), \ + mock.patch('patman.review._get_upstream_branch', + return_value='origin/master'), \ + mock.patch('patman.review.gitutil.get_top_level', + return_value=self.tmpdir), \ + mock.patch('patman.review.command.output', + return_value='test'), \ + mock.patch('patman.review._git_restore'): + with terminal.capture() as _: + self.run_review('-l', str(self.REVIEW_LINK), + pwork=pwork, expect_ret=1) + 
+ def test_review_create_drafts_dry_run(self): + """Test dry-run draft creation shows what would be created""" + self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with terminal.capture() as (out, _): + self.run_review( '-l', str(self.REVIEW_LINK), + '--create-drafts', '-n', pwork=pwork) + output = out.getvalue() + self.assertIn('Would create draft', output) + self.assertIn('author@posteo.net', output) + self.assertIn('trini@konsulko.com', output) + + def test_review_create_drafts(self): + """Test actual draft creation calls Gmail API""" + self.get_cser() + pwork = Patchwork.for_testing(self._fake_patchwork_review) + pwork.project_set(self.PROJ_ID, self.PROJ_LINK_NAME) + + mocks = self._mock_review() + with contextlib.ExitStack() as stack: + for m in mocks: + stack.enter_context(m) + with mock.patch('patman.gmail.check_available', + return_value=True): + with mock.patch('patman.gmail.get_service') as mock_svc: + mock_svc.return_value.users.return_value \ + .drafts.return_value \ + .create.return_value \ + .execute.return_value = {'id': 'draft123'} + mock_svc.return_value.users.return_value \ + .messages.return_value \ + .list.return_value \ + .execute.return_value = {'messages': []} + with terminal.capture() as (out, _): + self.run_review( '-l', + str(self.REVIEW_LINK), + '--create-drafts', pwork=pwork) + output = out.getvalue() + self.assertIn('Created 1 Gmail draft', output) + + def _make_review_ctx(self, reviewer_name='Test', reviewer_email='test@test.com', + author_name='', author_email='', date='', signoff=None, + diffstat=None): + """Build a ReviewContext for testing format_review_email()""" + from patman.review import ReviewContext + + ctx = ReviewContext(None, None, + {'submitter': {'name': author_name, 'email': author_email}, + 'date': date}) + ctx.reviewer_name 
= reviewer_name + ctx.reviewer_email = reviewer_email + ctx.signoff = signoff + ctx.diffstat = diffstat + return ctx + + def test_review_parse_approved(self): + """Test parsing an approved review""" + from patman.review import parse_review_output, format_review_email + + text = """GREETING: Marek +VERDICT: approved""" + greeting, verdict, comments = parse_review_output(text) + self.assertEqual('Marek', greeting) + self.assertEqual('approved', verdict) + self.assertEqual([], comments) + + ctx = self._make_review_ctx(author_name='Marek Vasut', + author_email='marex@denx.de', date='2026-03-21', + diffstat=' drivers/pci.c | 2 +-\n 1 file changed') + email = format_review_email(ctx, greeting, verdict, comments, + commit_message='pci: Fix the return type\n\nThe return is wrong.') + self.assertNotIn('Hi Marek,', email) + self.assertIn('On 2026-03-21, Marek Vasut', email) + self.assertIn('> pci: Fix the return type', email) + self.assertIn('> The return is wrong.', email) + self.assertIn('> drivers/pci.c', email) + self.assertIn('Reviewed-by: Test <test@test.com>', email) + + def test_review_parse_skip(self): + """Test parsing a skipped review (e.g. 
cover letter with no issues)""" + from patman.review import parse_review_output + + text = """GREETING: Michal +VERDICT: skip""" + greeting, verdict, comments = parse_review_output(text) + self.assertEqual('Michal', greeting) + self.assertEqual('skip', verdict) + self.assertEqual([], comments) + + def test_review_guess_name(self): + """Test guessing first name from email address""" + from patman.review import guess_name_from_email + + self.assertEqual('Simon', guess_name_from_email('simon.glass@xxx')) + self.assertEqual('Michal', guess_name_from_email('michal@amd.com')) + self.assertEqual('Marek', + guess_name_from_email('marek-vasut@denx.de')) + self.assertEqual('', guess_name_from_email('j@posteo.net')) + self.assertEqual('', guess_name_from_email('')) + self.assertEqual('', guess_name_from_email('12345@test.com')) + + def test_review_cleanup(self): + """Test mechanical cleanup of review text""" + from patman.review import cleanup_review_text + + # Backticks removed + self.assertEqual('Use foo here', + cleanup_review_text('Use `foo` here')) + + # Quoted function references unquoted + self.assertEqual('Call malloc() first', + cleanup_review_text("Call 'malloc()' first")) + self.assertEqual('Call free() after', + cleanup_review_text('Call "free()" after')) + + # Normal quotes preserved + self.assertEqual("Normal 'text' stays", + cleanup_review_text("Normal 'text' stays")) + + # Quoted diff lines not mangled + line = "> +\t`something`" + self.assertEqual('> +\tsomething', cleanup_review_text(line)) + + def test_review_greeting_fallback(self): + """Test greeting falls back to email when name is empty""" + from patman.review import format_review_email + + # Empty greeting should be guessed from email + ctx = self._make_review_ctx(author_name='Simon Glass', + author_email='simon.glass@xxx.com', date='2026-04-01', + signoff='Regards,\nTest') + email = format_review_email(ctx, '', 'changes_needed', + [('> +\tsome code', 'Fix this')]) + self.assertIn('Hi Simon,', email) + 
+ # Unguessable email falls back to bare 'Hi,' + ctx = self._make_review_ctx(author_email='x@test.com', + date='2026-04-01') + email = format_review_email(ctx, '', 'changes_needed', + [('> +\tsome code', 'Fix this')]) + self.assertIn('Hi,', email) + self.assertNotIn('Hi ,', email) + + def test_review_refine_skips_approved(self): + """Test that refinement skips approved reviews without comments""" + import asyncio + from unittest.mock import patch, AsyncMock + + from patman.review import refine_reviews + + # An approved review with only structural lines + approved = ('On 2026-04-01, A <a@b.com> wrote:\n' + '> Some commit message\n' + '>\n' + '> drivers/foo.c | 2 +-\n' + '\n' + 'Reviewed-by: Test <test@test.com>\n') + + # Should be returned unchanged without calling the agent + loop = asyncio.new_event_loop() + with patch('patman.review.get_voice', return_value=None): + result = loop.run_until_complete( + refine_reviews({1: approved})) + loop.close() + self.assertEqual({1: approved}, result) + + def test_review_refine_processes_comments(self): + """Test that refinement processes reviews with comments""" + import asyncio + import sys + import types + from unittest.mock import patch, MagicMock + + review_with_comments = ( + 'Hi Marek,\n\n' + 'On 2026-04-01, Marek <m@d.de> wrote:\n' + '> +\tsome code\n\n' + 'This needs fixing.\n\n' + 'Regards,\nSimon\n') + + # Mock the agent to return a slightly trimmed version + refined = '---SEQ 1---\n' + review_with_comments.replace( + 'This needs fixing.', 'Fix this.') + + async def mock_agent(prompt, options): + return True, refined + + from patman import review as review_mod + + loop = asyncio.new_event_loop() + with patch.object(review_mod, 'get_voice', return_value=None), \ + patch.object(review_mod.claude_mod, 'run_agent_collect', + side_effect=mock_agent), \ + patch.object(review_mod, 'ClaudeAgentOptions', MagicMock()), \ + terminal.capture(): + result = loop.run_until_complete( + review_mod.refine_reviews({1: 
review_with_comments})) + loop.close() + self.assertIn('Fix this.', result[1]) + + def test_review_parse_changes(self): + """Test parsing a review with comments""" + from patman.review import parse_review_output, format_review_email + + text = """GREETING: J. +COMMENT: +> + if (ret < 0) +> + return ret; + +This should use goto err instead. + +COMMENT: +> + bootm_disable_interrupts(); + +This call should be conditional. + +VERDICT: changes_needed""" + + greeting, verdict, comments = parse_review_output(text) + self.assertEqual('J.', greeting) + self.assertEqual('changes_needed', verdict) + self.assertEqual(2, len(comments)) + self.assertIn('goto err', comments[0][1]) + self.assertIn('conditional', comments[1][1]) + + ctx = self._make_review_ctx(author_name='J. Neuschäfer', + author_email='j.ne@posteo.net', date='2026-03-29', + signoff='Regards,\nSimon') + email = format_review_email(ctx, greeting, verdict, comments) + self.assertIn('Hi J.,', email) + self.assertIn('> +\tif (ret < 0)', email) + self.assertIn('goto err', email) + self.assertNotIn('Reviewed-by', email) + self.assertIn('Regards,\nSimon', email) + + def test_review_commit_msg_no_duplicate(self): + """Test that the commit-msg builder avoids duplicating the subject""" + # Simulate cmt.subject and cmt.msg where msg starts with subject + subject = 'Drop unused macros' + msg_with_dup = 'Drop unused macros\n\nThese macros are never used.' + msg_without_dup = '\nThese macros are never used.' 
+ + # When body starts with subject, use body as-is + body = msg_with_dup.strip() + if body.startswith(subject): + commit_msg = body + else: + commit_msg = (subject + '\n' + body).strip() + self.assertEqual(1, commit_msg.count('Drop unused macros')) + + # When body doesn't start with subject, prepend it + body = msg_without_dup.strip() + if body.startswith(subject): + commit_msg = body + else: + commit_msg = (subject + '\n' + body).strip() + self.assertTrue(commit_msg.startswith('Drop unused macros')) + self.assertIn('These macros are never used.', commit_msg) + + def test_review_commit_msg_with_body(self): + """Test that subject + body are both quoted when body differs""" + from patman.review import format_review_email + + ctx = self._make_review_ctx(author_name='Author', + author_email='a@b.com', date='2026-04-01', + diffstat=' file.c | 1 +\n 1 file changed') + email = format_review_email(ctx, '', 'approved', [], + commit_message='Fix the bug\n\nThe bug causes a crash.') + self.assertIn('> Fix the bug', email) + self.assertIn('> The bug causes a crash.', email) + + def test_gmail_subject_preserves_patch_prefix(self): + """Test that reply subjects use the original Subject header""" + from patman.gmail import create_review_drafts + + series_data = { + 'submitter': {'email': 'a@b.com'}, + 'project': {'list_email': 'list@test.com'}, + 'cover_letter': None, + 'patches': [ + {'id': 1, 'name': 'Fix the bug', + 'msgid': '<1@test.com>'}, + ], + } + patch_headers = { + 1: {'Subject': '[PATCH v2 1/3] Fix the bug', + 'Message-Id': '<1@test.com>'}, + } + review_bodies = {1: 'Reviewed-by: Test <test@test.com>'} + + with terminal.capture(): + create_review_drafts(series_data, review_bodies, + patch_headers=patch_headers, dry_run=True) + + def test_gmail_subject_falls_back_to_name(self): + """Test that reply subjects fall back to patchwork name""" + from patman.gmail import create_review_drafts + + series_data = { + 'submitter': {'email': 'a@b.com'}, + 'project': {'list_email': 
'list@test.com'},
+            'cover_letter': None,
+            'patches': [
+                {'id': 1, 'name': 'Fix the bug',
+                 'msgid': '<1@test.com>'},
+            ],
+        }
+        # No Subject in headers — should fall back to patch name
+        patch_headers = {1: {'Message-Id': '<1@test.com>'}}
+        review_bodies = {1: 'Reviewed-by: Test <test@test.com>'}
+
+        with terminal.capture():
+            create_review_drafts(series_data, review_bodies,
+                                 patch_headers=patch_headers, dry_run=True)
+
+    def test_gmail_from_header(self):
+        """Test that create_draft sets From when sender is provided"""
+        from patman.gmail import create_draft
+        from email import message_from_bytes
+        from unittest.mock import MagicMock, patch
+        import base64
+
+        service = MagicMock()
+        service.users().drafts().create().execute.return_value = {
+            'id': 'draft123'}
+
+        draft = create_draft(
+            service, 'to@test.com', 'Re: test', 'body',
+            sender='Simon Glass <sjg@chromium.org>')
+
+        # Extract the raw message that was passed to the API
+        call_kwargs = (service.users().drafts().create
+                       .call_args)
+        raw = call_kwargs[1]['body']['message']['raw']
+        msg = message_from_bytes(base64.urlsafe_b64decode(raw))
+        self.assertEqual('Simon Glass <sjg@chromium.org>', msg['from'])
+
+    def test_gmail_no_from_without_sender(self):
+        """Test that create_draft omits From when sender is None"""
+        from patman.gmail import create_draft
+        from email import message_from_bytes
+        from unittest.mock import MagicMock
+        import base64
+
+        service = MagicMock()
+        service.users().drafts().create().execute.return_value = {
+            'id': 'draft123'}
+
+        draft = create_draft(
+            service, 'to@test.com', 'Re: test', 'body')
+
+        call_kwargs = (service.users().drafts().create
+                       .call_args)
+        raw = call_kwargs[1]['body']['message']['raw']
+        msg = message_from_bytes(base64.urlsafe_b64decode(raw))
+        self.assertIsNone(msg['from'])
+
+    def test_review_delete_for_version(self):
+        """Test deleting all reviews for a series version"""
+        cser = self.get_database()
+
+        series_id = cser.db.series_add('test-delete', 'Test')
+        svid = cser.db.ser_ver_add(series_id, 1)
+
+        from datetime import datetime
+        ts = datetime.now().isoformat()
+        cser.db.review_add(svid, 1, 'review body 1', True, ts)
+        cser.db.review_add(svid, 2, 'review body 2', False, ts)
+        cser.commit()
+
+        reviews = cser.db.review_get_for_version(svid)
+        self.assertEqual(2, len(reviews))
+
+        cser.db.review_delete_for_version(svid)
+        cser.commit()
+
+        reviews = cser.db.review_get_for_version(svid)
+        self.assertEqual(0, len(reviews))
+
+    def test_series_set_source(self):
+        """Test setting the source field on a series"""
+        cser = self.get_database()
+
+        series_id = cser.db.series_add('test-source', 'Test')
+        cser.db.series_set_source(series_id, 'review')
+        cser.commit()
+
+        res = cser.db.execute(
+            'SELECT source FROM series WHERE id = ?', (series_id,))
+        self.assertEqual('review', res.fetchone()[0])
+
+    def test_review_notes_save_and_show(self):
+        """Test saving and retrieving review notes"""
+        cser = self.get_database()
+
+        # Create a series with two versions
+        series_id = cser.db.series_add('test-notes', 'Test series')
+        svid1 = cser.db.ser_ver_add(series_id, 1)
+        svid2 = cser.db.ser_ver_add(series_id, 2)
+
+        # No notes initially
+        notes = cser.db.ser_ver_get_all_notes(series_id)
+        self.assertEqual([], notes)
+
+        # Save notes for v1
+        cser.db.ser_ver_set_notes(svid1, 'Fixed the memory leak issue')
+        cser.commit()
+
+        notes = cser.db.ser_ver_get_all_notes(series_id)
+        self.assertEqual(1, len(notes))
+        self.assertEqual(1, notes[0][0])
+        self.assertIn('memory leak', notes[0][1])
+
+        # Save notes for v2
+        cser.db.ser_ver_set_notes(svid2, 'Addressed style feedback')
+        cser.commit()
+
+        notes = cser.db.ser_ver_get_all_notes(series_id)
+        self.assertEqual(2, len(notes))
+        self.assertEqual(1, notes[0][0])
+        self.assertEqual(2, notes[1][0])
+
+    def test_review_notes_skips_empty(self):
+        """Test that versions without notes are excluded"""
+        cser = self.get_database()
+
+        series_id = cser.db.series_add('test-empty', 'Test')
+        svid1 = cser.db.ser_ver_add(series_id, 1)
+        svid2 = cser.db.ser_ver_add(series_id, 2)
+        svid3 = cser.db.ser_ver_add(series_id, 3)
+
+        # Only set notes for v1 and v3
+        cser.db.ser_ver_set_notes(svid1, 'v1 notes')
+        cser.db.ser_ver_set_notes(svid3, 'v3 notes')
+        cser.commit()
+
+        notes = cser.db.ser_ver_get_all_notes(series_id)
+        self.assertEqual(2, len(notes))
+        self.assertEqual(1, notes[0][0])
+        self.assertEqual(3, notes[1][0])
-- 
2.43.0
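
[Editorial note] The two From-header tests above hinge on how the Gmail API transports a draft: the full RFC 2822 message travels as a base64url-encoded string under ``body['message']['raw']``. A standalone sketch of that round-trip, independent of patman (the helper name ``encode_draft`` is ours, not patman's ``create_draft``):

```python
import base64
from email import message_from_bytes
from email.message import EmailMessage


def encode_draft(to_addr, subject, body, sender=None):
    """Build a MIME message and wrap it the way the Gmail API expects.

    Gmail drafts carry the whole RFC 2822 message as a base64url
    string; omitting From lets Gmail fill in the account default.
    """
    msg = EmailMessage()
    msg['To'] = to_addr
    msg['Subject'] = subject
    if sender:
        msg['From'] = sender
    msg.set_content(body)
    return base64.urlsafe_b64encode(msg.as_bytes()).decode()


# Round-trip: decode exactly as the tests above do
raw = encode_draft('to@test.com', 'Re: test', 'body',
                   sender='Simon Glass <sjg@chromium.org>')
msg = message_from_bytes(base64.urlsafe_b64decode(raw))
assert msg['From'] == 'Simon Glass <sjg@chromium.org>'
```

Decoding with ``message_from_bytes`` mirrors what the tests do to inspect the payload handed to the mocked Gmail service.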
From: Simon Glass <sjg@chromium.org>

Document the 'patman review' command and its options in the patman
README. Covers series selection, patch application, AI review, Gmail
draft creation, voice learning, draft synchronisation and the review
workflow.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/patman.rst | 207 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 207 insertions(+)

diff --git a/tools/patman/patman.rst b/tools/patman/patman.rst
index 87e0984c0d7..4c1dba07e6a 100644
--- a/tools/patman/patman.rst
+++ b/tools/patman/patman.rst
@@ -1188,3 +1188,210 @@ putting an incorrect tag in a commit may provide a confusing message.
 There might be a few other features not mentioned in this README. They
 might be bugs. In particular, tags are case sensitive which is probably
 a bad thing.
+
+
+AI-assisted patch review
+========================
+
+Patman can review other people's patches from Patchwork using a Claude
+AI agent, and create Gmail draft replies with the review comments.
+
+Prerequisites
+-------------
+
+Install the required packages::
+
+    sudo apt install python3-google-auth python3-google-auth-oauthlib \
+        python3-google-auth-httplib2 python3-googleapi
+    pip install claude-agent-sdk
+
+Setting up Gmail API access
+---------------------------
+
+1. Go to https://console.cloud.google.com and create (or select) a project.
+
+2. Enable the **Gmail API**:
+
+   - Navigate to **APIs & Services > Enabled APIs & services**
+   - Click **Enable APIs and services**, search for "Gmail API", enable it
+
+3. Configure the **OAuth consent screen**:
+
+   - Navigate to **APIs & Services > OAuth consent screen**
+   - Set publishing status to **Testing**
+   - Under **Test users**, add your email address (e.g. sjg@chromium.org)
+
+4. Create **OAuth client credentials**:
+
+   - Navigate to **APIs & Services > Credentials**
+   - Click **Create Credentials > OAuth client ID**
+   - Application type: **Desktop app**
+   - Download the JSON file
+
+5. Save the credentials::
+
+    mkdir -p ~/.config/patman.d
+    mv ~/Downloads/client_secret_*.json ~/.config/patman.d/client_secret.json
+
+The first time you create drafts, a browser window opens for OAuth
+consent. After authentication, the token is cached in
+``~/.config/patman.d/`` and reused for future runs.
+
+Setting up Patchwork
+--------------------
+
+Configure an upstream with its Patchwork URL::
+
+    patman upstream add us https://source.denx.de/u-boot/u-boot.git \
+        --patchwork-url https://patchwork.ozlabs.org
+    patman patchwork set-project U-Boot us
+
+Basic usage
+-----------
+
+Review a series by Patchwork link::
+
+    patman review -l 497923 -U us --reviewer 'Your Name <your@email>'
+
+Or search by cover-letter title::
+
+    patman review -t 'boot/bootm: Disable interrupts' -U us \
+        --reviewer 'Your Name <your@email>'
+
+To create Gmail drafts threaded under the original emails::
+
+    patman review -l 497923 -U us \
+        --reviewer 'Your Name <your@email>' \
+        --create-drafts --gmail-account your@email
+
+Use ``-n`` with ``--create-drafts`` for a dry run that shows what would
+be created without calling the Gmail API.
+
+Use ``--apply-only`` to download and apply patches without running the
+AI review — useful for checking that patches apply cleanly.
+
+Use ``-f`` / ``--force`` to re-review a series that has already been
+reviewed. This deletes the old review records and runs the review
+again::
+
+    patman review -l 497923 -U us -f --reviewer 'Your Name <your@email>'
+
+If the reviewer email (from ``--reviewer`` or git config) differs from
+the ``--gmail-account``, patman sets the From header on the draft so
+the email is sent with the correct identity.
+
+How the review works
+--------------------
+
+For each patch, the AI agent:
+
+1. Studies all patches in the series to understand the overall design
+2. Re-reads the specific patch in detail
+3. Examines surrounding source code for context
+4. Checks existing comments from other reviewers on Patchwork and
+   avoids repeating points already made
+5. Produces a structured review (GREETING/COMMENT/VERDICT format)
+
+After all patches are reviewed, a refinement agent makes a second pass
+over the drafts to tighten the language, remove cross-patch duplicates
+and check voice consistency. Approved reviews without comments are
+excluded from refinement to preserve their quoted commit messages.
+
+A mechanical cleanup step also runs to remove backticks and fix function
+quoting style (e.g. ``malloc()`` not ```malloc```).
+
+Patchwork subcommands
+---------------------
+
+Remove a patchwork project configuration::
+
+    patman patchwork rm [remote]
+
+If no remote is given, the default (no-upstream) entry is deleted.
+
+Review notes
+------------
+
+The ``handle-reviews`` workflow produces a ``review-notes.txt`` file
+describing how feedback was addressed. Store it in the database so
+future versions can reference it::
+
+    patman series save-notes [notes-file]
+
+The default filename is ``review-notes.txt``. Display notes from all
+previous versions::
+
+    patman series show-notes
+
+Settings
+--------
+
+Add these to your ``~/.patman`` file to avoid repeating flags:
+
+.. code-block:: ini
+
+    [settings]
+    signoff: Regards,\nSimon
+    spelling: British
+
+The ``signoff`` is appended to reviews that have comments (not to
+clean Reviewed-by-only replies). The ``spelling`` setting controls
+the spelling convention used in review comments.
+
+Voice learning
+--------------
+
+Build a voice profile so AI reviews match your writing style::
+
+    # From Gmail (searches your sent reviews to the mailing list)
+    patman review --learn-voice gmail -U us \
+        --gmail-account your@email --reviewer 'Your Name <your@email>'
+
+    # From Patchwork (scans your comments on recent patches)
+    patman review --learn-voice patchwork -U us \
+        --reviewer 'Your Name <your@email>'
+
+Use ``--voice-count N`` to control how many reviews to collect
+(default: 20). The voice profile is saved to
+``~/.config/patman.d/voice.md`` and automatically used in subsequent
+reviews.
+
+Syncing drafts
+--------------
+
+After editing and sending (or deleting) Gmail drafts::
+
+    patman review --sync --gmail-account your@email \
+        --reviewer 'Your Name <your@email>'
+
+This:
+
+- Detects which drafts have been sent and records the final email
+  content in the database (for context when reviewing future versions)
+- Detects deleted drafts (review not sent)
+- Refines the voice profile if the sent text differs from the AI draft
+- Detects replies from the patch author or other reviewers
+- Generates response drafts when appropriate (e.g. answering
+  questions, pushing back on objections, or conceding gracefully)
+
+Review lifecycle
+----------------
+
+Each review goes through these states:
+
+- **new**: AI review generated, not yet sent
+- **draft**: Gmail draft created
+- **sent**: Draft was sent; body updated with actual sent content
+- **deleted**: Draft was deleted without sending
+- **replied**: Author or another reviewer has replied to our review
+
+When reviewing a new version of a previously reviewed series, patman
+loads the previous review as context for the AI, so it can check
+whether earlier issues have been addressed.
+
+Aliases
+-------
+
+The ``review`` command supports aliases ``r`` and ``rev``::
+
+    patman r -l 497923 -U us --reviewer 'Your Name <your@email>'
-- 
2.43.0
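
[Editorial note] The documentation patch above mentions a mechanical cleanup pass that strips backticks and fixes function-quoting style. The series does not show the exact rules patman applies, so the following is only an illustrative sketch (the function name ``cleanup_review`` and the regex are ours):

```python
import re


def cleanup_review(text):
    """Illustrative cleanup: drop markdown-style backticks.

    Mimics the idea of the mechanical pass described above, turning
    `malloc()` or ```malloc()``` into plain malloc(); patman's real
    rules may differ.
    """
    # Strip one to three backticks surrounding an identifier-like token
    return re.sub(r'`{1,3}([A-Za-z_][\w().]*)`{1,3}', r'\1', text)


print(cleanup_review('Consider using `malloc()` here'))
# Consider using malloc() here
```

A pass like this keeps review replies in plain mailing-list style rather than markdown.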
From: Simon Glass <sjg@chromium.org>

Document the SQLite database schema in patman.rst including all tables,
columns, types, foreign key relationships and the schema version each
column is added in.

This covers the full schema at v10 with series, ser_ver, pcommit,
review, upstream, patchwork, workflow and schema_version tables.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/patman.rst | 214 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 213 insertions(+), 1 deletion(-)

diff --git a/tools/patman/patman.rst b/tools/patman/patman.rst
index 4c1dba07e6a..0644f4d0e08 100644
--- a/tools/patman/patman.rst
+++ b/tools/patman/patman.rst
@@ -1268,7 +1268,7 @@ Use ``-n`` with ``--create-drafts`` for a dry run that shows what would
 be created without calling the Gmail API.
 
 Use ``--apply-only`` to download and apply patches without running the
-AI review — useful for checking that patches apply cleanly.
+AI review - useful for checking that patches apply cleanly.
 
 Use ``-f`` / ``--force`` to re-review a series that has already been
 reviewed. This deletes the old review records and runs the review
@@ -1395,3 +1395,215 @@ Aliases
 
 The ``review`` command supports aliases ``r`` and ``rev``::
 
     patman r -l 497923 -U us --reviewer 'Your Name <your@email>'
+
+
+Database schema
+===============
+
+Patman stores series tracking, review and workflow state in a SQLite
+database (``.patman.db``) in the top-level git directory. The schema is
+versioned and auto-migrated on startup (currently at v10).
+
+The database allows patman to track a patch series across multiple
+versions, recording which patches belong to each version and how they
+map to patchwork entries. This means patman can detect when a new
+version of a series arrives, carry forward review tags and change logs
+from earlier versions, and show upstream progress without re-querying
+patchwork each time.
+
+For AI-assisted review of other people's patches, the database stores
+the generated review text, Gmail draft IDs and thread state so that
+patman can resume where it left off - showing stored reviews, creating
+drafts for reviews that were not yet sent, detecting when drafts have
+been sent or deleted, and providing previous review context when a new
+version of the series is posted. Review-handling notes from the
+``handle-reviews`` workflow are also stored per-version so the AI has
+context about what was changed and why.
+
+Entity relationships
+--------------------
+
+::
+
+    upstream 1---* patchwork  (patchwork.upstream references
+                               upstream.name)
+
+    series 1---* ser_ver      (ser_ver.series_id -> series.id)
+
+    ser_ver 1---* pcommit     (pcommit.svid -> ser_ver.id)
+
+    ser_ver 1---* review      (review.svid -> ser_ver.id)
+
+    series 1---* workflow     (workflow.series_id -> series.id)
+
+A **series** represents a named patch series across all its versions.
+Each **ser_ver** is one version of that series (e.g. v1, v2), linked to
+patchwork by a series link. Each ser_ver has one **pcommit** per patch
+and zero or more **review** records (one per AI review).
+
+Tables
+------
+
+series
+~~~~~~
+
+Patman tracks series that you are working on so it can keep an eye on
+any feedback from other people (from patchwork). Series that are being
+reviewed (other people's patches) are also stored here with
+``source='review'``.
+
+========== ======== ==========================================
+Column     Type     Description
+========== ======== ==========================================
+id         INTEGER  Primary key (auto-increment)
+name       TEXT     Series name (unique)
+desc       TEXT     Series description / cover-letter title
+archived   BIT      True if series is archived
+upstream   TEXT     Upstream name (added v5)
+source     TEXT     'review' for AI-reviewed series (added v8)
+========== ======== ==========================================
+
+ser_ver
+~~~~~~~
+
+Each time you send a new version of a series, patman creates a ser_ver
+record linking it to patchwork. This lets patman track review tags,
+change logs and upstream progress separately for each version. For
+AI-reviewed series, review-handling notes are stored here so the next
+version has context about what was changed.
+
+================== ======== ==========================================
+Column             Type     Description
+================== ======== ==========================================
+id                 INTEGER  Primary key (auto-increment)
+series_id          INTEGER  FK -> series.id
+version            INTEGER  Version number (1, 2, ...)
+link               TEXT     Patchwork series link/ID
+cover_id           INTEGER  Patchwork cover letter ID (added v3)
+cover_num_comments INTEGER  Number of comments on cover (added v3)
+name               TEXT     Cover letter name (added v3)
+archive_tag        TEXT     Git tag for archived version (added v4)
+desc               TEXT     Version description (added v6)
+notes              TEXT     Review-handling notes (added v10)
+================== ======== ==========================================
+
+pcommit
+~~~~~~~
+
+Each patch in a series version gets a pcommit record. Patman uses the
+Change-Id to match patches across versions (so it knows which patch in
+v2 corresponds to which patch in v1) and tracks the patchwork state and
+comment count so ``patman series progress`` can show upstream status.
+
+============== ======== ==========================================
+Column         Type     Description
+============== ======== ==========================================
+id             INTEGER  Primary key (auto-increment)
+svid           INTEGER  FK -> ser_ver.id
+seq            INTEGER  Patch sequence (0-based)
+subject        TEXT     Patch subject line
+patch_id       INTEGER  Patchwork patch ID
+change_id      TEXT     Change-Id tag value
+state          TEXT     Patchwork state (e.g. 'new', 'accepted')
+num_comments   INTEGER  Number of patchwork comments
+============== ======== ==========================================
+
+review
+~~~~~~
+
+When patman reviews someone else's patches, it stores the generated
+review email text here - one record per patch plus optionally the cover
+letter. This allows patman to show the review again later, create Gmail
+drafts from stored reviews, detect when drafts have been sent or
+deleted, and provide the previous review as context when a new version
+of the series arrives. When ``--sync`` detects that a draft was sent,
+the body is updated to the final text the user actually sent (after any
+manual edits), so the stored review reflects what was really posted to
+the mailing list.
+
+================ ======== ==========================================
+Column           Type     Description
+================ ======== ==========================================
+id               INTEGER  Primary key (auto-increment)
+svid             INTEGER  FK -> ser_ver.id
+seq              INTEGER  Patch sequence (0=cover, 1..N=patches)
+body             TEXT     Review email body text
+approved         BIT      True if Reviewed-by was given
+timestamp        TEXT     ISO datetime of review creation
+draft_id         TEXT     Gmail draft ID (added v9)
+status           TEXT     'new', 'draft', 'sent', 'deleted',
+                          'replied' (added v9)
+gmail_msg_id     TEXT     Gmail message ID after sending (added v9)
+gmail_thread_id  TEXT     Gmail thread ID for replies (added v9)
+================ ======== ==========================================
+
+upstream
+~~~~~~~~
+
+Patman needs to know about the upstream repositories you send patches
+to. Each upstream has a git remote name, URL and optional patchwork
+settings. One upstream can be marked as the default so you do not need
+to specify ``-U`` on every command.
+
+================ ======== ==========================================
+Column           Type     Description
+================ ======== ==========================================
+name             TEXT     Git remote name (unique, e.g. 'us')
+url              TEXT     Repository URL
+is_default       BIT      True if this is the default upstream
+patchwork_url    TEXT     Patchwork server URL (added v6)
+identity         TEXT     Git sendemail identity (added v6)
+series_to        TEXT     Default To: alias for series (added v6)
+no_maintainers   BIT      Skip get_maintainer.pl (added v6)
+no_tags          BIT      Skip subject-tag processing (added v6)
+================ ======== ==========================================
+
+patchwork
+~~~~~~~~~
+
+Each upstream can have a patchwork project associated with it. This
+stores the project name, ID and link name so patman can query patchwork
+for series status, review tags and comments. Multiple upstreams can
+point to different patchwork projects (e.g. U-Boot mainline vs a
+custodian tree).
+
+============== ======== ==========================================
+Column         Type     Description
+============== ======== ==========================================
+name           TEXT     Project name (e.g. 'U-Boot')
+proj_id        INTEGER  Patchwork project ID
+link_name      TEXT     Patchwork link name (e.g. 'uboot')
+upstream       TEXT     Upstream name, or NULL for default
+============== ======== ==========================================
+
+workflow
+~~~~~~~~
+
+Patman records workflow events such as when a series was sent, when
+review feedback arrived and what needs attention. The ``patman workflow
+todo`` command uses this to show outstanding tasks across all your
+series.
+
+============== ======== ==========================================
+Column         Type     Description
+============== ======== ==========================================
+id             INTEGER  Primary key (auto-increment)
+series_id      INTEGER  FK -> series.id
+action         TEXT     Workflow action (e.g. 'sent', 'todo')
+timestamp      TEXT     ISO datetime
+data           TEXT     JSON payload
+ser_ver_id     INTEGER  FK -> ser_ver.id (added v7b)
+============== ======== ==========================================
+
+schema_version
+~~~~~~~~~~~~~~
+
+A single-row table that records which schema version the database is at.
+Patman checks this on startup and runs any needed migrations
+automatically.
+
+============== ======== ==========================================
+Column         Type     Description
+============== ======== ==========================================
+version        INTEGER  Current schema version (currently 10)
+============== ======== ==========================================
-- 
2.43.0
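
[Editorial note] The documented ``review`` table can be exercised with plain sqlite3 to see how the draft/sent tracking fits together. This is an approximation built from the column list above — patman's actual CREATE statements and migrations are not shown in this series and may differ:

```python
import sqlite3

# Approximation of the documented 'review' table (v10 schema); the
# real DDL in patman may differ in constraints and defaults.
conn = sqlite3.connect(':memory:')
conn.execute('''
    CREATE TABLE review (
        id              INTEGER PRIMARY KEY AUTOINCREMENT,
        svid            INTEGER,  -- FK -> ser_ver.id
        seq             INTEGER,  -- 0 = cover letter, 1..N = patches
        body            TEXT,
        approved        BIT,
        timestamp       TEXT,
        draft_id        TEXT,     -- Gmail draft ID (added v9)
        status          TEXT,     -- 'new'/'draft'/'sent'/'deleted'/'replied'
        gmail_msg_id    TEXT,
        gmail_thread_id TEXT
    )''')
conn.execute(
    "INSERT INTO review (svid, seq, body, approved, status) "
    "VALUES (1, 1, 'Reviewed-by: ...', 1, 'new')")

# e.g. find reviews that have not yet become Gmail drafts
rows = conn.execute(
    "SELECT seq, status FROM review WHERE svid = 1 AND status = 'new'"
).fetchall()
print(rows)  # [(1, 'new')]
```

A query like the last one is the kind of lookup the sync step would need to decide which stored reviews still require a draft.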
From: Simon Glass <sjg@chromium.org>

Add claude-agent-sdk, google-api-python-client, google-auth-oauthlib
and google-auth-httplib2 to requirements.txt for the AI-assisted
review and Gmail draft-creation features.

Signed-off-by: Simon Glass <sjg@chromium.org>
---
 tools/patman/requirements.txt | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/patman/requirements.txt b/tools/patman/requirements.txt
index d4fcb1061c2..c3de6979aa8 100644
--- a/tools/patman/requirements.txt
+++ b/tools/patman/requirements.txt
@@ -1,5 +1,9 @@
 aiohttp==3.10.11
+claude-agent-sdk
 ConfigParser==7.1.0
+google-api-python-client
+google-auth-httplib2
+google-auth-oauthlib
 importlib_resources==6.5.2
 pygit2==1.14.1
 requests==2.32.4
-- 
2.43.0