Date: Wed, 21 Aug 2024 15:30:12 -0700
In-Reply-To: <20240821223012.3757828-1-vipinsh@google.com>
References: <20240821223012.3757828-1-vipinsh@google.com>
Message-ID: <20240821223012.3757828-2-vipinsh@google.com>
Subject: [PATCH 1/1] KVM: selftests: Create KVM selftests runner to run interesting tests
From: Vipin Sharma
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Vipin Sharma

Create a selftest runner "runner.py" for KVM which can run tests with
more interesting configurations than the default values. Read those
configurations from "tests.json".

Provide the runner options to control how it runs:

1. Run using different configuration files.
2. Run a specific test suite, or a specific test within a suite.
3. Allow setup and teardown steps for each test and each test suite.
4. Enforce a timeout on tests.
5. Run the tests of a suite in parallel.
6. Dump stdout and stderr into a hierarchical folder structure.
7. Run or skip tests based on the platform they execute on.

Print a summary of the run at the end. An example invocation is shown
after the "---" separator below.

Add a starter test configuration file "tests.json" with some sample
tests which the runner can use to execute tests.

Signed-off-by: Vipin Sharma
---
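Not part of the patch itself: a possible invocation of the runner,
assuming the options defined in runner.py and the sample tests.json
added below, with an illustrative output directory named "results":

    ./runner.py tests.json -o results -j 4 \
            x86_sanity_tests dirty_log_perf_tests/dirty_log_perf_test_max_10_vcpu_hugetlb

This would run every test of the "x86_sanity_tests" suite plus the one
named test from "dirty_log_perf_tests", executing up to 4 tests of a
suite in parallel. Any existing "results" directory is removed before
the run, and per-test output ends up under paths such as
"results/x86_sanity_tests/vmx_msrs_test/command.stdout" and
"results/x86_sanity_tests/vmx_msrs_test/command.stderr" (plus
"setup.*" and "teardown.*" files where those steps are defined).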

 tools/testing/selftests/kvm/runner.py  | 282 +++++++++++++++++++++++++
 tools/testing/selftests/kvm/tests.json |  60 ++++++
 2 files changed, 342 insertions(+)
 create mode 100755 tools/testing/selftests/kvm/runner.py
 create mode 100644 tools/testing/selftests/kvm/tests.json

diff --git a/tools/testing/selftests/kvm/runner.py b/tools/testing/selftests/kvm/runner.py
new file mode 100755
index 000000000000..46f6c1c8ce2c
--- /dev/null
+++ b/tools/testing/selftests/kvm/runner.py
@@ -0,0 +1,282 @@
+#!/usr/bin/env python3
+
+import argparse
+import json
+import subprocess
+import os
+import platform
+import logging
+import contextlib
+import textwrap
+import shutil
+
+from pathlib import Path
+from multiprocessing import Pool
+
+logging.basicConfig(level=logging.INFO,
+                    format="%(asctime)s | %(process)d | %(levelname)8s | %(message)s")
+
+class Command:
+    """Executes a single command.
+
+    Optionally enforces a timeout and redirects stdout/stderr to files.
+    """
+    def __init__(self, id, command, timeout=None, command_artifacts_dir=None):
+        self.id = id
+        self.args = command
+        self.timeout = timeout
+        self.command_artifacts_dir = command_artifacts_dir
+
+    def __run(self, command, timeout=None, output=None, error=None):
+        proc = subprocess.run(command, stdout=output,
+                              stderr=error, universal_newlines=True,
+                              shell=True, timeout=timeout)
+        return proc.returncode
+
+    def run(self):
+        output = None
+        error = None
+        with contextlib.ExitStack() as stack:
+            if self.command_artifacts_dir is not None:
+                # Capture the command's output in <id>.stdout/<id>.stderr files.
+                output_path = os.path.join(self.command_artifacts_dir, f"{self.id}.stdout")
+                error_path = os.path.join(self.command_artifacts_dir, f"{self.id}.stderr")
+                output = stack.enter_context(open(output_path, encoding="utf-8", mode="w"))
+                error = stack.enter_context(open(error_path, encoding="utf-8", mode="w"))
+            return self.__run(self.args, self.timeout, output, error)
+
+COMMAND_TIMED_OUT = "TIMED_OUT"
+COMMAND_PASSED = "PASSED"
+COMMAND_FAILED = "FAILED"
+COMMAND_SKIPPED = "SKIPPED"
+SETUP_FAILED = "SETUP_FAILED"
+TEARDOWN_FAILED = "TEARDOWN_FAILED"
+
+def run_command(command):
+    if command is None:
+        return COMMAND_PASSED
+
+    try:
+        ret = command.run()
+        if ret == 0:
+            return COMMAND_PASSED
+        elif ret == 4:  # kselftest KSFT_SKIP
+            return COMMAND_SKIPPED
+        else:
+            return COMMAND_FAILED
+    except subprocess.TimeoutExpired as e:
+        logging.error(type(e).__name__ + ": " + str(e))
+        return COMMAND_TIMED_OUT
+
+class Test:
+    """A single test.
+
+    Runs optional setup and teardown commands around the test command.
+    """
+    def __init__(self, test_json, timeout=None, suite_dir=None):
+        self.name = test_json["name"]
+        self.test_artifacts_dir = None
+        self.setup_command = None
+        self.teardown_command = None
+
+        if suite_dir is not None:
+            self.test_artifacts_dir = os.path.join(suite_dir, self.name)
+
+        test_timeout = test_json.get("timeout_s", timeout)
+
+        self.test_command = Command("command", test_json["command"], test_timeout, self.test_artifacts_dir)
+        if "setup" in test_json:
+            self.setup_command = Command("setup", test_json["setup"], test_timeout, self.test_artifacts_dir)
+        if "teardown" in test_json:
+            self.teardown_command = Command("teardown", test_json["teardown"], test_timeout, self.test_artifacts_dir)
+
+    def run(self):
+        if self.test_artifacts_dir is not None:
+            Path(self.test_artifacts_dir).mkdir(parents=True, exist_ok=True)
+
+        setup_status = run_command(self.setup_command)
+        if setup_status != COMMAND_PASSED:
+            return SETUP_FAILED
+
+        try:
+            status = run_command(self.test_command)
+            return status
+        finally:
+            teardown_status = run_command(self.teardown_command)
+            if (teardown_status != COMMAND_PASSED
+                    and (status == COMMAND_PASSED or status == COMMAND_SKIPPED)):
+                return TEARDOWN_FAILED
+
+def run_test(test):
+    return test.run()
+
+class Suite:
+    """A collection of tests.
+
+    Runs suite-level setup and teardown commands around its tests.
+    """
+    def __init__(self, suite_json, platform_arch, artifacts_dir, test_filter):
+        self.suite_name = suite_json["suite"]
+        self.suite_artifacts_dir = None
+        self.setup_command = None
+        self.teardown_command = None
+        timeout = suite_json.get("timeout_s", None)
+
+        if artifacts_dir is not None:
+            self.suite_artifacts_dir = os.path.join(artifacts_dir, self.suite_name)
+
+        if "setup" in suite_json:
+            self.setup_command = Command("setup", suite_json["setup"], timeout, self.suite_artifacts_dir)
+        if "teardown" in suite_json:
+            self.teardown_command = Command("teardown", suite_json["teardown"], timeout, self.suite_artifacts_dir)
+
+        self.tests = []
+        for test_json in suite_json["tests"]:
+            if len(test_filter) > 0 and test_json["name"] not in test_filter:
+                continue
+            if test_json.get("arch") is None or test_json["arch"] == platform_arch:
+                self.tests.append(Test(test_json, timeout, self.suite_artifacts_dir))
+
+    def run(self, jobs=1):
+        result = {}
+        if len(self.tests) == 0:
+            return COMMAND_PASSED, result
+
+        if self.suite_artifacts_dir is not None:
+            Path(self.suite_artifacts_dir).mkdir(parents=True, exist_ok=True)
+
+        setup_status = run_command(self.setup_command)
+        if setup_status != COMMAND_PASSED:
+            return SETUP_FAILED, result
+
+        if jobs > 1:
+            # Run the suite's tests in parallel worker processes.
+            with Pool(jobs) as p:
+                tests_status = p.map(run_test, self.tests)
+            for i, test in enumerate(self.tests):
+                logging.info(f"{tests_status[i]}: {self.suite_name}/{test.name}")
+                result[test.name] = tests_status[i]
+        else:
+            for test in self.tests:
+                status = run_test(test)
+                logging.info(f"{status}: {self.suite_name}/{test.name}")
+                result[test.name] = status
+
+        teardown_status = run_command(self.teardown_command)
+        if teardown_status != COMMAND_PASSED:
+            return TEARDOWN_FAILED, result
+
+        return COMMAND_PASSED, result
+
+def load_tests(path):
+    with open(path) as f:
+        tests = json.load(f)
+    return tests
+
+def run_suites(suites, jobs):
+    """Runs the test suites.
+
+    Run each parsed test suite and collect its results.
+    """
+    result = {}
+    for suite in suites:
+        result[suite.suite_name] = suite.run(jobs)
+    return result
+
+def parse_test_filter(test_suite_or_test):
+    test_filter = {}
+    if len(test_suite_or_test) == 0:
+        return test_filter
+    for test in test_suite_or_test:
+        test_parts = test.split("/")
+        if len(test_parts) > 2:
+            raise ValueError("Incorrect format of suite/test_name combo")
+        if test_parts[0] not in test_filter:
+            test_filter[test_parts[0]] = []
+        if len(test_parts) == 2:
+            test_filter[test_parts[0]].append(test_parts[1])
+
+    return test_filter
+
+def parse_suites(suites_json, platform_arch, artifacts_dir, test_suite_or_test):
+    suites = []
+    test_filter = parse_test_filter(test_suite_or_test)
+    for suite_json in suites_json:
+        if len(test_filter) > 0 and suite_json["suite"] not in test_filter:
+            continue
+        if suite_json.get("arch") is None or suite_json["arch"] == platform_arch:
+            suites.append(Suite(suite_json,
+                                platform_arch,
+                                artifacts_dir,
+                                test_filter.get(suite_json["suite"], [])))
+    return suites
+
+def pretty_print(result):
+    logging.info("----------------------------------------------------------------------------")
+    if not result:
+        logging.warning("No tests executed.")
+        return
+    logging.info("Test runner result:")
+    suite_count = 0
+    test_count = 0
+    for suite_name, suite_result in result.items():
+        suite_count += 1
+        logging.info(f"{suite_count}) {suite_name}:")
+        if suite_result[0] != COMMAND_PASSED:
+            logging.info(f"\t{suite_result[0]}")
+        test_count = 0
+        for test_name, test_result in suite_result[1].items():
+            test_count += 1
+            if test_result == COMMAND_PASSED:
+                logging.info(f"\t{test_count}) {test_result}: {test_name}")
+            else:
+                logging.error(f"\t{test_count}) {test_result}: {test_name}")
+    logging.info("----------------------------------------------------------------------------")
+
+def args_parser():
+    parser = argparse.ArgumentParser(
+        prog="KVM Selftests Runner",
+        description="Run KVM selftests with different configurations",
+        formatter_class=argparse.RawTextHelpFormatter
+    )
+
+    parser.add_argument("-o", "--output",
+                        help="Create a folder to dump test results into.")
+    parser.add_argument("-j", "--jobs", default=1, type=int,
+                        help="Number of parallel executions in a suite")
+    parser.add_argument("test_suites_json",
+                        help="File containing test suites to run")
+
+    test_suite_or_test_help = textwrap.dedent("""\
+        Run a specific test suite or a specific test from a test suite.
+        If nothing is specified, run all of the tests.
+
+        Example:
+            runner.py tests.json A/a1 A/a4 B C/c1
+
+        Assuming capital letters are test suites and lowercase letters are tests.
+        Runner will:
+        - Run test a1 and a4 from the test suite A
+        - Run all tests from the test suite B
+        - Run test c1 from the test suite C"""
+    )
+    parser.add_argument("test_suite_or_test", nargs="*", help=test_suite_or_test_help)
+
+    return parser.parse_args()
+
+def main():
+    args = args_parser()
+    suites_json = load_tests(args.test_suites_json)
+    suites = parse_suites(suites_json, platform.machine(),
+                          args.output, args.test_suite_or_test)
+
+    if args.output is not None:
+        shutil.rmtree(args.output, ignore_errors=True)
+    result = run_suites(suites, args.jobs)
+    pretty_print(result)
+
+if __name__ == "__main__":
+    main()
diff --git a/tools/testing/selftests/kvm/tests.json b/tools/testing/selftests/kvm/tests.json
new file mode 100644
index 000000000000..1c1c15a0e880
--- /dev/null
+++ b/tools/testing/selftests/kvm/tests.json
@@ -0,0 +1,60 @@
+[
+    {
+        "suite": "dirty_log_perf_tests",
+        "timeout_s": 300,
+        "tests": [
+            {
+                "name": "dirty_log_perf_test_max_vcpu_no_manual_protect",
+                "command": "./dirty_log_perf_test -v $(grep -c ^processor /proc/cpuinfo) -g"
+            },
+            {
+                "name": "dirty_log_perf_test_max_vcpu_manual_protect",
+                "command": "./dirty_log_perf_test -v $(grep -c ^processor /proc/cpuinfo)"
+            },
+            {
+                "name": "dirty_log_perf_test_max_vcpu_manual_protect_random_access",
+                "command": "./dirty_log_perf_test -v $(grep -c ^processor /proc/cpuinfo) -a"
+            },
+            {
+                "name": "dirty_log_perf_test_max_10_vcpu_hugetlb",
+                "setup": "echo 5120 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages",
+                "command": "./dirty_log_perf_test -v 10 -s anonymous_hugetlb_2mb",
+                "teardown": "echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
+            }
+        ]
+    },
+    {
+        "suite": "x86_sanity_tests",
+        "arch": "x86_64",
+        "tests": [
+            {
+                "name": "vmx_msrs_test",
+                "command": "./x86_64/vmx_msrs_test"
+            },
+            {
+                "name": "private_mem_conversions_test",
+                "command": "./x86_64/private_mem_conversions_test"
+            },
+            {
+                "name": "apic_bus_clock_test",
+                "command": "./x86_64/apic_bus_clock_test"
+            },
+            {
+                "name": "dirty_log_page_splitting_test",
+                "command": "./x86_64/dirty_log_page_splitting_test -b 2G -s anonymous_hugetlb_2mb",
+                "setup": "echo 2560 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages",
+                "teardown": "echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
+            }
+        ]
+    },
+    {
+        "suite": "arm_sanity_test",
+        "arch": "aarch64",
+        "tests": [
+            {
+                "name": "page_fault_test",
+                "command": "./aarch64/page_fault_test"
+            }
+        ]
+    }
+]
\ No newline at end of file
-- 
2.46.0.184.g6999bdac58-goog