Date: Fri, 26 Aug 2022 17:07:59 +0200
In-Reply-To: <20220826150807.723137-1-glider@google.com>
Mime-Version: 1.0
References: <20220826150807.723137-1-glider@google.com>
X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog
Message-ID: <20220826150807.723137-37-glider@google.com>
Subject: [PATCH v5 36/44] x86: kmsan: use __msan_ string functions where possible.
From: Alexander Potapenko
To: glider@google.com
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
    Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
    Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
    Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
    Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Mark Rutland,
    Matthew Wilcox, "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra,
    Petr Mladek, Steven Rostedt, Thomas Gleixner, Vasily Gorbik,
    Vegard Nossum, Vlastimil Babka, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org

Unless stated otherwise (by explicitly calling __memcpy(), __memset() or
__memmove()), we want all string functions to call their __msan_ versions
(e.g. __msan_memcpy() instead of memcpy()), so that shadow and origin
values are updated accordingly.

The bootloader must still use the default string functions to avoid
crashes.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I7ca9bd6b4f5c9b9816404862ae87ca7984395f33
---
 arch/x86/include/asm/string_64.h | 23 +++++++++++++++++++++--
 include/linux/fortify-string.h   |  2 ++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 6e450827f677a..3b87d889b6e16 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -11,11 +11,23 @@ function.
  */
 
 #define __HAVE_ARCH_MEMCPY 1
+#if defined(__SANITIZE_MEMORY__)
+#undef memcpy
+void *__msan_memcpy(void *dst, const void *src, size_t size);
+#define memcpy __msan_memcpy
+#else
 extern void *memcpy(void *to, const void *from, size_t len);
+#endif
 extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
+#if defined(__SANITIZE_MEMORY__)
+extern void *__msan_memset(void *s, int c, size_t n);
+#undef memset
+#define memset __msan_memset
+#else
 void *memset(void *s, int c, size_t n);
+#endif
 void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMSET16
@@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
 }
 
 #define __HAVE_ARCH_MEMMOVE
+#if defined(__SANITIZE_MEMORY__)
+#undef memmove
+void *__msan_memmove(void *dest, const void *src, size_t len);
+#define memmove __msan_memmove
+#else
 void *memmove(void *dest, const void *src, size_t count);
+#endif
 void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
@@ -64,8 +82,7 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
-#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
-
+#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
 /*
  * For files that not instrumented (e.g. mm/slub.c) we
  * should use not instrumented version of mem* functions.
@@ -73,7 +90,9 @@ int strcmp(const char *cs, const char *ct);
 
 #undef memcpy
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#undef memmove
 #define memmove(dst, src, len) __memmove(dst, src, len)
+#undef memset
 #define memset(s, c, n) __memset(s, c, n)
 
 #ifndef __NO_FORTIFY

diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index 3b401fa0f3746..6c8a1a29d0b63 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -285,8 +285,10 @@ __FORTIFY_INLINE void fortify_memset_chk(__kernel_size_t size,
  * __builtin_object_size() must be captured here to avoid evaluating argument
  * side-effects further into the macro layers.
  */
+#ifndef CONFIG_KMSAN
 #define memset(p, c, s) __fortify_memset_chk(p, c, s,		\
 		__builtin_object_size(p, 0), __builtin_object_size(p, 1))
+#endif
 
 /*
  * To make sure the compiler can enforce protection against buffer overflows,
-- 
2.37.2.672.g94769d06f0-goog
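For readers unfamiliar with the interposition trick the patch relies on, here is a minimal user-space sketch. All `demo_*` names and the `shadow_updates` counter are hypothetical stand-ins, not KMSAN's actual runtime (which tracks per-byte shadow and origin values); the sketch only shows the macro-redirection pattern: plain calls land in an instrumented wrapper, while the double-underscore name stays available as an uninstrumented escape hatch.

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for KMSAN's metadata bookkeeping (illustrative only). */
static size_t shadow_updates;

/* Uninstrumented copy, analogous to the kernel's __memcpy(). */
static void *demo___memcpy(void *dst, const void *src, size_t len)
{
	return memcpy(dst, src, len);
}

/* Instrumented wrapper, analogous to __msan_memcpy(): update the
 * metadata first, then delegate to the plain copy. */
static void *demo_msan_memcpy(void *dst, const void *src, size_t len)
{
	shadow_updates += len;
	return demo___memcpy(dst, src, len);
}

/* The patch's trick: a macro routes every plain call in instrumented
 * code into the wrapper; callers that must bypass the runtime (like the
 * bootloader) use the demo___memcpy() name directly. */
#define demo_memcpy(dst, src, len) demo_msan_memcpy(dst, src, len)
```

Calling `demo_memcpy()` bumps the fake shadow counter; calling `demo___memcpy()` directly skips it, mirroring the explicit `__memcpy()`/`__memset()`/`__memmove()` escape hatch the commit message describes.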