Date: Thu, 14 Dec 2023 12:06:33 +0100
In-Reply-To: <20231214110639.2294687-1-glider@google.com>
MIME-Version: 1.0
References: <20231214110639.2294687-1-glider@google.com>
X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog
Message-ID: <20231214110639.2294687-2-glider@google.com>
Subject: [PATCH v10-mte 1/7] lib/bitmap: add bitmap_{read,write}()
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org,
	pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com,
	aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk,
	yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org,
	Arnd Bergmann

From: Syed Nayyar Waris

The two new functions allow reading/writing values of length up to
BITS_PER_LONG bits at an arbitrary position in the bitmap.

The code was taken from "bitops: Introduce the for_each_set_clump macro"
by Syed Nayyar Waris with a number of changes and simplifications:
 - instead of using roundup(), which adds an unnecessary dependency
   on <linux/math.h>, we calculate space as BITS_PER_LONG - offset;
 - indentation is reduced by not using else-clauses (suggested by
   checkpatch for bitmap_get_value());
 - bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
   and bitmap_write();
 - some redundant computations are omitted.

Cc: Arnd Bergmann
Signed-off-by: Syed Nayyar Waris
Signed-off-by: William Breathitt Gray
Link: https://lore.kernel.org/lkml/fe12eedf3666f4af5138de0e70b67a07c7f40338.1592224129.git.syednwaris@gmail.com/
Suggested-by: Yury Norov
Co-developed-by: Alexander Potapenko
Signed-off-by: Alexander Potapenko
Reviewed-by: Andy Shevchenko
Acked-by: Yury Norov
---
v10-mte:
 - send this patch together with the "Implement MTE tag compression for
   swapped pages"

Revisions v8-v12 of the bitmap patches were reviewed separately from the
"Implement MTE tag compression for swapped pages" series
(https://lore.kernel.org/lkml/20231109151106.2385155-1-glider@google.com/)

This patch was previously called "lib/bitmap: add bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-2-glider@google.com/)

v11:
 - rearrange whitespace as requested by Andy Shevchenko, add Reviewed-by:,
   update a comment

v10:
 - update comments as requested by Andy Shevchenko

v8:
 - as suggested by Andy Shevchenko, handle reads/writes of more than
   BITS_PER_LONG bits, add a note for 32-bit systems

v7:
 - Address comments by Yury Norov, Andy Shevchenko, Rasmus Villemoes:
   - update code comments;
   - get rid of GENMASK();
   - s/assign_bit/__assign_bit;
   - more vertical whitespace for better readability;
   - more compact code for bitmap_write() (now for real)

v6:
 - As suggested by Yury Norov, do not require bitmap_read(..., 0)
   to return 0.
v5:
 - Address comments by Yury Norov:
   - updated code comments and patch title/description
   - replace GENMASK(nbits - 1, 0) with BITMAP_LAST_WORD_MASK(nbits)
   - more compact bitmap_write() implementation

v4:
 - Address comments by Andy Shevchenko and Yury Norov:
   - prevent passing values >= 64 to GENMASK()
   - fix commit authorship
   - change comments
   - check for unlikely(nbits==0)
   - drop unnecessary const declarations
   - fix kernel-doc comments
   - rename bitmap_{get,set}_value() to bitmap_{read,write}()
---
 include/linux/bitmap.h | 77 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 99451431e4d65..7ca0379be8c13 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -79,6 +79,10 @@ struct device;
  *  bitmap_to_arr64(buf, src, nbits)           Copy nbits from buf to u64[] dst
  *  bitmap_get_value8(map, start)              Get 8bit value from map at start
  *  bitmap_set_value8(map, value, start)       Set 8bit value to map at start
+ *  bitmap_read(map, start, nbits)             Read an nbits-sized value from
+ *                                             map at start
+ *  bitmap_write(map, value, start, nbits)     Write an nbits-sized value to
+ *                                             map at start
  *
  *  Note, bitmap_zero() and bitmap_fill() operate over the region of
  *  unsigned longs, that is, bits behind bitmap till the unsigned long
@@ -636,6 +640,79 @@ static inline void bitmap_set_value8(unsigned long *map, unsigned long value,
 	map[index] |= value << offset;
 }
 
+/**
+ * bitmap_read - read a value of n-bits from the memory region
+ * @map: address to the bitmap memory region
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
+ *
+ * Returns: value of @nbits bits located at the @start bit offset within the
+ * @map memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return
+ * value is undefined.
+ */
+static inline unsigned long bitmap_read(const unsigned long *map,
+					unsigned long start,
+					unsigned long nbits)
+{
+	size_t index = BIT_WORD(start);
+	unsigned long offset = start % BITS_PER_LONG;
+	unsigned long space = BITS_PER_LONG - offset;
+	unsigned long value_low, value_high;
+
+	if (unlikely(!nbits || nbits > BITS_PER_LONG))
+		return 0;
+
+	if (space >= nbits)
+		return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
+
+	value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
+	value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
+	return (value_low >> offset) | (value_high << space);
+}
+
+/**
+ * bitmap_write - write n-bit value within a memory region
+ * @map: address to the bitmap memory region
+ * @value: value to write, clamped to nbits
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
+ *
+ * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
+ * i.e. bits beyond @nbits are ignored:
+ *
+ *   for (bit = 0; bit < nbits; bit++)
+ *           __assign_bit(start + bit, bitmap, val & BIT(bit));
+ *
+ * For @nbits == 0 and @nbits > BITS_PER_LONG no writes are performed.
+ */
+static inline void bitmap_write(unsigned long *map, unsigned long value,
+				unsigned long start, unsigned long nbits)
+{
+	size_t index;
+	unsigned long offset;
+	unsigned long space;
+	unsigned long mask;
+	bool fit;
+
+	if (unlikely(!nbits || nbits > BITS_PER_LONG))
+		return;
+
+	mask = BITMAP_LAST_WORD_MASK(nbits);
+	value &= mask;
+	offset = start % BITS_PER_LONG;
+	space = BITS_PER_LONG - offset;
+	fit = space >= nbits;
+	index = BIT_WORD(start);
+
+	map[index] &= (fit ? (~(mask << offset)) : ~BITMAP_FIRST_WORD_MASK(start));
+	map[index] |= value << offset;
+	if (fit)
+		return;
+
+	map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
+	map[index + 1] |= (value >> space);
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __LINUX_BITMAP_H */
-- 
2.43.0.472.g3155946c3a-goog
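
[Editor's note: the sketch below is illustrative only and is not part of the
patch. It shows how the two new helpers could be used together; the function
name bitmap_rw_example() is made up for this note, and the only other API it
relies on are the long-standing DECLARE_BITMAP() and bitmap_zero() helpers.]

	#include <linux/bitmap.h>

	/* Pack three 5-bit fields into one 64-bit bitmap and read one back. */
	static unsigned long bitmap_rw_example(void)
	{
		DECLARE_BITMAP(map, 64);

		bitmap_zero(map, 64);
		bitmap_write(map, 0x1f, 0, 5);	/* bits 0..4   */
		bitmap_write(map, 0x0a, 5, 5);	/* bits 5..9   */
		bitmap_write(map, 0x15, 10, 5);	/* bits 10..14 */

		return bitmap_read(map, 5, 5);	/* returns 0x0a */
	}

Because @value is clamped to @nbits, a call such as bitmap_write(map, 0xff, 0, 5)
would store 0x1f, matching the __assign_bit() loop in the kernel-doc above.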