From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Larysa Zaremba, Yunsheng Lin,
    Alexander Duyck, Jesper Dangaard Brouer, Ilias Apalodimas,
    Simon Horman, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 4/9] page_pool: shrink &page_pool_params a tiny bit
Date: Thu, 27 Jul 2023 16:43:31 +0200
Message-ID: <20230727144336.1646454-5-aleksander.lobakin@intel.com>
In-Reply-To: <20230727144336.1646454-1-aleksander.lobakin@intel.com>
References: <20230727144336.1646454-1-aleksander.lobakin@intel.com>

For now, this structure takes a whole 64-byte cacheline on x86_64. But in
fact, it has a 4-byte hole before ::init_callback() (yet not sufficient to
change its sizeof()). ::dma_dir takes a whole 4 bytes, although its values
can only be 0 and 2. Merge it with ::flags, so that its slot gets freed and
the structure's size shrinks to 56 bytes. This adds one instruction when
reading that field, but the upcoming change will make those reads happen
way less often. Pad the freed slot explicitly in &page_pool to keep the
cacheline layout unchanged while it's not used.
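The size math can be double-checked with a minimal userspace sketch (not
part of the patch; the types and field names below are simplified
stand-ins for &page_pool_params and enum dma_data_direction, and the
numbers assume GCC/Clang on an LP64 target such as x86_64, where enum
bitfields are supported as an extension). Folding the 4-byte enum into a
2-bit field next to ::flags frees its slot and also removes the hole
before the function pointer, dropping sizeof() from 64 to 56:

#include <stdio.h>

/* stand-in for enum dma_data_direction; page_pool only uses 0 and 2 */
enum dma_dir_mock { DIR_BIDIRECTIONAL = 0, DIR_FROM_DEVICE = 2 };

/* mock of the current layout: ::dma_dir burns a whole 4-byte slot */
struct params_before {
	unsigned int flags;
	unsigned int order;
	unsigned int pool_size;
	int nid;
	void *dev;
	void *napi;
	enum dma_dir_mock dma_dir;
	unsigned int max_len;
	unsigned int offset;
	/* 4-byte hole: the function pointer below needs 8-byte alignment */
	void (*init_callback)(void *page, void *arg);
	void *init_arg;
};

/* mock of the new layout: ::dma_dir shares ::flags' 4 bytes */
struct params_after {
	unsigned int flags:30;
	enum dma_dir_mock dma_dir:2;
	unsigned int order;
	unsigned int pool_size;
	int nid;
	void *dev;
	void *napi;
	unsigned int max_len;
	unsigned int offset;
	void (*init_callback)(void *page, void *arg);
	void *init_arg;
};

int main(void)
{
	printf("before: %zu bytes\n", sizeof(struct params_before)); /* 64 */
	printf("after:  %zu bytes\n", sizeof(struct params_after));  /* 56 */
	return 0;
}

Two bits are enough for the real field because page_pool only ever maps
pages as DMA_BIDIRECTIONAL (0) or DMA_FROM_DEVICE (2).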
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/page_pool/types.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 664a787948e1..c86f65e57614 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -44,13 +44,13 @@ struct pp_alloc_cache {
 };
 
 struct page_pool_params {
-	unsigned int	flags;
+	unsigned int	flags:30;
+	enum dma_data_direction dma_dir:2; /* DMA mapping direction */
 	unsigned int	order;
 	unsigned int	pool_size;
 	int		nid;  /* Numa node id to allocate from pages from */
 	struct device	*dev; /* device, for DMA pre-mapping purposes */
 	struct napi_struct *napi; /* Sole consumer of pages, otherwise NULL */
-	enum dma_data_direction dma_dir; /* DMA mapping direction */
 	unsigned int	max_len; /* max DMA sync memory size */
 	unsigned int	offset;  /* DMA addr offset */
 	void (*init_callback)(struct page *page, void *arg);
@@ -93,6 +93,7 @@ struct page_pool_stats {
 
 struct page_pool {
 	struct page_pool_params p;
+	long pad;
 
 	long frag_users;
 	struct page *frag_page;
-- 
2.41.0