Date: Sun, 24 Apr 2022 04:35:01 +0900
From: Paran Lee
To: Jan Beulich, Andrew Cooper, George Dunlap, Stefano Stabellini, Julien Grall
Cc: Austin Kim, xen-devel@lists.xenproject.org
Subject: [PATCH] xen/mm: page_alloc: fix duplicated order shift operation in the loop
Message-ID: <20220423193501.GA10077@DESKTOP-NK4TH6S.localdomain>

It doesn't seem necessary to repeat the order shift calculation (2^@order)
on every loop iteration, so compute it once outside the loop and reuse the
result. In addition, change the type of total_avail_pages from long to
unsigned long, since the page_alloc local variables it is updated from are
already unsigned long.
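Purely as an illustration of the pattern being applied (not part of this
patch), here is a minimal standalone sketch; walk_pages() and touch_page()
are hypothetical names, not Xen functions:

    #include <stdio.h>

    /* Hypothetical stand-in for the per-page work done inside the loop. */
    static void touch_page(unsigned long idx)
    {
        printf("page %lu\n", idx);
    }

    static void walk_pages(unsigned int order)
    {
        /*
         * Compute 2^order once instead of re-evaluating (1 << order)
         * as the loop bound on every iteration.
         */
        unsigned long i, request = 1UL << order;

        for ( i = 0; i < request; i++ )
            touch_page(i);
    }

    int main(void)
    {
        walk_pages(3); /* visits 8 pages */
        return 0;
    }

With optimisation enabled a compiler will usually hoist the shift itself,
so the gain is mainly readability and a consistent (unsigned long) type for
the loop counter and the request size.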
Signed-off-by: Paran Lee
---
 xen/common/page_alloc.c | 51 ++++++++++++++++++++++-------------------
 1 file changed, 27 insertions(+), 24 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 319029140f..9a955ce84e 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -456,7 +456,7 @@ static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
 static unsigned long node_need_scrub[MAX_NUMNODES];
 
 static unsigned long *avail[MAX_NUMNODES];
-static long total_avail_pages;
+static unsigned long total_avail_pages;
 
 static DEFINE_SPINLOCK(heap_lock);
 static long outstanding_claims; /* total outstanding claims by all domains */
@@ -922,8 +922,9 @@ static struct page_info *alloc_heap_pages(
     struct domain *d)
 {
     nodeid_t node;
-    unsigned int i, buddy_order, zone, first_dirty;
-    unsigned long request = 1UL << order;
+    unsigned int buddy_order, zone, first_dirty;
+    unsigned int buddy_request;
+    unsigned long i, request = 1UL << order;
     struct page_info *pg;
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
@@ -975,16 +976,17 @@ static struct page_info *alloc_heap_pages(
     while ( buddy_order != order )
     {
         buddy_order--;
+        buddy_request = 1U << buddy_order;
         page_list_add_scrub(pg, node, zone, buddy_order,
-                            (1U << buddy_order) > first_dirty ?
+                            buddy_request > first_dirty ?
                             first_dirty : INVALID_DIRTY_IDX);
-        pg += 1U << buddy_order;
+        pg += buddy_request;
 
         if ( first_dirty != INVALID_DIRTY_IDX )
         {
             /* Adjust first_dirty */
-            if ( first_dirty >= 1U << buddy_order )
-                first_dirty -= 1U << buddy_order;
+            if ( first_dirty >= buddy_request )
+                first_dirty -= buddy_request;
             else
                 first_dirty = 0; /* We've moved past original first_dirty */
         }
@@ -1000,13 +1002,13 @@ static struct page_info *alloc_heap_pages(
     if ( d != NULL )
         d->last_alloc_node = node;
 
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < request; i++ )
     {
         /* Reference count must continuously be zero for free pages. */
         if ( (pg[i].count_info & ~PGC_need_scrub) != PGC_state_free )
         {
             printk(XENLOG_ERR
-                   "pg[%u] MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
+                   "pg[%lu] MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
                    i, mfn_x(page_to_mfn(pg + i)),
                    pg[i].count_info, pg[i].v.free.order,
                    pg[i].u.free.val, pg[i].tlbflush_timestamp);
@@ -1034,7 +1036,7 @@ static struct page_info *alloc_heap_pages(
     if ( first_dirty != INVALID_DIRTY_IDX ||
          (scrub_debug && !(memflags & MEMF_no_scrub)) )
     {
-        for ( i = 0; i < (1U << order); i++ )
+        for ( i = 0; i < request; i++ )
         {
             if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
             {
@@ -1063,7 +1065,7 @@ static struct page_info *alloc_heap_pages(
      * can control its own visibility of/through the cache.
      */
     mfn = page_to_mfn(pg);
-    for ( i = 0; i < (1U << order); i++ )
+    for ( i = 0; i < request; i++ )
         flush_page_to_ram(mfn_x(mfn) + i, !(memflags & MEMF_no_icache_flush));
 
     return pg;
@@ -1437,15 +1439,16 @@ static void free_heap_pages(
 {
     unsigned long mask;
     mfn_t mfn = page_to_mfn(pg);
-    unsigned int i, node = phys_to_nid(mfn_to_maddr(mfn));
+    unsigned int node = phys_to_nid(mfn_to_maddr(mfn));
     unsigned int zone = page_to_zone(pg);
+    unsigned long i, request = 1UL << order;
     bool pg_offlined = false;
 
     ASSERT(order <= MAX_ORDER);
 
     spin_lock(&heap_lock);
 
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < request; i++ )
     {
         if ( mark_page_free(&pg[i], mfn_add(mfn, i)) )
             pg_offlined = true;
@@ -1457,11 +1460,11 @@ static void free_heap_pages(
         }
     }
 
-    avail[node][zone] += 1 << order;
-    total_avail_pages += 1 << order;
+    avail[node][zone] += request;
+    total_avail_pages += request;
     if ( need_scrub )
     {
-        node_need_scrub[node] += 1 << order;
+        node_need_scrub[node] += request;
         pg->u.free.first_dirty = 0;
     }
     else
@@ -1490,7 +1493,7 @@ static void free_heap_pages(
             /* Update predecessor's first_dirty if necessary. */
             if ( predecessor->u.free.first_dirty == INVALID_DIRTY_IDX &&
                  pg->u.free.first_dirty != INVALID_DIRTY_IDX )
-                predecessor->u.free.first_dirty = (1U << order) +
+                predecessor->u.free.first_dirty = mask +
                                                   pg->u.free.first_dirty;
 
             pg = predecessor;
@@ -1511,7 +1514,7 @@ static void free_heap_pages(
             /* Update pg's first_dirty if necessary. */
             if ( pg->u.free.first_dirty == INVALID_DIRTY_IDX &&
                  successor->u.free.first_dirty != INVALID_DIRTY_IDX )
-                pg->u.free.first_dirty = (1U << order) +
+                pg->u.free.first_dirty = mask +
                                          successor->u.free.first_dirty;
 
             page_list_del(successor, &heap(node, zone, order));
@@ -2416,7 +2419,7 @@ struct page_info *alloc_domheap_pages(
 void free_domheap_pages(struct page_info *pg, unsigned int order)
 {
     struct domain *d = page_get_owner(pg);
-    unsigned int i;
+    unsigned long i, request = 1UL << order;
     bool drop_dom_ref;
 
     ASSERT(!in_irq());
@@ -2426,10 +2429,10 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
         /* NB. May recursively lock from relinquish_memory(). */
         spin_lock_recursive(&d->page_alloc_lock);
 
-        for ( i = 0; i < (1 << order); i++ )
+        for ( i = 0; i < request; i++ )
             arch_free_heap_page(d, &pg[i]);
 
-        d->xenheap_pages -= 1 << order;
+        d->xenheap_pages -= request;
         drop_dom_ref = (d->xenheap_pages == 0);
 
         spin_unlock_recursive(&d->page_alloc_lock);
@@ -2443,12 +2446,12 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
             /* NB. May recursively lock from relinquish_memory(). */
            spin_lock_recursive(&d->page_alloc_lock);
 
-            for ( i = 0; i < (1 << order); i++ )
+            for ( i = 0; i < request; i++ )
             {
                 if ( pg[i].u.inuse.type_info & PGT_count_mask )
                 {
                     printk(XENLOG_ERR
-                           "pg[%u] MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
+                           "pg[%lu] MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n",
                            i, mfn_x(page_to_mfn(pg + i)),
                            pg[i].count_info, pg[i].v.free.order,
                            pg[i].u.free.val, pg[i].tlbflush_timestamp);
@@ -2462,7 +2465,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
                 }
             }
 
-            drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));
+            drop_dom_ref = !domain_adjust_tot_pages(d, -request);
 
             spin_unlock_recursive(&d->page_alloc_lock);
 
-- 
2.25.1