From: Ivan Vecera
To: netdev@vger.kernel.org
Cc: Jamal Hadi Salim, Cong Wang, Jiri Pirko, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman,
 Marcelo Ricardo Leitner, Paul Blakey,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH net] net/sched: flower: Fix wrong handle assignment during filter change
Date: Tue, 25 Apr 2023 16:06:04 +0200
Message-Id: <20230425140604.169881-1-ivecera@redhat.com>

Commit 08a0063df3ae ("net/sched: flower: Move filter handle initialization
earlier") moved the filter handle initialization, but the assignment of the
handle to fnew->handle is done regardless of the fold value. This is wrong:
if fold != NULL (so fold->handle == handle), no new handle is allocated and
the passed handle is assigned to fnew->handle. If any subsequent step in
fl_change() then fails, that handle is removed from the IDR, which is
incorrect because the still-valid old filter instance is left with a handle
that is no longer present in the IDR.

Fix the issue by moving the assignment so that it is done only when the
passed fold == NULL.
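To make the IDR ownership problem concrete, here is a minimal user-space
sketch. It is illustrative only: the array-based idr_insert()/idr_remove()
helpers and fl_change_sketch() with its 'buggy'/'fail_late' flags are
simplified stand-ins for the kernel's IDR API and fl_change(), not the real
code.

#include <stdbool.h>
#include <stdio.h>

#define MAX_HANDLES 8

struct filter { unsigned int handle; };

/* Toy stand-in for the kernel IDR: handle -> filter mapping. */
static struct filter *idr[MAX_HANDLES];

static bool idr_insert(unsigned int h, struct filter *f)
{
	if (h == 0 || h >= MAX_HANDLES || idr[h])
		return false;
	idr[h] = f;
	return true;
}

static void idr_remove(unsigned int h)
{
	if (h < MAX_HANDLES)
		idr[h] = NULL;
}

/*
 * fail_late models a later step of fl_change() (e.g. tcf_exts_init_ex()
 * in the hunk below) returning an error after the handle logic ran.
 */
static int fl_change_sketch(struct filter *fold, unsigned int handle,
			    bool buggy, bool fail_late)
{
	struct filter fnew = { .handle = 0 };

	if (!fold) {
		if (!idr_insert(handle, &fnew))	/* new filter: allocate */
			return -1;
		fnew.handle = handle;		/* fixed: assign only here */
	}
	if (buggy)
		fnew.handle = handle;		/* pre-fix: unconditional */

	if (fail_late) {
		if (fnew.handle)		/* error-path cleanup */
			idr_remove(fnew.handle);
		return -1;
	}
	return 0;
}

int main(void)
{
	struct filter fold = { .handle = 3 };

	/* The old filter is installed; a replace of it fails mid-way. */
	idr_insert(fold.handle, &fold);
	fl_change_sketch(&fold, fold.handle, true, true);
	printf("buggy: old filter still in IDR? %s\n",
	       idr[fold.handle] ? "yes" : "no (stale handle)");

	idr_insert(fold.handle, &fold);
	fl_change_sketch(&fold, fold.handle, false, true);
	printf("fixed: old filter still in IDR? %s\n",
	       idr[fold.handle] ? "yes" : "no (stale handle)");
	return 0;
}

With the unconditional assignment, the failed replace drops the old
filter's handle from the mock IDR; with the assignment confined to the
fold == NULL path, the old filter stays reachable.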
Prior to the patch:

[root@machine tc-testing]# ./tdc.py -d enp1s0f0np0 -e 14be
Test 14be: Concurrently replace same range of 100k flower filters from 10 tc instances
exit: 123
exit: 0
RTNETLINK answers: Invalid argument
We have an error talking to the kernel
Command failed tmp/replace_6:1885

All test results:

1..1
not ok 1 14be - Concurrently replace same range of 100k flower filters from 10 tc instances
	Command exited with 123, expected 0
RTNETLINK answers: Invalid argument
We have an error talking to the kernel
Command failed tmp/replace_6:1885

After the patch:

[root@machine tc-testing]# ./tdc.py -d enp1s0f0np0 -e 14be
Test 14be: Concurrently replace same range of 100k flower filters from 10 tc instances

All test results:

1..1
ok 1 14be - Concurrently replace same range of 100k flower filters from 10 tc instances

Fixes: 08a0063df3ae ("net/sched: flower: Move filter handle initialization earlier")
Signed-off-by: Ivan Vecera
Reviewed-by: Simon Horman
---
 net/sched/cls_flower.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 475fe222a855..fa6c2bb0b626 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -2231,8 +2231,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 			kfree(fnew);
 			goto errout_tb;
 		}
+		fnew->handle = handle;
 	}
-	fnew->handle = handle;
 
 	err = tcf_exts_init_ex(&fnew->exts, net, TCA_FLOWER_ACT, 0, tp, handle,
 			       !tc_skip_hw(fnew->flags));
-- 
2.39.1