From nobody Thu Feb 12 02:59:32 2026
Date: Mon, 1 Apr 2024 23:45:26 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-2-hramamurthy@google.com>
Subject: [PATCH net-next 1/5] gve: simplify setting descriptor count defaults
From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
 davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, willemb@google.com, rushilg@google.com,
 jfraker@google.com, linux-kernel@vger.kernel.org

Combine gve_set_desc_cnt and gve_set_desc_cnt_dqo into one function
which sets the counts after checking the queue format. Neither of the
original functions nor the new combined function can return an error,
so make the new function void and remove the goto-on-error path. Also
rename the new function to gve_set_default_desc_cnt to make its intent
clearer.
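As a standalone illustration of the refactoring pattern (simplified stand-in types, not the driver's actual structs or byte-order handling), folding two infallible int-returning setters into one void function keyed on the queue format looks like this:

```c
#include <stdint.h>

enum queue_format { GQI_QPL, DQO_RDA, DQO_QPL };

struct dev_descriptor { uint16_t tx_entries, rx_entries; };
struct dqo_rda_opts   { uint16_t tx_comp, rx_buff; };

struct priv {
	enum queue_format fmt;
	uint16_t tx_desc_cnt, rx_desc_cnt;
	uint16_t tx_comp_ring_entries, rx_buff_ring_entries;
};

/* One void setter replaces two int-returning ones: since neither
 * original path could fail, the error return and the caller's
 * goto-on-error become dead code and can be dropped. */
static void set_default_desc_cnt(struct priv *p,
				 const struct dev_descriptor *d,
				 const struct dqo_rda_opts *rda)
{
	/* Common to every queue format. */
	p->tx_desc_cnt = d->tx_entries;
	p->rx_desc_cnt = d->rx_entries;

	/* Only DQO RDA carries separate completion/buffer ring hints. */
	if (p->fmt == DQO_RDA) {
		p->tx_comp_ring_entries = rda->tx_comp;
		p->rx_buff_ring_entries = rda->rx_buff;
	}
}
```

The caller can now invoke the setter unconditionally for every format and keep only the format check that still matters (enabling LRO for DQO), which is exactly the shape of the diff below.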
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 44 +++++++-------------
 1 file changed, 15 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index ae12ac38e18b..50affa11a59c 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -745,31 +745,19 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 	return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_set_desc_cnt(struct gve_priv *priv,
-			    struct gve_device_descriptor *descriptor)
+static void gve_set_default_desc_cnt(struct gve_priv *priv,
+			const struct gve_device_descriptor *descriptor,
+			const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
 {
 	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-	return 0;
-}
-
-static int
-gve_set_desc_cnt_dqo(struct gve_priv *priv,
-		     const struct gve_device_descriptor *descriptor,
-		     const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
-{
-	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
-	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-
-	if (priv->queue_format == GVE_DQO_QPL_FORMAT)
-		return 0;
-
-	priv->options_dqo_rda.tx_comp_ring_entries =
-		be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
-	priv->options_dqo_rda.rx_buff_ring_entries =
-		be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
 
-	return 0;
+	if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		priv->options_dqo_rda.tx_comp_ring_entries =
+			be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
+		priv->options_dqo_rda.rx_buff_ring_entries =
+			be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
+	}
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -888,15 +876,13 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 		dev_info(&priv->pdev->dev,
 			 "Driver is running with GQI QPL queue format.\n");
 	}
-	if (gve_is_gqi(priv)) {
-		err = gve_set_desc_cnt(priv, descriptor);
-	} else {
-		/* DQO supports LRO. */
+
+	/* set default descriptor counts */
+	gve_set_default_desc_cnt(priv, descriptor, dev_op_dqo_rda);
+
+	/* DQO supports LRO. */
+	if (!gve_is_gqi(priv))
 		priv->dev->hw_features |= NETIF_F_LRO;
-		err = gve_set_desc_cnt_dqo(priv, descriptor, dev_op_dqo_rda);
-	}
-	if (err)
-		goto free_device_descriptor;
 
 	priv->max_registered_pages =
 		be64_to_cpu(descriptor->max_registered_pages);
-- 
2.44.0.478.gd926399ef9-goog

From nobody Thu Feb 12 02:59:32 2026
Date: Mon, 1 Apr 2024 23:45:27 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-3-hramamurthy@google.com>
Subject: [PATCH net-next 2/5] gve: make the completion and buffer ring size equal for DQO
From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
 davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, willemb@google.com, rushilg@google.com,
 jfraker@google.com, linux-kernel@vger.kernel.org

For the DQO queue format, the gve driver stores two ring sizes for both
TX and RX - one for the completion queue ring and one for the data
buffer ring. This is meant to allow asymmetric sizes for these two
rings, but asymmetric sizes are not supported. Make both fields
reference the same single variable.
This change makes reading the supported TX completion ring size and RX
buffer ring size for DQO from the device pointless, so change those
fields to reserved and remove the related code.

Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h        |  6 ---
 drivers/net/ethernet/google/gve/gve_adminq.c | 40 +++++---------------
 drivers/net/ethernet/google/gve/gve_adminq.h |  3 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  3 +-
 drivers/net/ethernet/google/gve/gve_tx_dqo.c |  4 +-
 5 files changed, 13 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 4814c96d5fe7..f009f7b3e68b 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -621,11 +621,6 @@ struct gve_qpl_config {
 	unsigned long *qpl_id_map; /* bitmap of used qpl ids */
 };
 
-struct gve_options_dqo_rda {
-	u16 tx_comp_ring_entries; /* number of tx_comp descriptors */
-	u16 rx_buff_ring_entries; /* number of rx_buff descriptors */
-};
-
 struct gve_irq_db {
 	__be32 index;
 } ____cacheline_aligned;
@@ -792,7 +787,6 @@ struct gve_priv {
 	u64 link_speed;
 	bool up_before_suspend; /* True if dev was up before suspend */
 
-	struct gve_options_dqo_rda options_dqo_rda;
 	struct gve_ptype_lut *ptype_lut_dqo;
 
 	/* Must be a power of two. */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 50affa11a59c..2ff9327ec056 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -565,6 +565,7 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 			cpu_to_be64(tx->q_resources_bus),
 		.tx_ring_addr = cpu_to_be64(tx->bus),
 		.ntfy_id = cpu_to_be32(tx->ntfy_id),
+		.tx_ring_size = cpu_to_be16(priv->tx_desc_cnt),
 	};
 
 	if (gve_is_gqi(priv)) {
@@ -573,24 +574,17 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 
 		cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
 	} else {
-		u16 comp_ring_size;
 		u32 qpl_id = 0;
 
-		if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		if (priv->queue_format == GVE_DQO_RDA_FORMAT)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
-			comp_ring_size =
-				priv->options_dqo_rda.tx_comp_ring_entries;
-		} else {
+		else
 			qpl_id = tx->dqo.qpl->id;
-			comp_ring_size = priv->tx_desc_cnt;
-		}
 		cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_tx_queue.tx_ring_size =
-			cpu_to_be16(priv->tx_desc_cnt);
 		cmd.create_tx_queue.tx_comp_ring_addr =
 			cpu_to_be64(tx->complq_bus_dqo);
 		cmd.create_tx_queue.tx_comp_ring_size =
-			cpu_to_be16(comp_ring_size);
+			cpu_to_be16(priv->tx_desc_cnt);
 	}
 
 	return gve_adminq_issue_cmd(priv, &cmd);
@@ -621,6 +615,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		.queue_id = cpu_to_be32(queue_index),
 		.ntfy_id = cpu_to_be32(rx->ntfy_id),
 		.queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
+		.rx_ring_size = cpu_to_be16(priv->rx_desc_cnt),
 	};
 
 	if (gve_is_gqi(priv)) {
@@ -635,20 +630,13 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
 		cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
 	} else {
-		u16 rx_buff_ring_entries;
 		u32 qpl_id = 0;
 
-		if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		if (priv->queue_format == GVE_DQO_RDA_FORMAT)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
-			rx_buff_ring_entries =
-				priv->options_dqo_rda.rx_buff_ring_entries;
-		} else {
+		else
 			qpl_id = rx->dqo.qpl->id;
-			rx_buff_ring_entries = priv->rx_desc_cnt;
-		}
 		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.rx_ring_size =
-			cpu_to_be16(priv->rx_desc_cnt);
 		cmd.create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->dqo.complq.bus);
 		cmd.create_rx_queue.rx_data_ring_addr =
@@ -656,7 +644,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_rx_queue.packet_buffer_size =
 			cpu_to_be16(priv->data_buffer_size_dqo);
 		cmd.create_rx_queue.rx_buff_ring_size =
-			cpu_to_be16(rx_buff_ring_entries);
+			cpu_to_be16(priv->rx_desc_cnt);
 		cmd.create_rx_queue.enable_rsc =
 			!!(priv->dev->features & NETIF_F_LRO);
 		if (priv->header_split_enabled)
@@ -746,18 +734,10 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 }
 
 static void gve_set_default_desc_cnt(struct gve_priv *priv,
-			const struct gve_device_descriptor *descriptor,
-			const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
+			const struct gve_device_descriptor *descriptor)
 {
 	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-
-	if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
-		priv->options_dqo_rda.tx_comp_ring_entries =
-			be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
-		priv->options_dqo_rda.rx_buff_ring_entries =
-			be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
-	}
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -878,7 +858,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	}
 
 	/* set default descriptor counts */
-	gve_set_default_desc_cnt(priv, descriptor, dev_op_dqo_rda);
+	gve_set_default_desc_cnt(priv, descriptor);
 
 	/* DQO supports LRO. */
 	if (!gve_is_gqi(priv))
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 5ac972e45ff8..3ff2028a7472 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -103,8 +103,7 @@ static_assert(sizeof(struct gve_device_option_gqi_qpl) == 4);
 
 struct gve_device_option_dqo_rda {
 	__be32 supported_features_mask;
-	__be16 tx_comp_ring_entries;
-	__be16 rx_buff_ring_entries;
+	__be32 reserved;
 };
 
 static_assert(sizeof(struct gve_device_option_dqo_rda) == 8);
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 8e8071308aeb..7c2ab1edfcb2 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -305,8 +305,7 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	size_t size;
 	int i;
 
-	const u32 buffer_queue_slots = cfg->raw_addressing ?
-		priv->options_dqo_rda.rx_buff_ring_entries : cfg->ring_size;
+	const u32 buffer_queue_slots = cfg->ring_size;
 	const u32 completion_queue_slots = cfg->ring_size;
 
 	netif_dbg(priv, drv, priv->dev, "allocating rx ring DQO\n");
diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index bc34b6cd3a3e..70f29b90a982 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -295,9 +295,7 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv,
 
 	/* Queue sizes must be a power of 2 */
 	tx->mask = cfg->ring_size - 1;
-	tx->dqo.complq_mask = priv->queue_format == GVE_DQO_RDA_FORMAT ?
-		priv->options_dqo_rda.tx_comp_ring_entries - 1 :
-		tx->mask;
+	tx->dqo.complq_mask = tx->mask;
 
 	/* The max number of pending packets determines the maximum number of
 	 * descriptors which maybe written to the completion queue.
-- 
2.44.0.478.gd926399ef9-goog

From nobody Thu Feb 12 02:59:32 2026
Date: Mon, 1 Apr 2024 23:45:28 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-4-hramamurthy@google.com>
Subject: [PATCH net-next 3/5] gve: set page count for RX QPL for GQI and DQO queue formats
From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
 davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, willemb@google.com, rushilg@google.com,
 jfraker@google.com, linux-kernel@vger.kernel.org

For GQI, the number of pages per RX QPL is required to equal the ring
size, so set it to the ring size. With this change, the
rx_data_slot_cnt and rx_pages_per_qpl fields stored in the priv
structure are no longer needed, so remove their usage.

For DQO, the number of pages per RX QPL must be larger than the ring
size to account for out-of-order completions, so set it to twice the
RX ring size.
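The resulting sizing rule can be sketched as a standalone snippet (the 2x DQO rule mirrors the gve_get_rx_pages_per_qpl_dqo helper this patch adds; the surrounding wrapper and its name are illustrative, not driver code):

```c
#include <stdint.h>

/* Mirrors the helper added by this patch: DQO needs extra QPL pages
 * because buffer completions can arrive out of order, so the page
 * pool is sized at twice the descriptor ring. */
static inline uint32_t rx_pages_per_qpl_dqo(uint32_t rx_desc_cnt)
{
	return 2 * rx_desc_cnt;
}

/* Illustrative wrapper: GQI keeps a strict 1:1 pages-to-descriptors
 * relationship, DQO uses the doubled count. */
static uint32_t rx_qpl_page_count(int is_gqi, uint32_t ring_size)
{
	return is_gqi ? ring_size : rx_pages_per_qpl_dqo(ring_size);
}
```

Deriving the page count from the ring size at the point of use is what lets the cached rx_pages_per_qpl and rx_data_slot_cnt fields be deleted from struct gve_priv.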
Reviewed-by: Praveen Kaligineedi
Reviewed-by: Willem de Bruijn
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve.h        | 11 ++++++++---
 drivers/net/ethernet/google/gve/gve_adminq.c | 11 -----------
 drivers/net/ethernet/google/gve/gve_main.c   | 14 +++++++++-----
 drivers/net/ethernet/google/gve/gve_rx.c     |  2 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  4 ++--
 5 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index f009f7b3e68b..693d4b7d818b 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -63,7 +63,6 @@
 #define GVE_DEFAULT_HEADER_BUFFER_SIZE 128
 
 #define DQO_QPL_DEFAULT_TX_PAGES 512
-#define DQO_QPL_DEFAULT_RX_PAGES 2048
 
 /* Maximum TSO size supported on DQO */
 #define GVE_DQO_TX_MAX 0x3FFFF
@@ -714,8 +713,6 @@ struct gve_priv {
 	u16 tx_desc_cnt; /* num desc per ring */
 	u16 rx_desc_cnt; /* num desc per ring */
 	u16 tx_pages_per_qpl; /* Suggested number of pages per qpl for TX queues by NIC */
-	u16 rx_pages_per_qpl; /* Suggested number of pages per qpl for RX queues by NIC */
-	u16 rx_data_slot_cnt; /* rx buffer length */
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
 	struct bpf_prog *xdp_prog; /* XDP BPF program */
@@ -1038,6 +1035,14 @@ static inline u32 gve_rx_start_qpl_id(const struct gve_queue_config *tx_cfg)
 	return gve_get_rx_qpl_id(tx_cfg, 0);
 }
 
+static inline u32 gve_get_rx_pages_per_qpl_dqo(u32 rx_desc_cnt)
+{
+	/* For DQO, page count should be more than ring size for
+	 * out-of-order completions. Set it to two times of ring size.
+	 */
+	return 2 * rx_desc_cnt;
+}
+
 /* Returns a pointer to the next available tx qpl in the list of qpls */
 static inline struct gve_queue_page_list *
 gve_assign_tx_qpl(struct gve_tx_alloc_rings_cfg *cfg,
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index 2ff9327ec056..faeff20cd370 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -764,12 +764,8 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 	if (dev_op_dqo_qpl) {
 		priv->tx_pages_per_qpl =
 			be16_to_cpu(dev_op_dqo_qpl->tx_pages_per_qpl);
-		priv->rx_pages_per_qpl =
-			be16_to_cpu(dev_op_dqo_qpl->rx_pages_per_qpl);
 		if (priv->tx_pages_per_qpl == 0)
 			priv->tx_pages_per_qpl = DQO_QPL_DEFAULT_TX_PAGES;
-		if (priv->rx_pages_per_qpl == 0)
-			priv->rx_pages_per_qpl = DQO_QPL_DEFAULT_RX_PAGES;
 	}
 
 	if (dev_op_buffer_sizes &&
@@ -878,13 +874,6 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	mac = descriptor->mac;
 	dev_info(&priv->pdev->dev, "MAC addr: %pM\n", mac);
 	priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl);
-	priv->rx_data_slot_cnt = be16_to_cpu(descriptor->rx_pages_per_qpl);
-
-	if (gve_is_gqi(priv) && priv->rx_data_slot_cnt < priv->rx_desc_cnt) {
-		dev_err(&priv->pdev->dev, "rx_data_slot_cnt cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n",
-			priv->rx_data_slot_cnt);
-		priv->rx_desc_cnt = priv->rx_data_slot_cnt;
-	}
 	priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
 
 	gve_enable_supported_features(priv, supported_features_mask,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 166bd827a6d7..470447c0490f 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1103,13 +1103,13 @@ static int gve_alloc_n_qpls(struct gve_priv *priv,
 	return err;
 }
 
-static int gve_alloc_qpls(struct gve_priv *priv,
-			  struct gve_qpls_alloc_cfg *cfg)
+static int gve_alloc_qpls(struct gve_priv *priv, struct gve_qpls_alloc_cfg *cfg,
+			  struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues;
 	int rx_start_id, tx_num_qpls, rx_num_qpls;
 	struct gve_queue_page_list *qpls;
-	int page_count;
+	u32 page_count;
 	int err;
 
 	if (cfg->raw_addressing)
@@ -1141,8 +1141,12 @@ static int gve_alloc_qpls(struct gve_priv *priv,
 	/* For GQI_QPL number of pages allocated have 1:1 relationship with
 	 * number of descriptors. For DQO, number of pages required are
 	 * more than descriptors (because of out of order completions).
+	 * Set it to twice the number of descriptors.
 	 */
-	page_count = cfg->is_gqi ? priv->rx_data_slot_cnt : priv->rx_pages_per_qpl;
+	if (cfg->is_gqi)
+		page_count = rx_alloc_cfg->ring_size;
+	else
+		page_count = gve_get_rx_pages_per_qpl_dqo(rx_alloc_cfg->ring_size);
 	rx_num_qpls = gve_num_rx_qpls(cfg->rx_cfg, gve_is_qpl(priv));
 	err = gve_alloc_n_qpls(priv, qpls, page_count, rx_start_id, rx_num_qpls);
 	if (err)
@@ -1363,7 +1367,7 @@ static int gve_queues_mem_alloc(struct gve_priv *priv,
 {
 	int err;
 
-	err = gve_alloc_qpls(priv, qpls_alloc_cfg);
+	err = gve_alloc_qpls(priv, qpls_alloc_cfg, rx_alloc_cfg);
 	if (err) {
 		netif_err(priv, drv, priv->dev, "Failed to alloc QPLs\n");
 		return err;
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 20f5a9e7fae9..cd727e55ae0f 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -240,7 +240,7 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
-	u32 slots = priv->rx_data_slot_cnt;
+	u32 slots = cfg->ring_size;
 	int filled_pages;
 	size_t bytes;
 	int err;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 7c2ab1edfcb2..15108407b54f 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -178,7 +178,7 @@ static int gve_alloc_page_dqo(struct gve_rx_ring *rx,
 		return err;
 	} else {
 		idx = rx->dqo.next_qpl_page_idx;
-		if (idx >= priv->rx_pages_per_qpl) {
+		if (idx >= gve_get_rx_pages_per_qpl_dqo(priv->rx_desc_cnt)) {
 			net_err_ratelimited("%s: Out of QPL pages\n",
 					    priv->dev->name);
 			return -ENOMEM;
@@ -321,7 +321,7 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 
 	rx->dqo.num_buf_states = cfg->raw_addressing ?
 		min_t(s16, S16_MAX, buffer_queue_slots * 4) :
-		priv->rx_pages_per_qpl;
+		gve_get_rx_pages_per_qpl_dqo(cfg->ring_size);
 	rx->dqo.buf_states = kvcalloc(rx->dqo.num_buf_states,
 				      sizeof(rx->dqo.buf_states[0]),
 				      GFP_KERNEL);
-- 
2.44.0.478.gd926399ef9-goog

From nobody Thu Feb 12 02:59:32 2026
Date: Mon, 1 Apr 2024 23:45:29 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-5-hramamurthy@google.com>
Subject: [PATCH net-next 4/5] gve: add support to read ring size ranges from the device
From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, willemb@google.com, rushilg@google.com,
	jfraker@google.com, linux-kernel@vger.kernel.org

Add support to read the ring size change capability and the min and max
descriptor counts from the device and store them in the driver. Also
accommodate a special case where, depending on its version, the device
does not provide minimum ring sizes. In that case, rely on default
values for the minimums.
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
---
 drivers/net/ethernet/google/gve/gve.h        | 10 +++
 drivers/net/ethernet/google/gve/gve_adminq.c | 71 +++++++++++++++++---
 drivers/net/ethernet/google/gve/gve_adminq.h | 45 ++++++++-----
 3 files changed, 102 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 693d4b7d818b..669cacdae4f4 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -50,6 +50,10 @@
 /* PTYPEs are always 10 bits. */
 #define GVE_NUM_PTYPES	1024
 
+/* Default minimum ring size */
+#define GVE_DEFAULT_MIN_TX_RING_SIZE 256
+#define GVE_DEFAULT_MIN_RX_RING_SIZE 512
+
 #define GVE_DEFAULT_RX_BUFFER_SIZE 2048
 
 #define GVE_MAX_RX_BUFFER_SIZE 4096
@@ -712,6 +716,12 @@ struct gve_priv {
 	u16 num_event_counters;
 	u16 tx_desc_cnt; /* num desc per ring */
 	u16 rx_desc_cnt; /* num desc per ring */
+	u16 max_tx_desc_cnt;
+	u16 max_rx_desc_cnt;
+	u16 min_tx_desc_cnt;
+	u16 min_rx_desc_cnt;
+	bool modify_ring_size_enabled;
+	bool default_min_ring_size;
 	u16 tx_pages_per_qpl; /* Suggested number of pages per qpl for TX queues by NIC */
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index faeff20cd370..b2b619aa2310 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -32,6 +32,8 @@ struct gve_device_option *gve_get_next_option(struct gve_device_descriptor *desc
 	return option_end > descriptor_end ? NULL :
 				(struct gve_device_option *)option_end;
 }
 
+#define GVE_DEVICE_OPTION_NO_MIN_RING_SIZE	8
+
 static void
 gve_parse_device_option(struct gve_priv *priv,
 			struct gve_device_descriptor *device_descriptor,
@@ -41,7 +43,8 @@ void gve_parse_device_option(struct gve_priv *priv,
 			struct gve_device_option_dqo_rda **dev_op_dqo_rda,
 			struct gve_device_option_jumbo_frames **dev_op_jumbo_frames,
 			struct gve_device_option_dqo_qpl **dev_op_dqo_qpl,
-			struct gve_device_option_buffer_sizes **dev_op_buffer_sizes)
+			struct gve_device_option_buffer_sizes **dev_op_buffer_sizes,
+			struct gve_device_option_modify_ring **dev_op_modify_ring)
 {
 	u32 req_feat_mask = be32_to_cpu(option->required_features_mask);
 	u16 option_length = be16_to_cpu(option->option_length);
@@ -165,6 +168,27 @@ void gve_parse_device_option(struct gve_priv *priv,
 			 "Buffer Sizes");
 		*dev_op_buffer_sizes = (void *)(option + 1);
 		break;
+	case GVE_DEV_OPT_ID_MODIFY_RING:
+		if (option_length < GVE_DEVICE_OPTION_NO_MIN_RING_SIZE ||
+		    req_feat_mask != GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING) {
+			dev_warn(&priv->pdev->dev, GVE_DEVICE_OPTION_ERROR_FMT,
+				 "Modify Ring", (int)sizeof(**dev_op_modify_ring),
+				 GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING,
+				 option_length, req_feat_mask);
+			break;
+		}
+
+		if (option_length > sizeof(**dev_op_modify_ring)) {
+			dev_warn(&priv->pdev->dev,
+				 GVE_DEVICE_OPTION_TOO_BIG_FMT, "Modify Ring");
+		}
+
+		*dev_op_modify_ring = (void *)(option + 1);
+
+		/* device has not provided min ring size */
+		if (option_length == GVE_DEVICE_OPTION_NO_MIN_RING_SIZE)
+			priv->default_min_ring_size = true;
+		break;
 	default:
 		/* If we don't recognize the option just continue
 		 * without doing anything.
 		 */
@@ -183,7 +207,8 @@ gve_process_device_options(struct gve_priv *priv,
 			   struct gve_device_option_dqo_rda **dev_op_dqo_rda,
 			   struct gve_device_option_jumbo_frames **dev_op_jumbo_frames,
 			   struct gve_device_option_dqo_qpl **dev_op_dqo_qpl,
-			   struct gve_device_option_buffer_sizes **dev_op_buffer_sizes)
+			   struct gve_device_option_buffer_sizes **dev_op_buffer_sizes,
+			   struct gve_device_option_modify_ring **dev_op_modify_ring)
 {
 	const int num_options = be16_to_cpu(descriptor->num_device_options);
 	struct gve_device_option *dev_opt;
@@ -204,7 +229,8 @@ gve_process_device_options(struct gve_priv *priv,
 		gve_parse_device_option(priv, descriptor, dev_opt,
 					dev_op_gqi_rda, dev_op_gqi_qpl,
 					dev_op_dqo_rda, dev_op_jumbo_frames,
-					dev_op_dqo_qpl, dev_op_buffer_sizes);
+					dev_op_dqo_qpl, dev_op_buffer_sizes,
+					dev_op_modify_ring);
 		dev_opt = next_opt;
 	}
 
@@ -738,6 +764,12 @@ static void gve_set_default_desc_cnt(struct gve_priv *priv,
 {
 	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
+
+	/* set default ranges */
+	priv->max_tx_desc_cnt = priv->tx_desc_cnt;
+	priv->max_rx_desc_cnt = priv->rx_desc_cnt;
+	priv->min_tx_desc_cnt = priv->tx_desc_cnt;
+	priv->min_rx_desc_cnt = priv->rx_desc_cnt;
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -747,7 +779,9 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 					  const struct gve_device_option_dqo_qpl *dev_op_dqo_qpl,
 					  const struct gve_device_option_buffer_sizes
-					  *dev_op_buffer_sizes)
+					  *dev_op_buffer_sizes,
+					  const struct gve_device_option_modify_ring
+					  *dev_op_modify_ring)
 {
 	/* Before control reaches this point, the page-size-capped max MTU from
 	 * the gve_device_descriptor field has already been stored in
@@ -778,12 +812,33 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 			 "BUFFER SIZES device option enabled with max_rx_buffer_size of %u, header_buf_size of %u.\n",
 			 priv->max_rx_buffer_size, priv->header_buf_size);
 	}
+
+	/* Read and store ring size ranges given by device */
+	if (dev_op_modify_ring &&
+	    (supported_features_mask & GVE_SUP_MODIFY_RING_MASK)) {
+		priv->modify_ring_size_enabled = true;
+
+		/* max ring size for DQO QPL should not be overwritten because of device limit */
+		if (priv->queue_format != GVE_DQO_QPL_FORMAT) {
+			priv->max_rx_desc_cnt = be16_to_cpu(dev_op_modify_ring->max_rx_ring_size);
+			priv->max_tx_desc_cnt = be16_to_cpu(dev_op_modify_ring->max_tx_ring_size);
+		}
+		if (priv->default_min_ring_size) {
+			/* If device hasn't provided minimums, use default minimums */
+			priv->min_tx_desc_cnt = GVE_DEFAULT_MIN_TX_RING_SIZE;
+			priv->min_rx_desc_cnt = GVE_DEFAULT_MIN_RX_RING_SIZE;
+		} else {
+			priv->min_rx_desc_cnt = be16_to_cpu(dev_op_modify_ring->min_rx_ring_size);
+			priv->min_tx_desc_cnt = be16_to_cpu(dev_op_modify_ring->min_tx_ring_size);
+		}
+	}
 }
 
 int gve_adminq_describe_device(struct gve_priv *priv)
 {
 	struct gve_device_option_buffer_sizes *dev_op_buffer_sizes = NULL;
 	struct gve_device_option_jumbo_frames *dev_op_jumbo_frames = NULL;
+	struct gve_device_option_modify_ring *dev_op_modify_ring = NULL;
 	struct gve_device_option_gqi_rda *dev_op_gqi_rda = NULL;
 	struct gve_device_option_gqi_qpl *dev_op_gqi_qpl = NULL;
 	struct gve_device_option_dqo_rda *dev_op_dqo_rda = NULL;
@@ -815,9 +870,9 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 
 	err = gve_process_device_options(priv, descriptor, &dev_op_gqi_rda,
 					 &dev_op_gqi_qpl, &dev_op_dqo_rda,
-					 &dev_op_jumbo_frames,
-					 &dev_op_dqo_qpl,
-					 &dev_op_buffer_sizes);
+					 &dev_op_jumbo_frames, &dev_op_dqo_qpl,
+					 &dev_op_buffer_sizes,
+					 &dev_op_modify_ring);
 	if (err)
 		goto free_device_descriptor;
 
@@ -878,7 +933,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 
 	gve_enable_supported_features(priv, supported_features_mask,
 				      dev_op_jumbo_frames, dev_op_dqo_qpl,
-				      dev_op_buffer_sizes);
+				      dev_op_buffer_sizes, dev_op_modify_ring);
 
 free_device_descriptor:
 	dma_pool_free(priv->adminq_pool, descriptor, descriptor_bus);
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 3ff2028a7472..beedf2353847 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -133,6 +133,16 @@ struct gve_device_option_buffer_sizes {
 
 static_assert(sizeof(struct gve_device_option_buffer_sizes) == 8);
 
+struct gve_device_option_modify_ring {
+	__be32 supported_featured_mask;
+	__be16 max_rx_ring_size;
+	__be16 max_tx_ring_size;
+	__be16 min_rx_ring_size;
+	__be16 min_tx_ring_size;
+};
+
+static_assert(sizeof(struct gve_device_option_modify_ring) == 12);
+
 /* Terminology:
  *
  * RDA - Raw DMA Addressing - Buffers associated with SKBs are directly DMA
@@ -142,28 +152,31 @@ static_assert(sizeof(struct gve_device_option_buffer_sizes) == 8);
  * the device for read/write and data is copied from/to SKBs.
  */
 enum gve_dev_opt_id {
-	GVE_DEV_OPT_ID_GQI_RAW_ADDRESSING	= 0x1,
-	GVE_DEV_OPT_ID_GQI_RDA			= 0x2,
-	GVE_DEV_OPT_ID_GQI_QPL			= 0x3,
-	GVE_DEV_OPT_ID_DQO_RDA			= 0x4,
-	GVE_DEV_OPT_ID_DQO_QPL			= 0x7,
-	GVE_DEV_OPT_ID_JUMBO_FRAMES		= 0x8,
-	GVE_DEV_OPT_ID_BUFFER_SIZES		= 0xa,
+	GVE_DEV_OPT_ID_GQI_RAW_ADDRESSING	= 0x1,
+	GVE_DEV_OPT_ID_GQI_RDA			= 0x2,
+	GVE_DEV_OPT_ID_GQI_QPL			= 0x3,
+	GVE_DEV_OPT_ID_DQO_RDA			= 0x4,
+	GVE_DEV_OPT_ID_MODIFY_RING		= 0x6,
+	GVE_DEV_OPT_ID_DQO_QPL			= 0x7,
+	GVE_DEV_OPT_ID_JUMBO_FRAMES		= 0x8,
+	GVE_DEV_OPT_ID_BUFFER_SIZES		= 0xa,
 };
 
 enum gve_dev_opt_req_feat_mask {
-	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RAW_ADDRESSING	= 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RDA		= 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_QPL		= 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_RDA		= 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES		= 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_QPL		= 0x0,
-	GVE_DEV_OPT_REQ_FEAT_MASK_BUFFER_SIZES		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RAW_ADDRESSING	= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RDA		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_QPL		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_RDA		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_QPL		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_BUFFER_SIZES		= 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING		= 0x0,
 };
 
 enum gve_sup_feature_mask {
-	GVE_SUP_JUMBO_FRAMES_MASK	= 1 << 2,
-	GVE_SUP_BUFFER_SIZES_MASK	= 1 << 4,
+	GVE_SUP_MODIFY_RING_MASK	= 1 << 0,
+	GVE_SUP_JUMBO_FRAMES_MASK	= 1 << 2,
+	GVE_SUP_BUFFER_SIZES_MASK	= 1 << 4,
 };
 
 #define GVE_DEV_OPT_LEN_GQI_RAW_ADDRESSING 0x0
-- 
2.44.0.478.gd926399ef9-goog

From nobody Thu Feb 12 02:59:33 2026
Date: Mon, 1 Apr 2024 23:45:30 +0000
In-Reply-To: <20240401234530.3101900-1-hramamurthy@google.com>
References: <20240401234530.3101900-1-hramamurthy@google.com>
Message-ID: <20240401234530.3101900-6-hramamurthy@google.com>
Subject: [PATCH net-next 5/5] gve: add support to change ring size via ethtool
From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, pkaligineedi@google.com, shailend@google.com,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, willemb@google.com, rushilg@google.com,
	jfraker@google.com, linux-kernel@vger.kernel.org

Allow the user to change the ring size via ethtool if supported by the
device. The driver relies on the ring size ranges queried from the
device to validate the ring sizes requested by the user.
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
---
 drivers/net/ethernet/google/gve/gve.h         |  8 ++
 drivers/net/ethernet/google/gve/gve_ethtool.c | 85 +++++++++++++++++--
 drivers/net/ethernet/google/gve/gve_main.c    | 16 ++--
 3 files changed, 95 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 669cacdae4f4..e97633b68e25 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1159,6 +1159,14 @@ int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split);
 /* Reset */
 void gve_schedule_reset(struct gve_priv *priv);
 int gve_reset(struct gve_priv *priv, bool attempt_teardown);
+void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
+			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg);
+int gve_adjust_config(struct gve_priv *priv,
+		      struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+		      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+		      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg);
 int gve_adjust_queues(struct gve_priv *priv,
 		      struct gve_queue_config new_rx_config,
 		      struct gve_queue_config new_tx_config);
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index dbe05402d40b..815dead31673 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -490,8 +490,8 @@ static void gve_get_ringparam(struct net_device *netdev,
 {
 	struct gve_priv *priv = netdev_priv(netdev);
 
-	cmd->rx_max_pending = priv->rx_desc_cnt;
-	cmd->tx_max_pending = priv->tx_desc_cnt;
+	cmd->rx_max_pending = priv->max_rx_desc_cnt;
+	cmd->tx_max_pending = priv->max_tx_desc_cnt;
 	cmd->rx_pending = priv->rx_desc_cnt;
 	cmd->tx_pending = priv->tx_desc_cnt;
 
@@ -503,20 +503,93 @@ static void gve_get_ringparam(struct net_device *netdev,
 	kernel_cmd->tcp_data_split = ETHTOOL_TCP_DATA_SPLIT_DISABLED;
 }
 
+static int gve_adjust_ring_sizes(struct gve_priv *priv,
+				 u16 new_tx_desc_cnt,
+				 u16 new_rx_desc_cnt)
+{
+	struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0};
+	struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0};
+	struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0};
+	struct gve_qpl_config new_qpl_cfg;
+	int err;
+
+	/* get current queue configuration */
+	gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg,
+				&tx_alloc_cfg, &rx_alloc_cfg);
+
+	/* copy over the new ring_size from ethtool */
+	tx_alloc_cfg.ring_size = new_tx_desc_cnt;
+	rx_alloc_cfg.ring_size = new_rx_desc_cnt;
+
+	/* qpl_cfg is not read-only, it contains a map that gets updated as
+	 * rings are allocated, which is why we cannot use the yet unreleased
+	 * one in priv.
+	 */
+	qpls_alloc_cfg.qpl_cfg = &new_qpl_cfg;
+	tx_alloc_cfg.qpl_cfg = &new_qpl_cfg;
+	rx_alloc_cfg.qpl_cfg = &new_qpl_cfg;
+
+	if (netif_running(priv->dev)) {
+		err = gve_adjust_config(priv, &qpls_alloc_cfg,
+					&tx_alloc_cfg, &rx_alloc_cfg);
+		if (err)
+			return err;
+	}
+
+	/* Set new ring_size for the next up */
+	priv->tx_desc_cnt = new_tx_desc_cnt;
+	priv->rx_desc_cnt = new_rx_desc_cnt;
+
+	return 0;
+}
+
+static int gve_validate_req_ring_size(struct gve_priv *priv, u16 new_tx_desc_cnt,
+				      u16 new_rx_desc_cnt)
+{
+	/* check for valid range */
+	if (new_tx_desc_cnt < priv->min_tx_desc_cnt ||
+	    new_tx_desc_cnt > priv->max_tx_desc_cnt ||
+	    new_rx_desc_cnt < priv->min_rx_desc_cnt ||
+	    new_rx_desc_cnt > priv->max_rx_desc_cnt) {
+		dev_err(&priv->pdev->dev, "Requested descriptor count out of range\n");
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(new_tx_desc_cnt) || !is_power_of_2(new_rx_desc_cnt)) {
+		dev_err(&priv->pdev->dev, "Requested descriptor count has to be a power of 2\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int gve_set_ringparam(struct net_device *netdev,
 			     struct ethtool_ringparam *cmd,
 			     struct kernel_ethtool_ringparam *kernel_cmd,
 			     struct netlink_ext_ack *extack)
 {
 	struct gve_priv *priv = netdev_priv(netdev);
+	u16 new_tx_cnt, new_rx_cnt;
+	int err;
+
+	err = gve_set_hsplit_config(priv, kernel_cmd->tcp_data_split);
+	if (err)
+		return err;
 
-	if (priv->tx_desc_cnt != cmd->tx_pending ||
-	    priv->rx_desc_cnt != cmd->rx_pending) {
-		dev_info(&priv->pdev->dev, "Modify ring size is not supported.\n");
+	if (cmd->tx_pending == priv->tx_desc_cnt && cmd->rx_pending == priv->rx_desc_cnt)
+		return 0;
+
+	if (!priv->modify_ring_size_enabled) {
+		dev_err(&priv->pdev->dev, "Modify ring size is not supported.\n");
 		return -EOPNOTSUPP;
 	}
 
-	return gve_set_hsplit_config(priv, kernel_cmd->tcp_data_split);
+	new_tx_cnt = cmd->tx_pending;
+	new_rx_cnt = cmd->rx_pending;
+
+	if (gve_validate_req_ring_size(priv, new_tx_cnt, new_rx_cnt))
+		return -EINVAL;
+
+	return gve_adjust_ring_sizes(priv, new_tx_cnt, new_rx_cnt);
 }
 
 static int gve_user_reset(struct net_device *netdev, u32 *flags)
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 470447c0490f..a515e5af843c 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1314,10 +1314,10 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv,
 	cfg->rx = priv->rx;
 }
 
-static void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
-				    struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
-				    struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
-				    struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
+void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
+			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	gve_qpls_get_curr_alloc_cfg(priv, qpls_alloc_cfg);
 	gve_tx_get_curr_alloc_cfg(priv, tx_alloc_cfg);
@@ -1867,10 +1867,10 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }
 
-static int gve_adjust_config(struct gve_priv *priv,
-			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
-			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
-			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
+int gve_adjust_config(struct gve_priv *priv,
+		      struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
+		      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
+		      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	int err;
 
-- 
2.44.0.478.gd926399ef9-goog