From: Vikas Gupta
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
 vsrama-krishna.nemani@broadcom.com, Vikas Gupta,
 Bhargava Chenna Marreddy, Rajashekar Hudumula
Subject: [net-next, v3 06/10] bng_en: Add backing store support
Date: Tue, 1 Jul 2025 14:35:04 +0000
Message-ID: <20250701143511.280702-7-vikas.gupta@broadcom.com>
In-Reply-To: <20250701143511.280702-1-vikas.gupta@broadcom.com>
References: <20250701143511.280702-1-vikas.gupta@broadcom.com>

Backing store, or context memory, on the host helps the device manage
rings, stats, and other resources. Context memory is allocated with the
help of the ring alloc/free functions.
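For reference, the per-entry initialization pattern that bnge_init_ctx_mem()
in this patch applies to each backing-store page can be sketched as a
standalone C snippet. The function and macro names here are illustrative
stand-ins, not driver code: either the whole page is filled with the init
value, or the value is written at a fixed offset inside every entry:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrors BNGE_CTX_INIT_INVALID_OFFSET: no per-entry offset was given. */
#define INIT_INVALID_OFFSET 0xffff

/* Initialize one page of context memory of 'len' bytes, made up of
 * 'entry_size'-byte entries: either memset the whole page with init_val,
 * or poke init_val at init_offset inside every entry.
 */
static void init_ctx_page(uint8_t *page, int len, uint16_t entry_size,
			  uint16_t init_offset, uint8_t init_val)
{
	int i;

	if (!init_val)
		return;
	if (init_offset == INIT_INVALID_OFFSET) {
		memset(page, init_val, len);
		return;
	}
	for (i = 0; i < len; i += entry_size)
		page[i + init_offset] = init_val;
}
```

The firmware reports the offset in 4-byte units (see
bnge_init_ctx_initializer() below, which multiplies it by 4 before use).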
Signed-off-by: Vikas Gupta
Reviewed-by: Bhargava Chenna Marreddy
Reviewed-by: Rajashekar Hudumula
---
 drivers/net/ethernet/broadcom/bnge/bnge.h     |  18 +
 .../ethernet/broadcom/bnge/bnge_hwrm_lib.c    | 168 +++++++++
 .../ethernet/broadcom/bnge/bnge_hwrm_lib.h    |   4 +
 .../net/ethernet/broadcom/bnge/bnge_rmem.c    | 337 ++++++++++++++++++
 .../net/ethernet/broadcom/bnge/bnge_rmem.h    | 153 ++++++++
 5 files changed, 680 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnge/bnge.h b/drivers/net/ethernet/broadcom/bnge/bnge.h
index 60af0517c45e..01f64a10729c 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge.h
@@ -9,6 +9,7 @@
 
 #include
 #include "../bnxt/bnxt_hsi.h"
+#include "bnge_rmem.h"
 
 #define DRV_VER_MAJ	1
 #define DRV_VER_MIN	15
@@ -52,6 +53,13 @@ enum {
 	BNGE_FW_CAP_VNIC_RE_FLUSH	= BIT_ULL(26),
 };
 
+enum {
+	BNGE_EN_ROCE_V1	= BIT_ULL(0),
+	BNGE_EN_ROCE_V2	= BIT_ULL(1),
+};
+
+#define BNGE_EN_ROCE	(BNGE_EN_ROCE_V1 | BNGE_EN_ROCE_V2)
+
 struct bnge_dev {
 	struct device	*dev;
 	struct pci_dev	*pdev;
@@ -89,6 +97,16 @@ struct bnge_dev {
 #define BNGE_STATE_DRV_REGISTERED	0
 
 	u64			fw_cap;
+
+	/* Backing stores */
+	struct bnge_ctx_mem_info	*ctx;
+
+	u64			flags;
 };
 
+static inline bool bnge_is_roce_en(struct bnge_dev *bd)
+{
+	return bd->flags & BNGE_EN_ROCE;
+}
+
 #endif /* _BNGE_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index 25ac73ac37ba..dc69bd1461f9 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -10,6 +10,7 @@
 #include "../bnxt/bnxt_hsi.h"
 #include "bnge_hwrm.h"
 #include "bnge_hwrm_lib.h"
+#include "bnge_rmem.h"
 
 int bnge_hwrm_ver_get(struct bnge_dev *bd)
 {
@@ -211,3 +212,170 @@ int bnge_hwrm_func_drv_unrgtr(struct bnge_dev *bd)
 		return rc;
 	return bnge_hwrm_req_send(bd, req);
 }
+
+static void bnge_init_ctx_initializer(struct bnge_ctx_mem_type *ctxm,
+				      u8 init_val, u8 init_offset,
+				      bool init_mask_set)
+{
+	ctxm->init_value = init_val;
+	ctxm->init_offset = BNGE_CTX_INIT_INVALID_OFFSET;
+	if (init_mask_set)
+		ctxm->init_offset = init_offset * 4;
+	else
+		ctxm->init_value = 0;
+}
+
+static int bnge_alloc_all_ctx_pg_info(struct bnge_dev *bd, int ctx_max)
+{
+	struct bnge_ctx_mem_info *ctx = bd->ctx;
+	u16 type;
+
+	for (type = 0; type < ctx_max; type++) {
+		struct bnge_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+		int n = 1;
+
+		if (!ctxm->max_entries)
+			continue;
+
+		if (ctxm->instance_bmap)
+			n = hweight32(ctxm->instance_bmap);
+		ctxm->pg_info = kcalloc(n, sizeof(*ctxm->pg_info), GFP_KERNEL);
+		if (!ctxm->pg_info)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+#define BNGE_CTX_INIT_VALID(flags)	\
+	(!!((flags) &			\
+	    FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_ENABLE_CTX_KIND_INIT))
+
+int bnge_hwrm_func_backing_store_qcaps(struct bnge_dev *bd)
+{
+	struct hwrm_func_backing_store_qcaps_v2_output *resp;
+	struct hwrm_func_backing_store_qcaps_v2_input *req;
+	struct bnge_ctx_mem_info *ctx;
+	u16 type;
+	int rc;
+
+	if (bd->ctx)
+		return 0;
+
+	rc = bnge_hwrm_req_init(bd, req, HWRM_FUNC_BACKING_STORE_QCAPS_V2);
+	if (rc)
+		return rc;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+	bd->ctx = ctx;
+
+	resp = bnge_hwrm_req_hold(bd, req);
+
+	for (type = 0; type < BNGE_CTX_V2_MAX; ) {
+		struct bnge_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+		u8 init_val, init_off, i;
+		__le32 *p;
+		u32 flags;
+
+		req->type = cpu_to_le16(type);
+		rc = bnge_hwrm_req_send(bd, req);
+		if (rc)
+			goto ctx_done;
+		flags = le32_to_cpu(resp->flags);
+		type = le16_to_cpu(resp->next_valid_type);
+		if (!(flags &
+		      FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_TYPE_VALID))
+			continue;
+
+		ctxm->type = le16_to_cpu(resp->type);
+		ctxm->entry_size = le16_to_cpu(resp->entry_size);
+		ctxm->flags = flags;
+		ctxm->instance_bmap = le32_to_cpu(resp->instance_bit_map);
+		ctxm->entry_multiple = resp->entry_multiple;
+		ctxm->max_entries = le32_to_cpu(resp->max_num_entries);
+		ctxm->min_entries = le32_to_cpu(resp->min_num_entries);
+		init_val = resp->ctx_init_value;
+		init_off = resp->ctx_init_offset;
+		bnge_init_ctx_initializer(ctxm, init_val, init_off,
+					  BNGE_CTX_INIT_VALID(flags));
+		ctxm->split_entry_cnt = min_t(u8, resp->subtype_valid_cnt,
+					      BNGE_MAX_SPLIT_ENTRY);
+		for (i = 0, p = &resp->split_entry_0; i < ctxm->split_entry_cnt;
+		     i++, p++)
+			ctxm->split[i] = le32_to_cpu(*p);
+	}
+	rc = bnge_alloc_all_ctx_pg_info(bd, BNGE_CTX_V2_MAX);
+
+ctx_done:
+	bnge_hwrm_req_drop(bd, req);
+	return rc;
+}
+
+static void bnge_hwrm_set_pg_attr(struct bnge_ring_mem_info *rmem, u8 *pg_attr,
+				  __le64 *pg_dir)
+{
+	if (!rmem->nr_pages)
+		return;
+
+	BNGE_SET_CTX_PAGE_ATTR(*pg_attr);
+	if (rmem->depth >= 1) {
+		if (rmem->depth == 2)
+			*pg_attr |= 2;
+		else
+			*pg_attr |= 1;
+		*pg_dir = cpu_to_le64(rmem->dma_pg_tbl);
+	} else {
+		*pg_dir = cpu_to_le64(rmem->dma_arr[0]);
+	}
+}
+
+int bnge_hwrm_func_backing_store(struct bnge_dev *bd,
+				 struct bnge_ctx_mem_type *ctxm,
+				 bool last)
+{
+	struct hwrm_func_backing_store_cfg_v2_input *req;
+	u32 instance_bmap = ctxm->instance_bmap;
+	int i, j, rc = 0, n = 1;
+	__le32 *p;
+
+	if (!(ctxm->flags & BNGE_CTX_MEM_TYPE_VALID) || !ctxm->pg_info)
+		return 0;
+
+	if (instance_bmap)
+		n = hweight32(ctxm->instance_bmap);
+	else
+		instance_bmap = 1;
+
+	rc = bnge_hwrm_req_init(bd, req, HWRM_FUNC_BACKING_STORE_CFG_V2);
+	if (rc)
+		return rc;
+	bnge_hwrm_req_hold(bd, req);
+	req->type = cpu_to_le16(ctxm->type);
+	req->entry_size = cpu_to_le16(ctxm->entry_size);
+	req->subtype_valid_cnt = ctxm->split_entry_cnt;
+	for (i = 0, p = &req->split_entry_0; i < ctxm->split_entry_cnt; i++)
+		p[i] = cpu_to_le32(ctxm->split[i]);
+	for (i = 0, j = 0; j < n && !rc; i++) {
+		struct bnge_ctx_pg_info *ctx_pg;
+
+		if (!(instance_bmap & (1 << i)))
+			continue;
+		req->instance = cpu_to_le16(i);
+		ctx_pg = &ctxm->pg_info[j++];
+		if (!ctx_pg->entries)
+			continue;
+		req->num_entries = cpu_to_le32(ctx_pg->entries);
+		bnge_hwrm_set_pg_attr(&ctx_pg->ring_mem,
+				      &req->page_size_pbl_level,
+				      &req->page_dir);
+		if (last && j == n)
+			req->flags =
+				cpu_to_le32(BNGE_BS_CFG_ALL_DONE);
+		rc = bnge_hwrm_req_send(bd, req);
+	}
+	bnge_hwrm_req_drop(bd, req);
+
+	return rc;
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
index 9308d4fe64d2..c04291d74bf0 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
@@ -12,5 +12,9 @@ int bnge_hwrm_func_drv_unrgtr(struct bnge_dev *bd);
 int bnge_hwrm_vnic_qcaps(struct bnge_dev *bd);
 int bnge_hwrm_nvm_dev_info(struct bnge_dev *bd,
			   struct hwrm_nvm_get_dev_info_output *nvm_dev_info);
+int bnge_hwrm_func_backing_store(struct bnge_dev *bd,
+				 struct bnge_ctx_mem_type *ctxm,
+				 bool last);
+int bnge_hwrm_func_backing_store_qcaps(struct bnge_dev *bd);
 
 #endif /* _BNGE_HWRM_LIB_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_rmem.c b/drivers/net/ethernet/broadcom/bnge/bnge_rmem.c
index ef232c4217bc..0e935cc46da6 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_rmem.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_rmem.c
@@ -15,6 +15,24 @@
 #include "bnge_hwrm_lib.h"
 #include "bnge_rmem.h"
 
+static void bnge_init_ctx_mem(struct bnge_ctx_mem_type *ctxm,
+			      void *p, int len)
+{
+	u8 init_val = ctxm->init_value;
+	u16 offset = ctxm->init_offset;
+	u8 *p2 = p;
+	int i;
+
+	if (!init_val)
+		return;
+	if (offset == BNGE_CTX_INIT_INVALID_OFFSET) {
+		memset(p, init_val, len);
+		return;
+	}
+	for (i = 0; i < len; i += ctxm->entry_size)
+		*(p2 + i + offset) = init_val;
+}
+
 void bnge_free_ring(struct bnge_dev *bd, struct bnge_ring_mem_info *rmem)
 {
 	struct pci_dev *pdev = bd->pdev;
@@ -79,6 +97,10 @@ int bnge_alloc_ring(struct bnge_dev *bd, struct bnge_ring_mem_info *rmem)
 		if (!rmem->pg_arr[i])
 			return -ENOMEM;
 
+		if (rmem->ctx_mem)
+			bnge_init_ctx_mem(rmem->ctx_mem, rmem->pg_arr[i],
+					  rmem->page_size);
+
 		if (rmem->nr_pages > 1 || rmem->depth > 0) {
 			if (i == rmem->nr_pages - 2 &&
 			    (rmem->flags & BNGE_RMEM_RING_PTE_FLAG))
@@ -99,3 +121,318 @@ int bnge_alloc_ring(struct bnge_dev *bd, struct bnge_ring_mem_info *rmem)
 
 	return 0;
 }
+
+static int bnge_alloc_ctx_one_lvl(struct bnge_dev *bd,
+				  struct bnge_ctx_pg_info *ctx_pg)
+{
+	struct bnge_ring_mem_info *rmem = &ctx_pg->ring_mem;
+
+	rmem->page_size = BNGE_PAGE_SIZE;
+	rmem->pg_arr = ctx_pg->ctx_pg_arr;
+	rmem->dma_arr = ctx_pg->ctx_dma_arr;
+	rmem->flags = BNGE_RMEM_VALID_PTE_FLAG;
+	if (rmem->depth >= 1)
+		rmem->flags |= BNGE_RMEM_USE_FULL_PAGE_FLAG;
+	return bnge_alloc_ring(bd, rmem);
+}
+
+static int bnge_alloc_ctx_pg_tbls(struct bnge_dev *bd,
+				  struct bnge_ctx_pg_info *ctx_pg, u32 mem_size,
+				  u8 depth, struct bnge_ctx_mem_type *ctxm)
+{
+	struct bnge_ring_mem_info *rmem = &ctx_pg->ring_mem;
+	int rc;
+
+	if (!mem_size)
+		return -EINVAL;
+
+	ctx_pg->nr_pages = DIV_ROUND_UP(mem_size, BNGE_PAGE_SIZE);
+	if (ctx_pg->nr_pages > MAX_CTX_TOTAL_PAGES) {
+		ctx_pg->nr_pages = 0;
+		return -EINVAL;
+	}
+	if (ctx_pg->nr_pages > MAX_CTX_PAGES || depth > 1) {
+		int nr_tbls, i;
+
+		rmem->depth = 2;
+		ctx_pg->ctx_pg_tbl = kcalloc(MAX_CTX_PAGES, sizeof(ctx_pg),
+					     GFP_KERNEL);
+		if (!ctx_pg->ctx_pg_tbl)
+			return -ENOMEM;
+		nr_tbls = DIV_ROUND_UP(ctx_pg->nr_pages, MAX_CTX_PAGES);
+		rmem->nr_pages = nr_tbls;
+		rc = bnge_alloc_ctx_one_lvl(bd, ctx_pg);
+		if (rc)
+			return rc;
+		for (i = 0; i < nr_tbls; i++) {
+			struct bnge_ctx_pg_info *pg_tbl;
+
+			pg_tbl = kzalloc(sizeof(*pg_tbl), GFP_KERNEL);
+			if (!pg_tbl)
+				return -ENOMEM;
+			ctx_pg->ctx_pg_tbl[i] = pg_tbl;
+			rmem = &pg_tbl->ring_mem;
+			rmem->pg_tbl = ctx_pg->ctx_pg_arr[i];
+			rmem->dma_pg_tbl = ctx_pg->ctx_dma_arr[i];
+			rmem->depth = 1;
+			rmem->nr_pages = MAX_CTX_PAGES;
+			rmem->ctx_mem = ctxm;
+			if (i == (nr_tbls - 1)) {
+				int rem = ctx_pg->nr_pages % MAX_CTX_PAGES;
+
+				if (rem)
+					rmem->nr_pages = rem;
+			}
+			rc = bnge_alloc_ctx_one_lvl(bd, pg_tbl);
+			if (rc)
+				break;
+		}
+	} else {
+		rmem->nr_pages = DIV_ROUND_UP(mem_size, BNGE_PAGE_SIZE);
+		if (rmem->nr_pages > 1 || depth)
+			rmem->depth = 1;
+		rmem->ctx_mem = ctxm;
+		rc = bnge_alloc_ctx_one_lvl(bd, ctx_pg);
+	}
+
+	return rc;
+}
+
+static void bnge_free_ctx_pg_tbls(struct bnge_dev *bd,
+				  struct bnge_ctx_pg_info *ctx_pg)
+{
+	struct bnge_ring_mem_info *rmem = &ctx_pg->ring_mem;
+
+	if (rmem->depth > 1 || ctx_pg->nr_pages > MAX_CTX_PAGES ||
+	    ctx_pg->ctx_pg_tbl) {
+		int i, nr_tbls = rmem->nr_pages;
+
+		for (i = 0; i < nr_tbls; i++) {
+			struct bnge_ctx_pg_info *pg_tbl;
+			struct bnge_ring_mem_info *rmem2;
+
+			pg_tbl = ctx_pg->ctx_pg_tbl[i];
+			if (!pg_tbl)
+				continue;
+			rmem2 = &pg_tbl->ring_mem;
+			bnge_free_ring(bd, rmem2);
+			ctx_pg->ctx_pg_arr[i] = NULL;
+			kfree(pg_tbl);
+			ctx_pg->ctx_pg_tbl[i] = NULL;
+		}
+		kfree(ctx_pg->ctx_pg_tbl);
+		ctx_pg->ctx_pg_tbl = NULL;
+	}
+	bnge_free_ring(bd, rmem);
+	ctx_pg->nr_pages = 0;
+}
+
+static int bnge_setup_ctxm_pg_tbls(struct bnge_dev *bd,
+				   struct bnge_ctx_mem_type *ctxm, u32 entries,
+				   u8 pg_lvl)
+{
+	struct bnge_ctx_pg_info *ctx_pg = ctxm->pg_info;
+	int i, rc = 0, n = 1;
+	u32 mem_size;
+
+	if (!ctxm->entry_size || !ctx_pg)
+		return -EINVAL;
+	if (ctxm->instance_bmap)
+		n = hweight32(ctxm->instance_bmap);
+	if (ctxm->entry_multiple)
+		entries = roundup(entries, ctxm->entry_multiple);
+	entries = clamp_t(u32, entries, ctxm->min_entries, ctxm->max_entries);
+	mem_size = entries * ctxm->entry_size;
+	for (i = 0; i < n && !rc; i++) {
+		ctx_pg[i].entries = entries;
+		rc = bnge_alloc_ctx_pg_tbls(bd, &ctx_pg[i], mem_size, pg_lvl,
+					    ctxm->init_value ? ctxm : NULL);
+	}
+
+	return rc;
+}
+
+static int bnge_backing_store_cfg(struct bnge_dev *bd, u32 ena)
+{
+	struct bnge_ctx_mem_info *ctx = bd->ctx;
+	struct bnge_ctx_mem_type *ctxm;
+	u16 last_type;
+	int rc = 0;
+	u16 type;
+
+	if (!ena)
+		return 0;
+	else if (ena & FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM)
+		last_type = BNGE_CTX_MAX - 1;
+	else
+		last_type = BNGE_CTX_L2_MAX - 1;
+	ctx->ctx_arr[last_type].last = 1;
+
+	for (type = 0; type < BNGE_CTX_V2_MAX; type++) {
+		ctxm = &ctx->ctx_arr[type];
+
+		rc = bnge_hwrm_func_backing_store(bd, ctxm, ctxm->last);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+void bnge_free_ctx_mem(struct bnge_dev *bd)
+{
+	struct bnge_ctx_mem_info *ctx = bd->ctx;
+	u16 type;
+
+	if (!ctx)
+		return;
+
+	for (type = 0; type < BNGE_CTX_V2_MAX; type++) {
+		struct bnge_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+		struct bnge_ctx_pg_info *ctx_pg = ctxm->pg_info;
+		int i, n = 1;
+
+		if (!ctx_pg)
+			continue;
+		if (ctxm->instance_bmap)
+			n = hweight32(ctxm->instance_bmap);
+		for (i = 0; i < n; i++)
+			bnge_free_ctx_pg_tbls(bd, &ctx_pg[i]);
+
+		kfree(ctx_pg);
+		ctxm->pg_info = NULL;
+	}
+
+	ctx->flags &= ~BNGE_CTX_FLAG_INITED;
+	kfree(ctx);
+	bd->ctx = NULL;
+}
+
+#define FUNC_BACKING_STORE_CFG_REQ_DFLT_ENABLES		\
+	(FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP |	\
+	 FUNC_BACKING_STORE_CFG_REQ_ENABLES_SRQ |	\
+	 FUNC_BACKING_STORE_CFG_REQ_ENABLES_CQ |	\
+	 FUNC_BACKING_STORE_CFG_REQ_ENABLES_VNIC |	\
+	 FUNC_BACKING_STORE_CFG_REQ_ENABLES_STAT)
+
+int bnge_alloc_ctx_mem(struct bnge_dev *bd)
+{
+	struct bnge_ctx_mem_type *ctxm;
+	struct bnge_ctx_mem_info *ctx;
+	u32 l2_qps, qp1_qps, max_qps;
+	u32 ena, entries_sp, entries;
+	u32 srqs, max_srqs, min;
+	u32 num_mr, num_ah;
+	u32 extra_srqs = 0;
+	u32 extra_qps = 0;
+	u32 fast_qpmd_qps;
+	u8 pg_lvl = 1;
+	int i, rc;
+
+	rc = bnge_hwrm_func_backing_store_qcaps(bd);
+	if (rc) {
+		dev_err(bd->dev, "Failed querying ctx mem caps, rc: %d\n", rc);
+		return rc;
+	}
+
+	ctx = bd->ctx;
+	if (!ctx || (ctx->flags & BNGE_CTX_FLAG_INITED))
+		return 0;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_QP];
+	l2_qps = ctxm->qp_l2_entries;
+	qp1_qps = ctxm->qp_qp1_entries;
+	fast_qpmd_qps = ctxm->qp_fast_qpmd_entries;
+	max_qps = ctxm->max_entries;
+	ctxm = &ctx->ctx_arr[BNGE_CTX_SRQ];
+	srqs = ctxm->srq_l2_entries;
+	max_srqs = ctxm->max_entries;
+	ena = 0;
+	if (bnge_is_roce_en(bd) && !is_kdump_kernel()) {
+		pg_lvl = 2;
+		extra_qps = min_t(u32, 65536, max_qps - l2_qps - qp1_qps);
+		/* allocate extra qps if fast qp destroy feature enabled */
+		extra_qps += fast_qpmd_qps;
+		extra_srqs = min_t(u32, 8192, max_srqs - srqs);
+		if (fast_qpmd_qps)
+			ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP_FAST_QPMD;
+	}
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_QP];
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, l2_qps + qp1_qps + extra_qps,
+				     pg_lvl);
+	if (rc)
+		return rc;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_SRQ];
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, srqs + extra_srqs, pg_lvl);
+	if (rc)
+		return rc;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_CQ];
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, ctxm->cq_l2_entries +
+				     extra_qps * 2, pg_lvl);
+	if (rc)
+		return rc;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_VNIC];
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, ctxm->max_entries, 1);
+	if (rc)
+		return rc;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_STAT];
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, ctxm->max_entries, 1);
+	if (rc)
+		return rc;
+
+	if (!bnge_is_roce_en(bd))
+		goto skip_rdma;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_MRAV];
+	/* 128K extra is needed to accommodate static AH context
+	 * allocation by f/w.
+	 */
+	num_mr = min_t(u32, ctxm->max_entries / 2, 1024 * 256);
+	num_ah = min_t(u32, num_mr, 1024 * 128);
+	ctxm->split_entry_cnt = BNGE_CTX_MRAV_AV_SPLIT_ENTRY + 1;
+	if (!ctxm->mrav_av_entries || ctxm->mrav_av_entries > num_ah)
+		ctxm->mrav_av_entries = num_ah;
+
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, num_mr + num_ah, 2);
+	if (rc)
+		return rc;
+	ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_TIM];
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, l2_qps + qp1_qps + extra_qps, 1);
+	if (rc)
+		return rc;
+	ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM;
+
+skip_rdma:
+	ctxm = &ctx->ctx_arr[BNGE_CTX_STQM];
+	min = ctxm->min_entries;
+	entries_sp = ctx->ctx_arr[BNGE_CTX_VNIC].vnic_entries + l2_qps +
+		     2 * (extra_qps + qp1_qps) + min;
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, entries_sp, 2);
+	if (rc)
+		return rc;
+
+	ctxm = &ctx->ctx_arr[BNGE_CTX_FTQM];
+	entries = l2_qps + 2 * (extra_qps + qp1_qps);
+	rc = bnge_setup_ctxm_pg_tbls(bd, ctxm, entries, 2);
+	if (rc)
+		return rc;
+	for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++)
+		ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP << i;
+	ena |= FUNC_BACKING_STORE_CFG_REQ_DFLT_ENABLES;
+
+	rc = bnge_backing_store_cfg(bd, ena);
+	if (rc) {
+		dev_err(bd->dev, "Failed configuring ctx mem, rc: %d\n", rc);
+		return rc;
+	}
+	ctx->flags |= BNGE_CTX_FLAG_INITED;
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_rmem.h b/drivers/net/ethernet/broadcom/bnge/bnge_rmem.h
index 56de31ed6613..300f1d8268ef 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_rmem.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_rmem.h
@@ -4,6 +4,9 @@
 #ifndef _BNGE_RMEM_H_
 #define _BNGE_RMEM_H_
 
+struct bnge_ctx_mem_type;
+struct bnge_dev;
+
 #define PTU_PTE_VALID 0x1UL
 #define PTU_PTE_LAST 0x2UL
 #define PTU_PTE_NEXT_TO_LAST 0x4UL
@@ -27,9 +30,159 @@ struct bnge_ring_mem_info {
 
 	int vmem_size;
 	void **vmem;
+
+	struct bnge_ctx_mem_type *ctx_mem;
+};
+
+/* The hardware supports certain page sizes.
+ * Use the supported page sizes to allocate the rings.
+ */
+#if (PAGE_SHIFT < 12)
+#define BNGE_PAGE_SHIFT	12
+#elif (PAGE_SHIFT <= 13)
+#define BNGE_PAGE_SHIFT	PAGE_SHIFT
+#elif (PAGE_SHIFT < 16)
+#define BNGE_PAGE_SHIFT	13
+#else
+#define BNGE_PAGE_SHIFT	16
+#endif
+#define BNGE_PAGE_SIZE	(1 << BNGE_PAGE_SHIFT)
+/* The RXBD length is 16-bit so we can only support page sizes < 64K */
+#if (PAGE_SHIFT > 15)
+#define BNGE_RX_PAGE_SHIFT 15
+#else
+#define BNGE_RX_PAGE_SHIFT PAGE_SHIFT
+#endif
+#define MAX_CTX_PAGES	(BNGE_PAGE_SIZE / 8)
+#define MAX_CTX_TOTAL_PAGES	(MAX_CTX_PAGES * MAX_CTX_PAGES)
+
+struct bnge_ctx_pg_info {
+	u32 entries;
+	u32 nr_pages;
+	void *ctx_pg_arr[MAX_CTX_PAGES];
+	dma_addr_t ctx_dma_arr[MAX_CTX_PAGES];
+	struct bnge_ring_mem_info ring_mem;
+	struct bnge_ctx_pg_info **ctx_pg_tbl;
+};
+
+#define BNGE_MAX_TQM_SP_RINGS		1
+#define BNGE_MAX_TQM_FP_RINGS		8
+#define BNGE_MAX_TQM_RINGS		\
+	(BNGE_MAX_TQM_SP_RINGS + BNGE_MAX_TQM_FP_RINGS)
+#define BNGE_BACKING_STORE_CFG_LEGACY_LEN	256
+#define BNGE_SET_CTX_PAGE_ATTR(attr)					\
+do {									\
+	if (BNGE_PAGE_SIZE == 0x2000)					\
+		attr = FUNC_BACKING_STORE_CFG_REQ_SRQ_PG_SIZE_PG_8K;	\
+	else if (BNGE_PAGE_SIZE == 0x10000)				\
+		attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_64K;	\
+	else								\
+		attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K;	\
+} while (0)
+
+#define BNGE_CTX_MRAV_AV_SPLIT_ENTRY	0
+
+#define BNGE_CTX_QP \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QP
+#define BNGE_CTX_SRQ \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ
+#define BNGE_CTX_CQ \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ
+#define BNGE_CTX_VNIC \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_VNIC
+#define BNGE_CTX_STAT \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_STAT
+#define BNGE_CTX_STQM \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SP_TQM_RING
+#define BNGE_CTX_FTQM \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_FP_TQM_RING
+#define BNGE_CTX_MRAV \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MRAV
+#define BNGE_CTX_TIM \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TIM
+#define BNGE_CTX_TCK \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TX_CK
+#define BNGE_CTX_RCK \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RX_CK
+#define BNGE_CTX_MTQM \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MP_TQM_RING
+#define BNGE_CTX_SQDBS \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SQ_DB_SHADOW
+#define BNGE_CTX_RQDBS \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RQ_DB_SHADOW
+#define BNGE_CTX_SRQDBS \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ_DB_SHADOW
+#define BNGE_CTX_CQDBS \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ_DB_SHADOW
+#define BNGE_CTX_SRT_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRT_TRACE
+#define BNGE_CTX_SRT2_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRT2_TRACE
+#define BNGE_CTX_CRT_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CRT_TRACE
+#define BNGE_CTX_CRT2_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CRT2_TRACE
+#define BNGE_CTX_RIGP0_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RIGP0_TRACE
+#define BNGE_CTX_L2_HWRM_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_L2_HWRM_TRACE
+#define BNGE_CTX_ROCE_HWRM_TRACE \
+	FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_ROCE_HWRM_TRACE
+
+#define BNGE_CTX_MAX	(BNGE_CTX_TIM + 1)
+#define BNGE_CTX_L2_MAX	(BNGE_CTX_FTQM + 1)
+#define BNGE_CTX_INV	((u16)-1)
+
+#define BNGE_CTX_V2_MAX \
+	(FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_ROCE_HWRM_TRACE + 1)
+
+#define BNGE_BS_CFG_ALL_DONE \
+	FUNC_BACKING_STORE_CFG_V2_REQ_FLAGS_BS_CFG_ALL_DONE
+
+struct bnge_ctx_mem_type {
+	u16 type;
+	u16 entry_size;
+	u32 flags;
+#define BNGE_CTX_MEM_TYPE_VALID \
+	FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_TYPE_VALID
+	u32 instance_bmap;
+	u8 init_value;
+	u8 entry_multiple;
+	u16 init_offset;
+#define BNGE_CTX_INIT_INVALID_OFFSET	0xffff
+	u32 max_entries;
+	u32 min_entries;
+	u8 last:1;
+	u8 split_entry_cnt;
+#define BNGE_MAX_SPLIT_ENTRY	4
+	union {
+		struct {
+			u32 qp_l2_entries;
+			u32 qp_qp1_entries;
+			u32 qp_fast_qpmd_entries;
+		};
+		u32 srq_l2_entries;
+		u32 cq_l2_entries;
+		u32 vnic_entries;
+		struct {
+			u32 mrav_av_entries;
+			u32 mrav_num_entries_units;
+		};
+		u32 split[BNGE_MAX_SPLIT_ENTRY];
+	};
+	struct bnge_ctx_pg_info	*pg_info;
+};
+
+struct bnge_ctx_mem_info {
+	u8 tqm_fp_rings_count;
+	u32 flags;
+#define BNGE_CTX_FLAG_INITED	0x01
+	struct bnge_ctx_mem_type ctx_arr[BNGE_CTX_V2_MAX];
 };
 
 int bnge_alloc_ring(struct bnge_dev *bd, struct bnge_ring_mem_info *rmem);
 void bnge_free_ring(struct bnge_dev *bd, struct bnge_ring_mem_info *rmem);
+int bnge_alloc_ctx_mem(struct bnge_dev *bd);
+void bnge_free_ctx_mem(struct bnge_dev *bd);
 
 #endif /* _BNGE_RMEM_H_ */
-- 
2.47.1
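As an aside for reviewers: the one- versus two-level choice in
bnge_alloc_ctx_pg_tbls() follows from the PTE capacity of one page. With a
4K BNGE_PAGE_SIZE and 8-byte DMA addresses, one page table maps 512 data
pages, and a page directory of page tables maps 512 * 512. A simplified
userspace sketch of that sizing math (illustrative names; it ignores the
caller-requested pg_lvl and assumes BNGE_PAGE_SHIFT == 12):

```c
#include <assert.h>
#include <stdint.h>

#define CTX_PAGE_SIZE	4096ULL			/* assumed BNGE_PAGE_SIZE */
#define MAX_CTX_PAGES	(CTX_PAGE_SIZE / 8)	/* 64-bit PTEs per page: 512 */
#define MAX_CTX_TOTAL_PAGES (MAX_CTX_PAGES * MAX_CTX_PAGES)

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Return the indirection depth needed to map mem_size bytes of context
 * memory, or -1 if it exceeds what two levels can address.
 */
static int ctx_pg_depth(uint64_t mem_size)
{
	uint64_t nr_pages = DIV_ROUND_UP(mem_size, CTX_PAGE_SIZE);

	if (nr_pages > MAX_CTX_TOTAL_PAGES)
		return -1;	/* cannot be mapped at all */
	if (nr_pages > MAX_CTX_PAGES)
		return 2;	/* page directory + page tables */
	if (nr_pages > 1)
		return 1;	/* single page table */
	return 0;		/* one data page, no table needed */
}
```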