rxe_rcv() currently checks only that the incoming packet is at least
header_size(pkt) bytes long before payload_size() is used.
However, payload_size() subtracts both the attacker-controlled BTH pad
field and RXE_ICRC_SIZE from pkt->paylen:
payload_size = pkt->paylen - offset[RXE_PAYLOAD] - bth_pad(pkt)
- RXE_ICRC_SIZE
This means a short packet can still make payload_size() underflow even
if it includes enough bytes for the fixed headers. Simply requiring
header_size(pkt) + RXE_ICRC_SIZE is not sufficient either, because a
packet with a forged non-zero BTH pad can still leave payload_size()
negative and pass an underflowed value to later receive-path users.
Fix this by validating pkt->paylen against the full minimum length
required by payload_size(): header_size(pkt) + bth_pad(pkt) +
RXE_ICRC_SIZE.
Fixes: 8700e3e7c485 ("Soft RoCE driver")
Cc: stable@vger.kernel.org
Signed-off-by: hkbinbin <hkbinbinbin@gmail.com>
---
drivers/infiniband/sw/rxe/rxe_recv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 5861e4244049..f79214738c2b 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -330,7 +330,8 @@ void rxe_rcv(struct sk_buff *skb)
pkt->qp = NULL;
pkt->mask |= rxe_opcode[pkt->opcode].mask;
- if (unlikely(skb->len < header_size(pkt)))
+ if (unlikely(pkt->paylen < header_size(pkt) + bth_pad(pkt) +
+ RXE_ICRC_SIZE))
goto drop;
err = hdr_check(pkt);
--
2.49.0
On 2026/4/1 5:19, hkbinbin wrote:
> rxe_rcv() currently checks only that the incoming packet is at least
> header_size(pkt) bytes long before payload_size() is used.
>
> However, payload_size() subtracts both the attacker-controlled BTH pad
> field and RXE_ICRC_SIZE from pkt->paylen:
>
> payload_size = pkt->paylen - offset[RXE_PAYLOAD] - bth_pad(pkt)
> - RXE_ICRC_SIZE
>
> This means a short packet can still make payload_size() underflow even
> if it includes enough bytes for the fixed headers. Simply requiring
> header_size(pkt) + RXE_ICRC_SIZE is not sufficient either, because a
> packet with a forged non-zero BTH pad can still leave payload_size()
> negative and pass an underflowed value to later receive-path users.
>
> Fix this by validating pkt->paylen against the full minimum length
> required by payload_size(): header_size(pkt) + bth_pad(pkt) +
> RXE_ICRC_SIZE.
>
> Fixes: 8700e3e7c485 ("Soft RoCE driver")
> Cc: stable@vger.kernel.org
> Signed-off-by: hkbinbin <hkbinbinbin@gmail.com>
Thanks a lot.
It would be better if the following analysis could be added to the commit log:
"
========================================================================
Analysis
========================================================================
In drivers/infiniband/sw/rxe/rxe_hdr.h:
------------------------------------------------------------------------
static inline size_t payload_size(struct rxe_pkt_info *pkt)
{
return pkt->paylen - rxe_opcode[pkt->opcode].offset[RXE_PAYLOAD]
- bth_pad(pkt) - RXE_ICRC_SIZE;
}
------------------------------------------------------------------------
The relevant receive path is:
1. rxe_udp_encap_recv() sets pkt->paylen from the incoming UDP packet.
2. rxe_rcv() validates only:
------------------------------------------------------------------------
if (unlikely(skb->len < header_size(pkt)))
goto drop;
------------------------------------------------------------------------
3. This allows packets where paylen == header_size(pkt), i.e. packets
with only headers and no ICRC trailer.
4. For a UD SEND_ONLY packet (opcode 0x64), the minimum valid header is
BTH + DETH = 12 + 8 = 20 bytes, and offset[RXE_PAYLOAD] is also 20.
5. Therefore a 20-byte packet with pad=0 computes:
------------------------------------------------------------------------
payload_size = 20 - 20 - 0 - 4 = -4
------------------------------------------------------------------------
Because payload_size() returns size_t, this wraps to SIZE_MAX - 3.
In drivers/infiniband/sw/rxe/rxe_recv.c, rxe_icrc_check() then uses that
value in the CRC calculation:
------------------------------------------------------------------------
icrc = rxe_crc32(pkt->rxe, icrc, (u8 *)payload_addr(pkt),
payload_size(pkt) + bth_pad(pkt));
------------------------------------------------------------------------
This causes crc32_le() to read from payload_addr(pkt) for essentially the
entire address space, immediately faulting on unmapped memory.
The bug is remotely reachable because the RXE GSI QP (QPN=1) is always
present once an RXE device is configured, and UD/GSI traffic passes the
address validation path. A single crafted UDP packet to port 4791 with
valid BTH/DETH fields is sufficient.
Trigger packet fields used in testing:
- Opcode: 0x64 (UD SEND_ONLY)
- Transport version: 0
- P_Key: 0xffff
- QPN: 1 (GSI QP)
- Pad: 0
- Q_Key: 0x80010000 (GSI_QKEY)
- No payload
- No ICRC trailer
"
But even without the above analysis, I still think the patch is good.
Thanks,
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Zhu Yanjun
> ---
> drivers/infiniband/sw/rxe/rxe_recv.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
> index 5861e4244049..f79214738c2b 100644
> --- a/drivers/infiniband/sw/rxe/rxe_recv.c
> +++ b/drivers/infiniband/sw/rxe/rxe_recv.c
> @@ -330,7 +330,8 @@ void rxe_rcv(struct sk_buff *skb)
> pkt->qp = NULL;
> pkt->mask |= rxe_opcode[pkt->opcode].mask;
>
> - if (unlikely(skb->len < header_size(pkt)))
> + if (unlikely(pkt->paylen < header_size(pkt) + bth_pad(pkt) +
> + RXE_ICRC_SIZE))
> goto drop;
>
> err = hdr_check(pkt);