From: Peter Maydell
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Fam Zheng, patches@linaro.org
Subject: [Qemu-devel] [PATCH] hw/scsi/mptendian: Avoid taking address of fields in packed structs
Date: Thu, 27 Sep 2018 14:48:52 +0100
Message-Id: <20180927134852.21490-1-peter.maydell@linaro.org>

Taking the address of a field in a packed struct is a bad idea, because
it might not be actually aligned enough for that pointer type (and
thus cause a crash on dereference on some host architectures). Newer
versions of clang warn about this. Avoid the bug by not using the
"modify in place" byte swapping functions.

This patch was produced with the following simple spatch script:

@@
expression E;
@@
-le16_to_cpus(&E);
+E = le16_to_cpu(E);

@@
expression E;
@@
-le32_to_cpus(&E);
+E = le32_to_cpu(E);

@@
expression E;
@@
-le64_to_cpus(&E);
+E = le64_to_cpu(E);

@@
expression E;
@@
-cpu_to_le16s(&E);
+E = cpu_to_le16(E);

@@
expression E;
@@
-cpu_to_le32s(&E);
+E = cpu_to_le32(E);

@@
expression E;
@@
-cpu_to_le64s(&E);
+E = cpu_to_le64(E);

followed by some minor tidying of overlong lines and bad indent.
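
As a side note, a minimal sketch of the problem the script addresses,
using a hypothetical QEMU_PACKED struct rather than the real MPI message
types (the includes, struct and helper below are assumptions for the sake
of a self-contained example, not code from this patch):

    #include "qemu/osdep.h"   /* QEMU_PACKED, fixed-width types */
    #include "qemu/bswap.h"   /* le32_to_cpu() */

    /* Hypothetical packed struct: 'value' sits at offset 1, so it is
     * not naturally aligned for a uint32_t access. */
    struct QEMU_PACKED example {
        uint8_t  kind;
        uint32_t value;
    };

    static void example_fix_endianness(struct example *e)
    {
        /*
         * Bad: le32_to_cpus(&e->value) takes the address of a packed
         * field; the resulting uint32_t * may be misaligned, which can
         * fault on dereference on strict-alignment hosts, and newer
         * clang warns about taking the address at all.
         *
         * Good: read and write the field by value instead, so the
         * compiler emits whatever (possibly byte-wise) access the
         * packed layout requires.
         */
        e->value = le32_to_cpu(e->value);
    }

(A script like the one above would typically be applied with something
along the lines of "spatch --sp-file rule.cocci --in-place
hw/scsi/mptendian.c", where rule.cocci is a hypothetical file name, and
then tidied by hand as described.)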
Signed-off-by: Peter Maydell
Reviewed-by: Fam Zheng
Reviewed-by: Philippe Mathieu-Daudé
---
NB: tested with
 qemu-system-x86_64 -cpu host -enable-kvm -m 4096 -device mptsas1068 -drive id=mydisk,if=none,file=harddisk.qcow2 -device scsi-disk,drive=mydisk
which behaves no differently before or after the patch, though the guest
I have (an x86 ubuntu) seems to detect the controller but not find any
disks attached to it. Maybe my command line is wrong?

 hw/scsi/mptendian.c | 163 ++++++++++++++++++++++----------------------
 1 file changed, 83 insertions(+), 80 deletions(-)

diff --git a/hw/scsi/mptendian.c b/hw/scsi/mptendian.c
index 8ae39a76f42..79f99734d21 100644
--- a/hw/scsi/mptendian.c
+++ b/hw/scsi/mptendian.c
@@ -35,152 +35,155 @@
 
 static void mptsas_fix_sgentry_endianness(MPISGEntry *sge)
 {
-    le32_to_cpus(&sge->FlagsLength);
+    sge->FlagsLength = le32_to_cpu(sge->FlagsLength);
     if (sge->FlagsLength & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
-        le64_to_cpus(&sge->u.Address64);
+        sge->u.Address64 = le64_to_cpu(sge->u.Address64);
     } else {
-        le32_to_cpus(&sge->u.Address32);
+        sge->u.Address32 = le32_to_cpu(sge->u.Address32);
     }
 }
 
 static void mptsas_fix_sgentry_endianness_reply(MPISGEntry *sge)
 {
     if (sge->FlagsLength & MPI_SGE_FLAGS_64_BIT_ADDRESSING) {
-        cpu_to_le64s(&sge->u.Address64);
+        sge->u.Address64 = cpu_to_le64(sge->u.Address64);
     } else {
-        cpu_to_le32s(&sge->u.Address32);
+        sge->u.Address32 = cpu_to_le32(sge->u.Address32);
     }
-    cpu_to_le32s(&sge->FlagsLength);
+    sge->FlagsLength = cpu_to_le32(sge->FlagsLength);
 }
 
 void mptsas_fix_scsi_io_endianness(MPIMsgSCSIIORequest *req)
 {
-    le32_to_cpus(&req->MsgContext);
-    le32_to_cpus(&req->Control);
-    le32_to_cpus(&req->DataLength);
-    le32_to_cpus(&req->SenseBufferLowAddr);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
+    req->Control = le32_to_cpu(req->Control);
+    req->DataLength = le32_to_cpu(req->DataLength);
+    req->SenseBufferLowAddr = le32_to_cpu(req->SenseBufferLowAddr);
 }
 
 void mptsas_fix_scsi_io_reply_endianness(MPIMsgSCSIIOReply *reply)
 {
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
-    cpu_to_le32s(&reply->TransferCount);
-    cpu_to_le32s(&reply->SenseCount);
-    cpu_to_le32s(&reply->ResponseInfo);
-    cpu_to_le16s(&reply->TaskTag);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
+    reply->TransferCount = cpu_to_le32(reply->TransferCount);
+    reply->SenseCount = cpu_to_le32(reply->SenseCount);
+    reply->ResponseInfo = cpu_to_le32(reply->ResponseInfo);
+    reply->TaskTag = cpu_to_le16(reply->TaskTag);
 }
 
 void mptsas_fix_scsi_task_mgmt_endianness(MPIMsgSCSITaskMgmt *req)
 {
-    le32_to_cpus(&req->MsgContext);
-    le32_to_cpus(&req->TaskMsgContext);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
+    req->TaskMsgContext = le32_to_cpu(req->TaskMsgContext);
 }
 
 void mptsas_fix_scsi_task_mgmt_reply_endianness(MPIMsgSCSITaskMgmtReply *reply)
 {
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
-    cpu_to_le32s(&reply->TerminationCount);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
+    reply->TerminationCount = cpu_to_le32(reply->TerminationCount);
 }
 
 void mptsas_fix_ioc_init_endianness(MPIMsgIOCInit *req)
 {
-    le32_to_cpus(&req->MsgContext);
-    le16_to_cpus(&req->ReplyFrameSize);
-    le32_to_cpus(&req->HostMfaHighAddr);
-    le32_to_cpus(&req->SenseBufferHighAddr);
-    le32_to_cpus(&req->ReplyFifoHostSignalingAddr);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
+    req->ReplyFrameSize = le16_to_cpu(req->ReplyFrameSize);
+    req->HostMfaHighAddr = le32_to_cpu(req->HostMfaHighAddr);
+    req->SenseBufferHighAddr = le32_to_cpu(req->SenseBufferHighAddr);
+    req->ReplyFifoHostSignalingAddr =
+        le32_to_cpu(req->ReplyFifoHostSignalingAddr);
     mptsas_fix_sgentry_endianness(&req->HostPageBufferSGE);
-    le16_to_cpus(&req->MsgVersion);
-    le16_to_cpus(&req->HeaderVersion);
+    req->MsgVersion = le16_to_cpu(req->MsgVersion);
+    req->HeaderVersion = le16_to_cpu(req->HeaderVersion);
 }
 
 void mptsas_fix_ioc_init_reply_endianness(MPIMsgIOCInitReply *reply)
 {
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
 }
 
 void mptsas_fix_ioc_facts_endianness(MPIMsgIOCFacts *req)
 {
-    le32_to_cpus(&req->MsgContext);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
 }
 
 void mptsas_fix_ioc_facts_reply_endianness(MPIMsgIOCFactsReply *reply)
 {
-    cpu_to_le16s(&reply->MsgVersion);
-    cpu_to_le16s(&reply->HeaderVersion);
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCExceptions);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
-    cpu_to_le16s(&reply->ReplyQueueDepth);
-    cpu_to_le16s(&reply->RequestFrameSize);
-    cpu_to_le16s(&reply->ProductID);
-    cpu_to_le32s(&reply->CurrentHostMfaHighAddr);
-    cpu_to_le16s(&reply->GlobalCredits);
-    cpu_to_le32s(&reply->CurrentSenseBufferHighAddr);
-    cpu_to_le16s(&reply->CurReplyFrameSize);
-    cpu_to_le32s(&reply->FWImageSize);
-    cpu_to_le32s(&reply->IOCCapabilities);
-    cpu_to_le16s(&reply->HighPriorityQueueDepth);
+    reply->MsgVersion = cpu_to_le16(reply->MsgVersion);
+    reply->HeaderVersion = cpu_to_le16(reply->HeaderVersion);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCExceptions = cpu_to_le16(reply->IOCExceptions);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
+    reply->ReplyQueueDepth = cpu_to_le16(reply->ReplyQueueDepth);
+    reply->RequestFrameSize = cpu_to_le16(reply->RequestFrameSize);
+    reply->ProductID = cpu_to_le16(reply->ProductID);
+    reply->CurrentHostMfaHighAddr = cpu_to_le32(reply->CurrentHostMfaHighAddr);
+    reply->GlobalCredits = cpu_to_le16(reply->GlobalCredits);
+    reply->CurrentSenseBufferHighAddr =
+        cpu_to_le32(reply->CurrentSenseBufferHighAddr);
+    reply->CurReplyFrameSize = cpu_to_le16(reply->CurReplyFrameSize);
+    reply->FWImageSize = cpu_to_le32(reply->FWImageSize);
+    reply->IOCCapabilities = cpu_to_le32(reply->IOCCapabilities);
+    reply->HighPriorityQueueDepth = cpu_to_le16(reply->HighPriorityQueueDepth);
     mptsas_fix_sgentry_endianness_reply(&reply->HostPageBufferSGE);
-    cpu_to_le32s(&reply->ReplyFifoHostSignalingAddr);
+    reply->ReplyFifoHostSignalingAddr =
+        cpu_to_le32(reply->ReplyFifoHostSignalingAddr);
 }
 
 void mptsas_fix_config_endianness(MPIMsgConfig *req)
 {
-    le16_to_cpus(&req->ExtPageLength);
-    le32_to_cpus(&req->MsgContext);
-    le32_to_cpus(&req->PageAddress);
+    req->ExtPageLength = le16_to_cpu(req->ExtPageLength);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
+    req->PageAddress = le32_to_cpu(req->PageAddress);
     mptsas_fix_sgentry_endianness(&req->PageBufferSGE);
 }
 
 void mptsas_fix_config_reply_endianness(MPIMsgConfigReply *reply)
 {
-    cpu_to_le16s(&reply->ExtPageLength);
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
+    reply->ExtPageLength = cpu_to_le16(reply->ExtPageLength);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
 }
 
 void mptsas_fix_port_facts_endianness(MPIMsgPortFacts *req)
 {
-    le32_to_cpus(&req->MsgContext);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
 }
 
 void mptsas_fix_port_facts_reply_endianness(MPIMsgPortFactsReply *reply)
 {
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
-    cpu_to_le16s(&reply->MaxDevices);
-    cpu_to_le16s(&reply->PortSCSIID);
-    cpu_to_le16s(&reply->ProtocolFlags);
-    cpu_to_le16s(&reply->MaxPostedCmdBuffers);
-    cpu_to_le16s(&reply->MaxPersistentIDs);
-    cpu_to_le16s(&reply->MaxLanBuckets);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
+    reply->MaxDevices = cpu_to_le16(reply->MaxDevices);
+    reply->PortSCSIID = cpu_to_le16(reply->PortSCSIID);
+    reply->ProtocolFlags = cpu_to_le16(reply->ProtocolFlags);
+    reply->MaxPostedCmdBuffers = cpu_to_le16(reply->MaxPostedCmdBuffers);
+    reply->MaxPersistentIDs = cpu_to_le16(reply->MaxPersistentIDs);
+    reply->MaxLanBuckets = cpu_to_le16(reply->MaxLanBuckets);
 }
 
 void mptsas_fix_port_enable_endianness(MPIMsgPortEnable *req)
 {
-    le32_to_cpus(&req->MsgContext);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
 }
 
 void mptsas_fix_port_enable_reply_endianness(MPIMsgPortEnableReply *reply)
 {
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
 }
 
 void mptsas_fix_event_notification_endianness(MPIMsgEventNotify *req)
 {
-    le32_to_cpus(&req->MsgContext);
+    req->MsgContext = le32_to_cpu(req->MsgContext);
 }
 
 void mptsas_fix_event_notification_reply_endianness(MPIMsgEventNotifyReply *reply)
@@ -188,16 +191,16 @@ void mptsas_fix_event_notification_reply_endianness(MPIMsgEventNotifyReply *repl
     int length = reply->EventDataLength;
     int i;
 
-    cpu_to_le16s(&reply->EventDataLength);
-    cpu_to_le32s(&reply->MsgContext);
-    cpu_to_le16s(&reply->IOCStatus);
-    cpu_to_le32s(&reply->IOCLogInfo);
-    cpu_to_le32s(&reply->Event);
-    cpu_to_le32s(&reply->EventContext);
+    reply->EventDataLength = cpu_to_le16(reply->EventDataLength);
+    reply->MsgContext = cpu_to_le32(reply->MsgContext);
+    reply->IOCStatus = cpu_to_le16(reply->IOCStatus);
+    reply->IOCLogInfo = cpu_to_le32(reply->IOCLogInfo);
+    reply->Event = cpu_to_le32(reply->Event);
+    reply->EventContext = cpu_to_le32(reply->EventContext);
 
     /* Really depends on the event kind. This will do for now. */
     for (i = 0; i < length; i++) {
-        cpu_to_le32s(&reply->Data[i]);
+        reply->Data[i] = cpu_to_le32(reply->Data[i]);
     }
 }
 
-- 
2.19.0