From: Michal Privoznik
To: libvir-list@redhat.com
Subject: [PATCH] qemu_namespace: Deal with nested mounts when umount()-ing /dev
Date: Tue, 7 Feb 2023 16:02:15 +0100

In one of the recent commits (v9.0.0-rc1~106) I've made our QEMU namespace
code umount the original /dev. One of the reasons was enhanced security,
because previously we just mounted a tmpfs over the original /dev. Thus a
malicious QEMU could simply umount("/dev") and get to the original /dev
with all its nodes.

Now, on some systems this introduced a regression:

  failed to umount devfs on /dev: Device or resource busy

But how could this be? We've moved all file systems mounted under /dev to
a temporary location. Or have we? As it turns out, not quite. If there are
two file systems mounted on the same target, e.g. like this:

  mount -t tmpfs tmpfs /dev/shm/ && mount -t tmpfs tmpfs /dev/shm/

then only the topmost one (i.e. the last one mounted) is moved. See
qemuDomainUnshareNamespace() for more info.

Now, we could enhance our code to deal with these "doubled" mount points.
Or, since it is the topmost file system that is accessible anyway (and
that one is preserved), we can umount("/dev") in a recursive fashion.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2167302
Fixes: 379c0ce4bfed8733dfbde557c359eecc5474ce38
Signed-off-by: Michal Privoznik
Reviewed-by: Jim Fehlig
---
 src/qemu/qemu_namespace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/qemu/qemu_namespace.c b/src/qemu/qemu_namespace.c
index 5769a4dfe0..5fc043bd62 100644
--- a/src/qemu/qemu_namespace.c
+++ b/src/qemu/qemu_namespace.c
@@ -777,7 +777,7 @@ qemuDomainUnshareNamespace(virQEMUDriverConfig *cfg,
     }
 
 #if defined(__linux__)
-    if (umount("/dev") < 0) {
+    if (umount2("/dev", MNT_DETACH) < 0) {
         virReportSystemError(errno, "%s", _("failed to umount devfs on /dev"));
         return -1;
     }
-- 
2.39.1
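
For illustration only (not part of the patch): the sketch below reproduces
the failure mode outside of libvirt. It stacks a tmpfs on a nested mount
point, shows that a plain umount() of the parent fails with EBUSY, and
that umount2() with MNT_DETACH lazily detaches the whole subtree. The
/tmp/fakedev paths are made up for the example; the program needs
CAP_SYS_ADMIN and is best run in a throwaway mount namespace, e.g.
"unshare -m ./a.out".

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>
#include <sys/stat.h>

int main(void)
{
    const char *parent = "/tmp/fakedev";      /* stands in for /dev */
    const char *child = "/tmp/fakedev/shm";   /* stands in for the leftover /dev/shm */

    if (mkdir(parent, 0755) < 0 && errno != EEXIST) {
        perror("mkdir parent");
        return 1;
    }
    /* Outer mount, playing the role of /dev. */
    if (mount("tmpfs", parent, "tmpfs", 0, NULL) < 0) {
        perror("mount parent");
        return 1;
    }

    if (mkdir(child, 0755) < 0 && errno != EEXIST) {
        perror("mkdir child");
        return 1;
    }
    /* Nested mount, playing the role of the lower /dev/shm that was not moved. */
    if (mount("tmpfs", child, "tmpfs", 0, NULL) < 0) {
        perror("mount child");
        return 1;
    }

    /* What the old code did: fails because of the nested mount. */
    if (umount(parent) < 0)
        printf("umount(): %s (EBUSY expected)\n", strerror(errno));

    /* What the patch does: lazily detach the mount and everything under it. */
    if (umount2(parent, MNT_DETACH) < 0) {
        perror("umount2");
        return 1;
    }
    printf("umount2(MNT_DETACH) succeeded\n");
    return 0;
}

Note that MNT_DETACH is a lazy unmount: the mount disappears from the
namespace immediately, but the actual teardown happens once it stops being
busy, which is exactly the behaviour the patch relies on for /dev.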