Changeset
hw/9pfs/9p.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
Test passed: docker

Test passed: checkpatch
Test passed: s390x
Test passed: ppc

[Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Antonios Motakis, 6 days ago
Hello all,

We have found an issue in the 9p implementation of QEMU with how qid paths are generated: distinct files can end up with colliding qid paths, which causes several problems. In our use case (running containers under VMs) these have proven to be critical.

In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the inode number of the file as input. According to the 9p spec, the path should uniquely identify a file; distinct files should not share a path value.

The current implementation, which defines qid.path = inode nr, works fine as long as no files from multiple partitions are visible under the 9p share. Once multiple partitions are exposed, distinct files from different devices may legitimately share an inode number, so the probability of qid path collisions becomes very high.

How to demonstrate the issue:
1) Prepare a problematic share:
 - mount one partition under share/p1/ with some files inside
 - mount another one *with identical contents* under share/p2/
 - confirm that both partitions have files with the same inode nr, size, etc
2) Demonstrate breakage:
 - start a VM with a virtio-9p pointing to the share
 - mount 9p share with FSCACHE on
 - keep open share/p1/file
 - open and write to share/p2/file

What happens is that the guest considers share/p1/file and share/p2/file to be the same file, and since we are using the cache it will not reopen it. We intended to write to partition 2, but we just wrote to partition 1. This is just one example of how the guest may rely on qid paths being unique.

In the use case of containers, where we commonly have a few containers per VM, all based on similar images, these kinds of qid path collisions are very common, and they seem to cause all sorts of funny behavior (sometimes very subtle).

To avoid this situation, the device id of a file also needs to be taken as input for generating a qid path. Unfortunately, inode nr + device id together take 96 bits, while we have only 64 bits for the qid path, so we can't just concatenate them and call it a day :(

We have thought of a few approaches, but we would definitely like to hear what the upstream maintainers and community think:

* Full fix: Change the 9p protocol

We would need to support a longer qid path, negotiated via a virtio feature flag. This would require reworking the host and guest sides of virtio-9p, i.e. both QEMU and Linux for most users.

* Fallback and/or interim solutions

A virtio feature flag may be refused by the guest, so we think we still need to make collisions less likely even with 64 bit paths. E.g.
1. XOR the device id with inode nr to produce the qid path (we attach a proof of concept patch)
2. 64 bit hash of device id and inode nr
3. other ideas, such as allocating new qid paths and keeping a lookup table of some sort (possibly too expensive)

With our proof of concept patch, the issues caused by qid path collisions go away, so it can be seen as an interim solution of sorts. However, the chance of collisions is not eliminated: we are just replacing the current strategy, which is almost guaranteed to cause collisions in certain use cases, with one that makes them rarer. We think that a virtio feature flag for longer qid paths is the only way to eliminate these issues completely.

This is as far as we were able to analyze the issue on our side; we would like to hear if there are better ideas, or which approach would be most appropriate for upstream.

Best regards,

-- 
Antonios Motakis
Virtualization Engineer
Huawei Technologies Duesseldorf GmbH
European Research Center
Riesstrasse 25, 80992 München
From bd59f504e6806dac5b3c1bd9c626de085987f1e0 Mon Sep 17 00:00:00 2001
From: Veaceslav Falico <veaceslav.falico@huawei.com>
Date: Fri, 12 Jan 2018 19:26:18 +0800
Subject: [PATCH] 9pfs: stat_to_qid: use device id as input to qid.path

Currently, only the inode number of a file is being
used as input to the qid.path field. The 9p RFC
specifies that the path needs to be unique per file
in the directory hierarchy; however, on the host
side the inode alone does not suffice to uniquely
identify a file, as another file on a different
device may have the same inode number.

To avoid qid path collisions, we take the device id
as input as well.

Signed-off-by: Veaceslav Falico <veaceslav.falico@huawei.com>
Signed-off-by: Antonios Motakis <antonios.motakis@huawei.com>
---
 hw/9pfs/9p.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 393a2b2..a810d13 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -583,11 +583,7 @@ static void virtfs_reset(V9fsPDU *pdu)
 /* This is the algorithm from ufs in spfs */
 static void stat_to_qid(const struct stat *stbuf, V9fsQID *qidp)
 {
-    size_t size;
-
-    memset(&qidp->path, 0, sizeof(qidp->path));
-    size = MIN(sizeof(stbuf->st_ino), sizeof(qidp->path));
-    memcpy(&qidp->path, &stbuf->st_ino, size);
+    qidp->path = stbuf->st_ino ^ ((int64_t)stbuf->st_dev << 16);
     qidp->version = stbuf->st_mtime ^ (stbuf->st_size << 8);
     qidp->type = 0;
     if (S_ISDIR(stbuf->st_mode)) {
-- 
1.8.3.1

Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Daniel P. Berrange, 6 days ago
On Fri, Jan 12, 2018 at 07:32:10PM +0800, Antonios Motakis wrote:
> Hello all,
> 
> We have found an issue in the 9p implementation of QEMU, with how qid paths
> are generated, which can cause qid path collisions and several issues caused
> by them. In our use case (running containers under VMs) these have proven to
> be critical.
> 
> In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the
> inode number of the file as input. According to the 9p spec the path should
> be able to uniquely identify a file, distinct files should not share a path
> value.
> 
> The current implementation that defines qid.path = inode nr works fine as long as there are not files from multiple partitions visible under the 9p share. In that case, distinct files from different devices are allowed to have the same inode number. So with multiple partitions, we have a very high probability of qid path collisions.
> 
> How to demonstrate the issue:
> 1) Prepare a problematic share:
>  - mount one partition under share/p1/ with some files inside
>  - mount another one *with identical contents* under share/p2/
>  - confirm that both partitions have files with same inode nr, size, etc
> 2) Demonstrate breakage:
>  - start a VM with a virtio-9p pointing to the share
>  - mount 9p share with FSCACHE on
>  - keep open share/p1/file
>  - open and write to share/p2/file
> 
> What should happen is, the guest will consider share/p1/file and
> share/p2/file to be the same file, and since we are using the cache it will
> not reopen it. We intended to write to partition 2, but we just wrote to
> partition 1. This is just one example on how the guest may rely on qid paths
> being unique.

Ouch, this is a serious security bug. The guest user who has p1/file open may
have a different privilege level from the guest user who tried to access p2/file,
so this flaw can be used for privilege escalation and/or confinement escape
by a malicious guest user.

> We have thought of a few approaches, but we would definitely like to hear
> what the upstream maintainers and community think:
> 
> * Full fix: Change the 9p protocol
> 
> We would need to support a longer qid path, based on a virtio feature flag.
> This would take reworking of host and guest parts of virtio-9p, so both QEMU
> and Linux for most users.

Requiring an updated guest feels unpleasant because of the co-ordination required
to get the security fix applied. So even if we do that, IMHO, we must also
have a fix in QEMU that does not require a guest update.

> 
> * Fallback and/or interim solutions
> 
> A virtio feature flag may be refused by the guest, so we think we still need
> to make collisions less likely even with 64 bit paths. E.g.
> 1. XOR the device id with inode nr to produce the qid path (we attach a proof
> of concept patch)

That just makes collisions less likely, but it doesn't guarantee they are
eliminated, and it's probably practical for guests to brute-force collisions,
depending on how many different filesystems are passed through and how many
files can be opened concurrently.

> 2. 64 bit hash of device id and inode nr

This is probably better than (1), but a 64-bit hash is still fairly weak with
respect to collision resistance when you have a large number of files.

> 3. other ideas, such as allocating new qid paths and keep a look up table of
> some sorts (possibly too expensive)

I think QEMU will have to maintain a lookup table of files that are currently
open, and their associated qids, in order to provide a real guarantee that
the security flaw is addressed.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Greg Kurz, 6 days ago
On Fri, 12 Jan 2018 14:27:22 +0000
"Daniel P. Berrange" <berrange@redhat.com> wrote:

> On Fri, Jan 12, 2018 at 07:32:10PM +0800, Antonios Motakis wrote:
> > Hello all,
> > 
> > We have found an issue in the 9p implementation of QEMU, with how qid paths
> > are generated, which can cause qid path collisions and several issues caused
> > by them. In our use case (running containers under VMs) these have proven to
> > be critical.
> > 
> > In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the
> > inode number of the file as input. According to the 9p spec the path should
> > be able to uniquely identify a file, distinct files should not share a path
> > value.
> > 
> > The current implementation that defines qid.path = inode nr works fine as long as there are not files from multiple partitions visible under the 9p share. In that case, distinct files from different devices are allowed to have the same inode number. So with multiple partitions, we have a very high probability of qid path collisions.
> > 
> > How to demonstrate the issue:
> > 1) Prepare a problematic share:
> >  - mount one partition under share/p1/ with some files inside
> >  - mount another one *with identical contents* under share/p2/
> >  - confirm that both partitions have files with same inode nr, size, etc
> > 2) Demonstrate breakage:
> >  - start a VM with a virtio-9p pointing to the share
> >  - mount 9p share with FSCACHE on
> >  - keep open share/p1/file
> >  - open and write to share/p2/file
> > 
> > What should happen is, the guest will consider share/p1/file and
> > share/p2/file to be the same file, and since we are using the cache it will
> > not reopen it. We intended to write to partition 2, but we just wrote to
> > partition 1. This is just one example on how the guest may rely on qid paths
> > being unique.  
> 
> Ouch, this is a serious security bug. The guest user who has p1/file open may
> have different privilege level to the guest user who tried to access p2/file.
> So this flaw can be used for privilege escalation and/or confinement escape
> by a malicious guest user.
> 

Yeah, you're right... Yet another New Year gift, like last year :-\

> > We have thought of a few approaches, but we would definitely like to hear
> > what the upstream maintainers and community think:
> > 
> > * Full fix: Change the 9p protocol
> > 
> > We would need to support a longer qid path, based on a virtio feature flag.
> > This would take reworking of host and guest parts of virtio-9p, so both QEMU
> > and Linux for most users.  
> 
> Requiring updated guest feels unpleasant because of the co-ordination required
> to get the security fix applied. So even if we do that, IMHO, we must also
> have a fix in QEMU that does not require guest update.
> 

Yes, since the linux 9p client is used with a variety of 9p servers, we
cannot wait for everyone to be in sync. We definitely need a standalone
fallback in QEMU.

> > 
> > * Fallback and/or interim solutions
> > 
> > A virtio feature flag may be refused by the guest, so we think we still need
> > to make collisions less likely even with 64 bit paths. E.g.
> > 1. XOR the device id with inode nr to produce the qid path (we attach a proof
> > of concept patch)  
> 
> That just makes collision less likely, but doesnt guarantee its eliminated
> and its probably practical for guests to bruteforce collisions depending on
> how many different FS are passed through & number of files that can be
> opened concurrently
> 
> > 2. 64 bit hash of device id and inode nr  
> 
> This is probably better than (1), but a 64-bit hash algorithm is fairly weak
> wrt collision resistance when you have a large number of files.
> 
> > 3. other ideas, such as allocating new qid paths and keep a look up table of
> > some sorts (possibly too expensive)  
> 
> I think QEMU will have to maintain a lookup table of files that are currently
> open, and their associated qids, in order to provide a real guarantee that
> the security flaw is addressed.
> 

I cannot think of a better solution right now.

> Regards,
> Daniel

Cheers,

--
Greg

Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Veaceslav Falico, 6 days ago
(sorry for top posting and outlook, sigh...)
(adding v9fs-developer list to the loop)

The crucial part here is fscache, which stores inodes guest-side keyed by key/aux
values taken from the qid (key = path, aux = version).

Without fscache, there would always be a directory traversal and we're safe
from *this kind* of situation.

With fscache, the first inode to make it into the cache will always be returned
for a given inode number and version pair, even when the device ids differ.

If we were somehow able to get the device id from host to guest,
a simple modification of the fscache aux verification handling would suffice.

However, I currently can't come up with an idea of how exactly we could get
that information guest-side. Maybe use the dentry's qid->version (not used
currently) to transmit it, and populate the v9fs inode caches during
walk/readdir/etc.?

-----Original Message-----
From: Daniel P. Berrange [mailto:berrange@redhat.com] 
Sent: Friday, January 12, 2018 3:27 PM
To: Antonios Motakis (Tony)
Cc: qemu-devel@nongnu.org; Greg Kurz; zhangwei (CR); Veaceslav Falico; Eduard Shishkin; Wangguoli (Andy); Jiangyiwen; vfalico@gmail.com; Jani Kokkonen
Subject: Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs

On Fri, Jan 12, 2018 at 07:32:10PM +0800, Antonios Motakis wrote:
> Hello all,
> 
> We have found an issue in the 9p implementation of QEMU, with how qid paths
> are generated, which can cause qid path collisions and several issues caused
> by them. In our use case (running containers under VMs) these have proven to
> be critical.
> 
> In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the
> inode number of the file as input. According to the 9p spec the path should
> be able to uniquely identify a file, distinct files should not share a path
> value.
> 
> The current implementation that defines qid.path = inode nr works fine as long as there are not files from multiple partitions visible under the 9p share. In that case, distinct files from different devices are allowed to have the same inode number. So with multiple partitions, we have a very high probability of qid path collisions.
> 
> How to demonstrate the issue:
> 1) Prepare a problematic share:
>  - mount one partition under share/p1/ with some files inside
>  - mount another one *with identical contents* under share/p2/
>  - confirm that both partitions have files with same inode nr, size, etc
> 2) Demonstrate breakage:
>  - start a VM with a virtio-9p pointing to the share
>  - mount 9p share with FSCACHE on
>  - keep open share/p1/file
>  - open and write to share/p2/file
> 
> What should happen is, the guest will consider share/p1/file and
> share/p2/file to be the same file, and since we are using the cache it will
> not reopen it. We intended to write to partition 2, but we just wrote to
> partition 1. This is just one example on how the guest may rely on qid paths
> being unique.

Ouch, this is a serious security bug. The guest user who has p1/file open may
have different privilege level to the guest user who tried to access p2/file.
So this flaw can be used for privilege escalation and/or confinement escape
by a malicious guest user.

> We have thought of a few approaches, but we would definitely like to hear
> what the upstream maintainers and community think:
> 
> * Full fix: Change the 9p protocol
> 
> We would need to support a longer qid path, based on a virtio feature flag.
> This would take reworking of host and guest parts of virtio-9p, so both QEMU
> and Linux for most users.

Requiring updated guest feels unpleasant because of the co-ordination required
to get the security fix applied. So even if we do that, IMHO, we must also
have a fix in QEMU that does not require guest update.

> 
> * Fallback and/or interim solutions
> 
> A virtio feature flag may be refused by the guest, so we think we still need
> to make collisions less likely even with 64 bit paths. E.g.
> 1. XOR the device id with inode nr to produce the qid path (we attach a proof
> of concept patch)

That just makes collision less likely, but doesnt guarantee its eliminated
and its probably practical for guests to bruteforce collisions depending on
how many different FS are passed through & number of files that can be
opened concurrently

> 2. 64 bit hash of device id and inode nr

This is probably better than (1), but a 64-bit hash algorithm is fairly weak
wrt collision resistance when you have a large number of files.

> 3. other ideas, such as allocating new qid paths and keep a look up table of
> some sorts (possibly too expensive)

I think QEMU will have to maintain a lookup table of files that are currently
open, and their associated qids, in order to provide a real guarantee that
the security flaw is addressed.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Greg Kurz, 6 days ago
On Fri, 12 Jan 2018 15:05:24 +0000
Veaceslav Falico <veaceslav.falico@huawei.com> wrote:

> (sorry for top posting and outlook, sigh...)

Well, as long as you're not sending a patch, we can cope with it... but
maybe you could use your gmail account for subsequent posts ;)

> (adding v9fs-developer list to the loop)
> 

Good idea since the issue affects the protocol.

> The crucial part here is fscache, which stores inodes guest-side by key/aux
> from qid (key = path, aux = version).
> 
> Without fscache, there would always be a directory traversal and we're safe
> from *this kind* of situation.
> 

I'm not familiar with fscache, but the problem described by Antonios is a
protocol violation anyway. The 9p server shouldn't generate colliding
qid paths...

> With fscache, first inode to get in cache will always respond to inode number
> and version pair, even with different device ids.
> 

Are you saying that even if the 9p server provides unique qid paths,
no matter how that is achieved, the linux client would still be confused?

It looks like a client side issue to me...

> If we would be able to somehow get/provide device id from host to guest,
> a simple modification of fscache aux verification handling would suffice.
> 
> However, I can't currently come up with an idea on how exactly could we get
> that information guest-side. Maybe use dentry's qid->version (not used
> currently) to transmit this, and populate v9fs inodes' caches during 
> walk/readdir/etc.?
> 

I'm a bit lost here :)


Cheers,

--
Greg

> -----Original Message-----
> From: Daniel P. Berrange [mailto:berrange@redhat.com] 
> Sent: Friday, January 12, 2018 3:27 PM
> To: Antonios Motakis (Tony)
> Cc: qemu-devel@nongnu.org; Greg Kurz; zhangwei (CR); Veaceslav Falico; Eduard Shishkin; Wangguoli (Andy); Jiangyiwen; vfalico@gmail.com; Jani Kokkonen
> Subject: Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
> 
> On Fri, Jan 12, 2018 at 07:32:10PM +0800, Antonios Motakis wrote:
> > Hello all,
> > 
> > We have found an issue in the 9p implementation of QEMU, with how qid paths
> > are generated, which can cause qid path collisions and several issues caused
> > by them. In our use case (running containers under VMs) these have proven to
> > be critical.
> > 
> > In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the
> > inode number of the file as input. According to the 9p spec the path should
> > be able to uniquely identify a file, distinct files should not share a path
> > value.
> > 
> > The current implementation that defines qid.path = inode nr works fine as long as there are not files from multiple partitions visible under the 9p share. In that case, distinct files from different devices are allowed to have the same inode number. So with multiple partitions, we have a very high probability of qid path collisions.
> > 
> > How to demonstrate the issue:
> > 1) Prepare a problematic share:
> >  - mount one partition under share/p1/ with some files inside
> >  - mount another one *with identical contents* under share/p2/
> >  - confirm that both partitions have files with same inode nr, size, etc
> > 2) Demonstrate breakage:
> >  - start a VM with a virtio-9p pointing to the share
> >  - mount 9p share with FSCACHE on
> >  - keep open share/p1/file
> >  - open and write to share/p2/file
> > 
> > What should happen is, the guest will consider share/p1/file and
> > share/p2/file to be the same file, and since we are using the cache it will
> > not reopen it. We intended to write to partition 2, but we just wrote to
> > partition 1. This is just one example on how the guest may rely on qid paths
> > being unique.  
> 
> Ouch, this is a serious security bug. The guest user who has p1/file open may
> have different privilege level to the guest user who tried to access p2/file.
> So this flaw can be used for privilege escalation and/or confinement escape
> by a malicious guest user.
> 
> > We have thought of a few approaches, but we would definitely like to hear
> > what the upstream maintainers and community think:
> > 
> > * Full fix: Change the 9p protocol
> > 
> > We would need to support a longer qid path, based on a virtio feature flag.
> > This would take reworking of host and guest parts of virtio-9p, so both QEMU
> > and Linux for most users.  
> 
> Requiring updated guest feels unpleasant because of the co-ordination required
> to get the security fix applied. So even if we do that, IMHO, we must also
> have a fix in QEMU that does not require guest update.
> 
> > 
> > * Fallback and/or interim solutions
> > 
> > A virtio feature flag may be refused by the guest, so we think we still need
> > to make collisions less likely even with 64 bit paths. E.g.
> > 1. XOR the device id with inode nr to produce the qid path (we attach a proof
> > of concept patch)  
> 
> That just makes collision less likely, but doesnt guarantee its eliminated
> and its probably practical for guests to bruteforce collisions depending on
> how many different FS are passed through & number of files that can be
> opened concurrently
> 
> > 2. 64 bit hash of device id and inode nr  
> 
> This is probably better than (1), but a 64-bit hash algorithm is fairly weak
> wrt collision resistance when you have a large number of files.
> 
> > 3. other ideas, such as allocating new qid paths and keep a look up table of
> > some sorts (possibly too expensive)  
> 
> I think QEMU will have to maintain a lookup table of files that are currently
> open, and their associated qids, in order to provide a real guarantee that
> the security flaw is addressed.
> 
> Regards,
> Daniel


Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Greg Kurz, 6 days ago
On Fri, 12 Jan 2018 19:32:10 +0800
Antonios Motakis <antonios.motakis@huawei.com> wrote:

> Hello all,
> 

Hi Antonios,

I see you have attached a patch to this email... this really isn't the preferred
way to do things, since it prevents commenting on the patch (at least with my mail
client). The appropriate way would have been to send the patch with a cover
letter, using git-send-email for example.

> We have found an issue in the 9p implementation of QEMU, with how qid paths are generated, which can cause qid path collisions and several issues caused by them. In our use case (running containers under VMs) these have proven to be critical.
> 

Ouch...

> In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the inode number of the file as input. According to the 9p spec the path should be able to uniquely identify a file, distinct files should not share a path value.
> 
> The current implementation that defines qid.path = inode nr works fine as long as there are not files from multiple partitions visible under the 9p share. In that case, distinct files from different devices are allowed to have the same inode number. So with multiple partitions, we have a very high probability of qid path collisions.
> 
> How to demonstrate the issue:
> 1) Prepare a problematic share:
>  - mount one partition under share/p1/ with some files inside
>  - mount another one *with identical contents* under share/p2/
>  - confirm that both partitions have files with same inode nr, size, etc
> 2) Demonstrate breakage:
>  - start a VM with a virtio-9p pointing to the share
>  - mount 9p share with FSCACHE on
>  - keep open share/p1/file
>  - open and write to share/p2/file
> 
> What should happen is, the guest will consider share/p1/file and share/p2/file to be the same file, and since we are using the cache it will not reopen it. We intended to write to partition 2, but we just wrote to partition 1. This is just one example on how the guest may rely on qid paths being unique.
> 
> In the use case of containers where we commonly have a few containers per VM, all based on similar images, these kind of qid path collisions are very common and they seem to cause all kinds of funny behavior (sometimes very subtle).
> 
> To avoid this situation, the device id of a file needs to be also taken as input for generating a qid path. Unfortunately, the size of both inode nr + device id together would be 96 bits, while we have only 64 bits for the qid path, so we can't just append them and call it a day :(
> 
> We have thought of a few approaches, but we would definitely like to hear what the upstream maintainers and community think:
> 
> * Full fix: Change the 9p protocol
> 
> We would need to support a longer qid path, based on a virtio feature flag. This would take reworking of host and guest parts of virtio-9p, so both QEMU and Linux for most users.
> 

I agree on a longer qid path, but we shouldn't tie it to a virtio feature flag,
since 9p is transport agnostic and happens to be used with a variety of transports.
QEMU has both virtio-9p and a Xen backend, for example.

> * Fallback and/or interim solutions
> 
> A virtio feature flag may be refused by the guest, so we think we still need to make collisions less likely even with 64 bit paths. E.g.

In all cases, we would need a fallback solution to support current
guest setups. Also there are several 9p server implementations out
there (ganesha, diod, kvmtool) that are currently used with linux
clients... it will take some time to get everyone in sync :-\

> 1. XOR the device id with inode nr to produce the qid path (we attach a proof of concept patch)

Hmm... this would still allow collisions. Not good for fallback.

> 2. 64 bit hash of device id and inode nr

Same here.

> 3. other ideas, such as allocating new qid paths and keep a look up table of some sorts (possibly too expensive)
> 

This would be acceptable for fallback.

> With our proof of concept patch, the issues caused by qid path collisions go away, so it can be seen as an interim solution of sorts. However, the chance of collisions is not eliminated, we are just replacing the current strategy, which is almost guaranteed to cause collisions in certain use cases, with one that makes them more rare. We think that a virtio feature flag for longer qid paths is the only way to eliminate these issues completely.
> 
> This is the extent that we were able to analyze the issue from our side, we would like to hear if there are some better ideas, or which approach would be more appropriate for upstream.
> 
> Best regards,
> 

Cheers,

--
Greg

Re: [Qemu-devel] [RFC] qid path collision issues in 9pfs
Posted by Antonios Motakis, 3 days ago

On 13-Jan-18 00:14, Greg Kurz wrote:
> On Fri, 12 Jan 2018 19:32:10 +0800
> Antonios Motakis <antonios.motakis@huawei.com> wrote:
> 
>> Hello all,
>>
> 
> Hi Antonios,
> 
> I see you have attached a patch to this email... this really isn't the preferred
> way to do things since it prevents to comment the patch (at least with my mail
> client). The appropriate way would have been to send the patch with a cover
> letter, using git-send-email for example.

I apologize for attaching the patch, I should have known better!

> 
>> We have found an issue in the 9p implementation of QEMU, with how qid paths are generated, which can cause qid path collisions and several issues caused by them. In our use case (running containers under VMs) these have proven to be critical.
>>
> 
> Ouch...
> 
>> In particular, stat_to_qid in hw/9pfs/9p.c generates a qid path using the inode number of the file as input. According to the 9p spec the path should be able to uniquely identify a file, distinct files should not share a path value.
>>
>> The current implementation that defines qid.path = inode nr works fine as long as there are not files from multiple partitions visible under the 9p share. In that case, distinct files from different devices are allowed to have the same inode number. So with multiple partitions, we have a very high probability of qid path collisions.
>>
>> How to demonstrate the issue:
>> 1) Prepare a problematic share:
>>  - mount one partition under share/p1/ with some files inside
>>  - mount another one *with identical contents* under share/p2/
>>  - confirm that both partitions have files with same inode nr, size, etc
>> 2) Demonstrate breakage:
>>  - start a VM with a virtio-9p pointing to the share
>>  - mount 9p share with FSCACHE on
>>  - keep open share/p1/file
>>  - open and write to share/p2/file
>>
>> What should happen is, the guest will consider share/p1/file and share/p2/file to be the same file, and since we are using the cache it will not reopen it. We intended to write to partition 2, but we just wrote to partition 1. This is just one example on how the guest may rely on qid paths being unique.
>>
>> In the use case of containers where we commonly have a few containers per VM, all based on similar images, these kind of qid path collisions are very common and they seem to cause all kinds of funny behavior (sometimes very subtle).
>>
>> To avoid this situation, the device id of a file needs to be also taken as input for generating a qid path. Unfortunately, the size of both inode nr + device id together would be 96 bits, while we have only 64 bits for the qid path, so we can't just append them and call it a day :(
>>
>> We have thought of a few approaches, but we would definitely like to hear what the upstream maintainers and community think:
>>
>> * Full fix: Change the 9p protocol
>>
>> We would need to support a longer qid path, based on a virtio feature flag. This would take reworking of host and guest parts of virtio-9p, so both QEMU and Linux for most users.
>>
> 
> I agree for a longer qid path, but we shouldn't tie it to a virtio flag since
> 9p is transport agnostic. And it happens to be used with a variety of transports.
> QEMU has both virtio-9p and a Xen backend for example.
> 
>> * Fallback and/or interim solutions
>>
>> A virtio feature flag may be refused by the guest, so we think we still need to make collisions less likely even with 64 bit paths. E.g.
> 
> In all cases, we would need a fallback solution to support current
> guest setups. Also there are several 9p server implementations out
> there (ganesha, diod, kvmtool) that are currently used with linux
> clients... it will take some time to get everyone in sync :-\
> 
>> 1. XOR the device id with inode nr to produce the qid path (we attach a proof of concept patch)
> 
> Hmm... this would still allow collisions. Not good for fallback.
> 
>> 2. 64 bit hash of device id and inode nr
> 
> Same here.
> 
>> 3. other ideas, such as allocating new qid paths and keep a look up table of some sorts (possibly too expensive)
>>
> 
> This would be acceptable for fallback.

Maybe we can use the QEMU hash table (https://github.com/qemu/qemu/blob/master/util/qht.c), but I wonder whether it scales to millions of entries. Do you think it is appropriate in this case?

I was thinking about how to implement something like this without having to maintain millions of entries... One option we could consider is to split the bits into a directly-mapped part and a lookup part. For example:

Inode =
[ 10 first bits ] + [ 54 lowest bits ]

A hash table maps [ inode 10 first bits ] + [ device id ] => [ 10 bit prefix ]
The prefix is uniquely allocated for each input.

Qid path = 
[ 10 bit prefix ] + [ inode 54 lowest bits ]

Since inodes are not completely random, and we usually have a handful of device IDs, we get a much smaller number of entries to track in the hash table.

So what this would give:
(1)	It would be faster and take less memory than mapping the full inode_nr, dev_id tuple to unique qid paths
(2)	It is guaranteed not to run out of bits as long as inode numbers stay below the lowest 54 bits and we have fewer than 1024 devices
(3)	Beyond that limit, there is a chance we run out of bits to allocate new qid paths, but we can detect this and refuse to serve the offending files instead of allowing a collision

We could tweak the prefix size to match the scenarios we consider more likely, but I think something around 10-16 bits sounds reasonable. What do you think?

> 
>> With our proof of concept patch, the issues caused by qid path collisions go away, so it can be seen as an interim solution of sorts. However, the chance of collisions is not eliminated, we are just replacing the current strategy, which is almost guaranteed to cause collisions in certain use cases, with one that makes them more rare. We think that a virtio feature flag for longer qid paths is the only way to eliminate these issues completely.
>>
>> This is the extent that we were able to analyze the issue from our side, we would like to hear if there are some better ideas, or which approach would be more appropriate for upstream.
>>
>> Best regards,
>>
> 
> Cheers,
> 
> --
> Greg
> 

-- 
Antonios Motakis
Virtualization Engineer
Huawei Technologies Duesseldorf GmbH
European Research Center
Riesstrasse 25, 80992 München