When placing BARs with sizes smaller than 4K, multiple BARs can end
up mapped to the same guest physical address and thus won't work
correctly.

Align all BAR placements to 4K in hvmloader to prevent such
overlapping.

Note that the guest can still move the BARs around and create such
collisions, and that BARs not filling up a physical page might leak
access to other MMIO regions placed in the same host physical page.

This is however no worse than what's currently done, and hence it
should be considered an improvement over the current state.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/firmware/hvmloader/pci.c  | 4 ++++
 tools/firmware/hvmloader/util.h | 2 ++
 2 files changed, 6 insertions(+)
diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 0b708bf578..c433b34cd6 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -489,6 +489,10 @@ void pci_setup(void)
 
         resource->base = base;
 
+        if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
+             PCI_BASE_ADDRESS_SPACE_MEMORY )
+            resource->base = ROUNDUP(resource->base, PAGE_SIZE);
+
         pci_writel(devfn, bar_reg, bar_data);
         if (using_64bar)
             pci_writel(devfn, bar_reg + 4, bar_data_upper);
diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
index 7bca6418d2..b5554b5844 100644
--- a/tools/firmware/hvmloader/util.h
+++ b/tools/firmware/hvmloader/util.h
@@ -51,6 +51,8 @@ void __bug(char *file, int line) __attribute__((noreturn));
 #define MB(mb) (mb##ULL << 20)
 #define GB(gb) (gb##ULL << 30)
 
+#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))
+
 static inline int test_bit(unsigned int b, const void *p)
 {
     return !!(((const uint8_t *)p)[b>>3] & (1u<<(b&7)));
--
2.24.1
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
On Tue, Jan 14, 2020 at 1:16 PM Roger Pau Monne <roger.pau@citrix.com> wrote:
>
> When placing BARs with sizes smaller than 4K multiple BARs can end
> up mapped to the same guest physical address, and thus won't work
> correctly.
>
> Align all BARs placement to 4K in hvmloader to prevent such
> overlapping.
>
> Note that the guest can still move the BARs around and create this
> collisions, and that BARs not filling up a physical page might leak
> access to other MMIO regions placed in the same host physical page.
>
> This is however no worse than what's currently done, and hence should
> be considered an improvement over the current state.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Tested-by: Jason Andryuk <jandryuk@gmail.com>
On 14.01.2020 19:13, Roger Pau Monne wrote:
> When placing BARs with sizes smaller than 4K multiple BARs can end
> up mapped to the same guest physical address, and thus won't work
> correctly.

BARs of the same device can very well share a page in the common case,
can't they? (There may be reasons, like things getting too complicated,
for not actually honoring this, but then the description should say so
imo.)

> Align all BARs placement to 4K in hvmloader to prevent such
> overlapping.
>
> Note that the guest can still move the BARs around and create this
> collisions, and that BARs not filling up a physical page might leak
> access to other MMIO regions placed in the same host physical page.

Throughout the description and in the title I think you would better
say "memory BAR".

> --- a/tools/firmware/hvmloader/pci.c
> +++ b/tools/firmware/hvmloader/pci.c
> @@ -489,6 +489,10 @@ void pci_setup(void)
>
>         resource->base = base;
>
> +        if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
> +             PCI_BASE_ADDRESS_SPACE_MEMORY )
> +            resource->base = ROUNDUP(resource->base, PAGE_SIZE);

Doesn't this need adjustments to the calculation of the MMIO hole size
higher up in the function? Also, as per a few lines up, perhaps
if ( resource == &mem_resource )?

Jan