From: Stefano Stabellini <stefanos@xilinx.com>
On arm64 swiotlb is often (not always) already initialized by mem_init.
We don't want to initialize it twice, which would trigger a second
memory allocation. Moreover, the second memory pool is typically made of
high pages and ends up replacing the original memory pool of low pages.
As a side effect of this change, it is possible to have low pages in
swiotlb-xen on arm64.
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
Changes in v2:
- improve commit message
- don't add nested ifs
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 877baf2..8a3cdd1 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -211,6 +211,15 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 retry:
 	bytes = xen_set_nslabs(xen_io_tlb_nslabs);
 	order = get_order(xen_io_tlb_nslabs << IO_TLB_SHIFT);
+
+	/*
+	 * IO TLB memory already allocated. Just use it.
+	 */
+	if (io_tlb_start != 0) {
+		xen_io_tlb_start = phys_to_virt(io_tlb_start);
+		goto end;
+	}
+
 	/*
 	 * Get IO TLB memory from any location.
 	 */
@@ -240,7 +249,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
-	xen_io_tlb_end = xen_io_tlb_start + bytes;
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
@@ -267,6 +275,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	} else
 		rc = swiotlb_late_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs);
 
+end:
+	xen_io_tlb_end = xen_io_tlb_start + bytes;
 	if (!rc)
 		swiotlb_set_max_segment(PAGE_SIZE);
--
2.7.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
On 5/28/19 6:48 PM, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefanos@xilinx.com>
>
> On arm64 swiotlb is often (not always) already initialized by mem_init.
> We don't want to initialize it twice, which would trigger a second
> memory allocation. Moreover, the second memory pool is typically made of
> high pages and ends up replacing the original memory pool of low pages.
> As a side effect of this change, it is possible to have low pages in
> swiotlb-xen on arm64.
>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Has this been tested on x86?

-boris
On Tue, 28 May 2019, Boris Ostrovsky wrote:
> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefanos@xilinx.com>
> >
> > On arm64 swiotlb is often (not always) already initialized by mem_init.
> > We don't want to initialize it twice, which would trigger a second
> > memory allocation. Moreover, the second memory pool is typically made of
> > high pages and ends up replacing the original memory pool of low pages.
> > As a side effect of this change, it is possible to have low pages in
> > swiotlb-xen on arm64.
> >
> > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>
> Has this been tested on x86?

Yes, I managed to test it using QEMU. There are no effects on x86, as
the check io_tlb_start != 0 returns false.
On 6/3/19 2:25 PM, Stefano Stabellini wrote:
> On Tue, 28 May 2019, Boris Ostrovsky wrote:
>> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
>>> From: Stefano Stabellini <stefanos@xilinx.com>
>>>
>>> On arm64 swiotlb is often (not always) already initialized by mem_init.
>>> We don't want to initialize it twice, which would trigger a second
>>> memory allocation. Moreover, the second memory pool is typically made of
>>> high pages and ends up replacing the original memory pool of low pages.
>>> As a side effect of this change, it is possible to have low pages in
>>> swiotlb-xen on arm64.
>>>
>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>> Has this been tested on x86?
> Yes, I managed to test it using QEMU. There are no effects on x86, as
> the check io_tlb_start != 0 returns false.

I wonder though whether this is always the case. When we are called
from pci_xen_swiotlb_init_late() for example.

-boris
On Mon, 3 Jun 2019, Boris Ostrovsky wrote:
> On 6/3/19 2:25 PM, Stefano Stabellini wrote:
> > On Tue, 28 May 2019, Boris Ostrovsky wrote:
> >> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
> >>> From: Stefano Stabellini <stefanos@xilinx.com>
> >>>
> >>> On arm64 swiotlb is often (not always) already initialized by mem_init.
> >>> We don't want to initialize it twice, which would trigger a second
> >>> memory allocation. Moreover, the second memory pool is typically made of
> >>> high pages and ends up replacing the original memory pool of low pages.
> >>> As a side effect of this change, it is possible to have low pages in
> >>> swiotlb-xen on arm64.
> >>>
> >>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> >> Has this been tested on x86?
> > Yes, I managed to test it using QEMU. There are no effects on x86, as
> > the check io_tlb_start != 0 returns false.
>
> I wonder though whether this is always the case. When we are called
> from pci_xen_swiotlb_init_late() for example.
In that case, pci_xen_swiotlb_init_late() is called by
pcifront_connect_and_init_dma, which does:
	if (!err && !swiotlb_nr_tbl()) {
		err = pci_xen_swiotlb_init_late();
		if (err)
			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
	}
pci_xen_swiotlb_init_late() is only called when swiotlb_nr_tbl() returns
0. If swiotlb_nr_tbl() returns 0, certainly the swiotlb has not been
allocated yet, and the io_tlb_start != 0 check at the beginning of
xen_swiotlb_init will also fail. The code will take the normal
route, same as today. In short, there should be no effects on x86.
On 6/4/19 12:51 PM, Stefano Stabellini wrote:
> On Mon, 3 Jun 2019, Boris Ostrovsky wrote:
>> On 6/3/19 2:25 PM, Stefano Stabellini wrote:
>>> On Tue, 28 May 2019, Boris Ostrovsky wrote:
>>>> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
>>>>> From: Stefano Stabellini <stefanos@xilinx.com>
>>>>>
>>>>> On arm64 swiotlb is often (not always) already initialized by mem_init.
>>>>> We don't want to initialize it twice, which would trigger a second
>>>>> memory allocation. Moreover, the second memory pool is typically made of
>>>>> high pages and ends up replacing the original memory pool of low pages.
>>>>> As a side effect of this change, it is possible to have low pages in
>>>>> swiotlb-xen on arm64.
>>>>>
>>>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>>> Has this been tested on x86?
>>> Yes, I managed to test it using QEMU. There are no effects on x86, as
>>> the check io_tlb_start != 0 returns false.
>> I wonder though whether this is always the case. When we are called
>> from pci_xen_swiotlb_init_late() for example.
> In that case, pci_xen_swiotlb_init_late() is called by
> pcifront_connect_and_init_dma, which does:
>
> if (!err && !swiotlb_nr_tbl()) {
> err = pci_xen_swiotlb_init_late();
> if (err)
> dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> }
>
> pci_xen_swiotlb_init_late() is only called when swiotlb_nr_tbl() returns
> 0. If swiotlb_nr_tbl() returns 0, certainly the swiotlb has not been
> allocated yet, and the io_tlb_start != 0 check at the beginning of
> xen_swiotlb_init will also fail. The code will take the normal
> route, same as today. In short, there should be no effects on x86.
OK, thanks.
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
On Tue, Jun 04, 2019 at 03:41:40PM -0400, Boris Ostrovsky wrote:
> On 6/4/19 12:51 PM, Stefano Stabellini wrote:
> > On Mon, 3 Jun 2019, Boris Ostrovsky wrote:
> >> On 6/3/19 2:25 PM, Stefano Stabellini wrote:
> >>> On Tue, 28 May 2019, Boris Ostrovsky wrote:
> >>>> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
> >>>>> From: Stefano Stabellini <stefanos@xilinx.com>
> >>>>>
> >>>>> On arm64 swiotlb is often (not always) already initialized by mem_init.
> >>>>> We don't want to initialize it twice, which would trigger a second
> >>>>> memory allocation. Moreover, the second memory pool is typically made of
> >>>>> high pages and ends up replacing the original memory pool of low pages.
> >>>>> As a side effect of this change, it is possible to have low pages in
> >>>>> swiotlb-xen on arm64.
> >>>>>
> >>>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> >>>> Has this been tested on x86?
> >>> Yes, I managed to test it using QEMU. There are no effects on x86, as
> >>> the check io_tlb_start != 0 returns false.
> >> I wonder though whether this is always the case. When we are called
> >> from pci_xen_swiotlb_init_late() for example.
> > In that case, pci_xen_swiotlb_init_late() is called by
> > pcifront_connect_and_init_dma, which does:
> >
> > if (!err && !swiotlb_nr_tbl()) {
> > err = pci_xen_swiotlb_init_late();
> > if (err)
> > dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> > }
> >
> > pci_xen_swiotlb_init_late() is only called when swiotlb_nr_tbl() returns
> > 0. If swiotlb_nr_tbl() returns 0, certainly the swiotlb has not been
> > allocated yet, and the io_tlb_start != 0 check at the beginning of
> > xen_swiotlb_init will also fail. The code will take the normal
> > route, same as today. In short, there should be no effects on x86.
>
>
> OK, thanks.
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Pushed in devel/for-linus-5.2 and will eventually move it to stable and push to Linus next week.
Are there any other patches I should pick up?
On 05.06.19 16:13, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 04, 2019 at 03:41:40PM -0400, Boris Ostrovsky wrote:
>> On 6/4/19 12:51 PM, Stefano Stabellini wrote:
>>> On Mon, 3 Jun 2019, Boris Ostrovsky wrote:
>>>> On 6/3/19 2:25 PM, Stefano Stabellini wrote:
>>>>> On Tue, 28 May 2019, Boris Ostrovsky wrote:
>>>>>> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
>>>>>>> From: Stefano Stabellini <stefanos@xilinx.com>
>>>>>>>
>>>>>>> On arm64 swiotlb is often (not always) already initialized by mem_init.
>>>>>>> We don't want to initialize it twice, which would trigger a second
>>>>>>> memory allocation. Moreover, the second memory pool is typically made of
>>>>>>> high pages and ends up replacing the original memory pool of low pages.
>>>>>>> As a side effect of this change, it is possible to have low pages in
>>>>>>> swiotlb-xen on arm64.
>>>>>>>
>>>>>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>>>>> Has this been tested on x86?
>>>>> Yes, I managed to test it using QEMU. There are no effects on x86, as
>>>>> the check io_tlb_start != 0 returns false.
>>>> I wonder though whether this is always the case. When we are called
>>>> from pci_xen_swiotlb_init_late() for example.
>>> In that case, pci_xen_swiotlb_init_late() is called by
>>> pcifront_connect_and_init_dma, which does:
>>>
>>> if (!err && !swiotlb_nr_tbl()) {
>>> err = pci_xen_swiotlb_init_late();
>>> if (err)
>>> dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
>>> }
>>>
>>> pci_xen_swiotlb_init_late() is only called when swiotlb_nr_tbl() returns
>>> 0. If swiotlb_nr_tbl() returns 0, certainly the swiotlb has not been
>>> allocated yet, and the io_tlb_start != 0 check at the beginning of
>>> xen_swiotlb_init will also fail. The code will take the normal
>>> route, same as today. In short, there should be no effects on x86.
>>
>>
>> OK, thanks.
>>
>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
> Pushed in devel/for-linus-5.2 and will eventually move it to stable and push to Linus next-week.
>
> Are there any other patches I should pick up?
>
I think at least the first two patches from my series:
https://patchew.org/Xen/20190529090407.1225-1-jgross@suse.com/
are ready to go in.
Juergen
On Wed, Jun 05, 2019 at 04:24:06PM +0200, Juergen Gross wrote:
> On 05.06.19 16:13, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jun 04, 2019 at 03:41:40PM -0400, Boris Ostrovsky wrote:
> > > On 6/4/19 12:51 PM, Stefano Stabellini wrote:
> > > > On Mon, 3 Jun 2019, Boris Ostrovsky wrote:
> > > > > On 6/3/19 2:25 PM, Stefano Stabellini wrote:
> > > > > > On Tue, 28 May 2019, Boris Ostrovsky wrote:
> > > > > > > On 5/28/19 6:48 PM, Stefano Stabellini wrote:
> > > > > > > > From: Stefano Stabellini <stefanos@xilinx.com>
> > > > > > > >
> > > > > > > > On arm64 swiotlb is often (not always) already initialized by mem_init.
> > > > > > > > We don't want to initialize it twice, which would trigger a second
> > > > > > > > memory allocation. Moreover, the second memory pool is typically made of
> > > > > > > > high pages and ends up replacing the original memory pool of low pages.
> > > > > > > > As a side effect of this change, it is possible to have low pages in
> > > > > > > > swiotlb-xen on arm64.
> > > > > > > >
> > > > > > > > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> > > > > > > Has this been tested on x86?
> > > > > > Yes, I managed to test it using QEMU. There are no effects on x86, as
> > > > > > the check io_tlb_start != 0 returns false.
> > > > > I wonder though whether this is always the case. When we are called
> > > > > from pci_xen_swiotlb_init_late() for example.
> > > > In that case, pci_xen_swiotlb_init_late() is called by
> > > > pcifront_connect_and_init_dma, which does:
> > > >
> > > > if (!err && !swiotlb_nr_tbl()) {
> > > > err = pci_xen_swiotlb_init_late();
> > > > if (err)
> > > > dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> > > > }
> > > >
> > > > pci_xen_swiotlb_init_late() is only called when swiotlb_nr_tbl() returns
> > > > 0. If swiotlb_nr_tbl() returns 0, certainly the swiotlb has not been
> > > > allocated yet, and the io_tlb_start != 0 check at the beginning of
> > > > xen_swiotlb_init will also fail. The code will take the normal
> > > > route, same as today. In short, there should be no effects on x86.
> > >
> > >
> > > OK, thanks.
> > >
> > > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >
> > Pushed in devel/for-linus-5.2 and will eventually move it to stable and push to Linus next-week.
> >
> > Are there any other patches I should pick up?
> >
>
> I think at least the first two patches from my series:
>
> https://patchew.org/Xen/20190529090407.1225-1-jgross@suse.com/
>
> are ready to go in.
#2 patch says:
"> To be symmetric with setting the flag only after having made the region
> contiguous, and to avoid (perhaps just theoretical) races, wouldn't it be
> better to clear the flag before calling xen_destroy_contiguous_region()?
> Even better would be a TestAndClear...() operation.
I like that idea.
"
?
>
>
> Juergen
On 13.06.19 16:23, Konrad Rzeszutek Wilk wrote:
> On Wed, Jun 05, 2019 at 04:24:06PM +0200, Juergen Gross wrote:
>> On 05.06.19 16:13, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Jun 04, 2019 at 03:41:40PM -0400, Boris Ostrovsky wrote:
>>>> On 6/4/19 12:51 PM, Stefano Stabellini wrote:
>>>>> On Mon, 3 Jun 2019, Boris Ostrovsky wrote:
>>>>>> On 6/3/19 2:25 PM, Stefano Stabellini wrote:
>>>>>>> On Tue, 28 May 2019, Boris Ostrovsky wrote:
>>>>>>>> On 5/28/19 6:48 PM, Stefano Stabellini wrote:
>>>>>>>>> From: Stefano Stabellini <stefanos@xilinx.com>
>>>>>>>>>
>>>>>>>>> On arm64 swiotlb is often (not always) already initialized by mem_init.
>>>>>>>>> We don't want to initialize it twice, which would trigger a second
>>>>>>>>> memory allocation. Moreover, the second memory pool is typically made of
>>>>>>>>> high pages and ends up replacing the original memory pool of low pages.
>>>>>>>>> As a side effect of this change, it is possible to have low pages in
>>>>>>>>> swiotlb-xen on arm64.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>>>>>>> Has this been tested on x86?
>>>>>>> Yes, I managed to test it using QEMU. There are no effects on x86, as
>>>>>>> the check io_tlb_start != 0 returns false.
>>>>>> I wonder though whether this is always the case. When we are called
>>>>>> from pci_xen_swiotlb_init_late() for example.
>>>>> In that case, pci_xen_swiotlb_init_late() is called by
>>>>> pcifront_connect_and_init_dma, which does:
>>>>>
>>>>> if (!err && !swiotlb_nr_tbl()) {
>>>>> err = pci_xen_swiotlb_init_late();
>>>>> if (err)
>>>>> dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
>>>>> }
>>>>>
>>>>> pci_xen_swiotlb_init_late() is only called when swiotlb_nr_tbl() returns
>>>>> 0. If swiotlb_nr_tbl() returns 0, certainly the swiotlb has not been
>>>>> allocated yet, and the io_tlb_start != 0 check at the beginning of
>>>>> xen_swiotlb_init will also fail. The code will take the normal
>>>>> route, same as today. In short, there should be no effects on x86.
>>>>
>>>>
>>>> OK, thanks.
>>>>
>>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>> Pushed in devel/for-linus-5.2 and will eventually move it to stable and push to Linus next-week.
>>>
>>> Are there any other patches I should pick up?
>>>
>>
>> I think at least the first two patches from my series:
>>
>> https://patchew.org/Xen/20190529090407.1225-1-jgross@suse.com/
>>
>> are ready to go in.
>
> #2 patch says:
>
> "> To be symmetric with setting the flag only after having made the region
> > contiguous, and to avoid (perhaps just theoretical) races, wouldn't it be
> > better to clear the flag before calling xen_destroy_contiguous_region()?
> > Even better would be a TestAndClear...() operation.
>
> I like that idea.
> "
> ?
I was hoping for a clarification regarding the Xen specific page flag
names before posting V3.
As Christoph didn't react when I posted possible solutions I think I'll
just modify patch 3 according to Jan's comment and post V3.
Juergen
> > #2 patch says:
> >
> > "> To be symmetric with setting the flag only after having made the region
> > > contiguous, and to avoid (perhaps just theoretical) races, wouldn't it be
> > > better to clear the flag before calling xen_destroy_contiguous_region()?
> > > Even better would be a TestAndClear...() operation.
> >
> > I like that idea.
> > "
> > ?
>
> I was hoping for a clarification regarding the Xen specific page flag
> names before posting V3.
>
> As Christoph didn't react when I posted possible solutions I think I'll
> just modify patch 3 according to Jan's comment and post V3.

OK, will await that patchset. Thank you!

BTW, your patch #1 should be upstream now.

>
> Juergen
On Thu, Jun 13, 2019 at 05:00:57PM -0400, Konrad Rzeszutek Wilk wrote:
> > As Christoph didn't react when I posted possible solutions I think I'll
> > just modify patch 3 according to Jan's comment and post V3.
>
> OK, will await that patchset. Thank you!
>
> BTW, your patch #1 should be upstream now.

Still not a big fan of the page flags in the common header, but please
go ahead for now if the mm folks are fine with it.