The AMD-Xilinx platform driver does not support the IOMMU or recovery
mechanism yet. Disable both features in the platform driver.
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---
drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index a51523456c6e..0ffd26a47685 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
 
 	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
 
+	r5_rproc->recovery_disabled = true;
+	r5_rproc->has_iommu = false;
 	r5_rproc->auto_boot = false;
 	r5_core = r5_rproc->priv;
 	r5_core->dev = cdev;
--
2.34.1
Good morning,

On Wed, Jul 16, 2025 at 02:30:47PM -0700, Tanmay Shah wrote:
> The AMD-Xilinx platform driver does not support the IOMMU or recovery
> mechanism yet. Disable both features in the platform driver.
>
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index a51523456c6e..0ffd26a47685 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>
>  	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
>
> +	r5_rproc->recovery_disabled = true;

If recovery is not supported, and it is set explicitly here, does it mean the
present upstream code is broken? And if it is broken, how was this tested in
the first place?

> +	r5_rproc->has_iommu = false;
>  	r5_rproc->auto_boot = false;
>  	r5_core = r5_rproc->priv;
>  	r5_core->dev = cdev;
> --
> 2.34.1
>
On 7/21/25 10:24 AM, Mathieu Poirier wrote:
> Good morning,
>
> On Wed, Jul 16, 2025 at 02:30:47PM -0700, Tanmay Shah wrote:
>> The AMD-Xilinx platform driver does not support the IOMMU or recovery
>> mechanism yet. Disable both features in the platform driver.
>>
>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>> ---
>>  drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> index a51523456c6e..0ffd26a47685 100644
>> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
>> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> @@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>>
>>  	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
>>
>> +	r5_rproc->recovery_disabled = true;
>
> If recovery is not supported, and it is set explicitly here, does it mean the
> present upstream code is broken? And if it is broken, how was this tested in
> the first place?

I think the upstream code can be improved to change "recovery_disabled" to
"recovery_enabled", set it in each platform driver that supports the
feature, and keep it disabled by default.

When upstreaming the base driver I wasn't aware that remoteproc had a
recovery feature until recently, when I started working on it. I was too
focused on the base driver features, which is how I missed adding
recovery_disabled and testing it.

That is why I think it is better to replace "recovery_disabled" with
"recovery_enabled" and keep the recovery feature disabled by default, the
same as "has_iommu", so platform drivers enable it only if they actually
support it.

I will add the recovery feature for the xlnx platform later, but until
then it is better to disable it in the platform driver.

Thanks,
Tanmay

>
>> +	r5_rproc->has_iommu = false;
>>  	r5_rproc->auto_boot = false;
>>  	r5_core = r5_rproc->priv;
>>  	r5_core->dev = cdev;
>> --
>> 2.34.1
>>
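For readers skimming the thread: today the remoteproc core only skips
recovery when a platform driver sets recovery_disabled. A rough,
hypothetical sketch of the opt-in polarity proposed above (the
recovery_enabled field does not exist in mainline, and the handler below is
heavily simplified from rproc_crash_handler_work() in remoteproc_core.c):

/* Hypothetical sketch: "recovery_enabled" is the proposed opt-in flag,
 * not an existing struct rproc member. */
static void rproc_crash_handler_work(struct work_struct *work)
{
	struct rproc *rproc = container_of(work, struct rproc, crash_handler);

	/* ... mark the remote processor as crashed, take a coredump, etc. ... */

	/* mainline today checks: if (!rproc->recovery_disabled) */
	if (rproc->recovery_enabled)
		rproc_trigger_recovery(rproc);
}

Under that scheme each platform driver that has implemented and tested
recovery would set the flag at probe time, and everything else would stay
recovery-off by default.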
On Mon, Jul 21, 2025 at 11:37:23AM -0500, Tanmay Shah wrote:
>
>
> On 7/21/25 10:24 AM, Mathieu Poirier wrote:
> > Good morning,
> >
> > On Wed, Jul 16, 2025 at 02:30:47PM -0700, Tanmay Shah wrote:
> > > The AMD-Xilinx platform driver does not support the IOMMU or recovery
> > > mechanism yet. Disable both features in the platform driver.
> > >
> > > Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> > > ---
> > >  drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > index a51523456c6e..0ffd26a47685 100644
> > > --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> > > @@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> > >  	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
> > > +	r5_rproc->recovery_disabled = true;
> >
> > If recovery is not supported, and it is set explicitly here, does it mean the
> > present upstream code is broken? And if it is broken, how was this tested in
> > the first place?
>
> I think the upstream code can be improved to change "recovery_disabled" to
> "recovery_enabled", set it in each platform driver that supports the
> feature, and keep it disabled by default.
>
> When upstreaming the base driver I wasn't aware that remoteproc had a
> recovery feature until recently, when I started working on it. I was too
> focused on the base driver features, which is how I missed adding
> recovery_disabled and testing it.
>
> That is why I think it is better to replace "recovery_disabled" with
> "recovery_enabled" and keep the recovery feature disabled by default, the
> same as "has_iommu", so platform drivers enable it only if they actually
> support it.
>
> I will add the recovery feature for the xlnx platform later, but until
> then it is better to disable it in the platform driver.
>

I have applied this set.

> Thanks,
> Tanmay
>
> >
> > > +	r5_rproc->has_iommu = false;
> > >  	r5_rproc->auto_boot = false;
> > >  	r5_core = r5_rproc->priv;
> > >  	r5_core->dev = cdev;
> > > --
> > > 2.34.1
> > >
On Wed, Jul 16, 2025 at 02:30:47PM -0700, Tanmay Shah wrote:
>The AMD-Xilinx platform driver does not support the IOMMU or recovery
>mechanism yet. Disable both features in the platform driver.
>
>Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>---
> drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
>index a51523456c6e..0ffd26a47685 100644
>--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
>+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
>@@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>
> 	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
>
>+	r5_rproc->recovery_disabled = true;

This does not block sysfs writes, if my understanding is correct:
recovery_store() does not do any check, so even if you set it to true
here, user space could still write to sysfs and set it back to false.

>+	r5_rproc->has_iommu = false;

The default value should already be false. Is there a need to set it to
false?

Regards,
Peng

> 	r5_rproc->auto_boot = false;
> 	r5_core = r5_rproc->priv;
> 	r5_core->dev = cdev;
>--
>2.34.1
>
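For context on the sysfs point: the recovery attribute handler in
drivers/remoteproc/remoteproc_sysfs.c writes rproc->recovery_disabled
directly, without asking the platform driver. A simplified paraphrase of
the handler follows; details may differ between kernel versions.

/* Simplified paraphrase of recovery_store(); see remoteproc_sysfs.c for
 * the authoritative version. */
static ssize_t recovery_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t count)
{
	struct rproc *rproc = to_rproc(dev);

	if (sysfs_streq(buf, "enabled")) {
		/* overrides whatever the platform driver set at probe time */
		rproc->recovery_disabled = false;
		rproc_trigger_recovery(rproc);
	} else if (sysfs_streq(buf, "disabled")) {
		rproc->recovery_disabled = true;
	} else if (sysfs_streq(buf, "recover")) {
		rproc_trigger_recovery(rproc);
	} else {
		return -EINVAL;
	}

	return count;
}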
On Mon, Jul 21, 2025 at 02:57:24PM +0800, Peng Fan wrote:
> On Wed, Jul 16, 2025 at 02:30:47PM -0700, Tanmay Shah wrote:
> >The AMD-Xilinx platform driver does not support the IOMMU or recovery
> >mechanism yet. Disable both features in the platform driver.
> >
> >Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> >---
> > drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> >diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> >index a51523456c6e..0ffd26a47685 100644
> >--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> >+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> >@@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
> >
> > 	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
> >
> >+	r5_rproc->recovery_disabled = true;
>
> This does not block sysfs writes, if my understanding is correct:
> recovery_store() does not do any check, so even if you set it to true
> here, user space could still write to sysfs and set it back to false.

That is the case for all drivers and not specific to this one.

>
> >+	r5_rproc->has_iommu = false;
>
> The default value should already be false. Is there a need to set it to
> false?

I never mind seeing things set explicitly.

>
> Regards,
> Peng
>
> > 	r5_rproc->auto_boot = false;
> > 	r5_core = r5_rproc->priv;
> > 	r5_core->dev = cdev;
> >--
> >2.34.1
> >
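On the default value: struct rproc is zero-allocated by the core, so
boolean members such as has_iommu start out false unless a platform driver
sets them, which is why the assignment in this patch documents intent
rather than changing behaviour. Roughly, paraphrased from rproc_alloc() in
remoteproc_core.c (exact allocation details vary by kernel version):

	/* the rproc structure comes back zeroed, so has_iommu,
	 * recovery_disabled, auto_boot, ... all default to false */
	rproc = kzalloc(sizeof(struct rproc) + len, GFP_KERNEL);
	if (!rproc)
		return NULL;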
On 7/21/25 10:26 AM, Mathieu Poirier wrote:
> On Mon, Jul 21, 2025 at 02:57:24PM +0800, Peng Fan wrote:
>> On Wed, Jul 16, 2025 at 02:30:47PM -0700, Tanmay Shah wrote:
>>> The AMD-Xilinx platform driver does not support the IOMMU or recovery
>>> mechanism yet. Disable both features in the platform driver.
>>>
>>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>>> ---
>>>  drivers/remoteproc/xlnx_r5_remoteproc.c | 2 ++
>>>  1 file changed, 2 insertions(+)
>>>
>>> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
>>> index a51523456c6e..0ffd26a47685 100644
>>> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
>>> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
>>> @@ -938,6 +938,8 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>>>
>>>  	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
>>>
>>> +	r5_rproc->recovery_disabled = true;
>>
>> This does not block sysfs writes, if my understanding is correct:
>> recovery_store() does not do any check, so even if you set it to true
>> here, user space could still write to sysfs and set it back to false.
>
> That is the case for all drivers and not specific to this one.

Yup. Maybe we can make the recovery node read-only, or introduce a new
flag in the remoteproc structure, set by the platform driver, that
decides whether recovery can be controlled via sysfs. But that's a
separate discussion and patch.

>
>>
>>> +	r5_rproc->has_iommu = false;
>>
>> The default value should already be false. Is there a need to set it to
>> false?
>
> I never mind seeing things set explicitly.
>
>>
>> Regards,
>> Peng
>>
>>> 	r5_rproc->auto_boot = false;
>>> 	r5_core = r5_rproc->priv;
>>> 	r5_core->dev = cdev;
>>> --
>>> 2.34.1
>>>
>
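Purely as an illustration of the read-only idea floated above, one possible
shape (the recovery_sysfs_locked field is hypothetical and does not exist
in struct rproc):

/* Hypothetical: a platform driver that cannot recover yet would set
 * rproc->recovery_sysfs_locked = true at probe time, and the sysfs
 * handler would refuse to override it. */
static ssize_t recovery_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t count)
{
	struct rproc *rproc = to_rproc(dev);

	if (rproc->recovery_sysfs_locked)	/* hypothetical field */
		return -EPERM;

	/* ... existing "enabled" / "disabled" / "recover" handling ... */

	return count;
}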