It's not clear to me why the tool spotted them now and not before, but
this has got to be a subtle side effect of introducing the tiny wrapper
stubs. Anyway, the "fixes" (perhaps rather workarounds) are pretty
straightforward:

1: adjust some shadow_set_l<N>e() callers
2: adjust the 2-level case of SHADOW_FOREACH_L2E()

Jan
On 13/10/2021 16:36, Jan Beulich wrote:
> It's not clear to me why the tool spotted them now and not before,

Several reasons.

The Coverity backend is a software product just like everything else.
IIRC, it releases quarterly.

"If something's free, then you are the product". The value of offering
free scanning of open source codebases comes from us (the free users)
integrating a massive corpus of code into Coverity's system, upon which
they can evaluate the effectiveness of new heuristics.


Second, and far more likely in this case, "x86/mm: avoid building
multiple .o from a single .c file". Coverity has always choked on that
in Xen, because its intermediate database is keyed on source file, with
the latest scan taking precedence, so we were only seeing the 4-level
case previously.


And to also answer your question from patch 1 here, there are upper
time and complexity bounds on all analysis, because scanning is an
exponential problem in the size of the source file. I don't know
exactly where the cutoffs are, and I fear that some of our larger files
never have their later functions looked at.

~Andrew
On 13.10.2021 18:10, Andrew Cooper wrote:
> On 13/10/2021 16:36, Jan Beulich wrote:
>> It's not clear to me why the tool spotted them now and not before,
>
> Several reasons.
>
> The Coverity backend is a software product just like everything else.
> IIRC, it releases quarterly.
>
> "If something's free, then you are the product". The value of offering
> free scanning of open source codebases comes from us (the free users)
> integrating a massive corpus of code into Coverity's system, upon which
> they can evaluate the effectiveness of new heuristics.
>
>
> Second, and far more likely in this case, "x86/mm: avoid building
> multiple .o from a single .c file". Coverity has always choked on that
> in Xen, because its intermediate database is keyed on source file, with
> the latest scan taking precedence, so we were only seeing the 4-level
> case previously.
>
>
> And to also answer your question from patch 1 here, there are upper
> time and complexity bounds on all analysis, because scanning is an
> exponential problem in the size of the source file. I don't know
> exactly where the cutoffs are, and I fear that some of our larger
> files never have their later functions looked at.

Thanks for the explanations. I have to admit that I would find it
helpful if the tool distinguished issues newly found merely because the
code previously wasn't scanned from ones that were truly introduced
anew.

For patch 1 here this might mean that the report was previously put off
when reported against the 4-level case; I think it shouldn't have been
ignored, but opinions may diverge, and hence there might be a reason why
patch 1 isn't wanted then. Patch 2, otoh, doesn't have a 4-level
equivalent, so is likely to be wanted.

Unfortunately your reply didn't include an ack, a nak, or at least a
vague indication towards either, so I don't really know what (if
anything) it means for the actual patches.

Jan