[PATCH v2 00/48] tcg: optimize redundant sign extensions
Posted by Richard Henderson 2 years, 6 months ago
Currently, we have support for optimizing redundant zero extensions.
I think that was done with x86 and aarch64 in mind, both of which
zero-extend the results of all 32-bit operations into the 64-bit
register.

But targets like Alpha, MIPS, and RISC-V do sign-extensions instead.
The last 5 patches address this.
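
As a rough illustration (register names and the exact opcode sequence
below are illustrative, not an actual dump): on a riscv64 guest, ADDW
already sign-extends its 32-bit result, so the ext32s emitted for a
following SEXT.W has an input that is already sign-extended:

	add_i64     a0, a1, a2    # addw
	ext32s_i64  a0, a0        # addw's 32-bit result, sign-extended
	ext32s_i64  a0, a0        # from sext.w: now provably redundant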

But before that, split the quite massive tcg_optimize function.
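
The rough shape after the split, going by the patch subjects (the body
below is a sketch, not the final code): one small helper per opcode
group, each taking the new OptContext and returning whether it has
fully handled the op, e.g.

static bool fold_exts(OptContext *ctx, TCGOp *op)
{
    /* Constant input: fold the whole op to a movi. */
    if (fold_const1(ctx, op)) {
        return true;
    }
    /* Otherwise leave the op in place; the generic mask/copy
       bookkeeping still runs when we return false. */
    return false;
}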

Changes for v2:
  * Rebase, adjusting MemOpIdx renaming.
  * Apply r-b and some feedback (f4bug).


r~


Richard Henderson (48):
  tcg/optimize: Rename "mask" to "z_mask"
  tcg/optimize: Split out OptContext
  tcg/optimize: Remove do_default label
  tcg/optimize: Change tcg_opt_gen_{mov,movi} interface
  tcg/optimize: Move prev_mb into OptContext
  tcg/optimize: Split out init_arguments
  tcg/optimize: Split out copy_propagate
  tcg/optimize: Split out fold_call
  tcg/optimize: Drop nb_oargs, nb_iargs locals
  tcg/optimize: Change fail return for do_constant_folding_cond*
  tcg/optimize: Return true from tcg_opt_gen_{mov,movi}
  tcg/optimize: Split out finish_folding
  tcg/optimize: Use a boolean to avoid a mass of continues
  tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
  tcg/optimize: Split out fold_const{1,2}
  tcg/optimize: Split out fold_setcond2
  tcg/optimize: Split out fold_brcond2
  tcg/optimize: Split out fold_brcond
  tcg/optimize: Split out fold_setcond
  tcg/optimize: Split out fold_mulu2_i32
  tcg/optimize: Split out fold_addsub2_i32
  tcg/optimize: Split out fold_movcond
  tcg/optimize: Split out fold_extract2
  tcg/optimize: Split out fold_extract, fold_sextract
  tcg/optimize: Split out fold_deposit
  tcg/optimize: Split out fold_count_zeros
  tcg/optimize: Split out fold_bswap
  tcg/optimize: Split out fold_dup, fold_dup2
  tcg/optimize: Split out fold_mov
  tcg/optimize: Split out fold_xx_to_i
  tcg/optimize: Split out fold_xx_to_x
  tcg/optimize: Split out fold_xi_to_i
  tcg/optimize: Add type to OptContext
  tcg/optimize: Split out fold_to_not
  tcg/optimize: Split out fold_sub_to_neg
  tcg/optimize: Split out fold_xi_to_x
  tcg/optimize: Split out fold_ix_to_i
  tcg/optimize: Split out fold_masks
  tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
  tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
  tcg/optimize: Sink commutative operand swapping into fold functions
  tcg/optimize: Add more simplifications for orc
  tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
  tcg/optimize: Optimize sign extensions
  tcg/optimize: Propagate sign info for logical operations
  tcg/optimize: Propagate sign info for setcond
  tcg/optimize: Propagate sign info for bit counting
  tcg/optimize: Propagate sign info for shifting

 tcg/optimize.c | 2602 +++++++++++++++++++++++++++++-------------------
 1 file changed, 1585 insertions(+), 1017 deletions(-)

-- 
2.25.1


Re: [PATCH v2 00/48] tcg: optimize redundant sign extensions
Posted by Richard Henderson 2 years, 6 months ago
Ping.

On 10/7/21 12:54 PM, Richard Henderson wrote:
> Currently, we have support for optimizing redundant zero extensions.
> I think that was done with x86 and aarch64 in mind, both of which
> zero-extend the results of all 32-bit operations into the 64-bit
> register.
> 
> But targets like Alpha, MIPS, and RISC-V do sign-extensions instead.
> The last 5 patches address this.
> 
> But before that, split the quite massive tcg_optimize function.
> 
> Changes for v2:
>    * Rebase, adjusting MemOpIdx renaming.
>    * Apply r-b and some feedback (f4bug).
> 
> 
> r~
> 
> 
> Richard Henderson (48):
>    tcg/optimize: Rename "mask" to "z_mask"
>    tcg/optimize: Split out OptContext
>    tcg/optimize: Remove do_default label
>    tcg/optimize: Change tcg_opt_gen_{mov,movi} interface
>    tcg/optimize: Move prev_mb into OptContext
>    tcg/optimize: Split out init_arguments
>    tcg/optimize: Split out copy_propagate
>    tcg/optimize: Split out fold_call
>    tcg/optimize: Drop nb_oargs, nb_iargs locals
>    tcg/optimize: Change fail return for do_constant_folding_cond*
>    tcg/optimize: Return true from tcg_opt_gen_{mov,movi}
>    tcg/optimize: Split out finish_folding
>    tcg/optimize: Use a boolean to avoid a mass of continues
>    tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
>    tcg/optimize: Split out fold_const{1,2}
>    tcg/optimize: Split out fold_setcond2
>    tcg/optimize: Split out fold_brcond2
>    tcg/optimize: Split out fold_brcond
>    tcg/optimize: Split out fold_setcond
>    tcg/optimize: Split out fold_mulu2_i32
>    tcg/optimize: Split out fold_addsub2_i32
>    tcg/optimize: Split out fold_movcond
>    tcg/optimize: Split out fold_extract2
>    tcg/optimize: Split out fold_extract, fold_sextract
>    tcg/optimize: Split out fold_deposit
>    tcg/optimize: Split out fold_count_zeros
>    tcg/optimize: Split out fold_bswap
>    tcg/optimize: Split out fold_dup, fold_dup2
>    tcg/optimize: Split out fold_mov
>    tcg/optimize: Split out fold_xx_to_i
>    tcg/optimize: Split out fold_xx_to_x
>    tcg/optimize: Split out fold_xi_to_i
>    tcg/optimize: Add type to OptContext
>    tcg/optimize: Split out fold_to_not
>    tcg/optimize: Split out fold_sub_to_neg
>    tcg/optimize: Split out fold_xi_to_x
>    tcg/optimize: Split out fold_ix_to_i
>    tcg/optimize: Split out fold_masks
>    tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
>    tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
>    tcg/optimize: Sink commutative operand swapping into fold functions
>    tcg/optimize: Add more simplifications for orc
>    tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
>    tcg/optimize: Optimize sign extensions
>    tcg/optimize: Propagate sign info for logical operations
>    tcg/optimize: Propagate sign info for setcond
>    tcg/optimize: Propagate sign info for bit counting
>    tcg/optimize: Propagate sign info for shifting
> 
>   tcg/optimize.c | 2602 +++++++++++++++++++++++++++++-------------------
>   1 file changed, 1585 insertions(+), 1017 deletions(-)
> 


Re: [PATCH v2 00/48] tcg: optimize redundant sign extensions
Posted by Alex Bennée 2 years, 6 months ago
Richard Henderson <richard.henderson@linaro.org> writes:

> Currently, we have support for optimizing redundant zero extensions.
> I think that was done with x86 and aarch64 in mind, both of which
> zero-extend the results of all 32-bit operations into the 64-bit
> register.
>
> But targets like Alpha, MIPS, and RISC-V do sign-extensions instead.
> The last 5 patches address this.
>
> But before that, split the quite massive tcg_optimize function.

BTW this reminded me of a discussion I was having on another thread:

  Subject: Re: TCG Floating Point Support (Work in Progress)
  Date: Fri, 01 Oct 2021 09:03:41 +0100
  In-reply-to: <CADc=-s5wJ0cBv9r0rXaOk0Ys77Far7mgXq5B+y4KoNr937cC7A@mail.gmail.com>
  Message-ID: <87y27d5ezt.fsf@linaro.org>

about a test harness for TCG. With the changes over the years, are we
any closer to being able to lift the TCG code into a unit test so we
can add test cases that exercise and validate the optimiser decisions?

-- 
Alex Bennée

Re: [PATCH v2 00/48] tcg: optimize redundant sign extensions
Posted by Richard Henderson 2 years, 6 months ago
On 10/20/21 9:13 AM, Alex Bennée wrote:
> 
> Richard Henderson <richard.henderson@linaro.org> writes:
> 
>> Currently, we have support for optimizing redundant zero extensions.
>> I think that was done with x86 and aarch64 in mind, both of which
>> zero-extend the results of all 32-bit operations into the 64-bit
>> register.
>>
>> But targets like Alpha, MIPS, and RISC-V do sign-extensions instead.
>> The last 5 patches address this.
>>
>> But before that, split the quite massive tcg_optimize function.
> 
> BTW this reminded me of a discussion I was having on another thread:
> 
>    Subject: Re: TCG Floating Point Support (Work in Progress)
>    Date: Fri, 01 Oct 2021 09:03:41 +0100
>    In-reply-to: <CADc=-s5wJ0cBv9r0rXaOk0Ys77Far7mgXq5B+y4KoNr937cC7A@mail.gmail.com>
>    Message-ID: <87y27d5ezt.fsf@linaro.org>
> 
> about a test harness for TCG. With the changes over the years, are we
> any closer to being able to lift the TCG code into a unit test so we
> can add test cases that exercise and validate the optimiser decisions?

Nope.

I'm not even sure true unit testing is worthwhile.
It would require inventing a "tcg front end", parser, etc.

I could imagine, perhaps, something in which we input real asm and look at the optimized 
opcode dump.  E.g. for x86_64,

_start:
	mov	%eax, %ebx
	mov	%ebx, %ecx
	hlt

should contain only one ext32u_i64 opcode.
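
For the sign-extension side of this series, an analogous riscv64 input
might be (illustrative only, untested):

_start:
	addw	a0, a1, a2
	sext.w	a0, a0
	ebreak

where the ext32s_i64 generated for the second instruction should now
fold away, since addw's result is already known to be sign-extended.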


r~