On 10/28/21 9:32 PM, Richard Henderson wrote:
> The following changes since commit c52d69e7dbaaed0ffdef8125e79218672c30161d:
>
> Merge remote-tracking branch 'remotes/cschoenebeck/tags/pull-9p-20211027' into staging (2021-10-27 11:45:18 -0700)
>
> are available in the Git repository at:
>
> https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20211028
>
> for you to fetch changes up to efd629fb21e2ff6a8f62642d9ed7a23dfee4d320:
>
> softmmu: fix for "after access" watchpoints (2021-10-28 20:55:07 -0700)
>
> ----------------------------------------------------------------
> Improvements to qemu/int128
> Fixes for 128/64 division.
> Cleanup tcg/optimize.c
> Optimize redundant sign extensions
>
> ----------------------------------------------------------------
> Frédéric Pétrot (1):
>       qemu/int128: Add int128_{not,xor}
>
> Luis Pires (4):
>       host-utils: move checks out of divu128/divs128
>       host-utils: move udiv_qrnnd() to host-utils
>       host-utils: add 128-bit quotient support to divu128/divs128
>       host-utils: add unit tests for divu128/divs128
>
> Pavel Dovgalyuk (3):
>       softmmu: fix watchpoint processing in icount mode
>       softmmu: remove useless condition in watchpoint check
>       softmmu: fix for "after access" watchpoints
>
> Richard Henderson (52):
>       tcg/optimize: Rename "mask" to "z_mask"
>       tcg/optimize: Split out OptContext
>       tcg/optimize: Remove do_default label
>       tcg/optimize: Change tcg_opt_gen_{mov,movi} interface
>       tcg/optimize: Move prev_mb into OptContext
>       tcg/optimize: Split out init_arguments
>       tcg/optimize: Split out copy_propagate
>       tcg/optimize: Split out fold_call
>       tcg/optimize: Drop nb_oargs, nb_iargs locals
>       tcg/optimize: Change fail return for do_constant_folding_cond*
>       tcg/optimize: Return true from tcg_opt_gen_{mov,movi}
>       tcg/optimize: Split out finish_folding
>       tcg/optimize: Use a boolean to avoid a mass of continues
>       tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
>       tcg/optimize: Split out fold_const{1,2}
>       tcg/optimize: Split out fold_setcond2
>       tcg/optimize: Split out fold_brcond2
>       tcg/optimize: Split out fold_brcond
>       tcg/optimize: Split out fold_setcond
>       tcg/optimize: Split out fold_mulu2_i32
>       tcg/optimize: Split out fold_addsub2_i32
>       tcg/optimize: Split out fold_movcond
>       tcg/optimize: Split out fold_extract2
>       tcg/optimize: Split out fold_extract, fold_sextract
>       tcg/optimize: Split out fold_deposit
>       tcg/optimize: Split out fold_count_zeros
>       tcg/optimize: Split out fold_bswap
>       tcg/optimize: Split out fold_dup, fold_dup2
>       tcg/optimize: Split out fold_mov
>       tcg/optimize: Split out fold_xx_to_i
>       tcg/optimize: Split out fold_xx_to_x
>       tcg/optimize: Split out fold_xi_to_i
>       tcg/optimize: Add type to OptContext
>       tcg/optimize: Split out fold_to_not
>       tcg/optimize: Split out fold_sub_to_neg
>       tcg/optimize: Split out fold_xi_to_x
>       tcg/optimize: Split out fold_ix_to_i
>       tcg/optimize: Split out fold_masks
>       tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
>       tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
>       tcg/optimize: Sink commutative operand swapping into fold functions
>       tcg: Extend call args using the correct opcodes
>       tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
>       tcg/optimize: Use fold_xx_to_i for orc
>       tcg/optimize: Use fold_xi_to_x for mul
>       tcg/optimize: Use fold_xi_to_x for div
>       tcg/optimize: Use fold_xx_to_i for rem
>       tcg/optimize: Optimize sign extensions
>       tcg/optimize: Propagate sign info for logical operations
>       tcg/optimize: Propagate sign info for setcond
>       tcg/optimize: Propagate sign info for bit counting
>       tcg/optimize: Propagate sign info for shifting
>
>  include/fpu/softfloat-macros.h |   82 --
>  include/hw/clock.h             |    5 +-
>  include/qemu/host-utils.h      |  121 +-
>  include/qemu/int128.h          |   20 +
>  softmmu/physmem.c              |   41 +-
>  target/ppc/int_helper.c        |   23 +-
>  tcg/optimize.c                 | 2644 ++++++++++++++++++++++++----------------
>  tcg/tcg.c                      |    6 +-
>  tests/unit/test-div128.c       |  197 +++
>  util/host-utils.c              |  147 ++-
>  tests/unit/meson.build         |    1 +
>  11 files changed, 2075 insertions(+), 1212 deletions(-)
>  create mode 100644 tests/unit/test-div128.c
Applied, thanks.
r~