On 02/20/2017 09:11 PM, Nikunj A Dadhania wrote:
> For 64-bit mode, use the compute ca32 routine. For 32-bit mode, CA and
> CA32 will have the same value.
>
> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
> ---
> target/ppc/translate.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index 77045be..dd413de 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -1389,6 +1389,7 @@ static inline void gen_op_arith_subf(DisasContext *ctx, TCGv ret, TCGv arg1,
>              tcg_temp_free(t1);
>              tcg_gen_shri_tl(cpu_ca, cpu_ca, 32);    /* extract bit 32 */
>              tcg_gen_andi_tl(cpu_ca, cpu_ca, 1);
> +            tcg_gen_mov_tl(cpu_ca32, cpu_ca);
>          } else {
>              if (add_ca) {
>                  TCGv zero, inv1 = tcg_temp_new();
> @@ -1402,6 +1403,7 @@ static inline void gen_op_arith_subf(DisasContext *ctx, TCGv ret, TCGv arg1,
>                  tcg_gen_setcond_tl(TCG_COND_GEU, cpu_ca, arg2, arg1);
>                  tcg_gen_sub_tl(t0, arg2, arg1);
>              }
> +            gen_op_arith_compute_ca32(ctx, arg1, arg2, add_ca, 1);
>          }
Ah, I see what you wanted with the previous patch. However, you won't want to
put this here once you fix the CA32 computation as I described, because for
the add_ca case you'll want to pass inv1 in as arg1 so that you don't have to
re-invert it.
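
For concreteness, something along these lines is what I have in mind -- a
rough sketch only, since the final argument order of
gen_op_arith_compute_ca32 and the exact temp lifetimes depend on how you
rework the previous patch (the "sub" flag flipping to 0 assumes the helper
then treats the operation as inv1 + arg2 [+ ca]):

```diff
         } else {
             if (add_ca) {
                 TCGv zero, inv1 = tcg_temp_new();
                 tcg_gen_not_tl(inv1, arg1);
                 ...
+                /* inv1 already holds ~arg1; pass it in directly instead
+                 * of letting the helper invert arg1 a second time. */
+                gen_op_arith_compute_ca32(ctx, inv1, arg2, add_ca, 0);
                 tcg_temp_free(inv1);
             } else {
                 tcg_gen_setcond_tl(TCG_COND_GEU, cpu_ca, arg2, arg1);
                 tcg_gen_sub_tl(t0, arg2, arg1);
+                gen_op_arith_compute_ca32(ctx, arg1, arg2, add_ca, 1);
             }
         }
```

That is, move the call into each arm of the if/else rather than after it, so
each arm can hand the helper the operands it already has in the right form.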
r~