OpenCores
URL https://opencores.org/ocsvn/openrisc/openrisc/trunk

Subversion Repositories openrisc

openrisc/tags/gnu-src/gcc-4.5.1/gcc-4.5.1-or32-1.0rc1/gcc/tree-sra.c - Diff between revs 280 and 338
/* Scalar Replacement of Aggregates (SRA) converts some structure
   references into scalar references, exposing them to the scalar
   optimizers.
   Copyright (C) 2008, 2009, 2010 Free Software Foundation, Inc.
   Contributed by Martin Jambor <mjambor@suse.cz>

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

/* This file implements Scalar Replacement of Aggregates (SRA).  SRA is run
   twice, once in the early stages of compilation (early SRA) and once in the
   late stages (late SRA).  The aim of both is to turn references to scalar
   parts of aggregates into uses of independent scalar variables.

   The two passes are nearly identical; the only difference is that early SRA
   does not scalarize unions which are used as the result in a GIMPLE_RETURN
   statement because together with inlining this can lead to weird type
   conversions.

   Both passes operate in four stages:

   1. The declarations that have properties which make them candidates for
      scalarization are identified in function find_var_candidates().  The
      candidates are stored in candidate_bitmap.

   2. The function body is scanned.  In the process, declarations which are
      used in a manner that prevents their scalarization are removed from the
      candidate bitmap.  More importantly, for every access into an aggregate,
      an access structure (struct access) is created by create_access() and
      stored in a vector associated with the aggregate.  Among other
      information, the aggregate declaration, the offset and size of the access
      and its type are stored in the structure.

      On a related note, assign_link structures are created for every assign
      statement between candidate aggregates and attached to the related
      accesses.

   3. The vectors of accesses are analyzed.  They are first sorted according to
      their offset and size and then scanned for partially overlapping accesses
      (i.e. those which overlap but one is not entirely within another).  Such
      an access disqualifies the whole aggregate from being scalarized.

      If there is no such inhibiting overlap, a representative access structure
      is chosen for every unique combination of offset and size.  Afterwards,
      the pass builds a set of trees from these structures, in which children
      of an access are within their parent (in terms of offset and size).

      Then accesses are propagated whenever possible (i.e. in cases when it
      does not create a partially overlapping access) across assign_links from
      the right hand side to the left hand side.

      Then the set of trees for each declaration is traversed again and those
      accesses which should be replaced by a scalar are identified.

   4. The function is traversed again, and for every reference into an
      aggregate that has some component which is about to be scalarized,
      statements are amended and new statements are created as necessary.
      Finally, if a parameter got scalarized, the scalar replacements are
      initialized with values from respective parameter aggregates.  */
 
 
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "alloc-pool.h"
#include "tm.h"
#include "tree.h"
#include "expr.h"
#include "gimple.h"
#include "cgraph.h"
#include "tree-flow.h"
#include "ipa-prop.h"
#include "diagnostic.h"
#include "statistics.h"
#include "tree-dump.h"
#include "timevar.h"
#include "params.h"
#include "target.h"
#include "flags.h"
#include "tree-inline.h"

/* Enumeration of all aggregate reductions we can do.  */
enum sra_mode { SRA_MODE_EARLY_IPA,   /* early call regularization */
                SRA_MODE_EARLY_INTRA, /* early intraprocedural SRA */
                SRA_MODE_INTRA };     /* late intraprocedural SRA */

/* Global variable describing which aggregate reduction we are performing at
   the moment.  */
static enum sra_mode sra_mode;

struct assign_link;

/* ACCESS represents each access to an aggregate variable (as a whole or a
   part).  It can also represent a group of accesses that refer to exactly the
   same fragment of an aggregate (i.e. those that have exactly the same offset
   and size).  Such representatives for a single aggregate, once determined,
   are linked in a linked list and have the group fields set.

   Moreover, when doing intraprocedural SRA, a tree is built from those
   representatives (by the means of first_child and next_sibling pointers), in
   which all items in a subtree are "within" the root, i.e. their offset is
   greater or equal to offset of the root and offset+size is smaller or equal
   to offset+size of the root.  Children of an access are sorted by offset.

   Note that accesses to parts of vector and complex number types are always
   represented by an access to the whole complex number or a vector.  It is a
   duty of the modifying functions to replace them appropriately.  */

struct access
{
  /* Values returned by `get_ref_base_and_extent' for each component reference.
     If EXPR isn't a component reference, just set `BASE = EXPR', `OFFSET = 0',
     `SIZE = TREE_SIZE (TREE_TYPE (expr))'.  */
  HOST_WIDE_INT offset;
  HOST_WIDE_INT size;
  tree base;

  /* Expression.  It is context dependent so do not use it to create new
     expressions to access the original aggregate.  See PR 42154 for a
     testcase.  */
  tree expr;
  /* Type.  */
  tree type;

  /* The statement this access belongs to.  */
  gimple stmt;

  /* Next group representative for this aggregate.  */
  struct access *next_grp;

  /* Pointer to the group representative.  Pointer to itself if the struct is
     the representative.  */
  struct access *group_representative;

  /* If this access has any children (in terms of the definition above), this
     points to the first one.  */
  struct access *first_child;

  /* In intraprocedural SRA, pointer to the next sibling in the access tree as
     described above.  In IPA-SRA this is a pointer to the next access
     belonging to the same group (having the same representative).  */
  struct access *next_sibling;

  /* Pointers to the first and last element in the linked list of assign
     links.  */
  struct assign_link *first_link, *last_link;

  /* Pointer to the next access in the work queue.  */
  struct access *next_queued;

  /* Replacement variable for this access "region."  Never to be accessed
     directly, always only by the means of get_access_replacement() and only
     when grp_to_be_replaced flag is set.  */
  tree replacement_decl;

  /* Is this particular access a write access?  */
  unsigned write : 1;

  /* Is this access an artificial one created to scalarize some record
     entirely?  */
  unsigned total_scalarization : 1;

  /* Is this access currently in the work queue?  */
  unsigned grp_queued : 1;

  /* Does this group contain a write access?  This flag is propagated down the
     access tree.  */
  unsigned grp_write : 1;

  /* Does this group contain a read access?  This flag is propagated down the
     access tree.  */
  unsigned grp_read : 1;

  /* Does this group contain a read access that comes from an assignment
     statement?  This flag is propagated down the access tree.  */
  unsigned grp_assignment_read : 1;

  /* Other passes of the analysis use this bit to make function
     analyze_access_subtree create scalar replacements for this group if
     possible.  */
  unsigned grp_hint : 1;

  /* Is the subtree rooted in this access fully covered by scalar
     replacements?  */
  unsigned grp_covered : 1;

  /* If set to true, this access and all below it in an access tree must not be
     scalarized.  */
  unsigned grp_unscalarizable_region : 1;

  /* Whether data have been written to parts of the aggregate covered by this
     access which is not to be scalarized.  This flag is propagated up in the
     access tree.  */
  unsigned grp_unscalarized_data : 1;

  /* Does this access and/or group contain a write access through a
     BIT_FIELD_REF?  */
  unsigned grp_partial_lhs : 1;

  /* Set when a scalar replacement should be created for this variable.  We do
     the decision and creation at different places because create_tmp_var
     cannot be called from within FOR_EACH_REFERENCED_VAR.  */
  unsigned grp_to_be_replaced : 1;

  /* Is it possible that the group refers to data which might be (directly or
     otherwise) modified?  */
  unsigned grp_maybe_modified : 1;

  /* Set when this is a representative of a pointer to scalar (i.e. by
     reference) parameter which we consider for turning into a plain scalar
     (i.e. a by value parameter).  */
  unsigned grp_scalar_ptr : 1;

  /* Set when we discover that this pointer is not safe to dereference in the
     caller.  */
  unsigned grp_not_necessarilly_dereferenced : 1;
};
 
 
typedef struct access *access_p;

DEF_VEC_P (access_p);
DEF_VEC_ALLOC_P (access_p, heap);

/* Alloc pool for allocating access structures.  */
static alloc_pool access_pool;

/* A structure linking lhs and rhs accesses from an aggregate assignment.  They
   are used to propagate subaccesses from rhs to lhs as long as they don't
   conflict with what is already there.  */
struct assign_link
{
  struct access *lacc, *racc;
  struct assign_link *next;
};
 
 
/* Alloc pool for allocating assign link structures.  */
static alloc_pool link_pool;

/* Base (tree) -> Vector (VEC(access_p,heap) *) map.  */
static struct pointer_map_t *base_access_vec;

/* Bitmap of candidates.  */
static bitmap candidate_bitmap;

/* Bitmap of candidates which we should try to entirely scalarize away and
   those which cannot be (because they are, and need to be, used as a
   whole).  */
static bitmap should_scalarize_away_bitmap, cannot_scalarize_away_bitmap;

/* Obstack for creation of fancy names.  */
static struct obstack name_obstack;

/* Head of a linked list of accesses that need to have their subaccesses
   propagated to their assignment counterparts.  */
static struct access *work_queue_head;

/* Number of parameters of the analyzed function when doing early ipa SRA.  */
static int func_param_count;

/* scan_function sets the following to true if it encounters a call to
   __builtin_apply_args.  */
static bool encountered_apply_args;

/* Set by scan_function when it finds a recursive call with fewer actual
   arguments than formal parameters.  */
static bool encountered_unchangable_recursive_call;

/* This is a table in which for each basic block and parameter there is a
   distance (offset + size) in that parameter which is dereferenced and
   accessed in that BB.  */
static HOST_WIDE_INT *bb_dereferences;
/* Bitmap of BBs that can cause the function to "stop" progressing by
   returning, throwing externally, looping infinitely or calling a function
   which might abort, etc.  */
static bitmap final_bbs;

/* Representative of no accesses at all.  */
static struct access no_accesses_representant;

/* Predicate to test the special value.  */

static inline bool
no_accesses_p (struct access *access)
{
  return access == &no_accesses_representant;
}

static struct
{
  /* Number of processed aggregates is readily available in
     analyze_all_variable_accesses and so is not stored here.  */

  /* Number of created scalar replacements.  */
  int replacements;

  /* Number of times sra_modify_expr or sra_modify_assign themselves changed an
     expression.  */
  int exprs;

  /* Number of statements created by generate_subtree_copies.  */
  int subtree_copies;

  /* Number of statements created by load_assign_lhs_subreplacements.  */
  int subreplacements;

  /* Number of times sra_modify_assign has deleted a statement.  */
  int deleted;

  /* Number of times sra_modify_assign has to deal with subaccesses of LHS and
     RHS separately due to type conversions or nonexistent matching
     references.  */
  int separate_lhs_rhs_handling;

  /* Number of parameters that were removed because they were unused.  */
  int deleted_unused_parameters;

  /* Number of scalars passed as parameters by reference that have been
     converted to be passed by value.  */
  int scalar_by_ref_to_by_val;

  /* Number of aggregate parameters that were replaced by one or more of their
     components.  */
  int aggregate_params_reduced;

  /* Number of components created when splitting aggregate parameters.  */
  int param_reductions_created;
} sra_stats;

/* Dump contents of ACCESS to file F in a human friendly way.  If GRP is true,
   representative fields are dumped, otherwise those which only describe the
   individual access are.  */

static void
dump_access (FILE *f, struct access *access, bool grp)
{
  fprintf (f, "access { ");
  fprintf (f, "base = (%d)'", DECL_UID (access->base));
  print_generic_expr (f, access->base, 0);
  fprintf (f, "', offset = " HOST_WIDE_INT_PRINT_DEC, access->offset);
  fprintf (f, ", size = " HOST_WIDE_INT_PRINT_DEC, access->size);
  fprintf (f, ", expr = ");
  print_generic_expr (f, access->expr, 0);
  fprintf (f, ", type = ");
  print_generic_expr (f, access->type, 0);
  if (grp)
    fprintf (f, ", grp_write = %d, total_scalarization = %d, "
             "grp_read = %d, grp_hint = %d, grp_assignment_read = %d, "
             "grp_covered = %d, grp_unscalarizable_region = %d, "
             "grp_unscalarized_data = %d, grp_partial_lhs = %d, "
             "grp_to_be_replaced = %d, grp_maybe_modified = %d, "
             "grp_not_necessarilly_dereferenced = %d\n",
             access->grp_write, access->total_scalarization,
             access->grp_read, access->grp_hint, access->grp_assignment_read,
             access->grp_covered, access->grp_unscalarizable_region,
             access->grp_unscalarized_data, access->grp_partial_lhs,
             access->grp_to_be_replaced, access->grp_maybe_modified,
             access->grp_not_necessarilly_dereferenced);
  else
    fprintf (f, ", write = %d, total_scalarization = %d, "
             "grp_partial_lhs = %d\n",
             access->write, access->total_scalarization,
             access->grp_partial_lhs);
}

/* Dump a subtree rooted in ACCESS to file F, indent by LEVEL.  */

static void
dump_access_tree_1 (FILE *f, struct access *access, int level)
{
  do
    {
      int i;

      for (i = 0; i < level; i++)
        fputs ("* ", f);

      dump_access (f, access, true);

      if (access->first_child)
        dump_access_tree_1 (f, access->first_child, level + 1);

      access = access->next_sibling;
    }
  while (access);
}

/* Dump all access trees for a variable, given the pointer to the first root in
   ACCESS.  */

static void
dump_access_tree (FILE *f, struct access *access)
{
  for (; access; access = access->next_grp)
    dump_access_tree_1 (f, access, 0);
}

/* Return true iff ACC is non-NULL and has subaccesses.  */

static inline bool
access_has_children_p (struct access *acc)
{
  return acc && acc->first_child;
}

/* Return a vector of pointers to accesses for the variable given in BASE or
   NULL if there is none.  */

static VEC (access_p, heap) *
get_base_access_vector (tree base)
{
  void **slot;

  slot = pointer_map_contains (base_access_vec, base);
  if (!slot)
    return NULL;
  else
    return *(VEC (access_p, heap) **) slot;
}

/* Find an access with required OFFSET and SIZE in a subtree of accesses rooted
   in ACCESS.  Return NULL if it cannot be found.  */

static struct access *
find_access_in_subtree (struct access *access, HOST_WIDE_INT offset,
                        HOST_WIDE_INT size)
{
  while (access && (access->offset != offset || access->size != size))
    {
      struct access *child = access->first_child;

      while (child && (child->offset + child->size <= offset))
        child = child->next_sibling;
      access = child;
    }

  return access;
}

/* Return the first group representative for DECL or NULL if none exists.  */

static struct access *
get_first_repr_for_decl (tree base)
{
  VEC (access_p, heap) *access_vec;

  access_vec = get_base_access_vector (base);
  if (!access_vec)
    return NULL;

  return VEC_index (access_p, access_vec, 0);
}

/* Find an access representative for the variable BASE and given OFFSET and
   SIZE.  Requires that access trees have already been built.  Return NULL if
   it cannot be found.  */

static struct access *
get_var_base_offset_size_access (tree base, HOST_WIDE_INT offset,
                                 HOST_WIDE_INT size)
{
  struct access *access;

  access = get_first_repr_for_decl (base);
  while (access && (access->offset + access->size <= offset))
    access = access->next_grp;
  if (!access)
    return NULL;

  return find_access_in_subtree (access, offset, size);
}

/* Add LINK to the linked list of assign links of RACC.  */
static void
add_link_to_rhs (struct access *racc, struct assign_link *link)
{
  gcc_assert (link->racc == racc);

  if (!racc->first_link)
    {
      gcc_assert (!racc->last_link);
      racc->first_link = link;
    }
  else
    racc->last_link->next = link;

  racc->last_link = link;
  link->next = NULL;
}

/* Move all link structures in their linked list in OLD_RACC to the linked list
   in NEW_RACC.  */
static void
relink_to_new_repr (struct access *new_racc, struct access *old_racc)
{
  if (!old_racc->first_link)
    {
      gcc_assert (!old_racc->last_link);
      return;
    }

  if (new_racc->first_link)
    {
      gcc_assert (!new_racc->last_link->next);
      gcc_assert (!old_racc->last_link || !old_racc->last_link->next);

      new_racc->last_link->next = old_racc->first_link;
      new_racc->last_link = old_racc->last_link;
    }
  else
    {
      gcc_assert (!new_racc->last_link);

      new_racc->first_link = old_racc->first_link;
      new_racc->last_link = old_racc->last_link;
    }
  old_racc->first_link = old_racc->last_link = NULL;
}

/* Add ACCESS to the work queue (which is actually a stack).  */

static void
add_access_to_work_queue (struct access *access)
{
  if (!access->grp_queued)
    {
      gcc_assert (!access->next_queued);
      access->next_queued = work_queue_head;
      access->grp_queued = 1;
      work_queue_head = access;
    }
}

/* Pop an access from the work queue, and return it, assuming there is one.  */

static struct access *
pop_access_from_work_queue (void)
{
  struct access *access = work_queue_head;

  work_queue_head = access->next_queued;
  access->next_queued = NULL;
  access->grp_queued = 0;
  return access;
}


/* Allocate necessary structures.  */

static void
sra_initialize (void)
{
  candidate_bitmap = BITMAP_ALLOC (NULL);
  should_scalarize_away_bitmap = BITMAP_ALLOC (NULL);
  cannot_scalarize_away_bitmap = BITMAP_ALLOC (NULL);
  gcc_obstack_init (&name_obstack);
  access_pool = create_alloc_pool ("SRA accesses", sizeof (struct access), 16);
  link_pool = create_alloc_pool ("SRA links", sizeof (struct assign_link), 16);
  base_access_vec = pointer_map_create ();
  memset (&sra_stats, 0, sizeof (sra_stats));
  encountered_apply_args = false;
  encountered_unchangable_recursive_call = false;
}

/* Hook fed to pointer_map_traverse, deallocate stored vectors.  */

static bool
delete_base_accesses (const void *key ATTRIBUTE_UNUSED, void **value,
                     void *data ATTRIBUTE_UNUSED)
{
  VEC (access_p, heap) *access_vec;
  access_vec = (VEC (access_p, heap) *) *value;
  VEC_free (access_p, heap, access_vec);

  return true;
}

/* Deallocate all general structures.  */

static void
sra_deinitialize (void)
{
  BITMAP_FREE (candidate_bitmap);
  BITMAP_FREE (should_scalarize_away_bitmap);
  BITMAP_FREE (cannot_scalarize_away_bitmap);
  free_alloc_pool (access_pool);
  free_alloc_pool (link_pool);
  obstack_free (&name_obstack, NULL);

  pointer_map_traverse (base_access_vec, delete_base_accesses, NULL);
  pointer_map_destroy (base_access_vec);
}

/* Remove DECL from candidates for SRA and write REASON to the dump file if
   there is one.  */
static void
disqualify_candidate (tree decl, const char *reason)
{
  bitmap_clear_bit (candidate_bitmap, DECL_UID (decl));

  if (dump_file && (dump_flags & TDF_DETAILS))
    {
      fprintf (dump_file, "! Disqualifying ");
      print_generic_expr (dump_file, decl, 0);
      fprintf (dump_file, " - %s\n", reason);
    }
}

/* Return true iff the type contains a field or an element which does not allow
   scalarization.  */

static bool
type_internals_preclude_sra_p (tree type)
{
  tree fld;
  tree et;

  switch (TREE_CODE (type))
    {
    case RECORD_TYPE:
    case UNION_TYPE:
    case QUAL_UNION_TYPE:
      for (fld = TYPE_FIELDS (type); fld; fld = TREE_CHAIN (fld))
        if (TREE_CODE (fld) == FIELD_DECL)
          {
            tree ft = TREE_TYPE (fld);

            if (TREE_THIS_VOLATILE (fld)
                || !DECL_FIELD_OFFSET (fld) || !DECL_SIZE (fld)
                || !host_integerp (DECL_FIELD_OFFSET (fld), 1)
                || !host_integerp (DECL_SIZE (fld), 1))
              return true;

            if (AGGREGATE_TYPE_P (ft)
                && type_internals_preclude_sra_p (ft))
              return true;
          }

      return false;

    case ARRAY_TYPE:
      et = TREE_TYPE (type);

      if (AGGREGATE_TYPE_P (et))
        return type_internals_preclude_sra_p (et);
      else
        return false;

    default:
      return false;
    }
}

/* If T is an SSA_NAME, return NULL if it is not a default def or return its
   base variable if it is.  Return T if it is not an SSA_NAME.  */

static tree
get_ssa_base_param (tree t)
{
  if (TREE_CODE (t) == SSA_NAME)
    {
      if (SSA_NAME_IS_DEFAULT_DEF (t))
        return SSA_NAME_VAR (t);
      else
        return NULL_TREE;
    }
  return t;
}

/* Mark a dereference of BASE of distance DIST in the basic block that STMT
   belongs to, unless the BB has already been marked as potentially
   final.  */

static void
mark_parm_dereference (tree base, HOST_WIDE_INT dist, gimple stmt)
{
  basic_block bb = gimple_bb (stmt);
  int idx, parm_index = 0;
  tree parm;

  if (bitmap_bit_p (final_bbs, bb->index))
    return;

  for (parm = DECL_ARGUMENTS (current_function_decl);
       parm && parm != base;
       parm = TREE_CHAIN (parm))
    parm_index++;

  gcc_assert (parm_index < func_param_count);

  idx = bb->index * func_param_count + parm_index;
  if (bb_dereferences[idx] < dist)
    bb_dereferences[idx] = dist;
}
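
/* For illustration (hypothetical numbers, not part of the original source):
   bb_dereferences forms a matrix indexed by basic block and parameter, so
   with func_param_count == 3, the dereference distance recorded above for
   parameter 1 in basic block 4 lives at bb_dereferences[4 * 3 + 1].  */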
 
 
/* Allocate an access structure for BASE, OFFSET and SIZE, clear it, fill in
   the three fields.  Also add it to the vector of accesses corresponding to
   the base.  Finally, return the new access.  */

static struct access *
create_access_1 (tree base, HOST_WIDE_INT offset, HOST_WIDE_INT size)
{
  VEC (access_p, heap) *vec;
  struct access *access;
  void **slot;

  access = (struct access *) pool_alloc (access_pool);
  memset (access, 0, sizeof (struct access));
  access->base = base;
  access->offset = offset;
  access->size = size;

  slot = pointer_map_contains (base_access_vec, base);
  if (slot)
    vec = (VEC (access_p, heap) *) *slot;
  else
    vec = VEC_alloc (access_p, heap, 32);

  VEC_safe_push (access_p, heap, vec, access);

  *((struct VEC (access_p,heap) **)
        pointer_map_insert (base_access_vec, base)) = vec;

  return access;
}

/* Create and insert access for EXPR. Return created access, or NULL if it is
   not possible.  */

static struct access *
create_access (tree expr, gimple stmt, bool write)
{
  struct access *access;
  HOST_WIDE_INT offset, size, max_size;
  tree base = expr;
  bool ptr, unscalarizable_region = false;

  base = get_ref_base_and_extent (expr, &offset, &size, &max_size);

  if (sra_mode == SRA_MODE_EARLY_IPA && INDIRECT_REF_P (base))
    {
      base = get_ssa_base_param (TREE_OPERAND (base, 0));
      if (!base)
        return NULL;
      ptr = true;
    }
  else
    ptr = false;

  if (!DECL_P (base) || !bitmap_bit_p (candidate_bitmap, DECL_UID (base)))
    return NULL;

  if (sra_mode == SRA_MODE_EARLY_IPA)
    {
      if (size < 0 || size != max_size)
        {
          disqualify_candidate (base, "Encountered a variable sized access.");
          return NULL;
        }
      if ((offset % BITS_PER_UNIT) != 0 || (size % BITS_PER_UNIT) != 0)
        {
          disqualify_candidate (base,
                                "Encountered an access not aligned to a byte.");
          return NULL;
        }

      if (ptr)
        mark_parm_dereference (base, offset + size, stmt);
    }
  else
    {
      if (size != max_size)
        {
          size = max_size;
          unscalarizable_region = true;
        }
      if (size < 0)
        {
          disqualify_candidate (base, "Encountered an unconstrained access.");
          return NULL;
        }
    }

  access = create_access_1 (base, offset, size);
  access->expr = expr;
  access->type = TREE_TYPE (expr);
  access->write = write;
  access->grp_unscalarizable_region = unscalarizable_region;
  access->stmt = stmt;

  return access;
}


/* Return true iff TYPE is a RECORD_TYPE with fields that are either of gimple
   register types or (recursively) records with only these two kinds of fields.
   It also returns false if any of these records has a zero-size field as its
   last field.  */

static bool
type_consists_of_records_p (tree type)
{
  tree fld;
  bool last_fld_has_zero_size = false;

  if (TREE_CODE (type) != RECORD_TYPE)
    return false;

  for (fld = TYPE_FIELDS (type); fld; fld = TREE_CHAIN (fld))
    if (TREE_CODE (fld) == FIELD_DECL)
      {
        tree ft = TREE_TYPE (fld);

        if (!is_gimple_reg_type (ft)
            && !type_consists_of_records_p (ft))
          return false;

        last_fld_has_zero_size = tree_low_cst (DECL_SIZE (fld), 1) == 0;
      }

  if (last_fld_has_zero_size)
    return false;

  return true;
}
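
/* For illustration (hypothetical types, not from the original source): a
   record such as

     struct pt { double x; double y; };
     struct seg { struct pt a; struct pt b; };

   satisfies this predicate, because every field is either of a gimple
   register type or a record made up of such fields, whereas

     struct tail_padded { int i; int tail[0]; };

   is rejected because its last field has zero size.  */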
 
 
/* Create total_scalarization accesses for all scalar type fields in DECL,
   whose type must be a RECORD_TYPE conforming to
   type_consists_of_records_p.  BASE must be the top-most VAR_DECL
   representing the variable, and OFFSET must be the offset of DECL within
   BASE.  */

static void
completely_scalarize_record (tree base, tree decl, HOST_WIDE_INT offset)
{
  tree fld, decl_type = TREE_TYPE (decl);

  for (fld = TYPE_FIELDS (decl_type); fld; fld = TREE_CHAIN (fld))
    if (TREE_CODE (fld) == FIELD_DECL)
      {
        HOST_WIDE_INT pos = offset + int_bit_position (fld);
        tree ft = TREE_TYPE (fld);

        if (is_gimple_reg_type (ft))
          {
            struct access *access;
            HOST_WIDE_INT size;
            tree expr;
            bool ok;

            size = tree_low_cst (DECL_SIZE (fld), 1);
            expr = base;
            ok = build_ref_for_offset (&expr, TREE_TYPE (base), pos,
                                       ft, false);
            gcc_assert (ok);

            access = create_access_1 (base, pos, size);
            access->expr = expr;
            access->type = ft;
            access->total_scalarization = 1;
            /* Accesses for intraprocedural SRA can have their stmt NULL.  */
          }
        else
          completely_scalarize_record (base, fld, pos);
      }
}
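
/* For illustration (hypothetical type and layout, not from the original
   source): completely scalarizing a variable of type

     struct s { float a; struct { float b; float c; } in; };

   on a target with 32-bit floats creates three total_scalarization accesses
   on the base variable, at bit offsets 0, 32 and 64, each of size 32; the
   nested record is handled by the recursive call, which accumulates the
   field positions into POS.  */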
 
 
 
 
/* Search the given tree for a declaration by skipping handled components and
   exclude it from the candidates.  */

static void
disqualify_base_of_expr (tree t, const char *reason)
{
  while (handled_component_p (t))
    t = TREE_OPERAND (t, 0);

  if (sra_mode == SRA_MODE_EARLY_IPA)
    {
      if (INDIRECT_REF_P (t))
        t = TREE_OPERAND (t, 0);
      t = get_ssa_base_param (t);
    }

  if (t && DECL_P (t))
    disqualify_candidate (t, reason);
}

/* Scan expression EXPR and create access structures for all accesses to
   candidates for scalarization.  Return the created access or NULL if none is
   created.  */

static struct access *
build_access_from_expr_1 (tree *expr_ptr, gimple stmt, bool write)
{
  struct access *ret = NULL;
  tree expr = *expr_ptr;
  bool partial_ref;

  if (TREE_CODE (expr) == BIT_FIELD_REF
      || TREE_CODE (expr) == IMAGPART_EXPR
      || TREE_CODE (expr) == REALPART_EXPR)
    {
      expr = TREE_OPERAND (expr, 0);
      partial_ref = true;
    }
  else
    partial_ref = false;

  /* We need to dive through V_C_Es in order to get the size of its parameter
     and not the result type.  Ada produces such statements.  We are also
     capable of handling the topmost V_C_E but not any of those buried in other
     handled components.  */
  if (TREE_CODE (expr) == VIEW_CONVERT_EXPR)
    expr = TREE_OPERAND (expr, 0);

  if (contains_view_convert_expr_p (expr))
    {
      disqualify_base_of_expr (expr, "V_C_E under a different handled "
                               "component.");
      return NULL;
    }

  switch (TREE_CODE (expr))
    {
    case INDIRECT_REF:
      if (sra_mode != SRA_MODE_EARLY_IPA)
        return NULL;
      /* fall through */
    case VAR_DECL:
    case PARM_DECL:
    case RESULT_DECL:
    case COMPONENT_REF:
    case ARRAY_REF:
    case ARRAY_RANGE_REF:
      ret = create_access (expr, stmt, write);
      break;

    default:
      break;
    }

  if (write && partial_ref && ret)
    ret->grp_partial_lhs = 1;

  return ret;
}

/* Callback of scan_function.  Scan expression EXPR and create access
   structures for all accesses to candidates for scalarization.  Return true if
   any access has been inserted.  */

static bool
build_access_from_expr (tree *expr_ptr,
                        gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED, bool write,
                        void *data ATTRIBUTE_UNUSED)
{
  struct access *access;

  access = build_access_from_expr_1 (expr_ptr, gsi_stmt (*gsi), write);
  if (access)
    {
      /* This means the aggregate is accessed as a whole in a way other than an
         assign statement and thus cannot be removed even if we had a scalar
         replacement for everything.  */
      if (cannot_scalarize_away_bitmap)
        bitmap_set_bit (cannot_scalarize_away_bitmap, DECL_UID (access->base));
      return true;
    }
  return false;
}

/* Disqualify LHS and RHS for scalarization if STMT must end its basic block in
   modes in which it matters, return true iff they have been disqualified.  RHS
   may be NULL, in that case ignore it.  If we scalarize an aggregate in
   intra-SRA we may need to add statements after each statement.  This is not
   possible if a statement unconditionally has to end the basic block.  */
static bool
disqualify_ops_if_throwing_stmt (gimple stmt, tree lhs, tree rhs)
{
  if ((sra_mode == SRA_MODE_EARLY_INTRA || sra_mode == SRA_MODE_INTRA)
      && (stmt_can_throw_internal (stmt) || stmt_ends_bb_p (stmt)))
    {
      disqualify_base_of_expr (lhs, "LHS of a throwing stmt.");
      if (rhs)
        disqualify_base_of_expr (rhs, "RHS of a throwing stmt.");
      return true;
    }
  return false;
}


/* Result code for scan_assign callback for scan_function.  */
/* Result code for scan_assign callback for scan_function.  */
enum scan_assign_result { SRA_SA_NONE,       /* nothing done for the stmt */
enum scan_assign_result { SRA_SA_NONE,       /* nothing done for the stmt */
                          SRA_SA_PROCESSED,  /* stmt analyzed/changed */
                          SRA_SA_PROCESSED,  /* stmt analyzed/changed */
                          SRA_SA_REMOVED };  /* stmt redundant and eliminated */
                          SRA_SA_REMOVED };  /* stmt redundant and eliminated */
 
 
 
 
/* Callback of scan_function.  Scan expressions occurring in the statement
   pointed to by STMT_PTR, create access structures for all accesses to
   candidates for scalarization and remove those candidates which occur in
   statements or expressions that prevent them from being split apart.  Return
   SRA_SA_PROCESSED if any access has been created, SRA_SA_NONE otherwise.  */

static enum scan_assign_result
build_accesses_from_assign (gimple *stmt_ptr,
                            gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
                            void *data ATTRIBUTE_UNUSED)
{
  gimple stmt = *stmt_ptr;
  tree *lhs_ptr, *rhs_ptr;
  struct access *lacc, *racc;

  if (!gimple_assign_single_p (stmt))
    return SRA_SA_NONE;

  lhs_ptr = gimple_assign_lhs_ptr (stmt);
  rhs_ptr = gimple_assign_rhs1_ptr (stmt);

  if (disqualify_ops_if_throwing_stmt (stmt, *lhs_ptr, *rhs_ptr))
    return SRA_SA_NONE;

  racc = build_access_from_expr_1 (rhs_ptr, stmt, false);
  lacc = build_access_from_expr_1 (lhs_ptr, stmt, true);

  if (racc)
    {
      racc->grp_assignment_read = 1;
      if (should_scalarize_away_bitmap && !gimple_has_volatile_ops (stmt)
          && !is_gimple_reg_type (racc->type))
        bitmap_set_bit (should_scalarize_away_bitmap, DECL_UID (racc->base));
    }

  if (lacc && racc
      && (sra_mode == SRA_MODE_EARLY_INTRA || sra_mode == SRA_MODE_INTRA)
      && !lacc->grp_unscalarizable_region
      && !racc->grp_unscalarizable_region
      && AGGREGATE_TYPE_P (TREE_TYPE (*lhs_ptr))
      /* FIXME: Turn the following line into an assert after PR 40058 is
         fixed.  */
      && lacc->size == racc->size
      && useless_type_conversion_p (lacc->type, racc->type))
    {
      struct assign_link *link;

      link = (struct assign_link *) pool_alloc (link_pool);
      memset (link, 0, sizeof (struct assign_link));

      link->lacc = lacc;
      link->racc = racc;

      add_link_to_rhs (racc, link);
    }

  return (lacc || racc) ? SRA_SA_PROCESSED : SRA_SA_NONE;
}

/* Callback of walk_stmt_load_store_addr_ops visit_addr used to determine
   GIMPLE_ASM operands with memory constraints which cannot be scalarized.  */

static bool
asm_visit_addr (gimple stmt ATTRIBUTE_UNUSED, tree op,
                void *data ATTRIBUTE_UNUSED)
{
  if (DECL_P (op))
    disqualify_candidate (op, "Non-scalarizable GIMPLE_ASM operand.");

  return false;
}

/* Return true iff callsite CALL has at least as many actual arguments as there
   are formal parameters of the function currently processed by IPA-SRA.  */

static inline bool
callsite_has_enough_arguments_p (gimple call)
{
  return gimple_call_num_args (call) >= (unsigned) func_param_count;
}

/* Scan function and look for interesting statements.  Return true if any has
   been found or processed, as indicated by callbacks.  SCAN_EXPR is a callback
   called on all expressions within statements except assign statements and
   those deemed entirely unsuitable for some reason (all operands in such
   statements and expressions are removed from candidate_bitmap).  SCAN_ASSIGN
   is a callback called on all assign statements, HANDLE_SSA_DEFS is a callback
   called on assign statements and those call statements which have a lhs, it
   can be NULL.  ANALYSIS_STAGE is true when running in the analysis stage of a
   pass and thus no statement is being modified.  DATA is a pointer passed to
   all callbacks.  If any single callback returns true, this function also
   returns true, otherwise it returns false.  */

static bool
scan_function (bool (*scan_expr) (tree *, gimple_stmt_iterator *, bool, void *),
               enum scan_assign_result (*scan_assign) (gimple *,
                                                       gimple_stmt_iterator *,
                                                       void *),
               bool (*handle_ssa_defs)(gimple, void *),
               bool analysis_stage, void *data)
{
  gimple_stmt_iterator gsi;
  basic_block bb;
  unsigned i;
  tree *t;
  bool ret = false;

  FOR_EACH_BB (bb)
    {
      bool bb_changed = false;

      if (handle_ssa_defs)
        for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
          ret |= handle_ssa_defs (gsi_stmt (gsi), data);

      gsi = gsi_start_bb (bb);
      while (!gsi_end_p (gsi))
        {
          gimple stmt = gsi_stmt (gsi);
          enum scan_assign_result assign_result;
          bool any = false, deleted = false;

          if (analysis_stage && final_bbs && stmt_can_throw_external (stmt))
            bitmap_set_bit (final_bbs, bb->index);
          switch (gimple_code (stmt))
            {
            case GIMPLE_RETURN:
              t = gimple_return_retval_ptr (stmt);
              if (*t != NULL_TREE)
                any |= scan_expr (t, &gsi, false, data);
              if (analysis_stage && final_bbs)
                bitmap_set_bit (final_bbs, bb->index);
              break;

            case GIMPLE_ASSIGN:
              assign_result = scan_assign (&stmt, &gsi, data);
              any |= assign_result == SRA_SA_PROCESSED;
              deleted = assign_result == SRA_SA_REMOVED;
              if (handle_ssa_defs && assign_result != SRA_SA_REMOVED)
                any |= handle_ssa_defs (stmt, data);
              break;

            case GIMPLE_CALL:
              /* Operands must be processed before the lhs.  */
              for (i = 0; i < gimple_call_num_args (stmt); i++)
                {
                  tree *argp = gimple_call_arg_ptr (stmt, i);
                  any |= scan_expr (argp, &gsi, false, data);
                }

              if (analysis_stage && sra_mode == SRA_MODE_EARLY_IPA)
                {
                  tree dest = gimple_call_fndecl (stmt);
                  int flags = gimple_call_flags (stmt);

                  if (dest)
                    {
                      if (DECL_BUILT_IN_CLASS (dest) == BUILT_IN_NORMAL
                          && DECL_FUNCTION_CODE (dest) == BUILT_IN_APPLY_ARGS)
                        encountered_apply_args = true;
                      if (cgraph_get_node (dest)
                          == cgraph_get_node (current_function_decl)
                          && !callsite_has_enough_arguments_p (stmt))
                        encountered_unchangable_recursive_call = true;
                    }

                  if (final_bbs
                      && (flags & (ECF_CONST | ECF_PURE)) == 0)
                    bitmap_set_bit (final_bbs, bb->index);
                }

              if (gimple_call_lhs (stmt))
                {
                  tree *lhs_ptr = gimple_call_lhs_ptr (stmt);
                  if (!analysis_stage
                      || !disqualify_ops_if_throwing_stmt (stmt,
                                                           *lhs_ptr, NULL))
                    {
                      any |= scan_expr (lhs_ptr, &gsi, true, data);
                      if (handle_ssa_defs)
                        any |= handle_ssa_defs (stmt, data);
                    }
                }
              break;

            case GIMPLE_ASM:
              if (analysis_stage)
                {
                  walk_stmt_load_store_addr_ops (stmt, NULL, NULL, NULL,
                                                 asm_visit_addr);
                  if (final_bbs)
                    bitmap_set_bit (final_bbs, bb->index);
                }
              for (i = 0; i < gimple_asm_ninputs (stmt); i++)
                {
                  tree *op = &TREE_VALUE (gimple_asm_input_op (stmt, i));
                  any |= scan_expr (op, &gsi, false, data);
                }
              for (i = 0; i < gimple_asm_noutputs (stmt); i++)
                {
                  tree *op = &TREE_VALUE (gimple_asm_output_op (stmt, i));
                  any |= scan_expr (op, &gsi, true, data);
                }
              break;

            default:
              break;
            }

          if (any)
            {
              ret = true;

              if (!analysis_stage)
                {
                  bb_changed = true;
                  update_stmt (stmt);
                  maybe_clean_eh_stmt (stmt);
                }
            }
          if (deleted)
            bb_changed = true;
          else
            {
              gsi_next (&gsi);
              ret = true;
            }
        }
      if (!analysis_stage && bb_changed && sra_mode == SRA_MODE_EARLY_IPA)
        gimple_purge_dead_eh_edges (bb);
    }

  return ret;
}

/* Helper of QSORT function.  There are pointers to accesses in the array.  An
   access is considered smaller than another if it has a smaller offset or if
   the offsets are the same but its size is bigger.  */

static int
compare_access_positions (const void *a, const void *b)
{
  const access_p *fp1 = (const access_p *) a;
  const access_p *fp2 = (const access_p *) b;
  const access_p f1 = *fp1;
  const access_p f2 = *fp2;

  if (f1->offset != f2->offset)
    return f1->offset < f2->offset ? -1 : 1;

  if (f1->size == f2->size)
    {
      if (f1->type == f2->type)
        return 0;
      /* Put any non-aggregate type before any aggregate type.  */
      else if (!is_gimple_reg_type (f1->type)
          && is_gimple_reg_type (f2->type))
        return 1;
      else if (is_gimple_reg_type (f1->type)
               && !is_gimple_reg_type (f2->type))
        return -1;
      /* Put any complex or vector type before any other scalar type.  */
      else if (TREE_CODE (f1->type) != COMPLEX_TYPE
               && TREE_CODE (f1->type) != VECTOR_TYPE
               && (TREE_CODE (f2->type) == COMPLEX_TYPE
                   || TREE_CODE (f2->type) == VECTOR_TYPE))
        return 1;
      else if ((TREE_CODE (f1->type) == COMPLEX_TYPE
                || TREE_CODE (f1->type) == VECTOR_TYPE)
               && TREE_CODE (f2->type) != COMPLEX_TYPE
               && TREE_CODE (f2->type) != VECTOR_TYPE)
        return -1;
      /* Put the integral type with the bigger precision first.  */
      else if (INTEGRAL_TYPE_P (f1->type)
               && INTEGRAL_TYPE_P (f2->type))
        return TYPE_PRECISION (f2->type) - TYPE_PRECISION (f1->type);
      /* Put any integral type with non-full precision last.  */
      else if (INTEGRAL_TYPE_P (f1->type)
               && (TREE_INT_CST_LOW (TYPE_SIZE (f1->type))
                   != TYPE_PRECISION (f1->type)))
        return 1;
      else if (INTEGRAL_TYPE_P (f2->type)
               && (TREE_INT_CST_LOW (TYPE_SIZE (f2->type))
                   != TYPE_PRECISION (f2->type)))
        return -1;
      /* Stabilize the sort.  */
      return TYPE_UID (f1->type) - TYPE_UID (f2->type);
    }

  /* We want the bigger accesses first, thus the opposite operator in the next
     line: */
  return f1->size > f2->size ? -1 : 1;
}


/* Append a name of the declaration to the name obstack.  A helper function for
   make_fancy_name.  */

static void
make_fancy_decl_name (tree decl)
{
  char buffer[32];

  tree name = DECL_NAME (decl);
  if (name)
    obstack_grow (&name_obstack, IDENTIFIER_POINTER (name),
                  IDENTIFIER_LENGTH (name));
  else
    {
      sprintf (buffer, "D%u", DECL_UID (decl));
      obstack_grow (&name_obstack, buffer, strlen (buffer));
    }
}

/* Helper for make_fancy_name.  */

static void
make_fancy_name_1 (tree expr)
{
  char buffer[32];
  tree index;

  if (DECL_P (expr))
    {
      make_fancy_decl_name (expr);
      return;
    }

  switch (TREE_CODE (expr))
    {
    case COMPONENT_REF:
      make_fancy_name_1 (TREE_OPERAND (expr, 0));
      obstack_1grow (&name_obstack, '$');
      make_fancy_decl_name (TREE_OPERAND (expr, 1));
      break;

    case ARRAY_REF:
      make_fancy_name_1 (TREE_OPERAND (expr, 0));
      obstack_1grow (&name_obstack, '$');
      /* Arrays with only one element may not have a constant as their
         index. */
      index = TREE_OPERAND (expr, 1);
      if (TREE_CODE (index) != INTEGER_CST)
        break;
      sprintf (buffer, HOST_WIDE_INT_PRINT_DEC, TREE_INT_CST_LOW (index));
      obstack_grow (&name_obstack, buffer, strlen (buffer));

      break;

    case BIT_FIELD_REF:
    case REALPART_EXPR:
    case IMAGPART_EXPR:
      gcc_unreachable ();       /* we treat these as scalars.  */
      break;
    default:
      break;
    }
}

/* Create a human readable name for the replacement variable of the access
   expression EXPR.  */

static char *
make_fancy_name (tree expr)
{
  make_fancy_name_1 (expr);
  obstack_1grow (&name_obstack, '\0');
  return XOBFINISH (&name_obstack, char *);
}

/* Helper function for build_ref_for_offset.  */

static bool
build_ref_for_offset_1 (tree *res, tree type, HOST_WIDE_INT offset,
                        tree exp_type)
{
  while (1)
    {
      tree fld;
      tree tr_size, index, minidx;
      HOST_WIDE_INT el_size;

      if (offset == 0 && exp_type
          && types_compatible_p (exp_type, type))
        return true;

      switch (TREE_CODE (type))
        {
        case UNION_TYPE:
        case QUAL_UNION_TYPE:
        case RECORD_TYPE:
          for (fld = TYPE_FIELDS (type); fld; fld = TREE_CHAIN (fld))
            {
              HOST_WIDE_INT pos, size;
              tree expr, *expr_ptr;

              if (TREE_CODE (fld) != FIELD_DECL)
                continue;

              pos = int_bit_position (fld);
              gcc_assert (TREE_CODE (type) == RECORD_TYPE || pos == 0);
              tr_size = DECL_SIZE (fld);
              if (!tr_size || !host_integerp (tr_size, 1))
                continue;
              size = tree_low_cst (tr_size, 1);
              if (size == 0)
                {
                  if (pos != offset)
                    continue;
                }
              else if (pos > offset || (pos + size) <= offset)
                continue;

              if (res)
                {
                  expr = build3 (COMPONENT_REF, TREE_TYPE (fld), *res, fld,
                                 NULL_TREE);
                  expr_ptr = &expr;
                }
              else
                expr_ptr = NULL;
              if (build_ref_for_offset_1 (expr_ptr, TREE_TYPE (fld),
                                          offset - pos, exp_type))
                {
                  if (res)
                    *res = expr;
                  return true;
                }
            }
          return false;

        case ARRAY_TYPE:
          tr_size = TYPE_SIZE (TREE_TYPE (type));
          if (!tr_size || !host_integerp (tr_size, 1))
            return false;
          el_size = tree_low_cst (tr_size, 1);

          minidx = TYPE_MIN_VALUE (TYPE_DOMAIN (type));
          if (TREE_CODE (minidx) != INTEGER_CST || el_size == 0)
            return false;
          if (res)
            {
              index = build_int_cst (TYPE_DOMAIN (type), offset / el_size);
              if (!integer_zerop (minidx))
                index = int_const_binop (PLUS_EXPR, index, minidx, 0);
              *res = build4 (ARRAY_REF, TREE_TYPE (type), *res, index,
                             NULL_TREE, NULL_TREE);
            }
          offset = offset % el_size;
          type = TREE_TYPE (type);
          break;

        default:
          if (offset != 0)
            return false;

          if (exp_type)
            return false;
          else
            return true;
        }
    }
}

/* Construct an expression that would reference a part of aggregate *EXPR of
   type TYPE at the given OFFSET of the type EXP_TYPE.  If EXPR is NULL, the
   function only determines whether it can build such a reference without
   actually doing it, otherwise, the tree it points to is unshared first and
   then used as a base for further sub-references.

   FIXME: Eventually this should be replaced with
   maybe_fold_offset_to_reference() from tree-ssa-ccp.c but that requires a
   minor rewrite of fold_stmt.
 */

bool
build_ref_for_offset (tree *expr, tree type, HOST_WIDE_INT offset,
                      tree exp_type, bool allow_ptr)
{
  location_t loc = expr ? EXPR_LOCATION (*expr) : UNKNOWN_LOCATION;

  if (expr)
    *expr = unshare_expr (*expr);

  if (allow_ptr && POINTER_TYPE_P (type))
    {
      type = TREE_TYPE (type);
      if (expr)
        *expr = fold_build1_loc (loc, INDIRECT_REF, type, *expr);
    }

  return build_ref_for_offset_1 (expr, type, offset, exp_type);
}

/* Return true iff TYPE is stdarg va_list type.  */
/* Return true iff TYPE is stdarg va_list type.  */
 
 
static inline bool
static inline bool
is_va_list_type (tree type)
is_va_list_type (tree type)
{
{
  return TYPE_MAIN_VARIANT (type) == TYPE_MAIN_VARIANT (va_list_type_node);
  return TYPE_MAIN_VARIANT (type) == TYPE_MAIN_VARIANT (va_list_type_node);
}
}
 
 
/* The very first phase of intraprocedural SRA.  It marks in candidate_bitmap
   those variables whose type is suitable for scalarization.  */

static bool
find_var_candidates (void)
{
  tree var, type;
  referenced_var_iterator rvi;
  bool ret = false;

  FOR_EACH_REFERENCED_VAR (var, rvi)
    {
      if (TREE_CODE (var) != VAR_DECL && TREE_CODE (var) != PARM_DECL)
        continue;
      type = TREE_TYPE (var);

      if (!AGGREGATE_TYPE_P (type)
          || needs_to_live_in_memory (var)
          || TREE_THIS_VOLATILE (var)
          || !COMPLETE_TYPE_P (type)
          || !host_integerp (TYPE_SIZE (type), 1)
          || tree_low_cst (TYPE_SIZE (type), 1) == 0
          || type_internals_preclude_sra_p (type)
          /* Fix for PR 41089.  tree-stdarg.c needs to have va_lists intact but
             we also want to schedule it rather late.  Thus we ignore it in
             the early pass. */
          || (sra_mode == SRA_MODE_EARLY_INTRA
              && is_va_list_type (type)))
        continue;

      bitmap_set_bit (candidate_bitmap, DECL_UID (var));

      if (dump_file && (dump_flags & TDF_DETAILS))
        {
          fprintf (dump_file, "Candidate (%d): ", DECL_UID (var));
          print_generic_expr (dump_file, var, 0);
          fprintf (dump_file, "\n");
        }
      ret = true;
    }

  return ret;
}

/* Sort all accesses for the given variable, check for partial overlaps and
   return NULL if there are any.  If there are none, pick a representative for
   each combination of offset and size and create a linked list out of them.
   Return the pointer to the first representative and make sure it is the first
   one in the vector of accesses.  */

static struct access *
sort_and_splice_var_accesses (tree var)
{
  int i, j, access_count;
  struct access *res, **prev_acc_ptr = &res;
  VEC (access_p, heap) *access_vec;
  bool first = true;
  HOST_WIDE_INT low = -1, high = 0;

  access_vec = get_base_access_vector (var);
  if (!access_vec)
    return NULL;
  access_count = VEC_length (access_p, access_vec);

  /* Sort by <OFFSET, SIZE>.  */
  qsort (VEC_address (access_p, access_vec), access_count, sizeof (access_p),
         compare_access_positions);

  i = 0;
  while (i < access_count)
    {
      struct access *access = VEC_index (access_p, access_vec, i);
      bool grp_write = access->write;
      bool grp_read = !access->write;
      bool grp_assignment_read = access->grp_assignment_read;
      bool multiple_reads = false;
      bool total_scalarization = access->total_scalarization;
      bool grp_partial_lhs = access->grp_partial_lhs;
      bool first_scalar = is_gimple_reg_type (access->type);
      bool unscalarizable_region = access->grp_unscalarizable_region;

      if (first || access->offset >= high)
        {
          first = false;
          low = access->offset;
          high = access->offset + access->size;
        }
      else if (access->offset > low && access->offset + access->size > high)
        return NULL;
      else
        gcc_assert (access->offset >= low
                    && access->offset + access->size <= high);

      j = i + 1;
      while (j < access_count)
        {
          struct access *ac2 = VEC_index (access_p, access_vec, j);
          if (ac2->offset != access->offset || ac2->size != access->size)
            break;
          if (ac2->write)
            grp_write = true;
          else
            {
              if (grp_read)
                multiple_reads = true;
              else
                grp_read = true;
            }
          grp_assignment_read |= ac2->grp_assignment_read;
          grp_partial_lhs |= ac2->grp_partial_lhs;
          unscalarizable_region |= ac2->grp_unscalarizable_region;
          total_scalarization |= ac2->total_scalarization;
          relink_to_new_repr (access, ac2);

          /* If there are both aggregate-type and scalar-type accesses with
             this combination of size and offset, the comparison function
             should have put the scalars first.  */
          gcc_assert (first_scalar || !is_gimple_reg_type (ac2->type));
          ac2->group_representative = access;
          j++;
        }

      i = j;

      access->group_representative = access;
      access->grp_write = grp_write;
      access->grp_read = grp_read;
      access->grp_assignment_read = grp_assignment_read;
      access->grp_hint = multiple_reads || total_scalarization;
      access->grp_partial_lhs = grp_partial_lhs;
      access->grp_unscalarizable_region = unscalarizable_region;
      if (access->first_link)
        add_access_to_work_queue (access);

      *prev_acc_ptr = access;
      prev_acc_ptr = &access->next_grp;
    }

  gcc_assert (res == VEC_index (access_p, access_vec, 0));
  return res;
}

/* Create a variable for the given ACCESS which determines the type, name and a
   few other properties.  Return the variable declaration and store it also to
   ACCESS->replacement.  */

static tree
create_access_replacement (struct access *access, bool rename)
{
  tree repl;

  repl = create_tmp_var (access->type, "SR");
  get_var_ann (repl);
  add_referenced_var (repl);
  if (rename)
    mark_sym_for_renaming (repl);

  if (!access->grp_partial_lhs
      && (TREE_CODE (access->type) == COMPLEX_TYPE
          || TREE_CODE (access->type) == VECTOR_TYPE))
    DECL_GIMPLE_REG_P (repl) = 1;

  DECL_SOURCE_LOCATION (repl) = DECL_SOURCE_LOCATION (access->base);
  DECL_ARTIFICIAL (repl) = 1;
  DECL_IGNORED_P (repl) = DECL_IGNORED_P (access->base);

  if (DECL_NAME (access->base)
      && !DECL_IGNORED_P (access->base)
      && !DECL_ARTIFICIAL (access->base))
    {
      char *pretty_name = make_fancy_name (access->expr);

      DECL_NAME (repl) = get_identifier (pretty_name);
      obstack_free (&name_obstack, pretty_name);

      SET_DECL_DEBUG_EXPR (repl, access->expr);
      DECL_DEBUG_EXPR_IS_FROM (repl) = 1;
      TREE_NO_WARNING (repl) = TREE_NO_WARNING (access->base);
    }
  else
    TREE_NO_WARNING (repl) = 1;

  if (dump_file)
    {
      fprintf (dump_file, "Created a replacement for ");
      print_generic_expr (dump_file, access->base, 0);
      fprintf (dump_file, " offset: %u, size: %u: ",
               (unsigned) access->offset, (unsigned) access->size);
      print_generic_expr (dump_file, repl, 0);
      fprintf (dump_file, "\n");
    }
  sra_stats.replacements++;

  return repl;
}

/* Return ACCESS scalar replacement, create it if it does not exist yet.  */

static inline tree
get_access_replacement (struct access *access)
{
  gcc_assert (access->grp_to_be_replaced);

  if (!access->replacement_decl)
    access->replacement_decl = create_access_replacement (access, true);
  return access->replacement_decl;
}

/* Return ACCESS scalar replacement, create it if it does not exist yet but do
   not mark it for renaming.  */

static inline tree
get_unrenamed_access_replacement (struct access *access)
{
  gcc_assert (!access->grp_to_be_replaced);

  if (!access->replacement_decl)
    access->replacement_decl = create_access_replacement (access, false);
  return access->replacement_decl;
}

/* Build a subtree of accesses rooted in *ACCESS, and move the pointer in the
   linked list along the way.  Stop when *ACCESS is NULL or the access pointed
   to it is not "within" the root.  Return false iff some accesses partially
   overlap.  */

static bool
build_access_subtree (struct access **access)
{
  struct access *root = *access, *last_child = NULL;
  HOST_WIDE_INT limit = root->offset + root->size;

  *access = (*access)->next_grp;
  while (*access && (*access)->offset + (*access)->size <= limit)
    {
      if (!last_child)
        root->first_child = *access;
      else
        last_child->next_sibling = *access;
      last_child = *access;

      if (!build_access_subtree (access))
        return false;
    }

  if (*access && (*access)->offset < limit)
    return false;

  return true;
}

/* Build a tree of access representatives, ACCESS is the pointer to the first
   one, others are linked in a list by the next_grp field.  Return false iff
   some accesses partially overlap.  */

static bool
build_access_trees (struct access *access)
{
  while (access)
    {
      struct access *root = access;

      if (!build_access_subtree (&access))
        return false;
      root->next_grp = access;
    }
  return true;
}

/* Return true if expr contains some ARRAY_REFs into a variable bounded
   array.  */

static bool
expr_with_var_bounded_array_refs_p (tree expr)
{
  while (handled_component_p (expr))
    {
      if (TREE_CODE (expr) == ARRAY_REF
          && !host_integerp (array_ref_low_bound (expr), 0))
        return true;
      expr = TREE_OPERAND (expr, 0);
    }
  return false;
}

enum mark_read_status { SRA_MR_NOT_READ, SRA_MR_READ, SRA_MR_ASSIGN_READ};

/* Analyze the subtree of accesses rooted in ROOT, scheduling replacements when
   both seeming beneficial and when ALLOW_REPLACEMENTS allows it.  Also set all
   sorts of access flags appropriately along the way, notably always set
   grp_read and grp_assign_read according to MARK_READ and grp_write when
   MARK_WRITE is true.  */

static bool
analyze_access_subtree (struct access *root, bool allow_replacements,
                        enum mark_read_status mark_read, bool mark_write)
{
  struct access *child;
  HOST_WIDE_INT limit = root->offset + root->size;
  HOST_WIDE_INT covered_to = root->offset;
  bool scalar = is_gimple_reg_type (root->type);
  bool hole = false, sth_created = false;
  bool direct_read = root->grp_read;

  if (mark_read == SRA_MR_ASSIGN_READ)
    {
      root->grp_read = 1;
      root->grp_assignment_read = 1;
    }
  if (mark_read == SRA_MR_READ)
    root->grp_read = 1;
  else if (root->grp_assignment_read)
    mark_read = SRA_MR_ASSIGN_READ;
  else if (root->grp_read)
    mark_read = SRA_MR_READ;

  if (mark_write)
    root->grp_write = true;
  else if (root->grp_write)
    mark_write = true;

  if (root->grp_unscalarizable_region)
    allow_replacements = false;

  if (allow_replacements && expr_with_var_bounded_array_refs_p (root->expr))
    allow_replacements = false;

  for (child = root->first_child; child; child = child->next_sibling)
    {
      if (!hole && child->offset < covered_to)
        hole = true;
      else
        covered_to += child->size;

      sth_created |= analyze_access_subtree (child,
                                             allow_replacements && !scalar,
                                             mark_read, mark_write);

      root->grp_unscalarized_data |= child->grp_unscalarized_data;
      hole |= !child->grp_covered;
    }

  if (allow_replacements && scalar && !root->first_child
      && (root->grp_hint
          || (root->grp_write && (direct_read || root->grp_assignment_read)))
      /* We must not ICE later on when trying to build an access to the
         original data within the aggregate even when it is impossible to do in
         a defined way like in the PR 42703 testcase.  Therefore we check
         pre-emptively here that we will be able to do that.  */
      && build_ref_for_offset (NULL, TREE_TYPE (root->base), root->offset,
                               root->type, false))
    {
      if (dump_file && (dump_flags & TDF_DETAILS))
        {
          fprintf (dump_file, "Marking ");
          print_generic_expr (dump_file, root->base, 0);
          fprintf (dump_file, " offset: %u, size: %u: ",
                   (unsigned) root->offset, (unsigned) root->size);
          fprintf (dump_file, " to be replaced.\n");
        }

      root->grp_to_be_replaced = 1;
      sth_created = true;
      hole = false;
    }
  else if (covered_to < limit)
    hole = true;

  if (sth_created && !hole)
    {
      root->grp_covered = 1;
      return true;
    }
  if (root->grp_write || TREE_CODE (root->base) == PARM_DECL)
    root->grp_unscalarized_data = 1; /* not covered and written to */
  if (sth_created)
    return true;
  return false;
}

/* Analyze all access trees linked by next_grp by the means of
   analyze_access_subtree.  */
static bool
analyze_access_trees (struct access *access)
{
  bool ret = false;

  while (access)
    {
      if (analyze_access_subtree (access, true, SRA_MR_NOT_READ, false))
        ret = true;
      access = access->next_grp;
    }

  return ret;
}

/* Return true iff a potential new child of LACC at offset OFFSET and with size
   SIZE would conflict with an already existing one.  If exactly such a child
   already exists in LACC, store a pointer to it in EXACT_MATCH.  */

static bool
child_would_conflict_in_lacc (struct access *lacc, HOST_WIDE_INT norm_offset,
                              HOST_WIDE_INT size, struct access **exact_match)
{
  struct access *child;

  for (child = lacc->first_child; child; child = child->next_sibling)
    {
      if (child->offset == norm_offset && child->size == size)
        {
          *exact_match = child;
          return true;
        }

      if (child->offset < norm_offset + size
          && child->offset + child->size > norm_offset)
        return true;
    }

  return false;
}

/* Create a new child access of PARENT, with all properties just like MODEL
   except for its offset and with its grp_write false and grp_read true.
   Return the new access or NULL if it cannot be created.  Note that this access
   is created long after all splicing and sorting, it's not located in any
   access vector and is automatically a representative of its group.  */

static struct access *
create_artificial_child_access (struct access *parent, struct access *model,
                                HOST_WIDE_INT new_offset)
{
  struct access *access;
  struct access **child;
  tree expr = parent->base;

  gcc_assert (!model->grp_unscalarizable_region);

  if (!build_ref_for_offset (&expr, TREE_TYPE (expr), new_offset,
                             model->type, false))
    return NULL;

  access = (struct access *) pool_alloc (access_pool);
  memset (access, 0, sizeof (struct access));
  access->base = parent->base;
  access->expr = expr;
  access->offset = new_offset;
  access->size = model->size;
  access->type = model->type;
  access->grp_write = true;
  access->grp_read = false;

  child = &parent->first_child;
  while (*child && (*child)->offset < new_offset)
    child = &(*child)->next_sibling;

  access->next_sibling = *child;
  *child = access;

  return access;
}


/* Propagate all subaccesses of RACC across an assignment link to LACC. Return
   true if any new subaccess was created.  Additionally, if RACC is a scalar
   access but LACC is not, change the type of the latter, if possible.  */

static bool
propagate_subaccesses_across_link (struct access *lacc, struct access *racc)
{
  struct access *rchild;
  HOST_WIDE_INT norm_delta = lacc->offset - racc->offset;
  bool ret = false;

  if (is_gimple_reg_type (lacc->type)
      || lacc->grp_unscalarizable_region
      || racc->grp_unscalarizable_region)
    return false;

  if (!lacc->first_child && !racc->first_child
      && is_gimple_reg_type (racc->type))
    {
      tree t = lacc->base;

      if (build_ref_for_offset (&t, TREE_TYPE (t), lacc->offset, racc->type,
                                false))
        {
          lacc->expr = t;
          lacc->type = racc->type;
        }
      return false;
    }

  for (rchild = racc->first_child; rchild; rchild = rchild->next_sibling)
    {
      struct access *new_acc = NULL;
      HOST_WIDE_INT norm_offset = rchild->offset + norm_delta;

      if (rchild->grp_unscalarizable_region)
        continue;

      if (child_would_conflict_in_lacc (lacc, norm_offset, rchild->size,
                                        &new_acc))
        {
          if (new_acc)
            {
              rchild->grp_hint = 1;
              new_acc->grp_hint |= new_acc->grp_read;
              if (rchild->first_child)
                ret |= propagate_subaccesses_across_link (new_acc, rchild);
            }
          continue;
        }

      /* If a (part of) a union field is on the RHS of an assignment, it can
         have sub-accesses which do not make sense on the LHS (PR 40351).
         Check that this is not the case.  */
      if (!build_ref_for_offset (NULL, TREE_TYPE (lacc->base), norm_offset,
                                 rchild->type, false))
        continue;

      rchild->grp_hint = 1;
      new_acc = create_artificial_child_access (lacc, rchild, norm_offset);
      if (new_acc)
        {
          ret = true;
          if (racc->first_child)
          if (racc->first_child)
            propagate_subaccesses_across_link (new_acc, rchild);
            propagate_subaccesses_across_link (new_acc, rchild);
        }
        }
    }
    }
 
 
  return ret;
  return ret;
}
}
 
 
/* Propagate all subaccesses across assignment links.  */

static void
propagate_all_subaccesses (void)
{
  while (work_queue_head)
    {
      struct access *racc = pop_access_from_work_queue ();
      struct assign_link *link;

      gcc_assert (racc->first_link);

      for (link = racc->first_link; link; link = link->next)
        {
          struct access *lacc = link->lacc;

          if (!bitmap_bit_p (candidate_bitmap, DECL_UID (lacc->base)))
            continue;
          lacc = lacc->group_representative;
          if (propagate_subaccesses_across_link (lacc, racc)
              && lacc->first_link)
            add_access_to_work_queue (lacc);
        }
    }
}
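The loop above is a classic worklist fixpoint: a left-hand-side access is re-queued whenever propagation changed it and it has assignment links of its own, so iteration stops only once no link can create any new subaccess. A minimal standalone sketch of that scheme follows; it is illustrative only, not part of tree-sra.c, and every name in it (struct node, propagate_step, propagate_all) is hypothetical:

```c
#include <assert.h>

/* Miniature of the work-queue fixpoint: each node may push its value
   along one outgoing link; a node's target is re-queued whenever the
   step changed it and the target has an outgoing link itself.  */

enum { N_NODES = 4 };

struct node
{
  int value;              /* stands in for the access-tree contents */
  int succ;               /* single outgoing link, -1 if none */
};

/* Propagate along one link; returns nonzero iff the target changed,
   analogously to propagate_subaccesses_across_link returning true when
   it created new subaccesses.  */
static int
propagate_step (struct node *nodes, int from)
{
  int to = nodes[from].succ;
  if (to < 0 || nodes[to].value >= nodes[from].value)
    return 0;
  nodes[to].value = nodes[from].value;
  return 1;
}

/* Run the queue to a fixpoint; returns how many steps changed a node.  */
static int
propagate_all (struct node *nodes)
{
  int queue[N_NODES * N_NODES];
  int head = 0, tail = 0, changes = 0, i;

  for (i = 0; i < N_NODES; i++)
    queue[tail++] = i;

  while (head < tail)
    {
      int n = queue[head++];
      if (propagate_step (nodes, n))
        {
          changes++;
          /* Re-queue the changed target so its own link is revisited,
             mirroring add_access_to_work_queue above.  */
          if (nodes[nodes[n].succ].succ >= 0 && tail < N_NODES * N_NODES)
            queue[tail++] = nodes[n].succ;
        }
    }
  return changes;
}
```

Termination holds for the same reason as in the pass: each step can only grow the target, and growth is bounded, so only finitely many re-queues are possible.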
 
 
/* Go through all accesses collected throughout the (intraprocedural) analysis
   stage, exclude overlapping ones, identify representatives and build trees
   out of them, making decisions about scalarization on the way.  Return true
   iff there are any to-be-scalarized variables after this stage.  */

static bool
analyze_all_variable_accesses (void)
{
  int res = 0;
  bitmap tmp = BITMAP_ALLOC (NULL);
  bitmap_iterator bi;
  unsigned i, max_total_scalarization_size;

  max_total_scalarization_size = UNITS_PER_WORD * BITS_PER_UNIT
    * MOVE_RATIO (optimize_function_for_speed_p (cfun));

  EXECUTE_IF_SET_IN_BITMAP (candidate_bitmap, 0, i, bi)
    if (bitmap_bit_p (should_scalarize_away_bitmap, i)
        && !bitmap_bit_p (cannot_scalarize_away_bitmap, i))
      {
        tree var = referenced_var (i);

        if (TREE_CODE (var) == VAR_DECL
            && ((unsigned) tree_low_cst (TYPE_SIZE (TREE_TYPE (var)), 1)
                <= max_total_scalarization_size)
            && type_consists_of_records_p (TREE_TYPE (var)))
          {
            completely_scalarize_record (var, var, 0);
            if (dump_file && (dump_flags & TDF_DETAILS))
              {
                fprintf (dump_file, "Will attempt to totally scalarize ");
                print_generic_expr (dump_file, var, 0);
                fprintf (dump_file, " (UID: %u): \n", DECL_UID (var));
              }
          }
      }

  bitmap_copy (tmp, candidate_bitmap);
  EXECUTE_IF_SET_IN_BITMAP (tmp, 0, i, bi)
    {
      tree var = referenced_var (i);
      struct access *access;

      access = sort_and_splice_var_accesses (var);
      if (!access || !build_access_trees (access))
        disqualify_candidate (var,
                              "No or inhibitingly overlapping accesses.");
    }

  propagate_all_subaccesses ();

  bitmap_copy (tmp, candidate_bitmap);
  EXECUTE_IF_SET_IN_BITMAP (tmp, 0, i, bi)
    {
      tree var = referenced_var (i);
      struct access *access = get_first_repr_for_decl (var);

      if (analyze_access_trees (access))
        {
          res++;
          if (dump_file && (dump_flags & TDF_DETAILS))
            {
              fprintf (dump_file, "\nAccess trees for ");
              print_generic_expr (dump_file, var, 0);
              fprintf (dump_file, " (UID: %u): \n", DECL_UID (var));
              dump_access_tree (dump_file, access);
              fprintf (dump_file, "\n");
            }
        }
      else
        disqualify_candidate (var, "No scalar replacements to be created.");
    }

  BITMAP_FREE (tmp);

  if (res)
    {
      statistics_counter_event (cfun, "Scalarized aggregates", res);
      return true;
    }
  else
    return false;
}
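The size budget computed at the top of the function is UNITS_PER_WORD * BITS_PER_UNIT * MOVE_RATIO, in bits. As a worked example under assumed target values (a 32-bit target with UNITS_PER_WORD = 4, BITS_PER_UNIT = 8 and a MOVE_RATIO of 4 when optimizing for speed; these numbers are hypothetical, not taken from any particular backend):

```c
#include <assert.h>

/* Sketch of the total-scalarization limit: the budget, in bits, that
   analyze_all_variable_accesses allows an aggregate to occupy before
   refusing to scalarize it completely.  The parameters are stand-ins
   for the target macros UNITS_PER_WORD, BITS_PER_UNIT and MOVE_RATIO.  */
static unsigned
total_scalarization_budget (unsigned units_per_word, unsigned bits_per_unit,
                            unsigned move_ratio)
{
  return units_per_word * bits_per_unit * move_ratio;
}
```

With the assumed values this gives 4 * 8 * 4 = 128 bits, i.e. structs of up to four words would qualify for total scalarization.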
 
 
/* Return true iff a reference statement into aggregate AGG can be built for
   every single to-be-replaced access that is a child of ACCESS, its sibling
   or a child of its sibling.  TOP_OFFSET is the offset from the processed
   access subtree that has to be subtracted from offset of each access.  */

static bool
ref_expr_for_all_replacements_p (struct access *access, tree agg,
                                 HOST_WIDE_INT top_offset)
{
  do
    {
      if (access->grp_to_be_replaced
          && !build_ref_for_offset (NULL, TREE_TYPE (agg),
                                    access->offset - top_offset,
                                    access->type, false))
        return false;

      if (access->first_child
          && !ref_expr_for_all_replacements_p (access->first_child, agg,
                                               top_offset))
        return false;

      access = access->next_sibling;
    }
  while (access);

  return true;
}
 
 
/* Generate statements copying scalar replacements of accesses within a subtree
   into or out of AGG.  ACCESS is the first child of the root of the subtree to
   be processed.  AGG is an aggregate type expression (can be a declaration but
   does not have to be, it can for example also be an indirect_ref).
   TOP_OFFSET is the offset of the processed subtree which has to be subtracted
   from offsets of individual accesses to get corresponding offsets for AGG.
   If CHUNK_SIZE is non-null, copy only replacements in the interval
   <start_offset, start_offset + chunk_size>, otherwise copy all.  GSI is a
   statement iterator used to place the new statements.  WRITE should be true
   when the statements should write from AGG to the replacement and false if
   vice versa.  If INSERT_AFTER is true, new statements will be added after the
   current statement in GSI, they will be added before the statement
   otherwise.  */

static void
generate_subtree_copies (struct access *access, tree agg,
                         HOST_WIDE_INT top_offset,
                         HOST_WIDE_INT start_offset, HOST_WIDE_INT chunk_size,
                         gimple_stmt_iterator *gsi, bool write,
                         bool insert_after)
{
  do
    {
      tree expr = agg;

      if (chunk_size && access->offset >= start_offset + chunk_size)
        return;

      if (access->grp_to_be_replaced
          && (chunk_size == 0
              || access->offset + access->size > start_offset))
        {
          tree repl = get_access_replacement (access);
          bool ref_found;
          gimple stmt;

          ref_found = build_ref_for_offset (&expr, TREE_TYPE (agg),
                                             access->offset - top_offset,
                                             access->type, false);
          gcc_assert (ref_found);

          if (write)
            {
              if (access->grp_partial_lhs)
                expr = force_gimple_operand_gsi (gsi, expr, true, NULL_TREE,
                                                 !insert_after,
                                                 insert_after ? GSI_NEW_STMT
                                                 : GSI_SAME_STMT);
              stmt = gimple_build_assign (repl, expr);
            }
          else
            {
              TREE_NO_WARNING (repl) = 1;
              if (access->grp_partial_lhs)
                repl = force_gimple_operand_gsi (gsi, repl, true, NULL_TREE,
                                                 !insert_after,
                                                 insert_after ? GSI_NEW_STMT
                                                 : GSI_SAME_STMT);
              stmt = gimple_build_assign (expr, repl);
            }

          if (insert_after)
            gsi_insert_after (gsi, stmt, GSI_NEW_STMT);
          else
            gsi_insert_before (gsi, stmt, GSI_SAME_STMT);
          update_stmt (stmt);
          sra_stats.subtree_copies++;
        }

      if (access->first_child)
        generate_subtree_copies (access->first_child, agg, top_offset,
                                 start_offset, chunk_size, gsi,
                                 write, insert_after);

      access = access->next_sibling;
    }
  while (access);
}
 
 
/* Assign zero to all scalar replacements in an access subtree.  ACCESS is
   the root of the subtree to be processed.  GSI is the statement iterator used
   for inserting statements which are added after the current statement if
   INSERT_AFTER is true or before it otherwise.  */

static void
init_subtree_with_zero (struct access *access, gimple_stmt_iterator *gsi,
                        bool insert_after)
{
  struct access *child;

  if (access->grp_to_be_replaced)
    {
      gimple stmt;

      stmt = gimple_build_assign (get_access_replacement (access),
                                  fold_convert (access->type,
                                                integer_zero_node));
      if (insert_after)
        gsi_insert_after (gsi, stmt, GSI_NEW_STMT);
      else
        gsi_insert_before (gsi, stmt, GSI_SAME_STMT);
      update_stmt (stmt);
    }

  for (child = access->first_child; child; child = child->next_sibling)
    init_subtree_with_zero (child, gsi, insert_after);
}
 
 
/* Search for an access representative for the given expression EXPR and
   return it or NULL if it cannot be found.  */

static struct access *
get_access_for_expr (tree expr)
{
  HOST_WIDE_INT offset, size, max_size;
  tree base;

  /* FIXME: This should not be necessary but Ada produces V_C_Es with a type of
     a different size than the size of its argument and we need the latter
     one.  */
  if (TREE_CODE (expr) == VIEW_CONVERT_EXPR)
    expr = TREE_OPERAND (expr, 0);

  base = get_ref_base_and_extent (expr, &offset, &size, &max_size);
  if (max_size == -1 || !DECL_P (base))
    return NULL;

  if (!bitmap_bit_p (candidate_bitmap, DECL_UID (base)))
    return NULL;

  return get_var_base_offset_size_access (base, offset, max_size);
}
 
 
/* Callback for scan_function.  Replace the expression EXPR with a scalar
   replacement if there is one and generate other statements to do type
   conversion or subtree copying if necessary.  GSI is used to place newly
   created statements, WRITE is true if the expression is being written to (it
   is on a LHS of a statement or output in an assembly statement).  */

static bool
sra_modify_expr (tree *expr, gimple_stmt_iterator *gsi, bool write,
                 void *data ATTRIBUTE_UNUSED)
{
  struct access *access;
  tree type, bfr;

  if (TREE_CODE (*expr) == BIT_FIELD_REF)
    {
      bfr = *expr;
      expr = &TREE_OPERAND (*expr, 0);
    }
  else
    bfr = NULL_TREE;

  if (TREE_CODE (*expr) == REALPART_EXPR || TREE_CODE (*expr) == IMAGPART_EXPR)
    expr = &TREE_OPERAND (*expr, 0);
  access = get_access_for_expr (*expr);
  if (!access)
    return false;
  type = TREE_TYPE (*expr);

  if (access->grp_to_be_replaced)
    {
      tree repl = get_access_replacement (access);
      /* If we replace a non-register typed access simply use the original
         access expression to extract the scalar component afterwards.
         This happens if scalarizing a function return value or parameter
         like in gcc.c-torture/execute/20041124-1.c, 20050316-1.c and
         gcc.c-torture/compile/20011217-1.c.

         We also want to use this when accessing a complex or vector which can
         be accessed as a different type too, potentially creating a need for
         type conversion (see PR42196) and when scalarized unions are involved
         in assembler statements (see PR42398).  */
      if (!useless_type_conversion_p (type, access->type))
        {
          tree ref = access->base;
          bool ok;

          ok = build_ref_for_offset (&ref, TREE_TYPE (ref),
                                     access->offset, access->type, false);
          gcc_assert (ok);

          if (write)
            {
              gimple stmt;

              if (access->grp_partial_lhs)
                ref = force_gimple_operand_gsi (gsi, ref, true, NULL_TREE,
                                                 false, GSI_NEW_STMT);
              stmt = gimple_build_assign (repl, ref);
              gsi_insert_after (gsi, stmt, GSI_NEW_STMT);
            }
          else
            {
              gimple stmt;

              if (access->grp_partial_lhs)
                repl = force_gimple_operand_gsi (gsi, repl, true, NULL_TREE,
                                                 true, GSI_SAME_STMT);
              stmt = gimple_build_assign (ref, repl);
              gsi_insert_before (gsi, stmt, GSI_SAME_STMT);
            }
        }
      else
        *expr = repl;
      sra_stats.exprs++;
    }

  if (access->first_child)
    {
      HOST_WIDE_INT start_offset, chunk_size;
      if (bfr
          && host_integerp (TREE_OPERAND (bfr, 1), 1)
          && host_integerp (TREE_OPERAND (bfr, 2), 1))
        {
          chunk_size = tree_low_cst (TREE_OPERAND (bfr, 1), 1);
          start_offset = access->offset
            + tree_low_cst (TREE_OPERAND (bfr, 2), 1);
        }
      else
        start_offset = chunk_size = 0;

      generate_subtree_copies (access->first_child, access->base, 0,
                               start_offset, chunk_size, gsi, write, write);
    }
  return true;
}
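At the source level, the rewrite performed by sra_modify_expr together with generate_subtree_copies amounts to replacing aggregate field accesses with independent scalar replacement variables. A hand-scalarized illustration of that effect (not GCC code; both function names and the struct are made up for this sketch):

```c
#include <assert.h>

/* Illustration of scalar replacement: aggregate_version keeps a local
   struct whose fields must be addressed through the aggregate;
   scalarized_version is what the pass conceptually rewrites it into,
   with each field promoted to an independent scalar that the later
   optimizers can keep in a register.  */

struct pair { int re; int im; };

static int
aggregate_version (int a, int b)
{
  struct pair p;
  p.re = a + b;
  p.im = a - b;
  return p.re * p.im;
}

static int
scalarized_version (int a, int b)
{
  /* Each field of P becomes its own local scalar.  */
  int p_re = a + b;
  int p_im = a - b;
  return p_re * p_im;
}
```

Both versions compute the same value; the point of the pass is that the second form exposes p_re and p_im to the scalar optimizers, as the file-header comment describes.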
 
 
/* Where scalar replacements of the RHS have been written to when a replacement
   of the LHS of an assignment cannot be directly loaded from a replacement of
   the RHS.  */
enum unscalarized_data_handling { SRA_UDH_NONE,  /* Nothing done so far. */
                                  SRA_UDH_RIGHT, /* Data flushed to the RHS. */
                                  SRA_UDH_LEFT }; /* Data flushed to the LHS. */
 
 
/* Store all replacements in the access tree rooted in TOP_RACC either to their
   base aggregate if there are unscalarized data or directly to LHS
   otherwise.  */

static enum unscalarized_data_handling
handle_unscalarized_data_in_subtree (struct access *top_racc, tree lhs,
                                     gimple_stmt_iterator *gsi)
{
  if (top_racc->grp_unscalarized_data)
    {
      generate_subtree_copies (top_racc->first_child, top_racc->base, 0, 0, 0,
                               gsi, false, false);
      return SRA_UDH_RIGHT;
    }
  else
    {
      generate_subtree_copies (top_racc->first_child, lhs, top_racc->offset,
                               0, 0, gsi, false, false);
      return SRA_UDH_LEFT;
    }
}
 
 
 
 
/* Try to generate statements to load all sub-replacements in an access
   (sub)tree (LACC is the first child) from scalar replacements in the TOP_RACC
   (sub)tree.  If that is not possible, refresh the TOP_RACC base aggregate and
   load the accesses from it.  LEFT_OFFSET is the offset of the left whole
   subtree being copied, RIGHT_OFFSET is the same thing for the right subtree.
   NEW_GSI is a statement iterator used to insert statements after the original
   assignment, OLD_GSI is used to insert statements before the assignment.
   *REFRESHED keeps track of whether we have needed to refresh replacements of
   the LHS and from which side of the assignment this takes place.  */

static void
load_assign_lhs_subreplacements (struct access *lacc, struct access *top_racc,
                                 HOST_WIDE_INT left_offset,
                                 HOST_WIDE_INT right_offset,
                                 gimple_stmt_iterator *old_gsi,
                                 gimple_stmt_iterator *new_gsi,
                                 enum unscalarized_data_handling *refreshed,
                                 tree lhs)
{
  location_t loc = EXPR_LOCATION (lacc->expr);
  do
    {
      if (lacc->grp_to_be_replaced)
        {
          struct access *racc;
          HOST_WIDE_INT offset = lacc->offset - left_offset + right_offset;
          gimple stmt;
          tree rhs;

          racc = find_access_in_subtree (top_racc, offset, lacc->size);
          if (racc && racc->grp_to_be_replaced)
            {
              rhs = get_access_replacement (racc);
              if (!useless_type_conversion_p (lacc->type, racc->type))
                rhs = fold_build1_loc (loc, VIEW_CONVERT_EXPR, lacc->type, rhs);
            }
          else
            {
              /* No suitable access on the right hand side, need to load from
                 the aggregate.  See if we have to update it first... */
              if (*refreshed == SRA_UDH_NONE)
                *refreshed = handle_unscalarized_data_in_subtree (top_racc,
                                                                  lhs, old_gsi);

              if (*refreshed == SRA_UDH_LEFT)
                {
                  bool repl_found;

                  rhs = lacc->base;
                  repl_found = build_ref_for_offset (&rhs, TREE_TYPE (rhs),
                                                     lacc->offset, lacc->type,
                                                     false);
                  gcc_assert (repl_found);
                }
              else
                {
                  bool repl_found;

                  rhs = top_racc->base;
                  repl_found = build_ref_for_offset (&rhs,
                                                     TREE_TYPE (top_racc->base),
                                                     offset, lacc->type, false);
                  gcc_assert (repl_found);
                }
            }

          stmt = gimple_build_assign (get_access_replacement (lacc), rhs);
          gsi_insert_after (new_gsi, stmt, GSI_NEW_STMT);
          update_stmt (stmt);
          sra_stats.subreplacements++;
        }
      else if (*refreshed == SRA_UDH_NONE
               && lacc->grp_read && !lacc->grp_covered)
        *refreshed = handle_unscalarized_data_in_subtree (top_racc, lhs,
                                                          old_gsi);

      if (lacc->first_child)
        load_assign_lhs_subreplacements (lacc->first_child, top_racc,
                                         left_offset, right_offset,
                                         old_gsi, new_gsi, refreshed, lhs);
      lacc = lacc->next_sibling;
    }
  while (lacc);
}
 
 
/* Modify assignments with a CONSTRUCTOR on their RHS.  STMT contains a pointer
   to the assignment and GSI is the statement iterator pointing at it.  Returns
   the same values as sra_modify_assign.  */

static enum scan_assign_result
sra_modify_constructor_assign (gimple *stmt, gimple_stmt_iterator *gsi)
{
  tree lhs = gimple_assign_lhs (*stmt);
  struct access *acc;

  acc = get_access_for_expr (lhs);
  if (!acc)
    return SRA_SA_NONE;

  if (VEC_length (constructor_elt,
                  CONSTRUCTOR_ELTS (gimple_assign_rhs1 (*stmt))) > 0)
    {
      /* I have never seen this code path trigger, but if it can happen, the
         following should handle it gracefully.  */
      if (access_has_children_p (acc))
        generate_subtree_copies (acc->first_child, acc->base, 0, 0, 0, gsi,
                                 true, true);
      return SRA_SA_PROCESSED;
    }

  if (acc->grp_covered)
    {
      init_subtree_with_zero (acc, gsi, false);
      unlink_stmt_vdef (*stmt);
      gsi_remove (gsi, true);
      return SRA_SA_REMOVED;
    }
  else
    {
      init_subtree_with_zero (acc, gsi, true);
      return SRA_SA_PROCESSED;
    }
}
 
 
/* Create a new suitable default definition SSA_NAME and replace all uses of
   SSA with it.  RACC is an access describing the uninitialized part of an
   aggregate that is being loaded.  */

static void
replace_uses_with_default_def_ssa_name (tree ssa, struct access *racc)
{
  tree repl, decl;

  decl = get_unrenamed_access_replacement (racc);

  repl = gimple_default_def (cfun, decl);
  if (!repl)
    {
      repl = make_ssa_name (decl, gimple_build_nop ());
      set_default_def (decl, repl);
    }

  replace_uses_by (ssa, repl);
}
 
 
/* Callback of scan_function to process assign statements.  It examines both
   sides of the statement, replaces them with a scalar replacement if there is
   one and generates copying of replacements if scalarized aggregates have been
   used in the assignment.  STMT is a pointer to the assign statement, GSI is
   used to hold generated statements for type conversions and subtree
   copying.  */

static enum scan_assign_result
sra_modify_assign (gimple *stmt, gimple_stmt_iterator *gsi,
                   void *data ATTRIBUTE_UNUSED)
{
  struct access *lacc, *racc;
  tree lhs, rhs;
  bool modify_this_stmt = false;
  bool force_gimple_rhs = false;
  location_t loc = gimple_location (*stmt);
  gimple_stmt_iterator orig_gsi = *gsi;

  if (!gimple_assign_single_p (*stmt))
    return SRA_SA_NONE;
  lhs = gimple_assign_lhs (*stmt);
  rhs = gimple_assign_rhs1 (*stmt);

  if (TREE_CODE (rhs) == CONSTRUCTOR)
    return sra_modify_constructor_assign (stmt, gsi);

  if (TREE_CODE (rhs) == REALPART_EXPR || TREE_CODE (lhs) == REALPART_EXPR
      || TREE_CODE (rhs) == IMAGPART_EXPR || TREE_CODE (lhs) == IMAGPART_EXPR
      || TREE_CODE (rhs) == BIT_FIELD_REF || TREE_CODE (lhs) == BIT_FIELD_REF)
    {
      modify_this_stmt = sra_modify_expr (gimple_assign_rhs1_ptr (*stmt),
                                          gsi, false, data);
      modify_this_stmt |= sra_modify_expr (gimple_assign_lhs_ptr (*stmt),
                                           gsi, true, data);
      return modify_this_stmt ? SRA_SA_PROCESSED : SRA_SA_NONE;
    }

  lacc = get_access_for_expr (lhs);
  racc = get_access_for_expr (rhs);
  if (!lacc && !racc)
    return SRA_SA_NONE;

  if (lacc && lacc->grp_to_be_replaced)
    {
      lhs = get_access_replacement (lacc);
      gimple_assign_set_lhs (*stmt, lhs);
      modify_this_stmt = true;
      if (lacc->grp_partial_lhs)
        force_gimple_rhs = true;
      sra_stats.exprs++;
    }

  if (racc && racc->grp_to_be_replaced)
    {
      rhs = get_access_replacement (racc);
      modify_this_stmt = true;
      if (racc->grp_partial_lhs)
        force_gimple_rhs = true;
      sra_stats.exprs++;
    }

  if (modify_this_stmt)
    {
      if (!useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
        {
          /* If we can avoid creating a VIEW_CONVERT_EXPR do so.
             ???  This should move to fold_stmt which we simply should
             call after building a VIEW_CONVERT_EXPR here.  */
          if (AGGREGATE_TYPE_P (TREE_TYPE (lhs))
              && !access_has_children_p (lacc))
            {
              tree expr = lhs;
              if (build_ref_for_offset (&expr, TREE_TYPE (lhs), 0,
                                        TREE_TYPE (rhs), false))
                {
                  lhs = expr;
                  gimple_assign_set_lhs (*stmt, expr);
                }
            }
          else if (AGGREGATE_TYPE_P (TREE_TYPE (rhs))
                   && !access_has_children_p (racc))
            {
              tree expr = rhs;
              if (build_ref_for_offset (&expr, TREE_TYPE (rhs), 0,
                                        TREE_TYPE (lhs), false))
                rhs = expr;
            }
          if (!useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
            {
              rhs = fold_build1_loc (loc, VIEW_CONVERT_EXPR, TREE_TYPE (lhs), rhs);
              if (is_gimple_reg_type (TREE_TYPE (lhs))
                  && TREE_CODE (lhs) != SSA_NAME)
                force_gimple_rhs = true;
            }
        }
    }
 
 
  /* From this point on, the function deals with assignments in between
     aggregates when at least one has scalar reductions of some of its
     components.  There are three possible scenarios: 1) both the LHS and RHS
     have to-be-scalarized components, 2) only the RHS does or 3) only the LHS
     does.

     In the first case, we would like to load the LHS components from RHS
     components whenever possible.  If that is not possible, we would like to
     read it directly from the RHS (after updating it by storing in it its own
     components).  If there are some necessary unscalarized data in the LHS,
     those will be loaded by the original assignment too.  If neither of these
     cases happen, the original statement can be removed.  Most of this is done
     by load_assign_lhs_subreplacements.

     In the second case, we would like to store all RHS scalarized components
     directly into LHS and if they cover the aggregate completely, remove the
     statement too.  In the third case, we want the LHS components to be loaded
     directly from the RHS (DSE will remove the original statement if it
     becomes redundant).

     This is a bit complex but manageable when types match and when unions do
     not cause confusion in a way that we cannot really load a component of LHS
     from the RHS or vice versa (the access representing this level can have
     subaccesses that are accessible only through a different union field at a
     higher level - different from the one used in the examined expression).
     Unions are fun.

     Therefore, I specially handle a fourth case, happening when there is a
     specific type cast or it is impossible to locate a scalarized subaccess on
     the other side of the expression.  If that happens, I simply "refresh" the
     RHS by storing in it its scalarized components, leave the original
     statement there to do the copying and then load the scalar replacements of
     the LHS.  This is what the first branch does.  */
 
 
  if (gimple_has_volatile_ops (*stmt)
      || contains_view_convert_expr_p (rhs)
      || contains_view_convert_expr_p (lhs)
      || (access_has_children_p (racc)
          && !ref_expr_for_all_replacements_p (racc, lhs, racc->offset))
      || (access_has_children_p (lacc)
          && !ref_expr_for_all_replacements_p (lacc, rhs, lacc->offset)))
    {
      if (access_has_children_p (racc))
        generate_subtree_copies (racc->first_child, racc->base, 0, 0, 0,
                                 gsi, false, false);
      if (access_has_children_p (lacc))
        generate_subtree_copies (lacc->first_child, lacc->base, 0, 0, 0,
                                 gsi, true, true);
      sra_stats.separate_lhs_rhs_handling++;
    }
  else
    {
      if (access_has_children_p (lacc) && access_has_children_p (racc))
        {
          gimple_stmt_iterator orig_gsi = *gsi;
          enum unscalarized_data_handling refreshed;

          if (lacc->grp_read && !lacc->grp_covered)
            refreshed = handle_unscalarized_data_in_subtree (racc, lhs, gsi);
          else
            refreshed = SRA_UDH_NONE;

          load_assign_lhs_subreplacements (lacc->first_child, racc,
                                           lacc->offset, racc->offset,
                                           &orig_gsi, gsi, &refreshed, lhs);
          if (refreshed != SRA_UDH_RIGHT)
            {
              gsi_next (gsi);
              unlink_stmt_vdef (*stmt);
              gsi_remove (&orig_gsi, true);
              sra_stats.deleted++;
              return SRA_SA_REMOVED;
            }
        }
      else
        {
          if (racc)
            {
              if (!racc->grp_to_be_replaced && !racc->grp_unscalarized_data)
                {
                  if (racc->first_child)
                    generate_subtree_copies (racc->first_child, lhs,
                                             racc->offset, 0, 0, gsi,
                                             false, false);
                  gcc_assert (*stmt == gsi_stmt (*gsi));
                  if (TREE_CODE (lhs) == SSA_NAME)
                    replace_uses_with_default_def_ssa_name (lhs, racc);

                  unlink_stmt_vdef (*stmt);
                  gsi_remove (gsi, true);
                  sra_stats.deleted++;
                  return SRA_SA_REMOVED;
                }
              else if (racc->first_child)
                generate_subtree_copies (racc->first_child, lhs,
                                         racc->offset, 0, 0, gsi, false, true);
            }
          if (access_has_children_p (lacc))
            generate_subtree_copies (lacc->first_child, rhs, lacc->offset,
                                     0, 0, gsi, true, true);
        }
    }

  /* This gimplification must be done after generate_subtree_copies, lest we
     insert the subtree copies in the middle of the gimplified sequence.  */
  if (force_gimple_rhs)
    rhs = force_gimple_operand_gsi (&orig_gsi, rhs, true, NULL_TREE,
                                    true, GSI_SAME_STMT);
  if (gimple_assign_rhs1 (*stmt) != rhs)
    {
      gimple_assign_set_rhs_from_tree (&orig_gsi, rhs);
      gcc_assert (*stmt == gsi_stmt (orig_gsi));
    }

  return modify_this_stmt ? SRA_SA_PROCESSED : SRA_SA_NONE;
}
 
 
/* Generate statements initializing scalar replacements of parts of function
   parameters.  */

static void
initialize_parameter_reductions (void)
{
  gimple_stmt_iterator gsi;
  gimple_seq seq = NULL;
  tree parm;

  for (parm = DECL_ARGUMENTS (current_function_decl);
       parm;
       parm = TREE_CHAIN (parm))
    {
      VEC (access_p, heap) *access_vec;
      struct access *access;

      if (!bitmap_bit_p (candidate_bitmap, DECL_UID (parm)))
        continue;
      access_vec = get_base_access_vector (parm);
      if (!access_vec)
        continue;

      if (!seq)
        {
          seq = gimple_seq_alloc ();
          gsi = gsi_start (seq);
        }

      for (access = VEC_index (access_p, access_vec, 0);
           access;
           access = access->next_grp)
        generate_subtree_copies (access, parm, 0, 0, 0, &gsi, true, true);
    }

  if (seq)
    gsi_insert_seq_on_edge_immediate (single_succ_edge (ENTRY_BLOCK_PTR), seq);
}
 
 
/* The "main" function of intraprocedural SRA passes.  Runs the analysis and if
   it reveals there are components of some aggregates to be scalarized, it runs
   the required transformations.  */
static unsigned int
perform_intra_sra (void)
{
  int ret = 0;
  sra_initialize ();

  if (!find_var_candidates ())
    goto out;

  if (!scan_function (build_access_from_expr, build_accesses_from_assign, NULL,
                      true, NULL))
    goto out;

  if (!analyze_all_variable_accesses ())
    goto out;

  scan_function (sra_modify_expr, sra_modify_assign, NULL, false, NULL);
  initialize_parameter_reductions ();

  statistics_counter_event (cfun, "Scalar replacements created",
                            sra_stats.replacements);
  statistics_counter_event (cfun, "Modified expressions", sra_stats.exprs);
  statistics_counter_event (cfun, "Subtree copy stmts",
                            sra_stats.subtree_copies);
  statistics_counter_event (cfun, "Subreplacement stmts",
                            sra_stats.subreplacements);
  statistics_counter_event (cfun, "Deleted stmts", sra_stats.deleted);
  statistics_counter_event (cfun, "Separate LHS and RHS handling",
                            sra_stats.separate_lhs_rhs_handling);

  ret = TODO_update_ssa;

 out:
  sra_deinitialize ();
  return ret;
}
 
 
/* Perform early intraprocedural SRA.  */
static unsigned int
early_intra_sra (void)
{
  sra_mode = SRA_MODE_EARLY_INTRA;
  return perform_intra_sra ();
}

/* Perform "late" intraprocedural SRA.  */
static unsigned int
late_intra_sra (void)
{
  sra_mode = SRA_MODE_INTRA;
  return perform_intra_sra ();
}


static bool
gate_intra_sra (void)
{
  return flag_tree_sra != 0;
}
 
 
 
 
struct gimple_opt_pass pass_sra_early =
struct gimple_opt_pass pass_sra_early =
{
{
 {
 {
  GIMPLE_PASS,
  GIMPLE_PASS,
  "esra",                               /* name */
  "esra",                               /* name */
  gate_intra_sra,                       /* gate */
  gate_intra_sra,                       /* gate */
  early_intra_sra,                      /* execute */
  early_intra_sra,                      /* execute */
  NULL,                                 /* sub */
  NULL,                                 /* sub */
  NULL,                                 /* next */
  NULL,                                 /* next */
  0,                                     /* static_pass_number */
  0,                                     /* static_pass_number */
  TV_TREE_SRA,                          /* tv_id */
  TV_TREE_SRA,                          /* tv_id */
  PROP_cfg | PROP_ssa,                  /* properties_required */
  PROP_cfg | PROP_ssa,                  /* properties_required */
  0,                                     /* properties_provided */
  0,                                     /* properties_provided */
  0,                                     /* properties_destroyed */
  0,                                     /* properties_destroyed */
  0,                                     /* todo_flags_start */
  0,                                     /* todo_flags_start */
  TODO_dump_func
  TODO_dump_func
  | TODO_update_ssa
  | TODO_update_ssa
  | TODO_ggc_collect
  | TODO_ggc_collect
  | TODO_verify_ssa                     /* todo_flags_finish */
  | TODO_verify_ssa                     /* todo_flags_finish */
 }
 }
};
};
 
 
struct gimple_opt_pass pass_sra =
struct gimple_opt_pass pass_sra =
{
{
 {
 {
  GIMPLE_PASS,
  GIMPLE_PASS,
  "sra",                                /* name */
  "sra",                                /* name */
  gate_intra_sra,                       /* gate */
  gate_intra_sra,                       /* gate */
  late_intra_sra,                       /* execute */
  late_intra_sra,                       /* execute */
  NULL,                                 /* sub */
  NULL,                                 /* sub */
  NULL,                                 /* next */
  NULL,                                 /* next */
  0,                                     /* static_pass_number */
  0,                                     /* static_pass_number */
  TV_TREE_SRA,                          /* tv_id */
  TV_TREE_SRA,                          /* tv_id */
  PROP_cfg | PROP_ssa,                  /* properties_required */
  PROP_cfg | PROP_ssa,                  /* properties_required */
  0,                                     /* properties_provided */
  0,                                     /* properties_provided */
  0,                                     /* properties_destroyed */
  0,                                     /* properties_destroyed */
  TODO_update_address_taken,            /* todo_flags_start */
  TODO_update_address_taken,            /* todo_flags_start */
  TODO_dump_func
  TODO_dump_func
  | TODO_update_ssa
  | TODO_update_ssa
  | TODO_ggc_collect
  | TODO_ggc_collect
  | TODO_verify_ssa                     /* todo_flags_finish */
  | TODO_verify_ssa                     /* todo_flags_finish */
 }
 }
};
};
 
 
 
 
/* Return true iff PARM (which must be a parm_decl) is an unused scalar
   parameter.  */

static bool
is_unused_scalar_param (tree parm)
{
  tree name;
  return (is_gimple_reg (parm)
          && (!(name = gimple_default_def (cfun, parm))
              || has_zero_uses (name)));
}
 
 
/* Scan immediate uses of a default definition SSA name of a parameter PARM and
   examine whether there are any direct or otherwise infeasible ones.  If so,
   return true, otherwise return false.  PARM must be a gimple register with a
   non-NULL default definition.  */

static bool
ptr_parm_has_direct_uses (tree parm)
{
  imm_use_iterator ui;
  gimple stmt;
  tree name = gimple_default_def (cfun, parm);
  bool ret = false;

  FOR_EACH_IMM_USE_STMT (stmt, ui, name)
    {
      int uses_ok = 0;
      use_operand_p use_p;

      if (is_gimple_debug (stmt))
        continue;

      /* Valid uses include dereferences on the lhs and the rhs.  */
      if (gimple_has_lhs (stmt))
        {
          tree lhs = gimple_get_lhs (stmt);
          while (handled_component_p (lhs))
            lhs = TREE_OPERAND (lhs, 0);
          if (INDIRECT_REF_P (lhs)
              && TREE_OPERAND (lhs, 0) == name)
            uses_ok++;
        }
      if (gimple_assign_single_p (stmt))
        {
          tree rhs = gimple_assign_rhs1 (stmt);
          while (handled_component_p (rhs))
            rhs = TREE_OPERAND (rhs, 0);
          if (INDIRECT_REF_P (rhs)
              && TREE_OPERAND (rhs, 0) == name)
            uses_ok++;
        }
      else if (is_gimple_call (stmt))
        {
          unsigned i;
          for (i = 0; i < gimple_call_num_args (stmt); ++i)
            {
              tree arg = gimple_call_arg (stmt, i);
              while (handled_component_p (arg))
                arg = TREE_OPERAND (arg, 0);
              if (INDIRECT_REF_P (arg)
                  && TREE_OPERAND (arg, 0) == name)
                uses_ok++;
            }
        }

      /* If the number of valid uses does not match the number of
         uses in this stmt there is an unhandled use.  */
      FOR_EACH_IMM_USE_ON_STMT (use_p, ui)
        --uses_ok;

      if (uses_ok != 0)
        ret = true;

      if (ret)
        BREAK_FROM_IMM_USE_STMT (ui);
    }

  return ret;
}
 
 
/* Identify candidates for reduction for IPA-SRA based on their type and mark
   them in candidate_bitmap.  Note that these do not necessarily include
   parameters which are unused and thus can be removed.  Return true iff any
   such candidate has been found.  */

static bool
find_param_candidates (void)
{
  tree parm;
  int count = 0;
  bool ret = false;

  for (parm = DECL_ARGUMENTS (current_function_decl);
       parm;
       parm = TREE_CHAIN (parm))
    {
      tree type = TREE_TYPE (parm);

      count++;

      if (TREE_THIS_VOLATILE (parm)
          || TREE_ADDRESSABLE (parm)
          || is_va_list_type (type))
        continue;

      if (is_unused_scalar_param (parm))
        {
          ret = true;
          continue;
        }

      if (POINTER_TYPE_P (type))
        {
          type = TREE_TYPE (type);

          if (TREE_CODE (type) == FUNCTION_TYPE
              || TYPE_VOLATILE (type)
              || !is_gimple_reg (parm)
              || is_va_list_type (type)
              || ptr_parm_has_direct_uses (parm))
            continue;
        }
      else if (!AGGREGATE_TYPE_P (type))
        continue;

      if (!COMPLETE_TYPE_P (type)
          || !host_integerp (TYPE_SIZE (type), 1)
          || tree_low_cst (TYPE_SIZE (type), 1) == 0
          || (AGGREGATE_TYPE_P (type)
              && type_internals_preclude_sra_p (type)))
        continue;

      bitmap_set_bit (candidate_bitmap, DECL_UID (parm));
      ret = true;
      if (dump_file && (dump_flags & TDF_DETAILS))
        {
          fprintf (dump_file, "Candidate (%d): ", DECL_UID (parm));
          print_generic_expr (dump_file, parm, 0);
          fprintf (dump_file, "\n");
        }
    }

  func_param_count = count;
  return ret;
}
 
 
/* Callback of walk_aliased_vdefs, marks the access passed as DATA as
   maybe_modified.  */

static bool
mark_maybe_modified (ao_ref *ao ATTRIBUTE_UNUSED, tree vdef ATTRIBUTE_UNUSED,
                     void *data)
{
  struct access *repr = (struct access *) data;

  repr->grp_maybe_modified = 1;
  return true;
}

/* Analyze what representatives (in linked lists accessible from
   REPRESENTATIVES) can be modified by side effects of statements in the
   current function.  */

static void
analyze_modified_params (VEC (access_p, heap) *representatives)
{
  int i;

  for (i = 0; i < func_param_count; i++)
    {
      struct access *repr;

      for (repr = VEC_index (access_p, representatives, i);
           repr;
           repr = repr->next_grp)
        {
          struct access *access;
          bitmap visited;
          ao_ref ar;

          if (no_accesses_p (repr))
            continue;
          if (!POINTER_TYPE_P (TREE_TYPE (repr->base))
              || repr->grp_maybe_modified)
            continue;

          ao_ref_init (&ar, repr->expr);
          visited = BITMAP_ALLOC (NULL);
          for (access = repr; access; access = access->next_sibling)
            {
              /* All accesses are read ones, otherwise grp_maybe_modified would
                 be trivially set.  */
              walk_aliased_vdefs (&ar, gimple_vuse (access->stmt),
                                  mark_maybe_modified, repr, &visited);
              if (repr->grp_maybe_modified)
                break;
            }
          BITMAP_FREE (visited);
        }
    }
}
 
 
/* Propagate distances in bb_dereferences in the opposite direction than the
   control flow edges, in each step storing the maximum of the current value
   and the minimum of all successors.  These steps are repeated until the table
   stabilizes.  Note that BBs which might terminate the function (according to
   the final_bbs bitmap) are never updated in this way.  */

static void
propagate_dereference_distances (void)
{
  VEC (basic_block, heap) *queue;
  basic_block bb;

  queue = VEC_alloc (basic_block, heap, last_basic_block_for_function (cfun));
  VEC_quick_push (basic_block, queue, ENTRY_BLOCK_PTR);
  FOR_EACH_BB (bb)
    {
      VEC_quick_push (basic_block, queue, bb);
      bb->aux = bb;
    }

  while (!VEC_empty (basic_block, queue))
    {
      edge_iterator ei;
      edge e;
      bool change = false;
      int i;

      bb = VEC_pop (basic_block, queue);
      bb->aux = NULL;

      if (bitmap_bit_p (final_bbs, bb->index))
        continue;

      for (i = 0; i < func_param_count; i++)
        {
          int idx = bb->index * func_param_count + i;
          bool first = true;
          HOST_WIDE_INT inh = 0;

          FOR_EACH_EDGE (e, ei, bb->succs)
          {
            int succ_idx = e->dest->index * func_param_count + i;

            if (e->src == EXIT_BLOCK_PTR)
              continue;

            if (first)
              {
                first = false;
                inh = bb_dereferences [succ_idx];
              }
            else if (bb_dereferences [succ_idx] < inh)
              inh = bb_dereferences [succ_idx];
          }

          if (!first && bb_dereferences[idx] < inh)
            {
              bb_dereferences[idx] = inh;
              change = true;
            }
        }

      if (change && !bitmap_bit_p (final_bbs, bb->index))
        FOR_EACH_EDGE (e, ei, bb->preds)
          {
            if (e->src->aux)
              continue;

            e->src->aux = e->src;
            VEC_quick_push (basic_block, queue, e->src);
          }
    }

  VEC_free (basic_block, heap, queue);
}
 
 
/* Dump a dereferences TABLE with heading STR to file F.  */

static void
dump_dereferences_table (FILE *f, const char *str, HOST_WIDE_INT *table)
{
  basic_block bb;

  fprintf (f, "%s", str);
  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
    {
      fprintf (f, "%4i  %i   ", bb->index, bitmap_bit_p (final_bbs, bb->index));
      if (bb != EXIT_BLOCK_PTR)
        {
          int i;
          for (i = 0; i < func_param_count; i++)
            {
              int idx = bb->index * func_param_count + i;
              fprintf (f, " %4" HOST_WIDE_INT_PRINT "d", table[idx]);
            }
        }
      fprintf (f, "\n");
    }
  fprintf (f, "\n");
}
 
 
/* Determine which (parts of) parameters passed by reference that are not
   assigned to are not certainly dereferenced in this function and thus the
   dereferencing cannot be safely moved to the caller without potentially
   introducing a segfault.  Mark such REPRESENTATIVES as
   grp_not_necessarilly_dereferenced.

   The maximum dereferenced "distance," i.e. the offset + size of the accessed
   part, is calculated for each pointer parameter rather than a simple boolean,
   to handle cases when only a fraction of the whole aggregate is allocated
   (see testsuite/gcc.c-torture/execute/ipa-sra-2.c for an example).

   The maximum dereference distances for each pointer parameter and BB are
   already stored in bb_dereferences.  This routine simply propagates these
   values upwards by propagate_dereference_distances and then compares the
   distances of individual parameters in the ENTRY BB to the equivalent
   distances of each representative of a (fraction of a) parameter.  */

static void
analyze_caller_dereference_legality (VEC (access_p, heap) *representatives)
{
  int i;

  if (dump_file && (dump_flags & TDF_DETAILS))
    dump_dereferences_table (dump_file,
                             "Dereference table before propagation:\n",
                             bb_dereferences);

  propagate_dereference_distances ();

  if (dump_file && (dump_flags & TDF_DETAILS))
    dump_dereferences_table (dump_file,
                             "Dereference table after propagation:\n",
                             bb_dereferences);

  for (i = 0; i < func_param_count; i++)
    {
      struct access *repr = VEC_index (access_p, representatives, i);
      int idx = ENTRY_BLOCK_PTR->index * func_param_count + i;

      if (!repr || no_accesses_p (repr))
        continue;

      do
        {
          if ((repr->offset + repr->size) > bb_dereferences[idx])
            repr->grp_not_necessarilly_dereferenced = 1;
          repr = repr->next_grp;
        }
      while (repr);
    }
}
 
 
/* Return the representative access for the parameter declaration PARM if it is
   a scalar passed by reference which is not written to and the pointer value
   is not used directly.  Thus, if it is legal to dereference it in the caller
   and we can rule out modifications through aliases, such a parameter should
   be turned into one passed by value.  Return NULL otherwise.  */

static struct access *
unmodified_by_ref_scalar_representative (tree parm)
{
  int i, access_count;
  struct access *repr;
  VEC (access_p, heap) *access_vec;

  access_vec = get_base_access_vector (parm);
  gcc_assert (access_vec);
  repr = VEC_index (access_p, access_vec, 0);
  if (repr->write)
    return NULL;
  repr->group_representative = repr;

  access_count = VEC_length (access_p, access_vec);
  for (i = 1; i < access_count; i++)
    {
      struct access *access = VEC_index (access_p, access_vec, i);
      if (access->write)
        return NULL;
      access->group_representative = repr;
      access->next_sibling = repr->next_sibling;
      repr->next_sibling = access;
    }

  repr->grp_read = 1;
  repr->grp_scalar_ptr = 1;
  return repr;
}
 
 
/* Return true iff this access precludes IPA-SRA of the parameter it is
   associated with.  */

static bool
access_precludes_ipa_sra_p (struct access *access)
{
  /* Avoid issues such as the second simple testcase in PR 42025.  The problem
     is an incompatible assignment in a call statement (and possibly even in
     asm statements).  This can be relaxed by using a new temporary but only
     for non-TREE_ADDRESSABLE types and is probably not worth the complexity.
     (In intraprocedural SRA we deal with this by keeping the old aggregate
     around, something we cannot do in IPA-SRA.)  */
  if (access->write
      && (is_gimple_call (access->stmt)
          || gimple_code (access->stmt) == GIMPLE_ASM))
    return true;

  return false;
}
 
 
 
 
/* Sort collected accesses for parameter PARM, identify representatives for
   each accessed region and link them together.  Return NULL if there are
   different but overlapping accesses, return the special pointer value meaning
   there are no accesses for this parameter if that is the case, and return the
   first representative otherwise.  Set *RO_GRP if there is a group of accesses
   with only read (i.e. no write) accesses.  */

static struct access *
splice_param_accesses (tree parm, bool *ro_grp)
{
  int i, j, access_count, group_count;
  int agg_size, total_size = 0;
  struct access *access, *res, **prev_acc_ptr = &res;
  VEC (access_p, heap) *access_vec;

  access_vec = get_base_access_vector (parm);
  if (!access_vec)
    return &no_accesses_representant;
  access_count = VEC_length (access_p, access_vec);

  qsort (VEC_address (access_p, access_vec), access_count, sizeof (access_p),
         compare_access_positions);

  i = 0;
  total_size = 0;
  group_count = 0;
  while (i < access_count)
    {
      bool modification;
      access = VEC_index (access_p, access_vec, i);
      modification = access->write;
      if (access_precludes_ipa_sra_p (access))
        return NULL;

      /* Access is about to become group representative unless we find some
         nasty overlap which would preclude us from breaking this parameter
         apart.  */

      j = i + 1;
      while (j < access_count)
        {
          struct access *ac2 = VEC_index (access_p, access_vec, j);
          if (ac2->offset != access->offset)
            {
              /* All or nothing law for parameters.  */
              if (access->offset + access->size > ac2->offset)
                return NULL;
              else
                break;
            }
          else if (ac2->size != access->size)
            return NULL;

          if (access_precludes_ipa_sra_p (ac2))
            return NULL;

          modification |= ac2->write;
          ac2->group_representative = access;
          ac2->next_sibling = access->next_sibling;
          access->next_sibling = ac2;
          j++;
        }

      group_count++;
      access->grp_maybe_modified = modification;
      if (!modification)
        *ro_grp = true;
      *prev_acc_ptr = access;
      prev_acc_ptr = &access->next_grp;
      total_size += access->size;
      i = j;
    }

  if (POINTER_TYPE_P (TREE_TYPE (parm)))
    agg_size = tree_low_cst (TYPE_SIZE (TREE_TYPE (TREE_TYPE (parm))), 1);
  else
    agg_size = tree_low_cst (TYPE_SIZE (TREE_TYPE (parm)), 1);
  if (total_size >= agg_size)
    return NULL;

  gcc_assert (group_count > 0);
  return res;
}
 
 
/* Decide whether parameters with representative accesses given by REPR should
/* Decide whether parameters with representative accesses given by REPR should
   be reduced into components.  */
   be reduced into components.  */
 
 
static int
static int
decide_one_param_reduction (struct access *repr)
decide_one_param_reduction (struct access *repr)
{
{
  int total_size, cur_parm_size, agg_size, new_param_count, parm_size_limit;
  int total_size, cur_parm_size, agg_size, new_param_count, parm_size_limit;
  bool by_ref;
  bool by_ref;
  tree parm;
  tree parm;
 
 
  parm = repr->base;
  parm = repr->base;
  cur_parm_size = tree_low_cst (TYPE_SIZE (TREE_TYPE (parm)), 1);
  cur_parm_size = tree_low_cst (TYPE_SIZE (TREE_TYPE (parm)), 1);
  gcc_assert (cur_parm_size > 0);
  gcc_assert (cur_parm_size > 0);
 
 
  if (POINTER_TYPE_P (TREE_TYPE (parm)))
  if (POINTER_TYPE_P (TREE_TYPE (parm)))
    {
    {
      by_ref = true;
      by_ref = true;
      agg_size = tree_low_cst (TYPE_SIZE (TREE_TYPE (TREE_TYPE (parm))), 1);
      agg_size = tree_low_cst (TYPE_SIZE (TREE_TYPE (TREE_TYPE (parm))), 1);
    }
    }
  else
  else
    {
    {
      by_ref = false;
      by_ref = false;
      agg_size = cur_parm_size;
      agg_size = cur_parm_size;
    }
    }
 
 
  if (dump_file)
  if (dump_file)
    {
    {
      struct access *acc;
      struct access *acc;
      fprintf (dump_file, "Evaluating PARAM group sizes for ");
      fprintf (dump_file, "Evaluating PARAM group sizes for ");
      print_generic_expr (dump_file, parm, 0);
      print_generic_expr (dump_file, parm, 0);
      fprintf (dump_file, " (UID: %u): \n", DECL_UID (parm));
      fprintf (dump_file, " (UID: %u): \n", DECL_UID (parm));
      for (acc = repr; acc; acc = acc->next_grp)
      for (acc = repr; acc; acc = acc->next_grp)
        dump_access (dump_file, acc, true);
        dump_access (dump_file, acc, true);
    }
    }
 
 
  total_size = 0;
  total_size = 0;
  new_param_count = 0;
  new_param_count = 0;
 
 
  for (; repr; repr = repr->next_grp)
  for (; repr; repr = repr->next_grp)
    {
    {
      gcc_assert (parm == repr->base);
      gcc_assert (parm == repr->base);
      new_param_count++;
      new_param_count++;
 
 
      if (!by_ref || (!repr->grp_maybe_modified
      if (!by_ref || (!repr->grp_maybe_modified
                      && !repr->grp_not_necessarilly_dereferenced))
                      && !repr->grp_not_necessarilly_dereferenced))
        total_size += repr->size;
        total_size += repr->size;
      else
      else
        total_size += cur_parm_size;
        total_size += cur_parm_size;
    }
    }
 
 
  gcc_assert (new_param_count > 0);
  gcc_assert (new_param_count > 0);
 
 
  if (optimize_function_for_size_p (cfun))
  if (optimize_function_for_size_p (cfun))
    parm_size_limit = cur_parm_size;
    parm_size_limit = cur_parm_size;
  else
  else
    parm_size_limit = (PARAM_VALUE (PARAM_IPA_SRA_PTR_GROWTH_FACTOR)
    parm_size_limit = (PARAM_VALUE (PARAM_IPA_SRA_PTR_GROWTH_FACTOR)
                       * cur_parm_size);
                       * cur_parm_size);
 
 
  if (total_size < agg_size
  if (total_size < agg_size
      && total_size <= parm_size_limit)
      && total_size <= parm_size_limit)
    {
    {
      if (dump_file)
      if (dump_file)
        fprintf (dump_file, "    ....will be split into %i components\n",
        fprintf (dump_file, "    ....will be split into %i components\n",
                 new_param_count);
                 new_param_count);
      return new_param_count;
      return new_param_count;
    }
    }
  else
  else
    return 0;
    return 0;
}
}
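/* As an illustrative sketch (not code from this pass), consider

     struct S { int a; int b; int c; };
     int f (struct S *p) { return p->a + p->c; }

   Assuming 32-bit ints, the representative accesses p->a and p->c total 64
   bits while the aggregate is 96 bits, so total_size stays below both
   agg_size and the growth limit; decide_one_param_reduction above would
   return 2 and the pointer parameter could be split into two scalar
   components.  */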
 
 
/* The order of the following enums is important; we need to do extra work for
   UNUSED_PARAMS, BY_VAL_ACCESSES and UNMODIF_BY_REF_ACCESSES.  */
enum ipa_splicing_result { NO_GOOD_ACCESS, UNUSED_PARAMS, BY_VAL_ACCESSES,
                          MODIF_BY_REF_ACCESSES, UNMODIF_BY_REF_ACCESSES };

/* Identify representatives of all accesses to all candidate parameters for
   IPA-SRA.  Return result based on what representatives have been found. */

static enum ipa_splicing_result
splice_all_param_accesses (VEC (access_p, heap) **representatives)
{
  enum ipa_splicing_result result = NO_GOOD_ACCESS;
  tree parm;
  struct access *repr;

  *representatives = VEC_alloc (access_p, heap, func_param_count);

  for (parm = DECL_ARGUMENTS (current_function_decl);
       parm;
       parm = TREE_CHAIN (parm))
    {
      if (is_unused_scalar_param (parm))
        {
          VEC_quick_push (access_p, *representatives,
                          &no_accesses_representant);
          if (result == NO_GOOD_ACCESS)
            result = UNUSED_PARAMS;
        }
      else if (POINTER_TYPE_P (TREE_TYPE (parm))
               && is_gimple_reg_type (TREE_TYPE (TREE_TYPE (parm)))
               && bitmap_bit_p (candidate_bitmap, DECL_UID (parm)))
        {
          repr = unmodified_by_ref_scalar_representative (parm);
          VEC_quick_push (access_p, *representatives, repr);
          if (repr)
            result = UNMODIF_BY_REF_ACCESSES;
        }
      else if (bitmap_bit_p (candidate_bitmap, DECL_UID (parm)))
        {
          bool ro_grp = false;
          repr = splice_param_accesses (parm, &ro_grp);
          VEC_quick_push (access_p, *representatives, repr);

          if (repr && !no_accesses_p (repr))
            {
              if (POINTER_TYPE_P (TREE_TYPE (parm)))
                {
                  if (ro_grp)
                    result = UNMODIF_BY_REF_ACCESSES;
                  else if (result < MODIF_BY_REF_ACCESSES)
                    result = MODIF_BY_REF_ACCESSES;
                }
              else if (result < BY_VAL_ACCESSES)
                result = BY_VAL_ACCESSES;
            }
          else if (no_accesses_p (repr) && (result == NO_GOOD_ACCESS))
            result = UNUSED_PARAMS;
        }
      else
        VEC_quick_push (access_p, *representatives, NULL);
    }

  if (result == NO_GOOD_ACCESS)
    {
      VEC_free (access_p, heap, *representatives);
      *representatives = NULL;
      return NO_GOOD_ACCESS;
    }

  return result;
}

/* Return the index of BASE in PARMS.  Abort if it is not found.  */

static inline int
get_param_index (tree base, VEC(tree, heap) *parms)
{
  int i, len;

  len = VEC_length (tree, parms);
  for (i = 0; i < len; i++)
    if (VEC_index (tree, parms, i) == base)
      return i;
  gcc_unreachable ();
}

/* Convert the decisions made at the representative level into compact
   parameter adjustments.  REPRESENTATIVES are pointers to the first
   representatives of each parameter's accesses; ADJUSTMENTS_COUNT is the
   expected final number of adjustments.  */

static ipa_parm_adjustment_vec
turn_representatives_into_adjustments (VEC (access_p, heap) *representatives,
                                       int adjustments_count)
{
  VEC (tree, heap) *parms;
  ipa_parm_adjustment_vec adjustments;
  tree parm;
  int i;

  gcc_assert (adjustments_count > 0);
  parms = ipa_get_vector_of_formal_parms (current_function_decl);
  adjustments = VEC_alloc (ipa_parm_adjustment_t, heap, adjustments_count);
  parm = DECL_ARGUMENTS (current_function_decl);
  for (i = 0; i < func_param_count; i++, parm = TREE_CHAIN (parm))
    {
      struct access *repr = VEC_index (access_p, representatives, i);

      if (!repr || no_accesses_p (repr))
        {
          struct ipa_parm_adjustment *adj;

          adj = VEC_quick_push (ipa_parm_adjustment_t, adjustments, NULL);
          memset (adj, 0, sizeof (*adj));
          adj->base_index = get_param_index (parm, parms);
          adj->base = parm;
          if (!repr)
            adj->copy_param = 1;
          else
            adj->remove_param = 1;
        }
      else
        {
          struct ipa_parm_adjustment *adj;
          int index = get_param_index (parm, parms);

          for (; repr; repr = repr->next_grp)
            {
              adj = VEC_quick_push (ipa_parm_adjustment_t, adjustments, NULL);
              memset (adj, 0, sizeof (*adj));
              gcc_assert (repr->base == parm);
              adj->base_index = index;
              adj->base = repr->base;
              adj->type = repr->type;
              adj->offset = repr->offset;
              adj->by_ref = (POINTER_TYPE_P (TREE_TYPE (repr->base))
                             && (repr->grp_maybe_modified
                                 || repr->grp_not_necessarilly_dereferenced));
            }
        }
    }
  VEC_free (tree, heap, parms);
  return adjustments;
}
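/* Illustrative sketch (hypothetical values, not produced by this file): a
   pointer parameter at base_index 0 reduced to two 32-bit integer components
   at bit offsets 0 and 64 could yield the entries

     { base_index = 0, offset = 0,  type = int, by_ref = 0 }
     { base_index = 0, offset = 64, type = int, by_ref = 0 }

   whereas a copy_param entry passes the parameter through unchanged and a
   remove_param entry drops it entirely.  */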
 
 
/* Analyze the collected accesses and produce a plan for what to do with the
   parameters, in the form of adjustments; NULL means nothing should change.  */

static ipa_parm_adjustment_vec
analyze_all_param_acesses (void)
{
  enum ipa_splicing_result repr_state;
  bool proceed = false;
  int i, adjustments_count = 0;
  VEC (access_p, heap) *representatives;
  ipa_parm_adjustment_vec adjustments;

  repr_state = splice_all_param_accesses (&representatives);
  if (repr_state == NO_GOOD_ACCESS)
    return NULL;

  /* If there are any parameters passed by reference which are not modified
     directly, we need to check whether they can be modified indirectly.  */
  if (repr_state == UNMODIF_BY_REF_ACCESSES)
    {
      analyze_caller_dereference_legality (representatives);
      analyze_modified_params (representatives);
    }

  for (i = 0; i < func_param_count; i++)
    {
      struct access *repr = VEC_index (access_p, representatives, i);

      if (repr && !no_accesses_p (repr))
        {
          if (repr->grp_scalar_ptr)
            {
              adjustments_count++;
              if (repr->grp_not_necessarilly_dereferenced
                  || repr->grp_maybe_modified)
                VEC_replace (access_p, representatives, i, NULL);
              else
                {
                  proceed = true;
                  sra_stats.scalar_by_ref_to_by_val++;
                }
            }
          else
            {
              int new_components = decide_one_param_reduction (repr);

              if (new_components == 0)
                {
                  VEC_replace (access_p, representatives, i, NULL);
                  adjustments_count++;
                }
              else
                {
                  adjustments_count += new_components;
                  sra_stats.aggregate_params_reduced++;
                  sra_stats.param_reductions_created += new_components;
                  proceed = true;
                }
            }
        }
      else
        {
          if (no_accesses_p (repr))
            {
              proceed = true;
              sra_stats.deleted_unused_parameters++;
            }
          adjustments_count++;
        }
    }

  if (!proceed && dump_file)
    fprintf (dump_file, "NOT proceeding to change params.\n");

  if (proceed)
    adjustments = turn_representatives_into_adjustments (representatives,
                                                         adjustments_count);
  else
    adjustments = NULL;

  VEC_free (access_p, heap, representatives);
  return adjustments;
}

/* If a parameter replacement identified by ADJ does not yet exist in the form
   of a declaration, create it and record it; otherwise return the previously
   created one.  */

static tree
get_replaced_param_substitute (struct ipa_parm_adjustment *adj)
{
  tree repl;
  if (!adj->new_ssa_base)
    {
      char *pretty_name = make_fancy_name (adj->base);

      repl = create_tmp_var (TREE_TYPE (adj->base), "ISR");
      if (TREE_CODE (TREE_TYPE (repl)) == COMPLEX_TYPE
          || TREE_CODE (TREE_TYPE (repl)) == VECTOR_TYPE)
        DECL_GIMPLE_REG_P (repl) = 1;
      DECL_NAME (repl) = get_identifier (pretty_name);
      obstack_free (&name_obstack, pretty_name);

      get_var_ann (repl);
      add_referenced_var (repl);
      adj->new_ssa_base = repl;
    }
  else
    repl = adj->new_ssa_base;
  return repl;
}

/* Find the first adjustment for a particular parameter BASE in a vector of
   ADJUSTMENTS which is not a copy_param.  Return NULL if there is no such
   adjustment. */

static struct ipa_parm_adjustment *
get_adjustment_for_base (ipa_parm_adjustment_vec adjustments, tree base)
{
  int i, len;

  len = VEC_length (ipa_parm_adjustment_t, adjustments);
  for (i = 0; i < len; i++)
    {
      struct ipa_parm_adjustment *adj;

      adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
      if (!adj->copy_param && adj->base == base)
        return adj;
    }

  return NULL;
}

/* Callback for scan_function.  If the statement STMT defines an SSA_NAME of a
   parameter which is to be removed because its value is not used, replace the
   SSA_NAME with one relating to a created VAR_DECL, replace all of its uses
   too and return true (update_stmt is then issued for the statement by the
   caller).  DATA is a pointer to an adjustments vector.  */

static bool
replace_removed_params_ssa_names (gimple stmt, void *data)
{
  VEC (ipa_parm_adjustment_t, heap) *adjustments;
  struct ipa_parm_adjustment *adj;
  tree lhs, decl, repl, name;

  adjustments = (VEC (ipa_parm_adjustment_t, heap) *) data;
  if (gimple_code (stmt) == GIMPLE_PHI)
    lhs = gimple_phi_result (stmt);
  else if (is_gimple_assign (stmt))
    lhs = gimple_assign_lhs (stmt);
  else if (is_gimple_call (stmt))
    lhs = gimple_call_lhs (stmt);
  else
    gcc_unreachable ();

  if (TREE_CODE (lhs) != SSA_NAME)
    return false;
  decl = SSA_NAME_VAR (lhs);
  if (TREE_CODE (decl) != PARM_DECL)
    return false;

  adj = get_adjustment_for_base (adjustments, decl);
  if (!adj)
    return false;

  repl = get_replaced_param_substitute (adj);
  name = make_ssa_name (repl, stmt);

  if (dump_file)
    {
      fprintf (dump_file, "replacing an SSA name of a removed param ");
      print_generic_expr (dump_file, lhs, 0);
      fprintf (dump_file, " with ");
      print_generic_expr (dump_file, name, 0);
      fprintf (dump_file, "\n");
    }

  if (is_gimple_assign (stmt))
    gimple_assign_set_lhs (stmt, name);
  else if (is_gimple_call (stmt))
    gimple_call_set_lhs (stmt, name);
  else
    gimple_phi_set_result (stmt, name);

  replace_uses_by (lhs, name);
  release_ssa_name (lhs);
  return true;
}

/* Callback for scan_function and helper to sra_ipa_modify_assign.  If the
   expression *EXPR should be replaced by a reduction of a parameter, do so.
   DATA is a pointer to a vector of adjustments.  DONT_CONVERT specifies
   whether the function should care about type incompatibility between the
   current and new expressions.  If it is true, the function will leave
   incompatibility issues to the caller.

   When called directly by scan_function, DONT_CONVERT is true when the EXPR is
   a write (LHS) expression.  */

static bool
sra_ipa_modify_expr (tree *expr, gimple_stmt_iterator *gsi ATTRIBUTE_UNUSED,
                     bool dont_convert, void *data)
{
  ipa_parm_adjustment_vec adjustments;
  int i, len;
  struct ipa_parm_adjustment *adj, *cand = NULL;
  HOST_WIDE_INT offset, size, max_size;
  tree base, src;

  adjustments = (VEC (ipa_parm_adjustment_t, heap) *) data;
  len = VEC_length (ipa_parm_adjustment_t, adjustments);

  if (TREE_CODE (*expr) == BIT_FIELD_REF
      || TREE_CODE (*expr) == IMAGPART_EXPR
      || TREE_CODE (*expr) == REALPART_EXPR)
    {
      expr = &TREE_OPERAND (*expr, 0);
      dont_convert = false;
    }

  base = get_ref_base_and_extent (*expr, &offset, &size, &max_size);
  if (!base || size == -1 || max_size == -1)
    return false;

  if (INDIRECT_REF_P (base))
    base = TREE_OPERAND (base, 0);

  base = get_ssa_base_param (base);
  if (!base || TREE_CODE (base) != PARM_DECL)
    return false;

  for (i = 0; i < len; i++)
    {
      adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);

      if (adj->base == base &&
          (adj->offset == offset || adj->remove_param))
        {
          cand = adj;
          break;
        }
    }
  if (!cand || cand->copy_param || cand->remove_param)
    return false;

  if (cand->by_ref)
    {
      tree folded;
      src = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (cand->reduction)),
                    cand->reduction);
      folded = gimple_fold_indirect_ref (src);
      if (folded)
        src = folded;
    }
  else
    src = cand->reduction;

  if (dump_file && (dump_flags & TDF_DETAILS))
    {
      fprintf (dump_file, "About to replace expr ");
      print_generic_expr (dump_file, *expr, 0);
      fprintf (dump_file, " with ");
      print_generic_expr (dump_file, src, 0);
      fprintf (dump_file, "\n");
    }

  if (!dont_convert
      && !useless_type_conversion_p (TREE_TYPE (*expr), cand->type))
    {
      tree vce = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (*expr), src);
      *expr = vce;
    }
  else
    *expr = src;
  return true;
}
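/* Illustrative sketch (hypothetical names, not produced by this file): if a
   reduction named ISRA_0 replaces the component p->a of a split parameter,
   an occurrence of the old expression is rewritten in place, e.g.

     before:  tmp_1 = p->a;
     after:   tmp_1 = ISRA_0;

   with a VIEW_CONVERT_EXPR wrapped around the replacement when the old and
   new types are not trivially convertible.  */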
 
 
/* Callback for scan_function to process assign statements.  Performs
   essentially the same function as sra_ipa_modify_expr.  */

static enum scan_assign_result
sra_ipa_modify_assign (gimple *stmt_ptr, gimple_stmt_iterator *gsi, void *data)
{
  gimple stmt = *stmt_ptr;
  tree *lhs_p, *rhs_p;
  bool any;

  if (!gimple_assign_single_p (stmt))
    return SRA_SA_NONE;

  rhs_p = gimple_assign_rhs1_ptr (stmt);
  lhs_p = gimple_assign_lhs_ptr (stmt);

  any = sra_ipa_modify_expr (rhs_p, gsi, true, data);
  any |= sra_ipa_modify_expr (lhs_p, gsi, true, data);
  if (any)
    {
      tree new_rhs = NULL_TREE;

      if (!useless_type_conversion_p (TREE_TYPE (*lhs_p), TREE_TYPE (*rhs_p)))
        {
          if (TREE_CODE (*rhs_p) == CONSTRUCTOR)
            {
              /* V_C_Es of constructors can cause trouble (PR 42714).  */
              if (is_gimple_reg_type (TREE_TYPE (*lhs_p)))
                *rhs_p = fold_convert (TREE_TYPE (*lhs_p), integer_zero_node);
              else
                *rhs_p = build_constructor (TREE_TYPE (*lhs_p), 0);
            }
          else
            new_rhs = fold_build1_loc (gimple_location (stmt),
                                       VIEW_CONVERT_EXPR, TREE_TYPE (*lhs_p),
                                       *rhs_p);
        }
      else if (REFERENCE_CLASS_P (*rhs_p)
               && is_gimple_reg_type (TREE_TYPE (*lhs_p))
               && is_gimple_reg_type (TREE_TYPE (*lhs_p))
               && !is_gimple_reg (*lhs_p))
               && !is_gimple_reg (*lhs_p))
        /* This can happen when an assignment in between two single field
        /* This can happen when an assignment in between two single field
           structures is turned into an assignment in between two pointers to
           structures is turned into an assignment in between two pointers to
           scalars (PR 42237).  */
           scalars (PR 42237).  */
        new_rhs = *rhs_p;
        new_rhs = *rhs_p;
 
 
      if (new_rhs)
      if (new_rhs)
        {
        {
          tree tmp = force_gimple_operand_gsi (gsi, new_rhs, true, NULL_TREE,
          tree tmp = force_gimple_operand_gsi (gsi, new_rhs, true, NULL_TREE,
                                               true, GSI_SAME_STMT);
                                               true, GSI_SAME_STMT);
 
 
          gimple_assign_set_rhs_from_tree (gsi, tmp);
          gimple_assign_set_rhs_from_tree (gsi, tmp);
        }
        }
 
 
      return SRA_SA_PROCESSED;
      return SRA_SA_PROCESSED;
    }
    }
 
 
  return SRA_SA_NONE;
  return SRA_SA_NONE;
}
}
 
 
/* Call gimple_debug_bind_reset_value on all debug statements describing
   gimple register parameters that are being removed or replaced.  */

static void
sra_ipa_reset_debug_stmts (ipa_parm_adjustment_vec adjustments)
{
  int i, len;

  len = VEC_length (ipa_parm_adjustment_t, adjustments);
  for (i = 0; i < len; i++)
    {
      struct ipa_parm_adjustment *adj;
      imm_use_iterator ui;
      gimple stmt;
      tree name;

      adj = VEC_index (ipa_parm_adjustment_t, adjustments, i);
      if (adj->copy_param || !is_gimple_reg (adj->base))
        continue;
      name = gimple_default_def (cfun, adj->base);
      if (!name)
        continue;
      FOR_EACH_IMM_USE_STMT (stmt, ui, name)
        {
          /* All other users must have been removed by scan_function.  */
          gcc_assert (is_gimple_debug (stmt));
          gimple_debug_bind_reset_value (stmt);
          update_stmt (stmt);
        }
    }
}

/* Return true iff all callers have at least as many actual arguments as there
   are formal parameters in the current function.  */

static bool
all_callers_have_enough_arguments_p (struct cgraph_node *node)
{
  struct cgraph_edge *cs;
  for (cs = node->callers; cs; cs = cs->next_caller)
    if (!callsite_has_enough_arguments_p (cs->call_stmt))
      return false;

  return true;
}


/* Convert all callers of NODE to pass parameters as given in ADJUSTMENTS.  */

static void
convert_callers (struct cgraph_node *node, ipa_parm_adjustment_vec adjustments)
{
  tree old_cur_fndecl = current_function_decl;
  struct cgraph_edge *cs;
  bitmap recomputed_callers = BITMAP_ALLOC (NULL);

  for (cs = node->callers; cs; cs = cs->next_caller)
    {
      current_function_decl = cs->caller->decl;
      push_cfun (DECL_STRUCT_FUNCTION (cs->caller->decl));

      if (dump_file)
        fprintf (dump_file, "Adjusting call (%i -> %i) %s -> %s\n",
                 cs->caller->uid, cs->callee->uid,
                 cgraph_node_name (cs->caller),
                 cgraph_node_name (cs->callee));

      ipa_modify_call_arguments (cs, cs->call_stmt, adjustments);

      pop_cfun ();
    }

  for (cs = node->callers; cs; cs = cs->next_caller)
    if (cs->caller != node
        && !bitmap_bit_p (recomputed_callers, cs->caller->uid))
      {
        compute_inline_parameters (cs->caller);
        bitmap_set_bit (recomputed_callers, cs->caller->uid);
      }
  BITMAP_FREE (recomputed_callers);

  current_function_decl = old_cur_fndecl;
  return;
}

/* Perform all the modifications required in IPA-SRA for NODE to have
   parameters as given in ADJUSTMENTS.  */

static void
modify_function (struct cgraph_node *node, ipa_parm_adjustment_vec adjustments)
{
  struct cgraph_node *new_node;
  struct cgraph_edge *cs;
  VEC (cgraph_edge_p, heap) * redirect_callers;
  int node_callers;

  node_callers = 0;
  for (cs = node->callers; cs != NULL; cs = cs->next_caller)
    node_callers++;
  redirect_callers = VEC_alloc (cgraph_edge_p, heap, node_callers);
  for (cs = node->callers; cs != NULL; cs = cs->next_caller)
    VEC_quick_push (cgraph_edge_p, redirect_callers, cs);

  rebuild_cgraph_edges ();
  pop_cfun ();
  current_function_decl = NULL_TREE;

  new_node = cgraph_function_versioning (node, redirect_callers, NULL, NULL);
  current_function_decl = new_node->decl;
  push_cfun (DECL_STRUCT_FUNCTION (new_node->decl));

  ipa_modify_formal_parameters (current_function_decl, adjustments, "ISRA");
  scan_function (sra_ipa_modify_expr, sra_ipa_modify_assign,
                 replace_removed_params_ssa_names, false, adjustments);
  sra_ipa_reset_debug_stmts (adjustments);
  convert_callers (new_node, adjustments);
  cgraph_make_node_local (new_node);
  return;
}

/* Return false if the function is apparently unsuitable for IPA-SRA based on
   its attributes, return true otherwise.  NODE is the cgraph node of the
   current function.  */

static bool
ipa_sra_preliminary_function_checks (struct cgraph_node *node)
{
  if (!cgraph_node_can_be_local_p (node))
    {
      if (dump_file)
        fprintf (dump_file, "Function not local to this compilation unit.\n");
      return false;
    }

  if (!tree_versionable_function_p (node->decl))
    {
      if (dump_file)
        fprintf (dump_file, "Function is not versionable.\n");
      return false;
    }

  if (DECL_VIRTUAL_P (current_function_decl))
    {
      if (dump_file)
        fprintf (dump_file, "Function is a virtual method.\n");
      return false;
    }

  if ((DECL_COMDAT (node->decl) || DECL_EXTERNAL (node->decl))
      && node->global.size >= MAX_INLINE_INSNS_AUTO)
    {
      if (dump_file)
        fprintf (dump_file, "Function too big to be made truly local.\n");
      return false;
    }

  if (!node->callers)
    {
      if (dump_file)
        fprintf (dump_file,
                 "Function has no callers in this compilation unit.\n");
      return false;
    }

  if (cfun->stdarg)
    {
      if (dump_file)
        fprintf (dump_file, "Function uses stdarg.\n");
      return false;
    }

  if (TYPE_ATTRIBUTES (TREE_TYPE (node->decl)))
    return false;

  return true;
}

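/* As a rough illustration of the transformation the pass below drives
   (hypothetical example, not code from this file): given

     struct S { int a, b; };
     static int f (struct S *p) { return p->a; }

   IPA-SRA can create a local clone of f that takes the used component as
   a scalar by value,

     static int f.isra (int a) { return a; }

   and convert_callers then rewrites every call site to pass p->a
   directly, so the aggregate is never passed by reference.  */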
/* Perform early interprocedural SRA.  */

static unsigned int
ipa_early_sra (void)
{
  struct cgraph_node *node = cgraph_node (current_function_decl);
  ipa_parm_adjustment_vec adjustments;
  int ret = 0;

  if (!ipa_sra_preliminary_function_checks (node))
    return 0;

  sra_initialize ();
  sra_mode = SRA_MODE_EARLY_IPA;

  if (!find_param_candidates ())
    {
      if (dump_file)
        fprintf (dump_file, "Function has no IPA-SRA candidates.\n");
      goto simple_out;
    }

  if (!all_callers_have_enough_arguments_p (node))
    {
      if (dump_file)
        fprintf (dump_file, "There are callers with insufficient number of "
                 "arguments.\n");
      goto simple_out;
    }

  bb_dereferences = XCNEWVEC (HOST_WIDE_INT,
                              func_param_count
                              * last_basic_block_for_function (cfun));
  final_bbs = BITMAP_ALLOC (NULL);

  scan_function (build_access_from_expr, build_accesses_from_assign,
                 NULL, true, NULL);
  if (encountered_apply_args)
    {
      if (dump_file)
        fprintf (dump_file, "Function calls __builtin_apply_args().\n");
      goto out;
    }

  if (encountered_unchangable_recursive_call)
    {
      if (dump_file)
        fprintf (dump_file, "Function calls itself with insufficient "
                 "number of arguments.\n");
      goto out;
    }

  adjustments = analyze_all_param_acesses ();
  if (!adjustments)
    goto out;
  if (dump_file)
    ipa_dump_param_adjustments (dump_file, adjustments, current_function_decl);

  modify_function (node, adjustments);
  VEC_free (ipa_parm_adjustment_t, heap, adjustments);
  ret = TODO_update_ssa;

  statistics_counter_event (cfun, "Unused parameters deleted",
                            sra_stats.deleted_unused_parameters);
  statistics_counter_event (cfun, "Scalar parameters converted to by-value",
                            sra_stats.scalar_by_ref_to_by_val);
  statistics_counter_event (cfun, "Aggregate parameters broken up",
                            sra_stats.aggregate_params_reduced);
  statistics_counter_event (cfun, "Aggregate parameter components created",
                            sra_stats.param_reductions_created);

 out:
  BITMAP_FREE (final_bbs);
  free (bb_dereferences);
 simple_out:
  sra_deinitialize ();
  return ret;
}

/* Return true if early IPA-SRA shall be performed.  */
static bool
ipa_early_sra_gate (void)
{
  return flag_ipa_sra;
}

struct gimple_opt_pass pass_early_ipa_sra =
{
 {
  GIMPLE_PASS,
  "eipa_sra",                           /* name */
  ipa_early_sra_gate,                   /* gate */
  ipa_early_sra,                        /* execute */
  NULL,                                 /* sub */
  NULL,                                 /* next */
  0,                                    /* static_pass_number */
  TV_IPA_SRA,                           /* tv_id */
  0,                                    /* properties_required */
  0,                                    /* properties_provided */
  0,                                    /* properties_destroyed */
  0,                                    /* todo_flags_start */
  TODO_dump_func | TODO_dump_cgraph     /* todo_flags_finish */
 }
};